Continuous benchmarking of serverless cloud providers



Bibliographic Details
Main Author: Min Kabar Kyaw
Other Authors: Dmitrii Ustiugov
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Subjects:
Online Access: https://hdl.handle.net/10356/175156
Institution: Nanyang Technological University
Description
Summary: To date, there is no standard benchmarking methodology to quantitatively compare the performance of different serverless cloud providers. This project aims to design a framework that regularly runs a set of various microbenchmarks on multiple providers, including AWS Lambda, Azure Functions, Google Cloud Run, and Cloudflare. It analyzes cold-start delays, covering both snapshot-based and boot-based techniques, and the implications of the language runtime on the cold-start delay, by extending an open-source serverless benchmarking tool, the Serverless Tail-Latency Analyzer (STeLLAR). STeLLAR's compatibility has been expanded to additional cloud providers and provider-specific features, and the existing system of automated daily STeLLAR experiments has been extended to cover the new providers and experiments and enhanced with fault tolerance in the event of experiment failure.
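
The core measurement behind such cold-start microbenchmarks can be illustrated with a minimal Go sketch. This is not STeLLAR's actual code, and the endpoint URL is a hypothetical placeholder: the first request after a long idle period likely incurs a cold start, while an immediate follow-up request hits a warm instance, so the difference approximates the cold-start overhead.

// Minimal illustrative sketch (not STeLLAR code): time an HTTP-triggered
// function end to end. The first request after a long idle gap is likely
// a cold start; an immediate second request hits a warm instance.
package main

import (
	"fmt"
	"net/http"
	"time"
)

// functionURL is a hypothetical placeholder for a deployed function endpoint.
const functionURL = "https://my-function.example.com/"

// invoke issues one GET request and returns the end-to-end latency.
func invoke(url string) (time.Duration, error) {
	start := time.Now()
	resp, err := http.Get(url)
	if err != nil {
		return 0, err
	}
	resp.Body.Close()
	return time.Since(start), nil
}

func main() {
	cold, err := invoke(functionURL) // likely cold after a long idle period
	if err != nil {
		fmt.Println("invocation failed:", err)
		return
	}
	warm, err := invoke(functionURL) // back-to-back request: instance is warm
	if err != nil {
		fmt.Println("invocation failed:", err)
		return
	}
	fmt.Printf("cold: %v  warm: %v  overhead: ~%v\n", cold, warm, cold-warm)
}

A real harness would repeat this measurement across many scheduled runs and report tail percentiles rather than single samples, which is the kind of automation the daily experiments described in the summary provide.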