Continuous benchmarking of serverless cloud providers

Bibliographic Details
Main Author: Min Kabar Kyaw
Other Authors: Dmitrii Ustiugov
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Online Access: https://hdl.handle.net/10356/175156
Institution: Nanyang Technological University
Description
Summary: To date, there is no standard benchmarking methodology to quantitatively compare the performance of different serverless cloud providers. This project aims to design a framework that regularly runs a set of various microbenchmarks on multiple providers, including AWS Lambda, Azure Functions, Google Cloud Run, and Cloudflare. The project analyzes cold start delays, covering both snapshot- and boot-based techniques, and the implications of the language runtime on the cold start delay, by extending an open-source serverless benchmarking tool, the Serverless Tail-Latency Analyzer (STeLLAR). STeLLAR's compatibility has been expanded to cover more cloud providers as well as provider-specific features. In addition, the existing system of automated daily experiments using STeLLAR has been extended to encompass new cloud providers and experiments, and enhanced with fault-tolerant behavior in the event of experiment failure.