Continuous benchmarking of serverless cloud providers
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Online Access: https://hdl.handle.net/10356/175287
Institution: Nanyang Technological University
Summary: To date, there is no standard benchmarking methodology for quantitatively comparing the performance of different serverless cloud providers. This project designs a framework that regularly runs a set of microbenchmarks on multiple providers, including AWS Lambda, Azure Functions, and Google Cloud Run. To achieve this, the project extends the Serverless Tail Latency Analyzer (STeLLAR) framework by introducing automated deployment capabilities for Azure Functions and supporting the execution of image size experiments. The project analyses cold start delays related to image size and other characteristics of serverless functions, including the available network bandwidth and the chunk sizes used during cold start initialisation.
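The summary describes measuring tail latency across repeated serverless invocations, where rare cold starts dominate the upper percentiles. As a minimal sketch of that analysis step (the function names and the sample data below are hypothetical, not taken from STeLLAR), one might summarise a run of end-to-end latency samples like this; a real framework would collect the samples by invoking deployed functions on each provider.

```python
def percentile(samples, p):
    """Nearest-rank percentile (p in [0, 100]) of a list of latency samples."""
    ordered = sorted(samples)
    # Nearest-rank definition: ceil(p/100 * n), clamped to at least rank 1.
    rank = max(1, -(-len(ordered) * p // 100))  # ceiling division via floor
    return ordered[int(rank) - 1]

def summarize_latencies(samples_ms):
    """Median and tail latency summary for one benchmark run (values in ms)."""
    return {
        "p50": percentile(samples_ms, 50),
        "p99": percentile(samples_ms, 99),
        "max": max(samples_ms),
    }

# Warm invocations cluster low; an occasional cold start dominates the tail.
samples = [12, 11, 13, 12, 14, 11, 950, 12, 13, 12]  # 950 ms models a cold start
print(summarize_latencies(samples))  # → {'p50': 12, 'p99': 950, 'max': 950}
```

The gap between p50 and p99 in such a run is exactly the cold-start effect the project studies: median latency reflects warm containers, while the tail reflects initialisation cost, which grows with image size and shrinks with available network bandwidth.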