Continuous benchmarking of serverless cloud providers

Bibliographic Details
Main Author: Min Kabar Kyaw
Other Authors: Dmitrii Ustiugov
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Subjects:
Online Access: https://hdl.handle.net/10356/175156
Institution: Nanyang Technological University
Description
Abstract: To date, there is no standard benchmarking methodology to quantitatively compare the performance of different serverless cloud providers. This project aims to design a framework that regularly runs a set of various microbenchmarks on multiple providers, including AWS Lambda, Azure Functions, Google Cloud Run, and Cloudflare. The project analyzes cold-start delays, covering both snapshot- and boot-based techniques, and the implications of the language runtime on the cold-start delay, by extending an open-source serverless benchmarking tool, the Serverless Tail-Latency Analyzer (STeLLAR). STeLLAR's compatibility has been expanded to additional cloud providers and provider-specific features, and the existing system of automated daily experiments using STeLLAR has been extended to cover the new providers and experiments, and enhanced with fault-tolerant features in the event of experiment failure.
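
To illustrate the kind of measurement the abstract describes, the following Go sketch times a first ("cold") and an immediate second ("warm") HTTP invocation of a deployed function and reports the difference as an estimate of cold-start overhead. This is a minimal illustration of the measurement idea, not STeLLAR's actual implementation, and the endpoint URL is hypothetical.

// Minimal sketch: estimate cold vs. warm invocation latency for a
// deployed serverless function by timing back-to-back HTTP requests.
// The endpoint URL below is hypothetical; substitute a real deployment.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// invoke issues one request and returns the end-to-end latency.
func invoke(url string) (time.Duration, error) {
	start := time.Now()
	resp, err := http.Get(url)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	// Drain the body so the timing covers the full response.
	io.Copy(io.Discard, resp.Body)
	return time.Since(start), nil
}

func main() {
	// Hypothetical function endpoint.
	url := "https://example-function.azurewebsites.net/api/hello"

	// The first request after an idle period typically triggers a cold start.
	cold, err := invoke(url)
	if err != nil {
		fmt.Println("cold invocation failed:", err)
		return
	}

	// An immediate second request should hit an already-warm instance.
	warm, err := invoke(url)
	if err != nil {
		fmt.Println("warm invocation failed:", err)
		return
	}

	fmt.Printf("cold: %v  warm: %v  estimated cold-start overhead: %v\n",
		cold, warm, cold-warm)
}

A continuous-benchmarking framework of the kind the project describes would run such probes on a schedule (e.g., daily) against each provider, persist the samples, and retry or flag failed experiments rather than aborting the run.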