Analysing cold start for serverless computing

Bibliographic Details
Main Author: Chin, Zhi Hao
Other Authors: Tang Xueyan
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2023
Subjects:
Online Access:https://hdl.handle.net/10356/171884
Institution: Nanyang Technological University
Description
Summary: More organizations are adopting serverless computing due to its simplicity: developers need only focus on their code while leaving the rest to their cloud service providers. However, the cold start problem remains a prominent issue for cloud service providers. A cold start occurs when an incoming request arrives but the cloud service provider is not ready to serve it, causing additional latency. Numerous algorithms have been designed to tackle the cold start problem; however, few of them use real production workloads in their evaluation. Insights from real production workloads can enable us to better understand the underlying operations of serverless platforms and develop a strategy to tackle the cold start problem. Therefore, in this paper, we analyzed the characteristics of a production trace from Microsoft Azure. We showed that the top 10 applications account for 83.87% of all requests and that 87.4% of the requests have short execution durations of less than 1 s. Based on these observations, we adopted the hybrid histogram model by Shahrad et al. to reduce the number of cold start occurrences.
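
The summary names the hybrid histogram model of Shahrad et al.; below is a minimal sketch of that style of keep-alive policy for illustration only. The bucket width, percentile thresholds, fallback values, and all names (HybridHistogramPolicy, record_invocation, windows) are assumptions and do not come from the project itself.

```python
# Minimal sketch of a hybrid-histogram keep-alive policy in the spirit of
# Shahrad et al.; bucket width, percentiles, and defaults are illustrative
# assumptions, not values taken from the project.

class HybridHistogramPolicy:
    """Tracks an application's idle times (gaps between invocations) in a
    histogram and derives a pre-warm window and a keep-alive window."""

    def __init__(self, bucket_width_s=60, max_idle_s=4 * 3600,
                 prewarm_percentile=0.05, keepalive_percentile=0.99):
        self.bucket_width = bucket_width_s
        self.num_buckets = max_idle_s // bucket_width_s
        self.counts = [0] * self.num_buckets
        self.out_of_range = 0          # idle times longer than max_idle_s
        self.prewarm_p = prewarm_percentile
        self.keepalive_p = keepalive_percentile
        self.last_invocation = None

    def record_invocation(self, t_s):
        """Update the idle-time histogram with the gap since the last request."""
        if self.last_invocation is not None:
            idle = t_s - self.last_invocation
            bucket = int(idle // self.bucket_width)
            if bucket < self.num_buckets:
                self.counts[bucket] += 1
            else:
                self.out_of_range += 1
        self.last_invocation = t_s

    def _percentile_bucket(self, p):
        """Index of the histogram bucket containing the p-th percentile."""
        total = sum(self.counts) + self.out_of_range
        if total == 0:
            return None
        running = 0
        for i, count in enumerate(self.counts):
            running += count
            if running >= p * total:
                return i
        return self.num_buckets - 1    # percentile falls beyond the tracked range

    def windows(self):
        """Return (pre-warm delay, keep-alive duration) in seconds: the container
        can be released after the pre-warm delay and recreated shortly before the
        next invocation is expected, then kept warm for the keep-alive window."""
        lo = self._percentile_bucket(self.prewarm_p)
        hi = self._percentile_bucket(self.keepalive_p)
        if lo is None:
            return 0, 10 * 60          # no history yet: fixed keep-alive fallback
        prewarm = lo * self.bucket_width
        keepalive = (hi + 1) * self.bucket_width - prewarm
        return prewarm, keepalive


# Usage: feed invocation timestamps (seconds) for one application,
# then read off the derived pre-warm and keep-alive windows.
policy = HybridHistogramPolicy()
for t in [0, 130, 250, 900, 1000]:
    policy.record_invocation(t)
print(policy.windows())
```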