Serverless computing in clouds
Main Author: | |
---|---|
Other Authors: | |
Format: | Final Year Project |
Language: | English |
Published: | Nanyang Technological University, 2023 |
Subjects: | |
Online Access: | https://hdl.handle.net/10356/165881 |
Institution: | Nanyang Technological University |
Summary: In serverless computing, a cold start refers to the delay that occurs when a serverless function is invoked for the first time after being idle for a period of time, or when it is first deployed. This start-up delay increases latency for the first request and can be a performance concern. Cold starts can be mitigated by keeping instances of serverless functions warm and ready to handle requests. However, keeping a container alive increases the cost of the resources it holds and can even deplete them. It is therefore important to strike a balance between keeping a container alive long enough to avoid cold starts and not keeping it alive too long, using a well-designed keep-alive policy that adjusts the container's lifetime or priority accordingly.

Since resource management for serverless functions is similar to object caching, we have implemented caching-inspired algorithms, namely the Greedy-Dual keep-alive policy and the Weighted Greedy-Dual keep-alive policy, and evaluated their performance. By analyzing and exploiting the characteristics of FaaS workloads, we also aim to design a more principled policy that efficiently reduces cold-start overhead.
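A minimal sketch of how a Greedy-Dual keep-alive policy might rank warm containers, assuming the common Greedy-Dual-Size-Frequency priority form (clock + frequency × cold-start cost / memory size). The class and parameter names here are illustrative, not the project's actual implementation:

```python
class Container:
    def __init__(self, name, cold_start_cost, mem_size):
        self.name = name
        self.cold_start_cost = cold_start_cost  # e.g. initialization latency (ms)
        self.mem_size = mem_size                # memory footprint (MB)
        self.frequency = 0                      # invocations while kept warm
        self.priority = 0.0


class GreedyDualKeepAlive:
    """Keep-alive policy inspired by Greedy-Dual-Size-Frequency caching.

    Warm containers are treated like cached objects: each carries a priority
    clock + frequency * cost / size, and when memory runs out the
    lowest-priority container is evicted (i.e., terminated) first.
    """

    def __init__(self, mem_capacity):
        self.mem_capacity = mem_capacity
        self.mem_used = 0
        self.clock = 0.0   # aging term, raised on every eviction
        self.warm = {}     # name -> Container

    def on_invocation(self, container):
        if container.name in self.warm:
            container.frequency += 1          # warm start
        else:
            self._make_room(container.mem_size)
            container.frequency = 1           # cold start: admit the container
            self.warm[container.name] = container
            self.mem_used += container.mem_size
        container.priority = (self.clock +
                              container.frequency * container.cold_start_cost
                              / container.mem_size)

    def _make_room(self, needed):
        while self.mem_used + needed > self.mem_capacity and self.warm:
            victim = min(self.warm.values(), key=lambda c: c.priority)
            self.clock = victim.priority      # age the remaining containers
            self.mem_used -= victim.mem_size
            del self.warm[victim.name]
```

The clock term implements aging: each eviction raises the clock to the victim's priority, so long-idle containers gradually fall below freshly invoked ones, mirroring how Greedy-Dual caches avoid keeping stale objects. A weighted variant would presumably scale the cost, frequency, or size terms by workload-specific weights, though the exact weighting is not specified here.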