Abstract (in English)
With the migration of enterprise applications to microservices and containers, cloud service providers, starting with Amazon in 2014, introduced a new computational model called "Function as a Service" (FaaS). In this model, developers write fine-grained functions with shorter execution times instead of coarse-grained software. Moreover, developers no longer have to manage system resources and servers; that responsibility shifts to the cloud provider. After Amazon's AWS Lambda, other companies and open-source communities developed well-known FaaS platforms such as Azure Functions, OpenFaaS, and OpenWhisk. Although each platform has its own specific limits, they share many issues and challenges. Even though this model offers many benefits, such as reduced costs, it still faces many challenges: balancing cost and performance, programming models and compatibility with current development tools, scheduling problems such as execution-time prediction, mitigating container cold starts, caching data, security issues, and privacy concerns. In this report, we focus in particular on the scheduling and cold-start problems.
A trade-off arises when we keep containers warm to reduce cold-start time, because doing so increases the cost of the service. In this report, we use a heuristic method that constructs a cooperation network between functions and tracks function call frequencies and environment variables to optimize this trade-off through four different decisions made at runtime. The proposed method performs 32% better than the static waiting approach (as used by Amazon). This comparison uses an aggregate measure of response time, turnaround time, cost, and utilization, evaluated with a custom simulator written for this work in a functional programming language.
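As a purely illustrative sketch (not the four-decision method of this report, whose details are given in the body of the thesis), a minimal keep-warm policy driven by tracked call frequencies could decide to keep a function's container warm only when its recent inter-arrival times suggest another invocation is likely soon. The class name, window size, and idle budget below are all hypothetical:

```python
import time
from collections import defaultdict, deque

class KeepWarmPolicy:
    """Hypothetical keep-warm heuristic: keep a function's container warm
    only if its recent call rate suggests another call will arrive before
    the idle time we are willing to pay for."""

    def __init__(self, window=10, idle_budget=60.0):
        self.window = window            # number of recent calls to remember
        self.idle_budget = idle_budget  # max idle seconds worth paying for
        # per-function ring buffer of recent call timestamps
        self.calls = defaultdict(lambda: deque(maxlen=window))

    def record_call(self, fn_name, t=None):
        """Record one invocation of fn_name (t defaults to wall-clock time)."""
        self.calls[fn_name].append(time.time() if t is None else t)

    def keep_warm(self, fn_name, now=None):
        """True if the mean inter-arrival gap of recent calls is below
        the idle budget; otherwise let the container go cold."""
        ts = self.calls[fn_name]
        if len(ts) < 2:
            return False  # not enough history: do not pay to stay warm
        mean_gap = (ts[-1] - ts[0]) / (len(ts) - 1)
        return mean_gap < self.idle_budget
```

A frequently called function (e.g. calls every 10 s with a 60 s idle budget) would be kept warm, while one called every few minutes would be released, trading a rare cold start for lower cost.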