Serverless computing

Ilkin Isgandarov
4 min read · Mar 26, 2023



Serverless is a popular way of using the cloud where you don’t have to worry about managing servers. “Serverless” doesn’t actually mean there are no servers in the backend; rather, developers don’t have to care about provisioning, scaling, and maintenance, because all of these are managed by the cloud provider. Serverless technologies support different runtime stacks and allow developers to use their libraries and frameworks of choice, with some limitations. The serverless model fits many use cases, such as REST APIs, asynchronous event processing, real-time data processing, the Internet of Things, image processing, and more.

It’s important to keep in mind that this model works best for stateless use cases: we can’t keep track of previous actions or state in the process’s memory or local storage. State can still exist, but it is typically stored in external resources. Developers don’t need to worry about the lifecycle of computing instances; instead, they need to consider the limitations of those external resources. The main difference from containerized solutions is that, while containers provide a similar level of abstraction and isolation, with containers we still have to manage deployment, dependencies, and scaling ourselves.

Very often we come across the term FaaS (Function as a Service), which is actually a subset of serverless computing. The other cloud model, PaaS, mostly covers the operating system, virtualization, and runtime layers and allows more control over resources; FaaS can be considered a layer on top of PaaS. The smallest unit of execution, called a function, is designed to perform a single piece of business logic, like returning a response for an HTTP API request. However, it can also be part of a more complex workflow, where other cloud services trigger functions and the function itself may invoke other functions or automatically send its output to other cloud components. An example scenario might be as follows:

  1. The upload service adds some image into a cloud blob storage.
  2. The storage insertion event triggers the thumbnail function.
  3. The thumbnail function implements the image resizing operation and creates a new thumbnail entry in blob storage.
  4. At the end, the thumbnail function notifies other components about the operation result. In case of a successful thumbnail creation, it sends an event with the ID of the thumbnail blob entry to the event grid component.
  5. The event grid may trigger another subscriber function, which performs database operations.
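The steps above can be sketched in plain Python, with in-memory stand-ins for blob storage and the event grid. All names here are hypothetical; in a real deployment the cloud platform would wire the trigger between the storage event and the function.

```python
# Minimal sketch of the thumbnail pipeline above. The dictionaries stand in
# for cloud blob storage and the event grid; a real platform provides these.

blob_storage = {}       # stands in for cloud blob storage
event_grid_queue = []   # stands in for the event grid

def make_thumbnail(image_bytes):
    # Stand-in for real resizing (done with an imaging library in practice).
    return image_bytes[:4]  # pretend the thumbnail is a smaller payload

def thumbnail_function(blob_id):
    """Triggered by a storage insertion event (step 2)."""
    image = blob_storage[blob_id]
    thumb_id = f"thumb-{blob_id}"
    blob_storage[thumb_id] = make_thumbnail(image)          # step 3
    event_grid_queue.append({"event": "thumbnail_created",
                             "blob_id": thumb_id})          # step 4

def upload_service(blob_id, image_bytes):
    """Step 1: the upload lands in storage and triggers the function."""
    blob_storage[blob_id] = image_bytes
    thumbnail_function(blob_id)  # in the cloud, the platform fires this trigger

upload_service("cat.png", b"...raw image bytes...")
print(event_grid_queue[0])  # a subscriber function could now react (step 5)
```

The key point is that each function only sees its input event and external resources; no state is carried between invocations inside the function itself.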

The scaling and cost models are the main benefits of serverless computing. There is always an orchestrator behind the scenes responsible for autoscaling: it creates new instances of the function process and disposes of unnecessary ones. Keep in mind that one function instance is able to handle many invocations. As traffic increases, the orchestrator decides how to allocate new instances and distribute the load. Instance creation is an expensive operation, since it first downloads artifacts, performs configuration, and starts the process; this is called a cold start. If a function is not executed for an extended period of time, there may be no active instances available, which results in increased latency. Cloud providers usually offer always-ready (pre-warmed) instances to avoid this issue, but of course, that costs more.
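The cold-start effect can be illustrated with a toy model: warm instances are reused across invocations, and a cold start only occurs when concurrency exceeds the current pool. The timings below are illustrative assumptions, not real measurements.

```python
# Toy model of the autoscaling behaviour described above. Constants are
# assumed values chosen only to make the cold-start penalty visible.

COLD_START_MS = 800   # download artifacts, configure, start the process
WARM_EXEC_MS = 20     # handler time on an already-running instance

def simulate(concurrent_requests, warm_instances):
    """Return (total added latency in ms, resulting pool size)."""
    cold_starts = max(0, concurrent_requests - warm_instances)
    latency = cold_starts * COLD_START_MS + concurrent_requests * WARM_EXEC_MS
    return latency, warm_instances + cold_starts

# After an idle period the pool may be empty: every request pays a cold start.
print(simulate(concurrent_requests=3, warm_instances=0))   # (2460, 3)
# With pre-warmed instances the same burst avoids cold starts entirely.
print(simulate(concurrent_requests=3, warm_instances=3))   # (60, 3)
```

Real orchestrators are far more sophisticated (they also scale down, queue requests, and reuse instances across bursts), but the latency asymmetry between cold and warm paths is the same.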

We should be cautious when dealing with external resources, because the autoscaling behavior of serverless can push us into their limits. Let’s assume there is a function interacting with Redis. If we create a Redis connection for each function execution, we add extra latency, since connection initialization takes time; besides, we may hit the maximum number of connections allowed by the Redis server. To prevent this, it’s important to design functions to establish connections during function bootstrap and share them across executions. Some situations may also require capping the maximum number of function instances, and cloud providers generally offer such an option.
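A minimal sketch of this bootstrap-time connection reuse, using a fake client in place of a real Redis library (such as redis-py), since the point is where the connection is created, not which client is used:

```python
# Sketch of connection reuse across invocations. FakeRedis is a stand-in;
# creating it models an expensive TCP/TLS handshake to an external resource.

connection_count = 0

class FakeRedis:
    def __init__(self):
        global connection_count
        connection_count += 1  # each instance models one server-side connection

# Created once at function bootstrap (module import), NOT inside the handler.
shared_client = FakeRedis()

def handler(event):
    # Every invocation reuses the bootstrap-time connection.
    return {"event": event, "client": id(shared_client)}

for i in range(100):
    handler(i)
print(connection_count)  # 1 — not 100
```

Had the client been constructed inside `handler`, 100 invocations would have opened 100 connections, adding latency to each call and eating into the server’s connection limit.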

With serverless computing, you are charged only for what you use, based on the number of invocations and the running time of instances. Unlike other cloud models, you don’t pay while the CPU is idle, which also makes serverless a green computing model. The other major benefit is developer productivity: this model, with its many integrations, allows you to build powerful cloud-native solutions in a short amount of time.
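A rough illustration of this pay-per-use billing, where charges scale with invocations and GB-seconds of execution; the prices below are hypothetical placeholders, not any provider’s actual rates:

```python
# Toy pay-per-use cost model: you pay per request and per GB-second of
# execution, and nothing for idle time. Prices are assumed placeholders.

PRICE_PER_MILLION_INVOCATIONS = 0.20   # USD, assumed
PRICE_PER_GB_SECOND = 0.0000166        # USD, assumed

def monthly_cost(invocations, avg_duration_s, memory_gb):
    compute = invocations * avg_duration_s * memory_gb * PRICE_PER_GB_SECOND
    requests = invocations / 1_000_000 * PRICE_PER_MILLION_INVOCATIONS
    return round(compute + requests, 2)

# 5M invocations, 200 ms each, 512 MB of memory — idle time costs nothing.
print(monthly_cost(5_000_000, 0.2, 0.5))  # ~9.3 under these assumed prices
```

Contrast this with a VM-based model, where the same workload would be billed for every hour the machine is up, busy or not.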

Now, let’s turn to some drawbacks of this model. The major one is the lack of control over the computing resources. There are also various limitations, such as a maximum execution time. Test automation isn’t straightforward either. Vendor lock-in is another major restriction: if you later decide to change cloud providers, you may need to rewrite a significant part of the code base from scratch.

Nowadays all major cloud providers offer serverless solutions (Azure Functions, AWS Lambda, Google Cloud Functions, etc.), and the usage and popularity of this model keep growing.
