Serverless Computing
Last modified on Tue 28 Nov 2023

As cloud services became more and more popular, they gave rise to a new computing execution model: serverless. In this model the cloud provider allocates the resources needed to run our code, taking care of everything behind the scenes and allowing developers to focus on the business logic.

The term "serverless" might look like a misnomer at first glance - our code is, after all, executed on some kind of server - but its point is to describe the developer's perspective. Using this model, developers and DevOps engineers don't have to be concerned with the usual server maintenance tasks like allocating CPU and memory, managing the network and its security, and capacity planning, nor do they have to worry about provisioning more resources than the application needs at a given point in time. The application uses the resources provided by the cloud service only while it is running, meaning it can be scaled up or down as needed, or be shut down completely when not in use.

The execution of our code can be set off by various triggers: an HTTP request, a queue message, a change in blob storage, or a timer. All of them share the same underlying infrastructure and setup; the only difference is the entry point.
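As a minimal sketch of that idea, here are two Lambda-style Python handlers for different triggers. The event shapes (`body`, `records`, `messageBody`) are illustrative assumptions, not any provider's exact payload format - the point is that only the entry-point payload differs, while the surrounding setup stays the same.

```python
import json

def handle_http(event):
    """Entry point for an HTTP trigger: parse the request body and echo it back."""
    body = json.loads(event.get("body", "{}"))
    return {"statusCode": 200, "body": json.dumps({"received": body})}

def handle_queue(event):
    """Entry point for a queue trigger: process each message in the batch."""
    return [record["messageBody"].upper() for record in event["records"]]

# Local simulation with fake payloads, as a cloud runtime would invoke them.
http_response = handle_http({"body": json.dumps({"order_id": 42})})
queue_result = handle_queue({"records": [{"messageBody": "hello"}]})
```

In a real deployment the provider's runtime decides which handler to invoke and constructs the event; our code only reacts to it.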

The main benefits of using serverless solutions are:

- Lower operational overhead - no servers to provision, patch, or maintain
- Automatic scaling based on the current load
- Pay-per-use pricing - we pay only for the time our code actually runs
- Faster time to market, since developers can focus on the business logic

Even though the benefits might sound like a dream come true, before you start serverless-ing everything from your APIs to your toaster, consider some of the disadvantages of this approach:

- Cold starts - an instance that has been idle may take noticeably longer to serve its first request
- Limits on execution time, memory, and payload size imposed by the provider
- Vendor lock-in, since triggers and bindings are provider-specific
- Harder local debugging and monitoring of distributed executions


Even though it is touted as one of the main advantages of this hosting model, scalability is a double-edged sword. In real-world scenarios we need to control how far our instances scale out: our code will often deal with databases or other services that have their own scaling limits, so an over-scaled serverless application can put those resources under pressure. We must account for such limits when building and deploying our application.
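One way to protect a downstream service is to cap concurrency. The sketch below uses a semaphore to limit simulated database calls to an assumed connection budget (`MAX_DB_CONNECTIONS` is a made-up limit). Note the caveat: a semaphore only works within a single function instance; across instances you would rely on a provider-side concurrency setting or a connection pooler instead.

```python
import threading
import time

MAX_DB_CONNECTIONS = 3                       # assumed downstream limit
db_gate = threading.BoundedSemaphore(MAX_DB_CONNECTIONS)

peak = 0      # highest number of simultaneous "queries" observed
active = 0
lock = threading.Lock()

def query_database(i):
    global peak, active
    with db_gate:                            # blocks once 3 calls are in flight
        with lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)                     # simulated query latency
        with lock:
            active -= 1

# Ten "function invocations" racing for three connections.
threads = [threading.Thread(target=query_database, args=(i,)) for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

After the run, `peak` never exceeds `MAX_DB_CONNECTIONS`, however many invocations arrived at once.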

Serverless vs other hosting models

Everything that can be done with serverless computing can also be done with other hosting models. Choosing a hosting model is not a matter of picking what is possible, but rather of going down the road that makes the most sense for our application's and the client's requirements. Some clients, such as those in the banking industry, will require all their applications to be hosted on-premises for security reasons, while others might want to reduce hosting costs regardless of the potential disadvantages.

All hosting models have their pros and cons, but we don't necessarily have to fully commit to one of them - we can go with a hybrid approach by picking the right model for each application module. For example, we could host the API in a container or on a regular server, and offload background tasks to serverless functions, with the two communicating through a message queue.
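The hybrid setup above can be sketched in-process: an "API" handler accepts a request, enqueues a background job, and returns immediately, while a separate "worker" drains the queue. Here `queue.Queue` stands in for a managed broker (such as SQS or Azure Service Bus), and the function names and message shape are illustrative assumptions.

```python
import queue

task_queue = queue.Queue()   # stand-in for a managed message broker

def api_create_report(user_id):
    """API endpoint: accept the request and defer the heavy work."""
    task_queue.put({"task": "generate_report", "user_id": user_id})
    return {"status": "accepted"}

def worker_drain():
    """Queue-triggered worker: process all pending messages."""
    processed = []
    while not task_queue.empty():
        msg = task_queue.get()
        processed.append(f"report for user {msg['user_id']}")
    return processed

response = api_create_report(7)   # API returns fast...
results = worker_drain()          # ...while the worker does the slow part
```

The design choice here is decoupling: the API stays responsive under load because the slow work happens asynchronously, and the queue absorbs bursts that the worker can process at its own pace.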