What is serverless computing?
Serverless computing is a type of cloud computing in which the provider allocates and manages the server execution environment. In this setting, a developer is only concerned with providing the code, given as functions, that runs on the server. Everything else (planning server capacity, configuring, managing, and maintaining the server, providing fault tolerance, scaling up or down) is handled by the cloud provider.
In this sense, the name serverless is a bit misleading since servers are still being used. However, developers do not need to manage them, and can instead concentrate on implementing the application logic.
How does it work?
In a typical use case, a developer implements a function, for instance one that takes an image and resizes it, and uploads the code to the serverless runtime environment; this environment is often called a Function-as-a-Service platform, or FaaS.
Within FaaS, the developer can implement a component of an application, like an image resizer module, or use multiple functions to implement an entire application.
For every function, the developer needs to define a trigger that specifies how the function is invoked: for instance, when an explicit HTTP request is received, when an event fires, or through some other invocation mechanism. Usually the provided function only executes the application logic and does not store any data; if storage is required, additional cloud services need to be used.
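To make this concrete, here is a minimal sketch of such a function written in the style of an AWS Lambda handler. The event fields (`image_name`, `width`, `height`) and the handler name are assumptions for illustration, not any platform's actual event shape, and a real resizer would call an imaging library such as Pillow where the comment indicates.

```python
# Minimal FaaS handler sketch (AWS Lambda-style signature).
# The event fields below are hypothetical; each platform defines
# its own event shape depending on the configured trigger.

def handler(event, context=None):
    # The trigger (e.g. an HTTP request) delivers its payload as `event`.
    image_name = event["image_name"]
    width = int(event["width"])
    height = int(event["height"])

    # The application logic would go here, e.g. resizing the image with
    # an imaging library. The function itself stores no data: the result
    # would be written to a separate storage service.
    resized_name = f"{image_name}-{width}x{height}"

    # Return a response for the caller; HTTP triggers typically expect
    # a status code and a body.
    return {"status": 200, "body": resized_name}

# Example invocation, as the platform would perform it on a trigger:
print(handler({"image_name": "cat.png", "width": 200, "height": 100}))
```

The key point is the shape: the developer writes one function with a defined input and output, and the platform decides when and where to run it.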
Pros and cons of serverless
Serverless has certain benefits compared to traditional development and deployment approaches:
- Serverless simplifies application management. From the developer’s standpoint, applications become easier to deploy and manage. The developer is only concerned with delivering code: the application server infrastructure and everything below that is handled by the cloud vendor. This ideally shortens time-to-market.
- Serverless provides elasticity. The cloud vendor manages web server scaling. When the load is high, the vendor automatically scales up the capacity, and similarly, when the load is low, it scales it back down. Since the scaling can go in either direction, depending on the need, we refer to it as elasticity.
- Serverless may be more cost efficient, at least when compared to renting or buying dedicated infrastructure. Typically, servers have significant periods in which they idle or are under-utilized. If such infrastructure is being rented or has been purchased, funds are spent suboptimally. In serverless, developers pay only for the resources that their functions actually consume; idle time is not billed.
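As a rough illustration of the pay-per-use model, the sketch below computes a bill from invocation count, duration, and allocated memory. The per-GB-second rate and all the numbers are invented for illustration and are not any vendor's actual pricing.

```python
# Hypothetical pay-per-use bill: charge only for time the function runs.
# The rate below is invented for illustration, not real vendor pricing.

RATE_PER_GB_SECOND = 0.0000166  # hypothetical price in dollars

def serverless_cost(invocations, avg_seconds, memory_gb):
    # Billed resource = memory reserved * time actually spent executing.
    gb_seconds = invocations * avg_seconds * memory_gb
    return gb_seconds * RATE_PER_GB_SECOND

# 1 million invocations of 200 ms each with 0.5 GB of memory:
cost = serverless_cost(1_000_000, 0.2, 0.5)
print(f"${cost:.2f}")  # idle time between invocations costs nothing
```

Note what is absent from the formula: there is no term for the hours the function sits idle, which is exactly where a rented always-on server would keep accruing cost.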
However, serverless also has its limitations:
- Cloud vendor lock-in. While there are many FaaS providers, some of which offer the same development environment, it is not a given that these environments are interoperable. You might find that a Java application that runs flawlessly on one provider's infrastructure has issues when moved to another, even though both vendors claim to support the same development environment. Compared to containers, which run in standardized execution environments and therefore offer portability guarantees, FaaS platforms can vary between vendors. This makes it more difficult to move from one provider to another.
- Opaque functioning. Since functions are executed in a customized and often proprietary cloud environment, they are more difficult to inspect, debug, or profile. Similarly, replicating the execution runtime on a local developer machine, for the purpose of testing or debugging, may not always be possible.
- Increased response latency. Functions that are used infrequently may take longer to produce a result when compared to a classic deployment inside an always-on virtual machine or container. This phenomenon is known as a cold start. It occurs because the cloud vendor may decide to completely shut down an infrequently used function to save resources. When such a function is invoked, the infrastructure needs additional time to spin up the required runtime: for instance, compare the time needed to start the JVM and run a function with the time needed to run a function within an already running JVM.
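The cold-start effect can be simulated. In the sketch below, the first invocation pays a simulated runtime boot cost, while later invocations reuse the already running runtime; the 0.2 s delay is an invented stand-in for a real runtime's start-up time (such as launching a JVM).

```python
import time

# Simulated cold vs. warm starts. The 0.2 s "boot" delay is an
# invented stand-in for real runtime start-up (e.g. starting a JVM).

_runtime = None  # the platform keeps this alive between warm invocations

def invoke(payload):
    global _runtime
    if _runtime is None:
        time.sleep(0.2)            # cold start: boot the runtime first
        _runtime = "ready"
    return f"processed {payload}"  # warm path: run the function directly

t0 = time.perf_counter()
invoke("req-1")                    # cold: includes the boot delay
cold = time.perf_counter() - t0

t0 = time.perf_counter()
invoke("req-2")                    # warm: runtime is already running
warm = time.perf_counter() - t0

print(f"cold={cold:.3f}s warm={warm:.3f}s")  # cold is noticeably slower
```

The same asymmetry shows up on real FaaS platforms, only with boot times that depend on the language runtime and the amount of initialization the function performs.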
Good serverless use-cases
Serverless is a good fit for these use-cases:
- Data analytics. When one wants to analyze larger volumes of data on a periodic basis; you pay only when the resources are needed, and idle time is not billed.
- Continuous integration / continuous delivery (CI/CD) pipelines. Certain operations in a CI/CD pipeline fit the serverless model perfectly: they are invoked on demand and only at specific occasions.
- Content conversion and generation. Any application that converts between content formats or generates content from provided information. For instance, applications that convert between image formats or resize images, or applications that generate cover images from provided information.
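A content-conversion function can be very small, which is part of why it suits FaaS well. The sketch below converts CSV text to JSON using only the standard library; the function name and the idea of wiring it to a file-upload trigger are assumptions for illustration.

```python
import csv
import io
import json

# Sketch of a content-conversion function: CSV in, JSON out.
# A deployed version would be wired to a trigger, e.g. a file upload,
# and write its output to a storage service rather than returning it.

def csv_to_json(csv_text):
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return json.dumps(rows)

data = "name,format\ncover,png\nthumb,jpeg\n"
print(csv_to_json(data))
```

Because the function is stateless and each conversion is independent, the platform can run many copies in parallel when a batch of files arrives and shut them all down afterwards.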
In general, any application that occasionally or periodically requires a bit more computing resources, but otherwise has considerable idle periods, is a good use-case, since serverless allows for speedy development and easy deployment, and is billed only when in use.
However, applications that have idle times but require a quick response when invoked are usually not the best fit. The reason is that once a function is idling, the FaaS platform may decide to shut it down to save resources. When such a function is invoked, it takes longer to respond due to the cold start issue.
Serverless is a cloud computing paradigm that allows developers to directly deploy code on vendor-managed server execution environments or function-as-a-service platforms.
Compared to other types of cloud computing, serverless lets developers rent the most fine-grained computing resources. At first, cloud vendors offered bare-metal machines, then virtual machines, then containers. With serverless, developers can directly rent server runtimes to execute their code.