
Containers vs Serverless

Containers and serverless are two hot topics nowadays. Some say that serverless is a replacement for containers, but is it really? To find out, we will compare the two technologies and decide which one, if any, is better. But first, let's start with some theory.

What are containers?

Containers, or Linux containers, is a generic term for an implementation of operating-system-level virtualization for Linux. It differs from hardware virtualization in that it does not create virtual machines but only isolated user-space instances, known as containers, partitions, virtualization engines or jails. From the outside, containers may look like virtual machines, but they do not replicate an entire operating system, only the components the application needs to operate. From the perspective of an application running inside a container, it looks as if it were running on a real computer. This means you can pack an application with all of its libraries and dependencies into a dedicated box and run it anywhere as a single package. Overall, containers isolate applications from the operating system: an application inside a container can see only the container's content and the devices assigned to it. Best of all, once you put your application into a container, you can run it anywhere the container runtime is installed, and it lives only as long as the container does.

When it comes to containers, we have to mention an open-source project called Docker. It is a command-line tool that makes creating containers easy. With Docker, you can define a container's content in code and then version, reproduce, share and modify that code.
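To give a feel for what such a definition looks like, here is a minimal sketch of a Dockerfile for a hypothetical Python application (the file names app.py and requirements.txt are assumptions made for illustration, not something the article or Docker prescribes):

# Start from an official Python base image, pinned for reproducibility.
FROM python:3.12-slim
# Work inside /app in the container's isolated filesystem.
WORKDIR /app
# Install dependencies first so this layer can be cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the rest of the application code into the image.
COPY . .
# Command the container runs when it starts.
CMD ["python", "app.py"]

Building and tagging the image, for example with docker build -t my-app:1.0 ., is what makes it easy to version, reproduce and share.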

Containers also help to split complex applications into microservices that work together. Each microservice is developed and run inside its own container and can scale independently, yet all of them cooperate as a single application.

Containers? Or maybe serverless?

So, if you can run microservices in containers, why not run them without containers? Not just outside of them, but without them entirely. That is what serverless means.

The term serverless became popular when Amazon launched the AWS Lambda service in 2014, and it has been a hot topic ever since.

Serverless is a kind of architecture in which applications run in an environment without visible processes, operating systems, servers or virtual machines. Not visible to the end user, that is, because serverless computing is not in fact serverless; it still requires servers underneath. Like any cloud-based service, it runs on servers, but they are transparent to the end user: the responsibility for running them rests with the service provider, and developers can focus on writing code.

So how does serverless computing work? It's quite simple. You write your code and upload it with all of its dependencies to Lambda (if you use AWS). Then you can run it by invoking your function from AWS services such as EC2, S3, API Gateway and a few more. Once called, the function is deployed by Lambda in a container and lives until its job is done.
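As a minimal sketch of such a function in Python, assume a hypothetical Lambda triggered by S3 object uploads; the processing step is a placeholder, but the event structure is the one S3 passes to Lambda:

import json

def lambda_handler(event, context):
    # Lambda invokes this entry point with the triggering event.
    # For an S3 trigger, the event lists the uploaded objects.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"Processing s3://{bucket}/{key}")  # placeholder for real work
    return {"statusCode": 200, "body": json.dumps("done")}

You upload this handler with any dependencies as the function package, point the bucket's event notification at the function, and Lambda takes care of running it in a container whenever an object lands in the bucket.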

Serverless is perfect for running stateless microservices that you need only for a while, to do their job and then stop until they are needed again.

The Duel

It now looks as though, overall, serverless and containers have similar use cases. How do you choose the right solution, then? Which is better? Well, as usual, the only answer is: it depends. Let's take a look at the pros and cons of both solutions to help you make the right choice.

What are the advantages of containers? There are many. First, with containers, you can make your applications as large as needed. That doesn't mean you can't build something big with serverless; it just means you are more likely to hit bottlenecks and size and memory constraints with serverless than with containers. It's also much easier to refactor an existing application to run in containers in the cloud than to rewrite it for serverless computing. Running containers gives you flexibility and full control over the environment they run in. You can manage and allocate resources, set policies, control security and versions. You also retain the full ability to debug, test and monitor your application. All of this is impossible, or at least not as easy, with serverless computing. What else is on the pros side, and probably the most significant advantage of containers, is portability. You can move your containers and run them anywhere the container runtime, for instance Docker, is running. It doesn't matter whether that is in the cloud, another virtual environment or a bare-metal server. The rule is simple: if Docker is running, you can run Docker containers there. This also makes containers vendor-agnostic, which is another advantage over serverless.

Wherever there are pros, there also have to be cons, and so it is with containers. Let's start with the administration overhead. You need to run the containers on your own, apply security fixes and patches, and react when something goes wrong; it is all your responsibility. It may also be harder to get started with containers than with serverless computing: the learning curve is steep, and containerization is not easy to learn and even harder to master. Last but not least, running containers in AWS can be quite expensive, as they run on EC2 instances. If you need powerful ones, you can expect high bills.

And what about serverless computing? What are its advantages and disadvantages? Well, first of all, there is no administration overhead. You just upload your functions and run them without worrying about the underlying infrastructure. That also means you don't have to think about scalability, as the cloud provider handles it for you and serverless functions scale automatically. Time to market and software release time are also reduced with serverless. With serverless computing you pay per function execution only, so you don't pay for idle time. If your customers do not use your application during the night, for instance, you don't pay at all, whereas you would pay for the same idle time when running containers. However, serverless computing is not a good fit for long-running applications; in such cases, it can be more expensive than running virtual instances with containers on board.

And there is more on the cons side. First of all, serverless computing is a black box: you have no control over what is inside. Your application runs in a multitenant environment, which means your code runs on the same physical machine as other customers' code. Your application is also entirely dependent on the third-party provider, and moving it to another provider can be, and will be, difficult, if not impossible, without significant changes to the application. On the same side of the ledger you can put something called a cold start: before your function can run, the platform has to initialize resources. For Lambda, this means a container has to be started, and that takes time. There is also no guarantee that your functions will run on the same server, which may introduce delays or bottlenecks.
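A common way to reason about cold starts is that code at module level runs only when Lambda has to start a new container, while warm invocations reuse it. Here is a minimal Python sketch of that effect, assuming a hypothetical expensive setup step; it illustrates the behaviour rather than a fix the article prescribes:

import time

# Module-level code runs once per container, i.e. only on a cold start.
_CONTAINER_STARTED = time.time()
EXPENSIVE_RESOURCE = {"started_at": _CONTAINER_STARTED}  # placeholder for e.g. a DB client

def lambda_handler(event, context):
    # Warm invocations reuse the module-level setup above,
    # so only the handler body adds latency.
    age = time.time() - _CONTAINER_STARTED
    return {"container_age_seconds": round(age, 3)}

If the returned container age keeps resetting to near zero, each request is landing on a fresh container and paying the cold-start penalty.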

Who’s the winner?

It looks like there is no obvious winner. Both containers and serverless computing have pros and cons. Both have limits, and either can be the better or worse solution depending on the use case. So, what should you do?

Containers are perfect if you need greater control over your runtime environment. If you require specific versions of dependencies or applications, they are what you need. You can fully control the underlying operating system, programming language and runtime versions, and you can run containers with different software stacks inside. So if you need full control, containers are a good choice, and the same applies to debugging, testing and monitoring.

If your application is big and complicated, running it in containers may be a better choice than refactoring it to be serverless, especially when you want to move it from an on-premises environment to the cloud.

On the other hand, if your application leverages microservices and you are not afraid of vendor lock-in, you may consider going serverless. It is a perfect choice for handling backend tasks for a website or mobile application, or for processing real-time streams and uploads. Basically, whenever it is possible to run your function as a stateless microservice, it is worth thinking about serverless computing. It is easier to deploy and also cost-effective: when the application is not in use, it is shut down and you don't pay for it. What more do you need if you run a startup and are short on money? Serverless computing is what you want.

As you can see, it's difficult to say that serverless will overtake and replace containers, at least not yet. So far, containers have some advantages over serverless computing and are a better fit for some tasks. Serverless, however, is evolving quickly, and we will see what the future brings. But now it's over to you: think, analyze and choose what best fits your needs at the moment. And remember, that may change in the future.
