Will Serverless Computing Kill Docker Containers?

In the tech world, serverless computing is a hot topic at the moment. In fact, if you conduct a Google search, you’ll come across a lot of chatter about it (significantly more than about Docker containers).

But what’s really going on here? Is the buzz surrounding serverless computing justified?

Before we answer these questions, let’s first properly define what serverless computing and Docker containers are.

Despite what the name suggests, serverless computing isn’t actually serverless. And Docker containers don’t actually contain anything. Sounds strange, right? Let’s break it down even further.

What’s Serverless Computing?

According to TechTarget, serverless computing can be described as an event-driven application design and deployment paradigm where computing resources are offered as scalable cloud services.

This is a significant departure from traditional application deployments where the server’s computing resources represented a recurring fixed cost. This means that you had to pay regardless of how much work was actually being performed by the server.

When you go serverless, you only pay for what you use on the cloud, and you aren’t charged for idle time or downtime. It basically operates as a function-as-a-service (FaaS) platform that enables organisations to better manage their budgets, and this cost model has been its biggest selling point.

However, while you’re only paying for what you use, you’re still using a server to deploy and run your applications.

The pricing is dependent on the following parameters:

  • Script duration
  • Number of requests
  • Memory required for the function
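The three parameters above combine into a single bill. Here’s a minimal sketch of how that works, using illustrative rates loosely based on published AWS Lambda pricing; actual rates vary by region and change over time.

```python
# Sketch of how serverless pricing combines script duration, request count,
# and allocated memory. The rates below are illustrative only -- real
# provider pricing varies by region and changes over time.

PRICE_PER_REQUEST = 0.20 / 1_000_000   # illustrative: $0.20 per 1M requests
PRICE_PER_GB_SECOND = 0.0000166667     # illustrative: price per GB-second

def monthly_cost(requests: int, avg_duration_s: float, memory_mb: int) -> float:
    """Estimate a monthly bill from request count, duration, and memory."""
    gb_seconds = requests * avg_duration_s * (memory_mb / 1024)
    return requests * PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND

# 5M requests/month, 120 ms average duration, 256 MB of memory:
print(round(monthly_cost(5_000_000, 0.120, 256), 2))  # → 3.5
```

Notice that with zero requests the bill is zero, which is exactly the property the fixed-cost server model lacks.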

The total cost of ownership (TCO) of serverless computing is similar to that of virtual machines (VMs). However, VMs need to be kept running before any function request arrives, which creates waste whenever capacity sits idle.

Over time, this approach can end up costing a lot more than serverless computing. The serverless model eliminates the issue because it can scale immediately upon a request: once you configure the request with the function, you’re all done.
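The trade-off can be sketched as a simple break-even calculation. All of the prices below are made-up round numbers for illustration, not real quotes from any provider.

```python
# Back-of-the-envelope comparison of an always-on VM with per-use serverless
# billing. Both prices are hypothetical round numbers for illustration.

VM_MONTHLY = 35.0                  # hypothetical small VM, billed 24/7
COST_PER_INVOCATION = 0.0000025    # hypothetical all-in cost per function call

def cheaper_option(invocations_per_month: int) -> str:
    """Return which deployment model is cheaper at a given traffic level."""
    serverless = invocations_per_month * COST_PER_INVOCATION
    return "serverless" if serverless < VM_MONTHLY else "vm"

print(cheaper_option(1_000_000))    # light traffic: pay-per-use wins
print(cheaper_option(50_000_000))   # sustained heavy traffic: the VM wins
```

The crossover point depends entirely on the real rates, but the shape of the trade-off is the same: spiky or low-volume workloads favour serverless, while steady high-volume workloads can favour reserved capacity.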

Serverless services have existed since the mid-2000s when on-demand computing solutions like the Google App Engine and Zimki emerged. However, serverless computing didn’t go mainstream until Amazon launched AWS Lambda in 2014.

Since then, the competition followed suit by launching their own serverless offerings like Google’s Cloud Functions and Azure Functions.

When it comes to adoption, New Relic’s research suggests that serverless computing is now on the rise, with 70% of enterprises migrating a significant portion of their workloads to a public cloud and 39% using serverless computing.

According to Saurabh Bhatia, Senior Software Engineer at Airtasker, “serverless technology allows us to build event-driven pieces of code that are fully decoupled and can run out of our core system on AWS Lambda infrastructure.”

He further stated that they use serverless technology at Airtasker as “an event-based data transport mechanism that bridges various systems. We have used it for a massive extract, transform, load (ETL) migration transferring over 200 million notifications from PostgreSQL to Elasticsearch in under four hours.”
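The kind of decoupled, event-driven function described above can be sketched as a minimal Lambda-style handler. The event shape and the transform are hypothetical, purely to show the pattern of a function that receives a batch of records and reshapes them for a downstream store.

```python
# Minimal sketch of an event-driven, Lambda-style function: it receives a
# batch of records and reshapes them for a downstream store (ETL-style).
# The event shape and field names here are hypothetical.

def handler(event, context=None):
    """Transform each incoming notification record."""
    out = []
    for record in event.get("records", []):
        out.append({
            "id": record["id"],
            "message": record.get("message", "").strip(),
        })
    return {"transformed": out, "count": len(out)}

# Local invocation with a fake event, the way the platform would call it:
result = handler({"records": [{"id": 1, "message": " hello "}]})
print(result["count"])  # → 1
```

Because the handler is just a function of its event, it can run fully decoupled from the core system, which is what makes this style suit bulk migrations like the one described.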

What are Docker Containers?

Docker describes containers as an approach where software is packaged into standardised units for development, shipment, and deployment. Furthermore, a container image is a lightweight, stand-alone, executable package of software that includes everything it needs to run.

This means container images will come equipped with the following:

  • Code
  • Runtime
  • Settings
  • System tools
  • System libraries

Docker containers are available for both Windows- and Linux-based apps, and containerised software runs the same way regardless of the environment. When you use containers, you can reduce conflicts between teams running different software on the same infrastructure by separating the development and staging environments.

Docker is only five years old: since it was first released as open-source software in March 2013, it has grown from strength to strength. Supported by Amazon’s EC2 Container Service, launched the following year, Docker adoption among large enterprises is on the rise. According to New Relic, 40% are using containers and 34% are using container orchestration, which puts it almost on par with serverless computing.

But the concept of container technology has been around much longer than serverless computing. The Docker technology we have today is also based on past container technology standards that fell off the radar.

What are the similarities?

While a lot of people talk about serverless computing vs Docker, the two have very little in common: they aren’t the same kind of technology, and they serve different purposes.

First, let’s go over some of the benefits they share:

  • Minimum overhead
  • High performance
  • Requires minimum infrastructure provisioning

The list above is the primary reason why people debate the benefits of AWS Lambda versus Docker. However, it’s difficult to compare the two because they essentially solve different problems.

For example, serverless is better suited to new applications, because it’s difficult to rewrite or refactor existing applications as sets of serverless functions. Serverless computing solutions like AWS Lambda also come with built-in restrictions on function run time, deployment size, and memory usage.

It can also be almost impossible to use most monitoring tools with serverless functions, because there’s no access to the management system running the function’s container. This restricts you from conducting a performance analysis and makes debugging a fairly primitive endeavour.

Serverless computing also demands that you keep functions small to prevent them from taking up too much of the system’s resources. This approach is necessary to stop a relatively small number of high-demand users from overloading the system and locking everyone else out.

Serverless falls short when performance is important, as speed and response times are often uneven. This approach also supports only a limited list of programming languages natively. While that isn’t intrinsic to serverless computing at a fundamental level, it is a reflection of the practical constraints of current platforms.

On the other hand, Docker container-based applications can be large and highly complex, and existing applications can be containerised, although you will still need to engage in some serious refactoring (but not as much as you would on a serverless computing platform).

Container-based deployments also come with the added benefit of complete control over individual containers, the overall system, and the virtualised infrastructure. This enables you to better allocate and manage resources, set policies, and have enhanced control over security.

Having complete control over the Docker container environment also gives you an in-depth understanding of what’s going on both inside and outside the containers. This makes debugging and analysis seamless, as you can access a full range of resources as well as in-depth performance monitoring at all levels.

This means that you have the power to analyse each problem and tweak it on a microservice-by-microservice basis to better meet the unique performance requirements of your system. As the whole system can be monitored, it’s also easy to implement full analytics at all levels.

The origins of the serverless vs. Docker debate can be traced back to the launch of AWS Lambda. That event made the term “serverless” prominent, but the popularity of each technology can be directly attributed to individual enterprise use cases. As the adoption of serverless computing accelerated, questions arose about whether it would kill off Docker containers.

So will serverless kill Docker?

In reality, both serverless computing and Docker work best when they are deployed to work together.

For large-scale complex applications and application suites in an enterprise or Internet environment, a container-based application that’s combined with a full-featured system for the management and deployment of containers is the best choice.

Serverless computing is the right choice for individual tasks that can be accessed by outside services or run in the background. The two work well together because Docker-based container systems can be set up to forward such tasks to serverless applications, avoiding tying up the resources of the primary program.
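That forwarding pattern can be sketched as follows. The function name and payload shape are hypothetical, and the actual invoke call (via boto3, AWS’s Python SDK) is shown but commented out so the sketch runs without AWS credentials.

```python
import json

# Sketch of a containerised service handing a background task off to a
# serverless function instead of doing the work in-process. The function
# name and payload shape are hypothetical; the boto3 invoke call is
# commented out so this runs without AWS credentials.

def build_task_payload(task_type: str, data: dict) -> str:
    """Serialise a background task for the serverless side."""
    return json.dumps({"task": task_type, "data": data})

def offload(task_type: str, data: dict) -> str:
    payload = build_task_payload(task_type, data)
    # client = boto3.client("lambda")
    # client.invoke(
    #     FunctionName="background-worker",  # hypothetical function name
    #     InvocationType="Event",            # async: fire and forget
    #     Payload=payload,
    # )
    return payload

print(offload("send-email", {"to": "user@example.com"}))
```

Using the asynchronous `Event` invocation type means the container never waits on the result, which is what keeps the primary program’s resources free.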

Luke Bennett, GM Core Engineering at Telstra agrees that we’re not heading for a single solution: “We are cognisant of vendor lock-in, and a lack of portability across cloud platforms is inherent with serverless at the moment. For enterprises embracing multi-cloud, such as Telstra, serverless won’t overtake containers in the short to medium term; and we’ll see wider uptake as the serverless capabilities across cloud providers is more standardised.”

He added that “smaller businesses and start-ups running short-lived or just-in-time functions will see benefits with serverless whilst they build their utilisation and ecosystem. Serverless and containers are complementary. Serverless abstracts out container complexity from the programming model, however, serverless cannot exist without physical machines, VMs, and containers, and forms part of the wider technology toolkit for developers to consume. As we move to a state-aware and longer running container stack, we will see the two co-exist for the respective strengths.”

Ollie Brennan, Head of Development at The Iconic, further supported this approach, stating that “it has taken a few iterations of technology to get to where we wanted to be. I think the biggest learning for me was that without Docker, microservices are a pipe dream. It’s possible without it, but unless you can containerise services, cost quickly becomes unmanageable.”

The idea that serverless computing will kill Docker containers seems to be unfounded. However loudly the debate rages online, container-based serverless computing looks like the future.
