
Container or VM? How to Choose the Right Option in 2023

A guide to when and where each technology is appropriate in today’s hybrid cloud environments.
May 17th, 2023 8:42am

A few years back, most technology articles would have you thinking that Linux containers and virtual machines were diametrically opposed components in the data center. That’s natural when a new technology is adopted: The hype cycle can push such innovations into every nook and cranny of the industry, looking for new wins over old software and hardware combinations.

You may remember when JavaScript was going to take over the server side, or when virtual reality was going to revolutionize education. In truth, these technologies eventually found comfortable areas of use rather than supplanting everything that came before. Things settle over time, and it can be tricky to discern where a given technology will end up most useful and where it will be supplanted by better options further down the line.

Now that Linux containers and virtual machines are no longer brand new, they’ve become well-understood tools for the average software developer to weigh for various scenarios. We’d like to offer a guide to when and where each technology is appropriate in today’s hybrid cloud environments.

Big or Small?

Perhaps the easiest way to make your decision is according to application size and complexity. Containers are, among other things, an application packaging technology. Containers can be deployed without Kubernetes, directly onto an operating system, and there are often very good reasons for using them this way. This is part of our edge strategy with Red Hat Enterprise Linux and Ansible, too: Containers are an easy, replicable way to deploy software while minimizing drift and moving parts.
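For instance, running a containerized workload straight on a host, with no orchestrator involved, is little more than invoking the container engine. Here is a minimal sketch in Python, assuming Podman is installed on the host; the image name, container name and port mapping are placeholders chosen for illustration:

    import subprocess

    # Pull and run a container directly on the host operating system,
    # with no Kubernetes involved. The image, name and port mapping are
    # illustrative placeholders, not a recommendation.
    IMAGE = "registry.access.redhat.com/ubi9/httpd-24"  # assumed example image

    subprocess.run(["podman", "pull", IMAGE], check=True)
    subprocess.run(
        [
            "podman", "run",
            "-d",                  # run detached, in the background
            "--name", "edge-web",  # hypothetical container name
            "-p", "8080:8080",     # publish the container port on the host
            IMAGE,
        ],
        check=True,
    )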

There are other similar and competing technologies with many of the same capabilities, such as unikernels and WebAssembly (Wasm). So while containers might be the right way to deploy an application today, there may be some movement around this model in the future as it is optimized and takes on new types of deployment models.

Some applications are, quite simply, too big and complex to fit into a container as is. We colloquially refer to these as monoliths. It should be noted that there is no technical limitation here: There’s no CPU/memory threshold that you cross and end up disqualified. Rather, this is based on the value of the investment. For example, a single installer that deploys a database plus middleware plus $thing1 and $thing2, etc., onto a single server can be challenging to containerize as is. “Modernization” of the application may be required to decouple the components and/or adopt application frameworks and/or runtimes that are friendlier to containerization. One example of this would be moving a Java application from Spring Boot to Quarkus.

For the Developers

Developers and administrators, regardless of whether they’ve adopted newfangled cloud native architectures or DevSecOps methodologies, should embrace containers for many reasons. Speed, security, portability and simplicity are among the benefits of application containerization. And yet, this does not mean throwing virtual machines completely overboard.

The real question becomes, “Do I want to deploy my containerized application to Kubernetes or directly to a (virtualized) operating system?” There are many factors to consider here. One is the application’s requirements. Does the application need to run continuously as a single instance, without interruption? Kubernetes does not migrate application components between nodes non-disruptively; they are terminated and restarted. If that isn’t behavior your application can tolerate, then Kubernetes is not a good fit.
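If you want to see that behavior for yourself, watching pod events during a node drain or rolling update makes it plain: the old pod object is deleted and a replacement is created, rather than the running process being moved. A rough sketch using the official Python client (the kubernetes package) follows, with the namespace as a placeholder:

    from kubernetes import client, config, watch

    # Assumes a working kubeconfig; the namespace is a placeholder.
    config.load_kube_config()
    v1 = client.CoreV1Api()

    # During a drain or rolling update you will see DELETED events for the
    # old pods and ADDED events for their replacements: new pod objects,
    # not a live migration of the original process.
    w = watch.Watch()
    for event in w.stream(v1.list_namespaced_pod, namespace="demo"):
        pod = event["object"]
        print(event["type"], pod.metadata.name, pod.status.phase, pod.spec.node_name)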

It’s also important to consider the state of the application’s various components. If the application in question relies on third-party components, those may limit the use of containers. Many third-party vendors, especially in more conservative, VM-centric industries, are slow to create Kubernetes-ready or Kubernetes-compatible versions of their software. This means you can either deploy a VM or take on the onus of supporting their software in Kubernetes yourself.

And even before you evaluate these options, it’s important to take a serious look at the skills available inside your organization. Does your team possess the skills and competency to handle Linux containers? Do you have, or are you willing to build or acquire, the necessary expertise for Kubernetes? This extends to API-driven consumption and configuration. Do your application and development teams need or want the ability to consume and configure the platform using APIs?

This is possible with private cloud, public cloud and Kubernetes alike, but it is often more complex and harder on premises, requiring a lot of glue from specialized automation teams. When it comes to the public clouds, your team needs specific expertise in each public cloud it’s using, adding another layer of complexity to manage. This is an area where Kubernetes can homogenize the experience and further enable portability.

Infrastructure Efficiency

In many, if not most, cases, a “web scale” application that has tens to thousands of instances is going to be much more efficient running on a Kubernetes cluster than in VMs. This is because the containerized components are bin-packed into the available resources, and there are fewer operating system instances to manage and maintain.
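As a back-of-the-envelope illustration, consider bin packing small services onto shared worker nodes versus giving each instance its own VM with its own operating system overhead. Every number in this sketch is an assumption chosen for the example, not sizing guidance:

    import math

    # Illustrative arithmetic only; all figures below are assumptions.
    instances = 200           # small service replicas to run
    cpu_per_instance = 0.25   # cores requested per container
    mem_per_instance = 0.5    # GiB requested per container

    # Option 1: one VM per instance, each carrying its own OS overhead.
    vm_os_cpu, vm_os_mem = 0.5, 1.0   # assumed per-VM overhead
    vm_cpu = instances * (cpu_per_instance + vm_os_cpu)
    vm_mem = instances * (mem_per_instance + vm_os_mem)

    # Option 2: bin pack the containers onto shared worker nodes.
    node_cpu, node_mem = 16, 64       # assumed capacity per worker node
    nodes = max(
        math.ceil(instances * cpu_per_instance / node_cpu),
        math.ceil(instances * mem_per_instance / node_mem),
    )

    print(f"VM approach:  {vm_cpu:.0f} cores, {vm_mem:.0f} GiB across {instances} OS instances")
    print(f"K8s approach: {nodes} worker nodes (shared OS), plus the control plane")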

Furthermore, Kubernetes facilitates the scaling up and down of applications more seamlessly and with less effort. While it’s possible to create new VMs to scale new instances of an application component or service, this is often far slower and harder than with Kubernetes. Kubernetes is focused on automating at the application layer, not at the virtualization layer, though that can be done as well with KubeVirt.
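For example, scaling a Deployment is a single declarative change. Here is a brief sketch using the Kubernetes Python client, where the Deployment name, namespace and replica count are placeholders:

    from kubernetes import client, config

    # Assumes a working kubeconfig; names and counts are placeholders.
    config.load_kube_config()
    apps = client.AppsV1Api()

    # Declare the desired replica count; Kubernetes schedules or removes
    # pods to converge on it. No new VMs need to be provisioned by hand.
    apps.patch_namespaced_deployment_scale(
        name="web-frontend",
        namespace="demo",
        body={"spec": {"replicas": 10}},
    )

The scheduler then handles placement; compare that with provisioning, patching and registering a new VM for each additional instance.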

Infrastructure efficiency also has cost implications. These will be different for each organization, but for some, reducing the number of VMs will affect what they’re paying to their operating system vendor for licenses, to their hypervisor vendor and to their hardware vendor. This may or may not be offset by the cost of Kubernetes and the talent needed to manage it, however.

And there are still other considerations when it comes to security. Kubernetes uses a shared-kernel model, where many containers, representing many applications, run on the same nodes. This isn’t to say they’re insecure — Red Hat OpenShift and containers deployed to Red Hat operating systems make use of SELinux and other security features and capabilities.

However, sometimes this isn’t enough to satisfy security requirements and compliance needs. That leaves several options for further isolation: Deploy many Kubernetes clusters (which a lot of folks do), use specialized technologies like Kata Containers, or use full VMs.
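As a sketch of the second option, a pod can be scheduled onto a VM-isolated runtime by referencing a RuntimeClass that a cluster administrator has configured. The class name ("kata"), image and other details below are assumptions for illustration:

    from kubernetes import client, config

    # Assumes the cluster has a RuntimeClass named "kata" already set up by
    # an administrator; the pod details are placeholders.
    config.load_kube_config()
    v1 = client.CoreV1Api()

    pod = client.V1Pod(
        api_version="v1",
        kind="Pod",
        metadata=client.V1ObjectMeta(name="isolated-app"),
        spec=client.V1PodSpec(
            runtime_class_name="kata",  # run this pod in a lightweight VM sandbox
            containers=[
                client.V1Container(name="app", image="registry.example.com/app:1.0"),
            ],
        ),
    )
    v1.create_namespaced_pod(namespace="demo", body=pod)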

No matter what your organization’s requirements are, or whether you choose containers or virtual machines for your applications, there is one fundamental rule that is always at play in the enterprise software world: Change is hard. Sometimes, if something is working, there’s no reason to move it, update it or migrate it. If your applications are running reliably on virtual machines and there’s no corporate push to migrate them elsewhere, perhaps they are fine where they are for as long as they can reliably be supported.

Sometimes, the best place for change inside an organization isn’t deep in the stacks of legacy applications, it’s out in the green fields where new ideas are growing. But even those green fields have to connect to the old barn somehow.

The actual technology being used doesn’t necessarily place something in those green fields, however. That’s why it is important to find a way to support both containers and virtual machines inside your environments; the only real mistake you can make is to ignore one of these technologies completely.
