Virtualization has changed the face of modern computing, improving system utilization, decoupling applications from the underlying hardware and enhancing workload mobility and protection. But hypervisors and VMs are just one approach to virtual workload deployment. Container virtualization is quickly emerging as an efficient and reliable alternative to traditional virtualization, providing new features and new concerns for data center professionals.
The difference between containers and VMs lies primarily in the location of the virtualization layer and the way OS resources are used.
Containers vs. VMs: Understanding the differences
Containers and VMs are simply different ways of carving up and using the compute resources -- usually processors, memory and I/O -- already present in a physical computer. Although the goal is essentially the same for both, the result is two notably different approaches that offer unique characteristics and tradeoffs for enterprise workloads.
VMs. VMs rely on a hypervisor, a software layer normally installed directly atop the bare-metal system hardware -- a configuration dubbed a Type 1, or bare-metal, hypervisor. This has led to hypervisors, such as VMware vSphere -- ESXi -- and Microsoft Hyper-V, being perceived as OSes in their own right. Once admins install the hypervisor layer, they can provision VM instances from the system's available computing resources. Each VM then receives its own unique OS and workload. Thus, VMs are fully isolated from one another -- no VM is aware of or relies on the presence of another VM on the same system -- and malware, application crashes and other problems affect only that VM. Admins can migrate VMs from one virtualized system to another without regard for the underlying hardware or OSes.
A system might ultimately be provisioned with numerous VMs. Often, the first VM is the host VM, used for system management workloads, such as Microsoft System Center. Subsequent VMs contain other enterprise workloads, such as database, ERP, customer relationship management, email server, media server, web server or other business applications.
Containers. The container environment is arranged differently. With containers, a host OS -- usually a Linux variant -- is installed on the system first, and then a container layer, such as Docker, is installed atop the host OS. Once admins install the container layer, they can provision container instances from the system's available computing resources and deploy enterprise applications in the containers. However, every containerized application shares the same underlying host OS; by comparison, every VM gets its own unique OS. Although the container layer does provide a level of logical isolation between containers, the common OS can present a single point of failure for all containers on the system. As with VMs, containers are easily migrated between physical systems with a suitable OS and container layer environment.
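As a sketch of this layering, assuming Docker as the container layer and a hypothetical application file `app.py`, a container image is defined in a Dockerfile; note that the image supplies only an application and its userland dependencies, never a kernel:

```dockerfile
# Hypothetical example -- packages one application, not a full OS.
# The base image provides a userland only; the kernel comes from the
# host OS that the container layer runs on.
FROM python:3.11-slim
WORKDIR /app
COPY app.py .
CMD ["python", "app.py"]
```

Building and running it -- `docker build -t myapp .` followed by `docker run myapp` -- provisions a container instance directly from the host's resources. Contrast this with a VM, which would first boot its own guest OS before the application could start.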
Containers vs. VMs: Comparing benefits and disadvantages
There are five primary considerations when comparing VMs and containers: resource efficiency, scalability, versatility, portability and security.
Resource efficiency. Resource efficiency is simply a matter of how many compute resources -- processors, memory and I/O -- it takes to operate a virtualized instance. Both VMs and containers can vary dramatically in their use of resources, depending on the demands of the workload being deployed. However, VMs generally demand more resources than containers running similar workloads because every VM requires its own OS, and that OS imposes additional overhead -- sometimes substantial overhead. Consider that a computer hosting 10 VMs must run 10 OSes, whereas containers share the same underlying OS kernel, so only one system OS is needed. The typical result is far smaller container instances and the ability to run many more of them -- dozens or even hundreds -- on a given computer.
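The 10-VMs-versus-10-containers comparison above can be put into back-of-envelope numbers. The figures below are illustrative assumptions, not measurements -- a guest OS is assumed to consume roughly 1 GB of memory, and the single shared host OS roughly the same:

```python
# Back-of-envelope OS overhead for 10 isolated workloads.
# All figures are assumed for illustration, not measured.
GUEST_OS_MEM_GB = 1.0   # assumed memory footprint of one guest OS
HOST_OS_MEM_GB = 1.0    # assumed footprint of the single shared host OS
WORKLOADS = 10

# VMs: every instance carries its own OS, so overhead scales linearly.
vm_os_overhead = WORKLOADS * GUEST_OS_MEM_GB

# Containers: all instances share the one host OS kernel.
container_os_overhead = HOST_OS_MEM_GB

print(f"OS overhead for {WORKLOADS} VMs:        {vm_os_overhead:.0f} GB")
print(f"OS overhead for {WORKLOADS} containers: {container_os_overhead:.0f} GB")
```

Under these assumptions, the VM deployment spends 10 GB of memory just on OSes, while the container deployment spends 1 GB -- and the gap widens as the instance count grows.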
Scalability. Because containers are typically much smaller than VMs, admins can host far more containers than VMs on a given computer system, so containers have the edge in scalability. But time is also a factor. A VM can take minutes to start up and is consequently used for relatively long-duration tasks, running anywhere from hours to months. The comparatively small size -- resource efficiency -- of containers typically results in far faster load and startup times. Admins can deploy most containers in seconds and deploy large numbers of containers in short order, making containers ideal for highly scalable, on-demand workloads or workload components with relatively short operational durations, often running anywhere from seconds to minutes.
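The on-demand scaling pattern described above can be sketched with Docker Compose, assuming Docker is installed; the service name `web` and the `nginx` image are stand-ins for any containerized workload:

```yaml
# compose.yaml -- one service definition that can be scaled to N replicas
services:
  web:
    image: nginx:alpine   # a small image keeps startup times in seconds
    ports:
      - "80"              # publish an ephemeral host port per replica
```

Running `docker compose up -d --scale web=20` then starts 20 identical container instances in short order -- a scaling operation that would take minutes per instance with full VMs.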
Versatility. VMs often get the edge in workload versatility because VMs can run a wider variety of OSes, so a more diverse array of enterprise workloads can be deployed and supported effectively. By comparison, containers typically rely on some distribution of Linux -- though this is changing -- which can limit the use of containers for workloads or workload components that don't use the same OS kernel. For example, if a server deploys Debian Linux 9.9 as the underlying OS, all of the containers deployed atop that OS must support the Debian Linux 9.9 kernel or risk performance, stability or other problems. Versatility limitations can usually be addressed by running containers within a VM, with the VM running the desired OS for those containers.
Portability. Portability is the act of moving a virtual instance from one system to another, and both containers and VMs provide ample portability, or live migration, between suitable environments -- that is, between systems running compatible hypervisors or container engines. The issue here is speed. Containers can be much smaller than VMs, so containers can migrate faster between systems. However, the convenient scalability of containers often means that containers are deployed in groups, so ultimately, there is no compelling advantage for containers over VMs in migration.
Is the containers vs. VMs debate over?
Some users are converging containers and VMs to take advantage of the performance containers provide and the security VMs offer. Packaging a container in a VM adds another abstraction layer, which improves security by preventing a kernel breakout from affecting multiple containers.
Containers vs. VMs: Security issues
The last issue to consider is security, and the discussion deserves its own section. It's no secret that workload and data security is a mission-critical issue for almost every business. Simply keeping a workload running properly is often a matter of business continuity and corporate compliance. And the ever-present threat of hackers, malware, intrusion and other malicious activities makes it vital to select hardened environments for enterprise applications, both to prevent and to contain any security flaws or issues that might arise.
VMs are generally regarded as the most secure and resilient platform for workloads. Hypervisor technologies are well-proven, and the logical isolation that hypervisors provide between VMs ensures that every VM exists as its own separate logical server with its own OS and drivers. However, all of the elements running in and around the VM -- the OS, application, drivers, authorization and authentication, and network traffic -- are still subject to security flaws that must be constantly addressed, just as they would be in any traditional physical deployment. When the highest level of isolation is required for security, VMs generally have the edge.
Containers are agile and fast, but all containers run atop a common OS. This is technically fine, but any bugs or security flaws in the OS can potentially expose all of the containers running atop the common OS kernel, so the underlying kernel poses a single point of vulnerability. At a minimum, systems used for containers typically employ a hardened, well-proven OS, and administrators apply OS security updates and patches only after extensive testing and validation. Security tactics, such as intrusion detection and prevention, are also typically implemented to guard the server. In actual practice, security can be augmented by running groups of containers in VMs, mixing the benefits of containers with the enhanced isolation of VMs.
Containers vs. VMs: Choosing the best option
The choice of containers vs. VMs is not a mutually exclusive one. Containers and VMs can readily coexist in the same data center environment, and even on the same server, so the two technologies are considered complementary, expanding the available tool set of application architects and data center administrators to provide unique advantages for the most appropriate workloads. The trick, then, is matching the virtual instance to the right workload.
Instance choice is largely driven by architectural goals. Traditional monolithic applications -- still quite common and appropriate for many enterprise applications -- generally work well in VMs and with common high availability techniques, such as clusters. Such traditional applications are often intended to run for long durations and benefit from the security that VM isolation can provide.
By comparison, small, mobile applications can function quite well in containers, where characteristics such as fast startup, high scalability and potentially short operational times work to the containers' advantage. Container adoption is also increasingly driven by emerging software design models, such as microservices architectures. Microservices enable an application's design to be broken up, developed, deployed and supported as separate functional components, each in a fast, highly scalable container that can also be independently scaled and migrated. Thus, containers have become attractive for flexible and scalable application designs.