Many early adopters find containers and VMs pair well together -- like chocolate and peanut butter. But instead of chocolate-covered peanut butter, the future convergence of these two technologies may look entirely different and result in something more comparable to Nutella.
Both containerization and hypervisor-based virtualization offer the ability to abstract applications from the underlying server hardware, but organizations aren't looking at containers vs. VMs. More often, they're deciding how to best converge the two -- often in surprising ways.
Docker, the company whose technology renewed interest in containers, has said it plans to remain neutral on the question of exactly how to run containers. But other organizations are staking out firm stances to find a balance that emphasizes the advantages of both.
The premise behind the containers vs. VMs discussion stems from the theory that bare metal containers -- those that are created from operating systems running on physical hardware -- can maximize resource efficiency by reducing redundant operating system information. Unlike a VM, each container instance does not need its own independent operating system, reducing overhead and allowing administrators to pack more workloads onto less physical hardware. In practice, there are still many hurdles to running production workloads at scale on bare metal containers.
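The kernel sharing at the heart of this efficiency argument is easy to see from the command line. This is an illustrative sketch that assumes a Linux host with Docker installed -- not a setup described in the article:

```shell
# On a Linux host with Docker installed (an assumption, not part of the
# article's setup): every container reports the host's kernel version,
# because containers share the host kernel rather than booting their own
# operating system the way a VM would.
uname -r                          # host kernel version
docker run --rm alpine uname -r   # identical: the container has no kernel of its own
```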
Because containers on the same physical host share an operating system kernel, a security breach of one container could compromise others on that host. Additionally, mature VM management tools, such as VMware's vSphere, offer production-quality management and reliability features -- e.g., live migration and high availability -- that containers lack.
A familiar façade
One way to address many of these challenges is to simply package a container within a VM. Administrators can manage each container separately with a one-container-per-VM model and use existing virtualization management software. And, since each container relies on a VM as an additional abstraction layer, administrators can avoid the security concerns of multiple containers sharing the same OS kernel.
Unsurprisingly, VMware, which has a vested interest in ensuring that VMs remain the focal point of tomorrow's data centers, claims that VMs and containers are better together. The company is currently developing two different approaches, both of which emphasize a container nested within a VM.
"The question we're trying to answer is how can you deliver this new technology -- allowing developers to go fast, but still [maintaining] that control, governance, resource isolation and SLAs [service-level agreements] in a way that's tractable," said Kit Colbert, vice president and general manager of the cloud-native apps business group at VMware.
Last year, VMware introduced a technology preview, vSphere Integrated Containers (VIC), which allows administrators to deploy and manage containers from VMware's familiar vSphere interface. vSphere Integrated Containers allows for the creation of what the company calls a virtual container host -- a VM running a lightweight Linux OS on which a container can be rapidly provisioned.

"Once you've created this virtual container host, everything else is normal vSphere from there on out," Colbert said. "The goal is to enable our core vSphere audience to be able to leverage what they already have without significant retooling."
While it's still a technology preview and not yet generally available, VMware customers say it shows promise as a way to bring containers to their existing infrastructures.
Containers, specifically vSphere Integrated Containers, are on the near horizon for Arc Innovations, a New Zealand electrical utility provider, said system engineer Darran Provis.
"Containerization on VMware, especially VIC, will help with distributing memory and CPU resources by allowing us to move various site components to balance the workload," Provis said. "If you are using a heavy application -- for example, Tomcat/Solr with Apache -- you are often stuck with a single VM with large resource requirements. By containerizing Tomcat/Solr separately from Apache, we can balance that load across the estate."
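Splitting a stack along the lines Provis describes can be sketched with a Docker Compose file. The image tags, service names and port mapping below are illustrative assumptions, not Arc Innovations' actual configuration:

```yaml
# Hypothetical sketch: Apache, Tomcat and Solr as separate containers that
# can be scaled and placed independently, rather than one oversized VM.
version: "2"
services:
  apache:
    image: httpd:2.4
    ports:
      - "80:80"
    depends_on:
      - tomcat
  tomcat:
    image: tomcat:8
    depends_on:
      - solr
  solr:
    image: solr:6
```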
Provis, who also recently worked for a hosting provider, said the technology has a strong draw for service providers.
"Being able to auto-deploy a container host into a resource pool and expose the container host to a customer is a great advantage and all billable on the resource pool usage. The rest is then up to the customer -- to deploy their containers -- which can be done remotely," he said.
The hospitable host
Another advocate of the container within a VM approach is Intel's Open Source Technology Center. Intel's Clear Containers project approaches the containers vs. VMs conversation from a different angle, asking the question: How can VMs serve as better container hosts?
Arjan van de Ven's team at Intel talked with container users about their performance requirements and found that boot time was the primary concern, followed by memory consumption. Start-up time and density are key for containers because most containerized workloads are short-lived, he said. In a microservices architecture, containers are spawned to perform a specific task and removed once they're finished. His team then built tools to measure how a VM spends the first seconds of its life, to see whether start-up time could be improved.
"It turns out most of the time went to emulate the floppy drive," van de Ven said. "Two or three seconds for the BIOS to initialize it, and then the OS would try to find the floppy drive for two or three seconds. So a lot of time was spent on things that, for containers, we don't care about."
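Van de Ven's observation is why minimal VMs built to host containers strip out legacy device emulation. As a hedged illustration -- this is not Intel's actual Clear Containers configuration -- stock QEMU can be told to skip default legacy devices and boot a kernel directly:

```shell
# Illustrative only -- not the Clear Containers launch command.
# -nodefaults and -no-user-config drop default emulated devices and host
# configuration; passing a kernel directly skips the BIOS boot-device
# search that van de Ven describes.
qemu-system-x86_64 \
    -machine pc,accel=kvm \
    -nodefaults -no-user-config -nographic \
    -kernel vmlinuz -initrd initrd.img \
    -append "console=ttyS0 quiet"
```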
Both Clear Containers and VMware's vSphere Integrated Containers address management and security concerns while retaining container portability and ensuring faster boot-up time compared to a traditional VM. However, neither can match the pure efficiency of multiple containers sharing the same physical host, said Lars Herrmann, GM of the integrated solutions business unit at Red Hat.
"Containerization is an amazing application delivery methodology around which we can build application models and workflows -- basically getting to a DevOps world," Herrmann said. "However, the architectural paradigm would typically be that you don't have a single container running within a single virtual machine. That would leave a lot of money on the table."
Virtualization will continue to play an important role in the foreseeable future, Herrmann said, but the advantages of running containers that aren't tied to individual VMs outweigh many of the drawbacks. Today, it's common for organizations to use different tools to monitor and manage in-house and cloud applications.
"Containerization can provide a standardized fabric around the application that works the same way across different environments," Herrmann said.
Rather than develop new tools to manage and secure containers -- when robust tools already exist for VMs -- Intel's van de Ven said it may make more practical sense to evolve VMs into better container hosts. That advice isn't likely to fly with the crop of startups looking to enter the container management scene. One of those startups has instead embraced the idea of flipping the container-within-a-VM construct on its head.
Containing the VM
Rancher Labs, a container management software provider based in Cupertino, Calif., offers customers a way to manage their VM-bound workloads alongside their containers from the company's existing platform -- instead of following a containers vs. VMs approach. RancherVM is an open source project the company developed that packages KVM images inside Docker images and manages these VM containers using familiar Docker commands.
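RancherVM's published examples follow roughly this shape -- the image name, volume path and flags here are recalled from the project's documentation and may have changed, so treat this as a sketch rather than a working recipe:

```shell
# Hedged sketch of launching a KVM guest packaged as a Docker image,
# RancherVM-style. Requires /dev/kvm on the host; exact image names and
# options may differ from the current RancherVM documentation.
docker run -e "RANCHER_VM=true" \
    --cap-add NET_ADMIN \
    --device /dev/kvm:/dev/kvm \
    --device /dev/net/tun:/dev/net/tun \
    -v /var/lib/rancher/vm:/vm \
    rancher/vm-rancheros
```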
Given the open source roots of containers, the technology has also seen smaller experiments from organizations and even groups of private users. Enteon, a cloud management and provisioning software provider based in St. Louis, Mo., developed an approach to lend VMs the portability advantages inherent to Docker containers.
The approach, which it calls the cloud-native VM, effectively allows users to run legacy applications designed for a VM within a container and -- with the help of other open source projects, including Weave and CRIU -- seamlessly migrate that workload across different platforms, including public cloud providers.
While the cloud-native VM isn't a true VM in the fullest sense of the word -- it still shares its host OS kernel, meaning you cannot install an independent Windows Server operating system, for example -- the net effect is that "it looks, acts and quacks like a VM from a user's perspective," said Jim McBride, chief cloud architect at Enteon.
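The migration piece of this rests on CRIU's checkpoint/restore mechanism. In its simplest standalone form -- separate from Enteon's tooling, which the article doesn't detail -- CRIU freezes a process tree to disk so it can be resumed elsewhere:

```shell
# Standalone CRIU sketch (requires root and a CRIU-capable kernel).
# Enteon's actual pipeline layers Weave networking and container
# packaging on top of this basic mechanism.
criu dump -t "$PID" --images-dir /tmp/ckpt --shell-job   # freeze process $PID to disk
# ...copy /tmp/ckpt to the destination host...
criu restore --images-dir /tmp/ckpt --shell-job          # resume it where it left off
```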
"We thought the idea of cloud provider independence was strong, but we had a hard time getting people to bite on that," McBride said. "I think something like this has the ability to massively disrupt cloud hosting or the way vendors deploy and support cloud services."
Many of these open source projects, such as CRIU, blur the boundaries of what containers are capable of and how they're used, McBride said. Development continues on Intel's Clear Containers project, for example, with future updates such as the ability for a container to directly access hardware -- such as a network interface card -- or support for live migration, van de Ven said.
"Some people want a hybrid of a container and a virtual machine, and we're going to help support that idea with this technology," van de Ven said.
Some users can look past the containers vs. VMs debate and see this blend of the two technologies as the best of both worlds. Arc Innovations' Provis expects more innovations on both sides -- new tweaks to hypervisors to allow VMs to better serve as container hosts and updates to container technology that will address management and security concerns.
"Containerization is an exciting technology that we will see evolve over time," Provis said. "In what direction will be anyone's guess."
Nick Martin is executive editor of Modern Infrastructure. Contact him at firstname.lastname@example.org.