Learn to integrate VMs and containers in your data center

The VMs vs. containers debate is popular among admins, but integrations of the two technologies have begun to emerge with the help of tools such as Kata Containers and Virtlet.

In modern IT, server-level virtualization takes the form of VMs and containers, but the two have radically different approaches to enabling software platforms. VMs and containers can exist together in the same data center environment and even on the same server, but integration options are often limited.

Virtualization is an essential technology for any large business. It works by using a hypervisor to abstract processor, memory, I/O, storage and network resources from the underlying hardware. This enables IT administrators to improve resource utilization, enhance management and maintain a flexible environment for their workloads and services.

VM and container technologies seek to achieve similar goals and offer similar benefits. Admins should consider VM-container integration options within their data centers.

VMs vs. containers

To appreciate some of the challenges with integrating VMs and containers, it's worth reviewing the differences between them.

VMs. Server virtualization relies on a hypervisor, which an admin typically installs on the server's bare-metal hardware. The hypervisor works to abstract all of the server's resources, essentially turning CPUs and memory into virtual representations. When admins create a VM, the hypervisor provisions virtualized resources to the VM to create an independent, fully isolated environment.

Each VM constitutes a separate logical server and can hold a separate OS, libraries, drivers and applications. Each VM can run different OSes, enabling radically different OSes and environments to share server hardware. The hypervisor can create numerous VMs within a system, limited only by the physical resources available on the server and the level of performance demanded from each workload. VMs don't interact, and the hypervisor must maintain this strong level of isolation.
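
To make this concrete, the following minimal sketch uses the libvirt Python bindings to enumerate the VMs that a libvirt-managed hypervisor, such as KVM, has provisioned, along with the virtual CPUs and memory allocated to each. The qemu:///system connection URI and the libvirt-python dependency are assumptions about the local setup, not a requirement of any particular product.

```python
# Minimal sketch: list the VMs a libvirt-managed hypervisor (such as KVM) has
# provisioned, with the vCPUs and memory allocated to each one.
# Assumes the libvirt Python bindings (pip install libvirt-python) and a local
# libvirt daemon reachable at qemu:///system.
import libvirt

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
try:
    for dom in conn.listAllDomains():
        # info() returns [state, max memory (KiB), used memory (KiB), vCPUs, CPU time]
        state, max_mem_kib, _, vcpus, _ = dom.info()
        running = "running" if state == libvirt.VIR_DOMAIN_RUNNING else "not running"
        print(f"{dom.name()}: {vcpus} vCPU(s), {max_mem_kib // 1024} MiB, {running}")
finally:
    conn.close()
```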

Admins generally use VMs to host large, traditional monolithic applications such as databases, email servers and other long-lived enterprise workloads that require strong isolation. As a result, VMs can consume large amounts of resources because of the OS and application content within them, and they can take a significant amount of time to provision and boot. Because of this, a given server can usually host only 10 or fewer VMs.

Containers. Similar to VMs, containers abstract and provision system resources to create virtual instances, but container architecture differs vastly from VM architecture. With containers, engineers install a container engine -- such as Docker -- atop a single OS. The container engine, and the containers that admins create and run atop it, share the services and features of that same underlying OS.

Since containers do not require separate OSes, each container instance is smaller than a VM and requires far fewer resources. This enables dozens -- or perhaps hundreds -- of containers to reside on the same physical system, and each container can load and move much faster than a VM. This makes containers far more dynamic than VMs.

Admins typically use containers for short-lived purposes. For example, an ephemeral container might exist for minutes or even seconds to accomplish a specific task, and then the admin can delete that container.
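
As a rough illustration of that short-lived pattern, the sketch below uses the Docker SDK for Python to run a throwaway container that executes a single command and is removed as soon as it exits. The alpine image and a locally reachable Docker engine are assumptions about the environment.

```python
# Minimal sketch of an ephemeral, task-oriented container using the Docker SDK
# for Python (pip install docker). The container starts, runs a single command,
# and is deleted as soon as it exits -- it exists only for seconds.
import docker

client = docker.from_env()  # talk to the local Docker engine

# remove=True deletes the container automatically once the command finishes.
output = client.containers.run("alpine:latest", ["date", "-u"], remove=True)
print(output.decode().strip())
```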

In addition, each container image file includes all the content and dependencies required to operate the related application. This enables admins to deploy the container on any system with a suitable container engine, which provides extensive mobility. However, containers are not as logically isolated as VMs. Any weaknesses or vulnerabilities in the common OS can jeopardize all containers running there.

Given the differences between VMs and containers, the two technologies can coexist, but they can only integrate indirectly.

Run containers in VMs to achieve desired isolation

One way for admins to integrate VMs and containers is to run containers within a VM. This is possible because each VM runs its own OS, which can support a container engine, such as Docker, that in turn runs an array of containers within the VM instance.

Containers share a common OS, but running containers within a VM isolates them and limits the scope of any vulnerability if a problem occurs. For example, if 100 containers share an OS kernel and that OS fails, all 100 containers can become compromised. However, if a VM that holds 10 or fewer containers becomes compromised, only those containers are affected; the failure does not reach the other VMs running different sets of containers on the same system.
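
A hedged sketch of this pattern: the Docker SDK for Python can point at a container engine running inside a guest VM rather than on the physical host. The VM hostname, SSH user and image below are hypothetical placeholders, and the sketch assumes a Docker engine is installed in the guest and reachable over SSH.

```python
# Minimal sketch: drive a container engine that runs *inside* a VM rather than on
# the physical host. The hostname and user below are hypothetical. Recent releases
# of the Docker SDK for Python accept ssh:// base URLs (SSH access and, depending
# on the SDK version, the paramiko package are required).
import docker

# Connect to the Docker daemon inside the guest VM instead of the local host.
vm_engine = docker.DockerClient(base_url="ssh://admin@container-vm-01")

# Containers started here share the VM's kernel, not the physical host's, so a
# kernel-level failure stays contained within this one VM.
print(vm_engine.containers.run("alpine:latest", ["uname", "-r"], remove=True).decode())
```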

This kind of usage isn't "integration" in the normal sense. Rather, it's a means of using containers and VMs together to achieve the most desirable results for the business. Admins must still contend with deploying and managing two separate virtualization technologies, which can be time-consuming and error-prone.

Despite this, a few tools have emerged to help support containers running on VMs. Kata Containers is an open source project that runs containers inside lightweight VMs using virtualization extensions -- such as Intel VT -- available in modern processors. This provides the speed and flexibility of containers, as well as the strong isolation of a hypervisor-run VM. Google's gVisor also offers an open source sandboxed container environment for container isolation and security.
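
As a rough example of how Kata Containers slots in, the sketch below starts a container under an alternative OCI runtime through the Docker SDK for Python. It assumes Kata is installed and registered with the local Docker engine under the runtime name kata-runtime; the registered name varies by installation.

```python
# Minimal sketch: launch a container under the Kata Containers runtime via the
# Docker SDK for Python. Assumes Kata is installed and registered with the Docker
# engine as "kata-runtime" (the registered name can differ on your system).
import docker

client = docker.from_env()

# The runtime parameter tells Docker which OCI runtime to use. With Kata, the
# container boots inside a lightweight VM, so the kernel it reports can differ
# from the host's kernel -- a sign of the hypervisor-level isolation.
kata_kernel = client.containers.run(
    "alpine:latest", ["uname", "-r"], runtime="kata-runtime", remove=True
)
print("Kernel seen by the Kata container:", kata_kernel.decode().strip())
```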

Attempts to run VMs in containers can pose challenges

VMs are generally large, resource-intensive entities intended for prolonged use. Containers are small, nimble, resource-lean and often short-lived. Admins can deploy containers within a VM, but should not deploy a VM within a container. In addition, a VM image is not compatible with the layered structure of container images.

This poses a unique challenge for container technology adoption. VMs remain a vital technology for packaging and managing traditional monolithic workloads, yet containers provide the versatility, scalability and speed that cloud-first workload architectures -- such as microservices-based applications -- require. As a result, many admins have an interest in unifying or reconciling the two technologies so that VMs can function in container-based environments.

Containers require a strong management and orchestration environment, such as Kubernetes, and rarely work alone. Engineers generally organize them into pods where admins can use common Kubernetes workflows and services. Emerging tools such as KubeVirt, RancherVM and Virtlet take a broad view of the container-VM unification problem and aim to work at a higher orchestration level.

For example, rather than attempting to run VMs in a container, KubeVirt encapsulates each VM in a logical wrapper that enables the VM to operate in a Kubernetes pod. As a result, the system can treat VM applications like native Kubernetes applications, which enables VMs to work alongside container-based pods within the Kubernetes environment. This simplifies management because admins can use the standard Kubernetes CLI, kubectl, to manage and monitor KubeVirt-encapsulated VMs as if they were pods.
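
Because KubeVirt exposes VMs as Kubernetes custom resources, the same client tooling that lists pods can list VMs. The sketch below, which assumes a cluster with KubeVirt installed and a valid local kubeconfig, uses the official Kubernetes Python client to enumerate both side by side.

```python
# Minimal sketch: KubeVirt represents VMs as Kubernetes custom resources, so the
# same client libraries (and kubectl) that list pods can list VMs. Assumes a
# cluster with KubeVirt installed and a local kubeconfig (pip install kubernetes).
from kubernetes import client, config

config.load_kube_config()  # use the same credentials kubectl uses

# Ordinary container workloads: pods across all namespaces.
for pod in client.CoreV1Api().list_pod_for_all_namespaces().items:
    print("pod:", pod.metadata.namespace, pod.metadata.name)

# KubeVirt-managed VMs: VirtualMachineInstance custom resources (kubevirt.io/v1).
vmis = client.CustomObjectsApi().list_cluster_custom_object(
    group="kubevirt.io", version="v1", plural="virtualmachineinstances"
)
for vmi in vmis.get("items", []):
    print("vm:", vmi["metadata"]["namespace"], vmi["metadata"]["name"])
```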
