

Much more than hype: Container virtualization brings efficiency

Containers have caught the industry’s attention and for good reason. The potential efficiency advantages are real.

Today, virtualization is a well-understood approach to sharing server resources, allowing systems administrators a great deal of flexibility in building on-demand virtual instances. However, there are some performance and resource utilization efficiency problems associated with hypervisor virtualization, and a new approach called container virtualization is aimed at resolving these.

In many ways, the current hypervisor-based approach was conceived around delivering the ultimate flexibility. Each instance can run any of the sanctioned guest operating systems, irrespective of what other instances are doing. It is becoming clear that the industry built a trap for itself when it offered such a broad capability. With a hypervisor approach, each instance needs a full copy of the guest OS, as well as any of the applications running in it. From an operating perspective, this adds considerable burdens that reduce efficiency and performance.

First, each OS and app stack uses DRAM. For small instances running simple apps, this overhead can be enormous. There is also a substantial performance penalty: loading and unloading all those stack images takes time and saturates the network connections to the server. Taken to the extreme, this gives rise to scenarios like boot storms, where turning on thousands of virtual desktops at 9 a.m. gives staffers time for two cups of coffee while effectively locking out any other traffic in the cluster.
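A back-of-envelope calculation shows why boot storms bite. This sketch uses illustrative numbers chosen by us (desktop count, image size, and link speed are assumptions, not figures from any study), and ignores caching and deduplication:

```python
# Boot-storm estimate: time to stream a full OS/app image to every
# virtual desktop over one shared link. All inputs are illustrative.

def boot_storm_minutes(desktops, image_gb, link_gbps):
    """Minutes to push every image across the link, ignoring caching."""
    total_bits = desktops * image_gb * 8 * 1024**3  # GB -> bits
    seconds = total_bits / (link_gbps * 1e9)        # Gb/s -> bits/s
    return seconds / 60

# 2,000 desktops pulling 4 GB images over a 10 Gb/s uplink:
print(round(boot_storm_minutes(2000, 4, 10), 1))  # about 114.5 minutes
```

Nearly two hours of saturated uplink for one morning power-on, which is exactly the "two cups of coffee" scenario.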

One of the aims of a virtual server setup is the rapid creation of new instances. Copying an image from networked storage takes considerable time, which has to be added to the boot time. The longer boot effectively limits the elasticity of the system.

That brings us to containers. Based on a few observations that seem obvious in hindsight, containers aim to resolve the multiple OS/application stack issue:

  • Using the same OS for all the instances in a single server will not be a real limitation in most data centers; orchestration can easily handle the change.
  • Many application stacks are identical (LAMP, for example).
  • Keeping a copy of the OS on a local hard drive complicates updates in large-scale clusters.

Essentially, containers load the image of the OS, and potentially the apps, one time only into memory. This can be from a network disk, since the network and storage will not be loaded down by booting dozens of images. Further image creation just points to the common image and takes up very little memory.
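The memory savings from that single shared image fall out of simple arithmetic. This sketch uses assumed sizes (1 GB for the OS, 0.5 GB per app stack, 50 instances) purely for illustration:

```python
# Rough RAM comparison: per-instance OS copies (hypervisor model) vs.
# one shared OS image (container model). All sizes are assumptions.

def total_ram_gb(instances, os_gb, app_gb, shared_os=False):
    """Total memory footprint for a set of identical instances."""
    os_cost = os_gb if shared_os else instances * os_gb
    return os_cost + instances * app_gb

vms        = total_ram_gb(50, os_gb=1.0, app_gb=0.5)                  # 75.0 GB
containers = total_ram_gb(50, os_gb=1.0, app_gb=0.5, shared_os=True)  # 26.0 GB
print(vms, containers)
```

Under these assumptions, sharing the OS image nearly triples the instance count a given amount of DRAM can hold; sharing the app stacks as well would widen the gap further.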

Containers can more than double the number of instances a given server can host, a clear opportunity for large cost savings. But we have to tread carefully, since doubling the instance count also doubles the I/O load on the server running those instances.

We need to know whether there are performance benefits beyond the elimination of severe boot storms. Are disk IOPS improved? Is networking more efficient, with lower latency, under containers, offsetting the higher instance count?

The most definitive study to date, from IBM Research in Austin, shows significant improvements in key metrics for containers over hypervisors. The results show containers running nearly as fast as a native platform in every area tested, though network latency testing remains to be completed.

IBM’s research showed containers outperforming hypervisors in several areas. Containers won roughly 2:1 and came very close to native performance in LINPACK benchmarks. They also excelled in random disk reads (84,000 IOPS vs. 48,000 for KVM) and random disk writes (110,000 IOPS vs. 60,000 for KVM), and delivered better SQL performance on local solid-state drives.
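Put as ratios, the IBM disk figures quoted above work out to roughly a 1.8x advantage for containers over KVM:

```python
# Speedup ratios implied by the IBM IOPS figures quoted above.
read_speedup  = 84_000 / 48_000    # random-read IOPS, container vs. KVM
write_speedup = 110_000 / 60_000   # random-write IOPS, container vs. KVM
print(round(read_speedup, 2), round(write_speedup, 2))  # 1.75 1.83
```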

The high performance computing (HPC) community is also turning to virtualization and containers. A study by a Brazilian university, the Pontifical Catholic University of Rio Grande do Sul, sheds some light on this.

"HPC will only be able to take advantage of virtualization systems if the fundamental performance overhead (such as CPU, memory, disk and network) is reduced," the study’s authors said. "In that sense, we found that all container-based systems have a near-native performance of CPU, memory, disk and network."

Even VMware, the hypervisor virtualization leader, has published a set of Docker benchmark comparisons. These show the same trend toward near-native performance, though the gains reported over the VMware hypervisor are not as large as those in the IBM report. This is likely the result of a highly tuned VMware stack, which lowers overhead. The VMware report didn’t address disk I/O.

Containers still need fleshing out to be bulletproof from a security perspective in a broad spectrum of use cases, but it’s already clear that the approach resolves most of the performance issues seen with hypervisor virtualization. With deployment being both easier and faster, containers look set to take over the virtualization space, and will do so rapidly.

Next Steps

Container virtualization – what is it?

Container based virtualization vs. hypervisors – which should you use?

Should you buy into the container craze?
