How does application containerization compare to Linux Containers?
Virtualization has had a profound impact on modern computing, allowing organizations to vastly improve the utilization and flexibility of computing resources. But virtualization includes overhead such as the hypervisor and guest operating systems -- each consuming memory and often requiring costly licensing -- which inflates the size of each virtual machine (VM) and limits the number of VMs a server can host. The resurgence of containerization seeks to virtualize applications without all of that baggage. This is not a new idea; technologies like OpenVZ, FreeBSD jails, Solaris Containers and Linux-VServer have supported this kind of functionality for years as a central element of cloud scalability. But it was the recent introduction of open platforms like Docker that focused new attention on containerization and its potential for scalable distributed applications.
Fundamental support for containerization was added to the Linux 2.6.24 kernel to provide operating system-level virtualization, allowing a single host to run multiple isolated Linux instances, called Linux Containers (LXC). LXC builds on Linux control groups (cgroups), which limit and account for each container's resource use -- including processor, memory and I/O access -- without the need for full-fledged virtual machines. LXC also relies on kernel namespaces to isolate each container's view of the system, so file systems, process and user IDs, network interfaces and other elements usually associated with an operating system appear unique to each container.
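As a rough sketch of how those pieces fit together, a classic LXC container configuration file combines namespace settings and cgroup limits; the container name, network type and resource caps below are illustrative examples, not recommendations:

```
# Illustrative LXC 1.x container config; names and limits are examples
lxc.utsname = web01                       # hostname, isolated in the UTS namespace
lxc.network.type = veth                   # virtual Ethernet pair in a private network namespace
lxc.cgroup.memory.limit_in_bytes = 512M   # cap the container's memory via cgroups
lxc.cgroup.cpuset.cpus = 0-1              # restrict the container to two CPU cores
```

Each `lxc.cgroup.*` line maps directly to a cgroup controller setting, while the namespace-related lines give the container its own private view of hostname and networking.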
Application containerization platforms like Docker do not replace Linux Containers. Instead, the idea is to use LXC as a foundation and add higher-level capabilities. For example, Docker allows portability between machines (also running Docker), letting an application and its components exist as a single mobile object. LXC alone allows some mobility, but the build is tied to the host system's configuration, so moving the build to another machine can introduce differences that might prevent the application container from running the same way (if at all).
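To make that portability concrete, consider a minimal, hypothetical Dockerfile -- the base image tag, file names and command are assumptions for illustration. Any host running Docker can build and run the same image, because the application's dependencies are bundled inside the image rather than drawn from the host's configuration:

```
# Hypothetical Dockerfile; image tag and application files are examples
FROM ubuntu:14.04                                   # base image pulled from a registry
RUN apt-get update && apt-get install -y python     # install dependencies inside the image
COPY app.py /opt/app/app.py                         # bundle the application with the container
CMD ["python", "/opt/app/app.py"]                   # default command when the container starts
```

The resulting image is the "single mobile object": ship it to another Docker host and it runs the same way.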
Docker also offers automated build tools that help developers move from source code to containers more easily, and it works with companion tools like Chef, Maven and Puppet to automate or streamline the build process. Versioning helps developers track the evolution of container images, understand differences and even revert to earlier versions if necessary. And since any container image can serve as a base image for other containers, it is easier to reuse components, which can be shared through a public registry.
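Base-image reuse might look like the following hypothetical Dockerfile, where `myorg/python-base:2.1` is an assumed image published to a registry and the tag pins a specific version:

```
# Hypothetical: build on a shared, versioned base image
FROM myorg/python-base:2.1        # versioned base image from a shared registry
COPY service.py /srv/service.py   # add only this service's code on top
CMD ["python", "/srv/service.py"]
```

Because the base image is versioned, reverting a service can be as simple as rebuilding from an earlier tag.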
So the goal of platforms like Docker is to aid in the rapid integration of applications into containers and maintain or update those application containers moving forward, not to support the existence of containers in the first place -- that's part of the Linux kernel.