Containers might be an increasingly attractive alternative to VMs, but you shouldn't use containers for every workload deployment. Let's examine some considerations involved in container adoption.
The decision to adopt container technology will depend largely on what you plan to use containers for in the first place. Containers are a type of virtual instance, but while they share some similarities to traditional virtualization technology, containers aren't a direct replacement for VMs.
Containers vs. VMs
A quick review and comparison are in order. Containers are a type of virtualization. Traditional virtualization employs a hypervisor layer to isolate the workload and abstract compute resources -- CPU, memory, disk, I/O -- from the underlying host server. Every VM runs atop the hypervisor and provides a completely isolated instance for an OS and application to run inside.
By comparison, containers use OS-level virtualization, where the abstraction is performed by a component of the OS -- often a version of Linux. The container engine uses namespace isolation to give each container access to only the resources it should see, which keeps containers separate from one another. However, all containers on a host share the same OS kernel and its services. Because containers don't need separate OS installations, they are often much smaller and more resource-efficient than VMs. The host system can also limit the resources each container uses, ensuring that an application within a container can't overwhelm the host with excessive resource demands. This limiting technology is a kernel feature called control groups, or cgroups.
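As a concrete sketch of cgroups at work, the Docker CLI exposes flags that map directly onto cgroup limits. The container and image names below (`capped-app`, `myapp`) are placeholders, and a running Docker daemon is assumed:

```shell
# Start a container whose cgroup caps it at 256 MB of RAM and one CPU.
# If the process inside exceeds the memory limit, the kernel stops it
# without affecting other containers or the host.
docker run -d --name capped-app \
  --memory=256m \
  --cpus=1.0 \
  myapp:latest

# Inspect the limits Docker recorded for the container.
docker inspect capped-app \
  --format '{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}'
```

The flags translate into entries under the host's cgroup hierarchy, so the enforcement is done by the kernel, not by the container engine itself.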
Containers offer a mix of attractive benefits, including isolation, a small footprint that lowers resource demands and fast startup, since the underlying OS is already running. This makes containers ideal for computing situations that require fast iteration, high portability and high scalability.
It's important to point out that OS-level container technology has a long history, appearing in Unix chroot, FreeBSD jails and Solaris Zones before the Linux namespace and cgroup features used today. But actually using and working with containers long lacked a common tool set. The renewed attention to containers as a practical technology was sparked by the emergence of that common tool set in Docker. The presence of a common container platform greatly eased the packaging and distribution of applications, making containers portable entities that could run on any Linux platform. Docker is also involved with the Open Container Initiative to help ensure that container packaging and distribution evolve as open standards.
Taken together, containers offer a combination of characteristics that's proving ideal for test and development. Consider that a developer can create or update code, package the code to a container, deploy the container to a test system and then deploy the container to a production system. The container image is deployed seamlessly to any system. Fast startup also facilitates easy scaling because duplicate container instances can be created quickly, as needed. The easy portability of Docker-type container images has also fostered an enormous community of applications packaged for Docker's public community repository called Docker Hub.
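The build-once, deploy-anywhere workflow described above might look like the following with the Docker CLI; the image tag and registry name are hypothetical:

```shell
# Package the application and its dependencies into an image.
docker build -t registry.example.com/myapp:1.2.0 .

# Smoke-test the image locally, running it exactly as it will run
# on the test and production systems.
docker run -d -p 8080:8080 registry.example.com/myapp:1.2.0

# Publish the image so every downstream host pulls the same artifact.
docker push registry.example.com/myapp:1.2.0
```

Because the image is the unit of deployment, the test system and the production system run byte-identical artifacts, which is what makes the handoff between stages seamless.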
Containers can host complete applications or services, but container characteristics and developer attention have also spawned an evolution in application design where applications are architected as functional modules rather than monolithic entities. The smaller modules can then be scaled more efficiently because only the affected function -- not the entire application -- would need more instances. It's an application development approach called microservices.
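Because each microservice ships as its own image, only the busy module needs extra instances. A minimal sketch with Docker Compose, assuming a compose file that defines hypothetical `web` and `worker` services:

```shell
# Run three copies of the web front end but only one worker,
# scaling the bottlenecked service independently of the rest.
docker compose up -d --scale web=3 --scale worker=1
```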
There are also some challenges when you use containers, including management, resilience and security. The sheer volume of containers in an environment can be difficult to manage: performance monitoring, scaling instances up and down with demand, orchestration and automation all grow harder as container counts rise. The higher number of container instances on any given system also concentrates risk -- if a server or its shared OS fails, every container on that system fails with it. This influences the way organizations deploy container-based applications to ensure resilience, such as greater use of distributed clusters.
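Orchestrators address both the management and resilience concerns by spreading replicas across a cluster, so a single host failure does not take down every instance. A sketch with Kubernetes, assuming a cluster and a Deployment named `myapp` already exist:

```shell
# Ask the orchestrator to maintain five replicas; the scheduler places
# them across nodes, and failed replicas are recreated automatically.
kubectl scale deployment/myapp --replicas=5

# Confirm how many replicas are currently available.
kubectl get deployment myapp
```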
Ultimately, the choice to use containers will be based on the way that tomorrow's applications will be designed and deployed. Application designs that are suited to small, highly scalable components -- such as microservices -- and embrace the rapid iteration of DevOps or other agile development models will probably justify adding container technology to the environment. But containers, VMs and even physical systems aren't mutually exclusive, and they can all be deployed together in the data center.