One big debate in the IT world over the last year has been whether containers are production-ready. Now that companies have had a chance to use them, the answer is yes. That's good news, because containers should be much cheaper to rent in the public cloud, potentially run faster and pack much more densely onto servers.
This happens because each server runs only a single shared OS stack, compared with a full OS inside every virtual machine in the old model. The improved memory use saves DRAM per instance, usually enabling 2x to 6x the number of virtual instances per server. That saving makes containers attractive to large cloud service providers, avoiding the need to purchase hundreds of thousands of servers each year, with big impacts on data center space and power consumption, which is why Google is a big fan.
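The arithmetic behind that density gain can be sketched with a back-of-envelope calculation. All of the per-instance overhead figures below are illustrative assumptions, not measurements; the point is only how a shared kernel changes the math:

```python
# Back-of-envelope density comparison: VMs vs. containers on one server.
# Every figure here is an illustrative assumption, not a benchmark.

SERVER_RAM_GB = 256
HOST_OS_GB = 4.0             # hypervisor / host OS reservation
APP_RAM_GB = 0.5             # working set of one application instance

VM_OS_OVERHEAD_GB = 1.0      # a full guest OS stack inside each VM
CONTAINER_OVERHEAD_GB = 0.1  # shared kernel; only per-container bookkeeping

usable = SERVER_RAM_GB - HOST_OS_GB
vms = int(usable / (APP_RAM_GB + VM_OS_OVERHEAD_GB))
containers = int(usable / (APP_RAM_GB + CONTAINER_OVERHEAD_GB))

print(f"VMs per server:        {vms}")
print(f"Containers per server: {containers}")
print(f"Density gain:          {containers / vms:.1f}x")
```

With these assumed numbers the gain lands at the low end of the 2x to 6x range; smaller applications relative to the guest OS overhead push the ratio higher.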
Although containers look good today, they can get even better. Applied to VDI configurations, the advantage of shrinking the instance size grows larger still, especially if the application stack image is shared along with the operating system.
With all the good news, where do we stand on production readiness? It has to be said that containers have been somewhat overhyped, and there has been a backlash; any new technology needs time to settle down. We are now a year into availability and the hype is subsiding. Container technology has been in development since 2006, so it has already seen considerable maturation.
A more sober assessment of where the technology stands today is called for, and several myths need examining. One is that containers aren't useful for large instances. The claim may stem from a misunderstanding of how containers shrink instance size, but there is no factual basis for it.
The hypervisor side of the industry has been surprised by the interest in containers. This led to a lot of fear, uncertainty and doubt about containers being suitable for enterprise workloads, but huge companies such as ING, Goldman Sachs and Spotify use containers extensively without major issues. So this challenge to containers doesn't hold much water. In fact, Docker has been downloaded more than 200 million times.
There are still a couple of rough spots, though. Security in containers is somewhat weaker than in traditional virtual machines, but that isn't a strong argument against deployment, since compromising a container still should not compromise the host operating system. Security remains an area to treat carefully, but it shouldn't stop you from considering containers in production.
The hypervisor vendors are generally tackling the issue with a two-tier approach, placing container structures inside VMs. This would seem to add layers of complexity and reduce performance, so it's worth testing whether bare-metal containers beat containers running inside VMs. The general belief in the industry is that containers on bare metal will outperform virtual machines.
There are still gaps in the tooling users would like to deploy around containers. Building out at scale needs work, which Red Hat has road-mapped for 2015, and handling stateful instances needs extensions, though ClusterHQ is building a service aimed at making databases and key-value stores portable in Docker.
The advantages in operating costs, ease and speed of deployment, and faster overall performance all play strongly to the new agile IT paradigm. Remember, though, that not all implementations are equal: it is definitely worth testing performance between container hosts before committing to volume.
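One simple way to compare hosts is to time an identical workload on each candidate (bare metal, a container, a container inside a VM) and compare the numbers. A minimal sketch of such a harness follows; the `cpu_workload` function is a placeholder stand-in, and a real test should use something representative of your actual application:

```python
import time

def cpu_workload(n=200_000):
    # Placeholder workload: a CPU-bound loop. Substitute something
    # representative of the application you plan to containerize.
    total = 0
    for i in range(n):
        total += i * i
    return total

def benchmark(label, fn, runs=5):
    """Time fn over several runs and report the best wall-clock result."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    best = min(times)
    print(f"{label}: best of {runs} runs = {best:.4f}s")
    return best

# Run this same script on each candidate host and compare the outputs.
result = benchmark("this host", cpu_workload)
```

Taking the best of several runs reduces noise from caching and scheduling; for I/O-heavy applications, the workload should exercise disk and network rather than the CPU loop shown here.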