- Nick Martin, Editorial Director
Containers are enjoying renewed interest within enterprise IT, courtesy of Docker. Some analysts have speculated that they're the next logical step in server consolidation, poised to replace virtual machines.
The intriguing wrinkle in this new containerized approach is that it’s really not new. The idea of containers has been around since the early days of Unix with the chroot command. Linux containers, the technology upon which Docker’s software was originally built, were introduced in 2008. So, what’s with the sudden surge in container interest?
Containerized applications share a common operating system kernel, eliminating the need for each instance to run on its own separate operating system. An application can be deployed in a matter of seconds, using fewer resources than with hypervisor-based virtualization. However, because the applications all rely on a common OS kernel, this approach works only for applications built for that same operating system. Docker found a way to address this limitation.
Docker leads the way
Docker was released as an open source project by dotCloud, a platform as a service company, in 2013. Docker relies on Linux kernel features, such as namespaces and cgroups, to ensure resource isolation and to package an application along with its dependencies. This packaging of the dependencies enables an application to run as expected across different Linux operating systems—supporting a level of portability that allows a developer to write an application in any language and then easily move it from a laptop to a test or production server—regardless of the underlying Linux distribution. It’s this portability that’s piqued the interest of developers and systems administrators alike.
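To make the packaging idea concrete, here is a minimal Dockerfile sketch. The application name, entry point, and base image are hypothetical, but the pattern is the standard one: the image bundles a pinned runtime and the app's dependencies, so the result runs the same on any Linux host with a Docker engine.

```dockerfile
# Start from a pinned base image: this fixes the userland and
# runtime the app expects, independent of the host distribution.
FROM python:3.11-slim

WORKDIR /app

# Dependencies are installed into the image, not onto the host,
# so they travel with the application. (requirements.txt and
# app.py are placeholder names for illustration.)
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
CMD ["python", "app.py"]
```

Built once with `docker build -t myapp .`, the image can move from a developer's laptop to a test or production server unchanged, because the container carries its own userland while sharing only the host's kernel.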
“Prior to Docker, the portability of an application or service was never guaranteed,” said David Messina, a marketing vice president at Docker. “Because of the way that Docker containers separate the application constraints from infrastructure concerns, we help solve that dependency hell.”
Almost immediately, developers started to notice how this new approach could solve one of their biggest frustrations. One month after launching an interactive tutorial in August 2013, Docker said 10,000 developers tried it out. Within a year, companies such as Red Hat and Amazon added commercial support for Docker—even as Docker executives cautioned users against production use. When Docker announced its 1.0 release in June 2014, the Docker Engine software had already been downloaded 2.75 million times. That number now stands at more than 100 million.
Docker's software is well-timed, arriving as more and more companies invest in cloud computing and in the midst of the DevOps movement, said Jay Lyman, research manager at 451 Research.
“Docker provides an integrated user interface. It provides a greater level of simplicity. You don’t have to be a Linux kernel expert to use Linux container-based technology with Docker. It broadened the pool of potential developers,” Lyman said.
The intensified spotlight on Docker has also served to highlight its flaws, and it’s possible that it became too popular too soon. At least, that’s the thinking of Cal Leeming, a software engineer and Docker critic who’s voiced his concerns on his blog and through social media. During a six-month trial in a production environment, Leeming said he found Docker’s software and the Docker Hub Registry slow and frustrating.
"It seems clear to me that they were under pressure from the people giving them funding to get something out the door," Leeming said. "The reason I wrote about Docker is not to destroy or get in the way of a project that's going somewhere. But, so many people are trying to treat this like it's going to be the next damn industry standard. When you see something like that—everyone is talking about it—and you know the solution is flawed, you've got to fight back."
However, Docker counts some well-known names, including PayPal, Spotify and Yelp, among the customers finding value in its software.
“We were very quickly able to use Docker to build development and test environments for various developers and become productive right away without interfering with production systems,” said Tom Chernetsky, CTO of Yik Yak, an Atlanta-based mobile application company. “In that way, Docker was a game-changer for us as a fast-growing company.”
The unexpected success of Docker has also drawn attention to competing approaches to container virtualization and spurred others to develop their own. Late in 2014, CoreOS CEO Alex Polvi introduced the company's new container project, Rocket, as a direct response to what he called Docker's "fundamentally flawed" approach. Docker's design is not secure because it requires a central Docker daemon, Polvi said; Rocket instead relies on the systemd daemon to create containers.
“It remains to be seen what the official standard for containers is going to be,” 451 Research’s Lyman said. “I think we’ll see something more like what we’ve seen with hypervisors. VMware is the most prominent and widespread, but it’s certainly not the standard, and we’re likely to see a similar thing with Docker and Rocket, and maybe others.”
Nick Martin is senior site editor for SearchServerVirtualization.com. Email him at [email protected].