VMware will not stand still when it comes to containers. The company has quickly embraced Docker and developed a container strategy of its own.
Last year, VMware even rolled out a new business group for cloud-native applications, tapping 11-year VMware veteran Kit Colbert as the group's new CTO. The company has since unveiled several new projects focused on containers and cloud-native applications, including Project Photon (a lightweight Linux operating system) and Project Lightwave (an identity and access management suite for containers).
More recently, the Project Bonneville technology preview clarified the company's vision to fit containers into a vSphere infrastructure. With more cloud-native developments looming as VMworld 2015 approaches, SearchServerVirtualization spoke with Colbert about Bonneville, the company's broader plans for containers and what might be next.
Why would an organization want to use containers, and who's using them?
Kit Colbert: Where we see containers being very valuable is in developer workflow. What Docker has done is really enable developers to embrace containers and plug them seamlessly into their workflow. That allows them to move faster. Moving faster is great, but when you look to move containers into production, you still have to have the operational concerns hammered out. Containers are still a very nascent space. So, we see an opportunity to manage containers as well.
How does Project Bonneville work and where does Instant Clone, or VM forking, come into play?
Colbert: So the question is, how do I very rapidly provision a new VM? There are different ways we're attacking that problem. One of those ways is with Project Photon. A Bonneville VM is running Photon, and that OS is extremely small – it's about 20 MB in size. But we want to do even better, and that's where Instant Clone comes in.
What it does is start with one running Photon VM and keep it in a pristine state – there are no applications running on it, just a clean, freshly booted VM. Then, we use Instant Clone to copy it. So, you have an identical copy of a freshly booted Photon OS VM, and then we put a container inside that new VM. ... The VM only lives as long as the container does – which is a really critical element.
What this gives us is the ability to really quickly start a new VM – about a half-second, or so – and there's very little memory usage because the new VM shares its memory with the old one. It's kind of a way of doing lighter-weight virtualization.
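The lifecycle Colbert describes – clone a pristine VM, run exactly one container inside it, and tear the VM down when the container exits – can be sketched in a few lines. This is a conceptual model only; the names (`VM`, `instant_clone`, `run_container`) are hypothetical illustrations, not VMware APIs, and the real Instant Clone shares memory pages copy-on-write at the hypervisor level rather than in application code.

```python
# Conceptual sketch of the Bonneville per-container VM lifecycle.
# All class and function names here are illustrative, not real APIs.

class VM:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent   # a clone conceptually shares memory with its parent
        self.running = True

    def destroy(self):
        self.running = False

def instant_clone(parent):
    """Copy a pristine, freshly booted VM (conceptually copy-on-write)."""
    return VM(name=parent.name + "-clone", parent=parent)

def run_container(pristine, container_cmd):
    """The per-container VM lives exactly as long as its container."""
    vm = instant_clone(pristine)
    try:
        result = f"ran {container_cmd} in {vm.name}"
    finally:
        vm.destroy()           # container exit tears the VM down too
    return result, vm

pristine = VM("photon-pristine")
result, vm = run_container(pristine, "nginx")
print(result)            # ran nginx in photon-pristine-clone
print(vm.running)        # False: the VM died with its container
print(pristine.running)  # True: the pristine parent stays up for the next clone
```

The key design point the sketch captures is that the pristine parent is never touched: every container gets a fresh, identical clone, which is what keeps startup fast and per-VM memory overhead low.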
Your description of the VM and container being destroyed hints at the concept of immutable deployment [where instances aren't updated, but instead destroyed and redeployed]. Is this where we're headed?
Colbert: The term I hear is immutable infrastructure. I think we're talking about the same thing, which is the notion that once you provision or deploy something, you don't change it at all. It allows you to have very refined version control, and it changes what you need to do before deployment. It's a fundamental change, because typically what you do today is deploy something and then change its state over time. Tools like Chef, Puppet, Ansible and SaltStack have risen up to manage the state of those provisioned systems. Immutable infrastructure turns that on its head: once it's provisioned, you don't touch it.
I think you're right, I think this is a direction that we're exploring. Rather than having a VM being a long-lived element – living for weeks or months – we can imagine a VM as being more transient. It's provisioned when we need it, it lasts just as long as you need it and then it's destroyed. That is a really powerful notion, and it's much easier to understand the state of the system at any given time. So, yes, that's exactly the sort of model that we're looking at.
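The contrast Colbert draws – mutating a long-lived system in place versus destroying it and deploying a replacement – can be made concrete with a small sketch. The `Server` type and version scheme below are illustrative assumptions, not any real tool's API; the frozen dataclass simply enforces the "once provisioned, you don't touch it" rule in code.

```python
# Sketch of the immutable-infrastructure model: instances are never
# patched in place, only replaced. The Server class is hypothetical.
import dataclasses

@dataclasses.dataclass(frozen=True)  # frozen: state cannot be mutated after deploy
class Server:
    image: str
    version: int

def deploy(image, version):
    """Provision a brand-new instance at a known version."""
    return Server(image=image, version=version)

def upgrade_immutably(old, new_version):
    """Never modify the old instance: deploy a replacement and discard it."""
    return deploy(old.image, new_version)

v1 = deploy("web-app", 1)
v2 = upgrade_immutably(v1, 2)
print(v1.version, v2.version)  # 1 2 -- v1 is untouched; v2 replaces it
```

Because every instance is tied to exactly one version and can never drift, the state of the system at any moment is just "which versions are currently deployed" – which is the understandability benefit Colbert points to.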
In addition to Docker, Photon also supports Rocket and Garden. What about Bonneville? Are you planning to add support for additional container technologies?
Colbert: Docker is the first one, but we do want to support others as well. For example, we're also looking at [Google] Kubernetes, and whether we can expose a Kubernetes interface on Bonneville. We're looking at Garden from Pivotal. The Bonneville model is very powerful and can work for a lot of different container types.
Why take the approach of running a container within a VM?
Colbert: I get that question a lot. Taking a step back, you never really needed a VM to run anything, right? A VM is just an abstraction of a physical machine. But virtualization offered a tremendous number of benefits and this is why it grew so rapidly. Initially the use case around virtualization was consolidation, but the more important point now is less about consolidation and more about operational benefits. Virtualization offers a common platform to run many different technologies.
That's what we see for containers as well. Do you need VMs to run containers? No. ... You benefit from running containers on a VM by getting security aspects – containers don't have the same level of multi-tenant security as VMs do – as well as the operational aspects. Remember that, even though containers are all the rage right now, people still have a ton of traditional apps in their enterprises. Most new apps today will likely be part container, part traditional. So I think a lot of our customers are looking at running these applications across a common platform, and that's where virtualization offers a tremendous number of benefits.
It's less about need and more about the benefits.
Is the container within a VM the long-term answer, or will we eventually see products that can manage containers the same way we manage VMs and provide the security we need?
Colbert: I think what's going to happen actually is that virtualization will evolve. We're seeing this already with things like Intel's Clear Containers.
What Intel has done with Clear Containers is start adding more capabilities within their processors to enable a more lightweight version of virtualization. With Clear Containers, every time you run a container it starts up a very lightweight VM – very much like Bonneville. So, in many ways, you don't even know you're running a VM but you get the benefits we already talked about. What I see as the future is containers still running inside VMs, except that the concept of virtualization will be a little different from the traditional methods of virtualization for traditional applications.
I think the high-level notion is that virtualization is a very durable concept, and there's a lot we can do with it that offers real value. The reality is there are a lot of challenges in securing the container interface – you're basically securing Linux. That takes a long time, and it's a very broad set of APIs with strange corner cases that can bite you. Virtualization is built from the ground up to be secure, so that path is much easier to walk down. I think we'll see virtualization evolve to be beneficial to container workloads.
What does the industry still need to tackle in order to make containers truly production ready?
Colbert: There's a bunch of stuff. Networking and storage are areas that are going to need a lot of innovation in the container space. Of course, management is another – lifecycle management and performance management. There are a couple areas that we really have only scratched the surface on, as an industry. VMware has a lot more exciting things to announce in this space over the coming months, at VMworld and beyond. Expect to see a lot more.
Nick Martin is Senior Site Editor for SearchServerVirtualization.com. Contact him at firstname.lastname@example.org.