Virtualization isn’t just about servers anymore.
As x86 processors become more powerful, emerging products are drawing disciplines beyond servers and computing into the hypervisor. One such offering is Vyatta Inc.’s open source-based Network OS, which performs advanced routing functions in software on a virtualized x86 host instead of on a physical switch.
SearchServerVirtualization.com senior news writer Beth Pariseau sat down with Vyatta CEO Kelly Herrell to find out more about the problem his company’s product is looking to solve, its relationship to incumbent networking and virtualization offerings, and his vision of the future for x86-based virtualized networks.
Why does the hypervisor “blind” the customer to the applications running on it?
Kelly Herrell: The reason the hypervisor blinds the customer to the applications is that very little packet inspection happens at the hypervisor level; once traffic is in the hypervisor, there is no awareness of how the packets are talking to each other.
Let’s use the example of a server where you’ve got an HR application running as a virtual machine, as well as another virtual machine running an engineering build server. Your engineering team can get in to manipulate their build server, and once they’re in, they can also get at the HR data, because it’s in the same box. That’s a no-no. That’s what layer 3 networks are designed to prevent.
How exactly are you solving that blinding problem with virtualized networking gear?
K.H.: Once the packet goes through the hypervisor, there’s no way of controlling -- without Vyatta sitting on that hypervisor -- which VMs get firewalled, which VMs have a VPN termination, which VMs are on different subnets. They’re all going into the same box on a single logical wire if you don’t have Vyatta in place.
With Vyatta, you can create different subnets inside of the server. When the packet goes through that hypervisor and comes into the Vyatta virtual machine, now you’ve got full enterprise-class networking controls that you can configure. So Vyatta ends up being the central communication control point between all the VMs. And it behaves and looks and feels like a traditional device. The target here was to basically do what Cisco or Juniper can do with this layer 3 networking, but just do it in software as opposed to hardware.
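The subnetting-plus-firewall setup Herrell describes can be sketched in Vyatta-style configuration commands. This is an illustrative sketch, not a configuration from Vyatta: the interface names, addresses, and ruleset name are hypothetical, chosen to mirror the HR/engineering example above.

```
configure

# Hypothetical layout: HR VMs attach to virtual interface eth1,
# engineering VMs to eth2 -- two separate subnets inside one server
set interfaces ethernet eth1 address 10.0.10.1/24
set interfaces ethernet eth2 address 10.0.20.1/24

# Ruleset that drops everything by default; only replies to
# connections the HR side initiated are allowed back in
set firewall name ENG-TO-HR default-action drop
set firewall name ENG-TO-HR rule 10 action accept
set firewall name ENG-TO-HR rule 10 state established enable

# Apply the ruleset to traffic leaving the router toward the HR subnet
set interfaces ethernet eth1 firewall out name ENG-TO-HR

commit
save
```

Because every inter-subnet packet must route through the Vyatta virtual machine, that same configuration point could also host VPN termination, NAT, or per-subnet firewall policy, which is the “central communication control point” role described above.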
So you would consider your primary competitors to be Cisco and Juniper?
K.H.: No, I mention them because of the historic incumbency that’s out there. Right now, as a network virtual machine, Vyatta is way out in front in terms of industry adoption. Part of the reason for that is we use a community-based viral download approach to the market. We’ve been downloaded over a million times, so that gives us a really unique spread.
What we find is that in these new architectures, Vyatta’s not replacing the incumbent gear; it’s complementary to it… Vyatta takes the next step and says you need a good portion of functionality inside of the server itself. We play nice in the sandbox with the incumbents.
What about products like VMware’s vShield? Can’t users achieve some of the same functionality with VMware’s own products?
K.H.: There are some products out there that will do a small amount of what we do, maybe a little bit of firewall, but no layer 3 networking. No subnetting, no routed paths. Vyatta has the most feature-rich networking VM on the planet and part of the reason for that is we’ve been working on it for 80 man-years or so. If you look at the configurability of this system, it’s the range of configurability you’d expect if you sat down next to a Cisco or Juniper classic device. There are 6,000 commands available at your fingertips. That’s not a simple ACL rule like you get with vShield.
What do you think the virtual data center is going to look like in a year, three years or five years, in terms of network virtualization?
K.H.: I think the future here is that data centers will become massive x86 pools, those pools will be running compute and networking, and that means IT skill sets are going to evolve so that they’re not siloed. You can’t be the network architect without being virtualization- and compute-aware. In the past you could, but the skill sets are blending, necessarily.
I think the future is going to be large, fairly homogeneous pools of resources, but virtualized heterogeneous usage models within them. It bodes well for cost, for efficiency, for density, for greenness, for flexibility, but it’s definitely not your father’s Oldsmobile. This is definitely a new way of doing things.
So what kind of time frame are you envisioning for this?
K.H.: We have large customers that are rolling out entire data centers on this now. For Vyatta, from our downloads, we saw interest in virtualization really start to heat up in 2010. We saw customers begin to deploy operationally in volume in 2011. The gun has just fired and the race is on. It’s not a 100-yard dash, though; it’s definitely a marathon. I think we’ll see continued adoption of this through the next decade, but it’s going to burn white-hot for the next two to three years.
Beth Pariseau is a senior news writer for SearchServerVirtualization.com. Write to her at firstname.lastname@example.org.