With VMware floating trial software that can manage Hyper-V virtual machines (VMs), it’s clear the distinctions between different hypervisors are beginning to blur.
Hyper-V: Winds of change become a hurricane
Microsoft, VMware’s closest competitor in market share and features, has been closing the gap between its Hyper-V virtualization product and vSphere, especially with Hyper-V 2008 R2 and R2 SP1, which added table-stakes features such as Live Migration and Dynamic Memory.
But Microsoft continues to play catch-up when it comes to advanced features for private cloud deployments such as virtual networking devices or a counterpart to VMware’s vCloud Director (though Microsoft’s System Center Virtual Machine Manager (SCVMM) 2012 will heat up competition there later this year). And while the number of independent software vendors supporting Hyper-V has grown in the last year or so, users still want more monitoring, reporting, backup and virtual networking products to support Hyper-V.
Meanwhile, Hyper-V users welcome new features, but deployment can be slowed by delays in support for new Hyper-V features across Microsoft’s virtualization management products. And the sheer pace of product changes, ironically, can also delay implementation.
For example, the current version of SCVMM doesn’t yet support the Dynamic Memory feature introduced in R2 SP1, according to Robert McShinsky, a senior systems engineer for Dartmouth Hitchcock Medical Center in Lebanon, N.H. McShinsky manages 23 physical hosts and 400 VMs running on Hyper-V.
The new features are also rolling out faster than McShinsky’s organization is comfortable with. “We’re kind of in a consistent migration structure here,” he said. “We’re constantly migrating VMs up to the newest level with the newest integration agents on them, just about in time, usually, for either patches or a new service pack or version to come out.”
XenServer: Users work around HA and memory overcommit
XenServer surged in sales last year, according to IDC, but still has lots of ground to make up compared to its rivals.
Citrix partners with a third party, Marathon Technologies, for high availability (HA) and fault tolerance. But when Tom Golson, chief systems engineer for the infrastructure systems group at Texas A&M University’s Computing and Information Services (CIS) department, first wanted to design disaster recovery for his virtual environment in late 2009, “XenServer hadn’t nailed down the tools [for HA] -- they were either in beta or nonexistent at the time.”
Instead, the organization built out HA by investing in a storage system, Xiotech’s Emprise 7000. That product’s GeoRAID feature allows for active-active failover of VMs across a campus-wide distance. In other words, it provides a kind of home-grown distance vMotion.
The most challenging part of that project was coordinating the networking layer of the infrastructure, Golson said. While Xiotech’s GeoRAID allowed for active-active data access at either end of the wire, a stretched Layer 2 network was still required for it to work. This meant replacing some legacy Cisco routers with new Nexus switches. “NX-OS is similar to but not the same as Cisco’s IOS,” Golson said. “That’s meant a learning curve for our networking group, and in a large environment, it can be an unexpected stumbling block.”
As CIS gets further into virtualization, more networking knots emerge. For example, Golson is now trying to figure out why some load balancers send traffic to smaller, resource-constrained VMs rather than to physical machines four times their size in the same application cluster. “Virtual machines respond differently -- staying in sync with networking has been our most difficult challenge,” he said.
Users also say they want Citrix to continue developing XenServer's memory management. Today, Citrix’s dynamic memory control increases the number of VMs on a host by compressing the memory pages used by existing VMs. It's not quite the same as VMware's memory overcommitment, which essentially allows memory to be "thin provisioned" so guests behave as though they have more memory available than they do in reality. VMware also has transparent page sharing, which "single instances" memory pages, and with vSphere 4.1, VMware added memory compression similar to XenServer's.
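The difference between the two approaches is easiest to see as arithmetic: under overcommitment, the memory configured across all guests can exceed the host’s physical RAM, and the hypervisor covers the gap with techniques such as page sharing and compression. A minimal sketch, using entirely hypothetical host and guest sizes:

```python
# Illustrative only -- hypothetical sizes, not vendor benchmark data.
# Memory overcommitment: guests are collectively "promised" more memory
# than the host physically has; the hypervisor reclaims the difference
# via page sharing, ballooning and compression.
host_ram_gb = 64
guest_config_gb = [8, 8, 16, 16, 32]     # memory each guest believes it has

configured_total = sum(guest_config_gb)  # total memory promised to guests
overcommit_ratio = configured_total / host_ram_gb

print(f"{configured_total} GB configured on a {host_ram_gb} GB host "
      f"({overcommit_ratio:.2f}x overcommit)")
# -> 80 GB configured on a 64 GB host (1.25x overcommit)
```

By contrast, a compression-only scheme like XenServer’s shrinks the footprint of pages already in RAM rather than letting the configured total run past physical memory in this way.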
XenServer is also still catching up to VMware’s virtual networking features, having added its first distributed virtual switch with XenServer 5.6 FP1 (“Project Cowley”) last October.
Red Hat KVM: A mandatory migration without strong migration tools
The open-source Kernel-based Virtual Machine (KVM) is arguably the furthest behind VMware on the feature curve. For example, Red Hat’s KVM, unlike competing hypervisors, can’t hot-add CPU and memory. Red Hat’s support for KVM began with version 5.4 of its Red Hat Enterprise Linux (RHEL) distribution, but it committed exclusively to that hypervisor over open-source Xen only last year, with version 6.0.
With RHEL 6 and a revamped Red Hat Enterprise Virtualization (RHEV) management software suite, Red Hat is looking to persuade users to move to a new hypervisor, but physical-to-virtual (P2V) and virtual-to-virtual (V2V) conversions remain tricky. There is no native P2V tool yet within RHEV, and the virt-v2v conversion utility requires “scratch” storage space and some scripting to perform correctly.
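In practice, the “some scripting” around virt-v2v usually means a small wrapper that assembles the conversion command. A hedged sketch in Python -- the host names, export path and guest name are hypothetical, and while the `-ic`, `-o rhev` and `-os` flags match the RHEL-era virt-v2v documentation, they should be verified against the installed version:

```python
import subprocess  # used only if you uncomment the final call

def build_v2v_command(guest, xen_host, export_storage, network="rhevm"):
    """Assemble a virt-v2v invocation that pulls a guest from a Xen host
    over SSH and converts it into an RHEV NFS export domain. Flag names
    follow the RHEL-era virt-v2v man page; check your local version."""
    return [
        "virt-v2v",
        "-ic", "xen+ssh://root@" + xen_host,  # input: remote Xen via libvirt
        "-o", "rhev",                         # output target: RHEV
        "-os", export_storage,                # export domain doubles as staging space
        "--network", network,                 # map the guest NIC to this logical network
        guest,
    ]

cmd = build_v2v_command("web01", "xen1.example.com", "nfs.example.com:/export")
print(" ".join(cmd))
# To actually run the conversion (requires virt-v2v and SSH access):
# subprocess.check_call(cmd)
```

The conversion itself still needs enough free space in the export storage to hold the guest image while it is rewritten, which is the “scratch” requirement users run into.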
Sander van Vugt, a Linux expert, independent trainer and consultant based in the Netherlands, said many users who’d already deployed Xen with RHEL 5 are holding back on migrating to KVM. “Given the fact that RHEL 6 is quite new, I get the feeling that most people using RHEL 5 with Xen at the moment say, ‘Let’s at least wait for the first service pack to see that the solutions to migrate Xen to KVM have proven themselves and have more options, and let other people … pick out the hassles for us.’”
Beth Pariseau is a senior news writer for SearchServerVirtualization.com. Write to her at firstname.lastname@example.org.