Many enterprises have rolled out a second server virtualization hypervisor to reduce costs and improve application interoperability. But introducing a second virtualization platform into a data center is not without risk, so careful planning and testing should take place long before an actual deployment.
As with the current hypervisor, a second hypervisor must fully support the intended server hardware platforms, device drivers, the operating system versions—and, ultimately, the applications.
If a third-party virtualization hypervisor management tool is in place, it might be necessary to investigate its compatibility with the new hypervisor as well. If the same management tool can handle both hypervisors, this can significantly simplify the management process (see Figure 2).
The new hypervisor’s compatibility list is the place to start researching that information, but there is no substitute for hands-on testing in a lab environment. Once initial testing at the server level has been completed, expand the testing scope to verify network and storage compatibility under the application’s workload.
Navigating the learning curve
Testing a second virtualization hypervisor can provide an opportunity for IT staff to become proficient at installing and managing the new hypervisor. There are bound to be similarities, but understanding the differences and nuances among hypervisors can eliminate confusion and streamline the deployment. Some organizations may choose to certify administrators in the new hypervisor, but personnel who are already fluent with an existing hypervisor can usually achieve proficiency with some hands-on experience.
In the early days of virtualization, hypervisor installations were usually ad hoc, with IT shops deploying the platforms on various servers on an as-needed basis. Today, the rollout process is far more formal in most cases, largely because a hypervisor is already in use, which makes deciding where to install a second one particularly important.
Another factor is that organizations seriously pursuing a second virtualization hypervisor installation are usually large enough to require a level of formality and documentation in new platform deployments. “It’s definitely coming from higher-level initiatives around the dynamic data center and private and public clouds,” said Gary Chen, research manager for enterprise virtualization software at IDC. “We’re going to go through a vetting process step by step.”
When is the right time?
Knowing when to switch from one hypervisor to another can be nebulous. There is rarely a single trigger that precipitates a switch to the new platform. The “right time” to deploy a new hypervisor will depend on the unique needs and situation of each organization. For example, if cost is a primary driver, the time to switch may simply be the point at which administrators feel comfortable deploying and managing the new platform.
In other cases, the trigger may be the need for new features or services that the old hypervisor just doesn’t provide. Or maybe there’s a service-level agreement requirement that a new hypervisor can help resolve. What’s important is to understand the needs that are driving a second hypervisor and consider the timing or other situations that will push the new hypervisor into production.
Today, hypervisors are typically well developed, stable and reliable software platforms, but it’s important to remember that there are risks in moving to a new virtualization hypervisor. The biggest risk is organizational resistance. For example, suppose management forces IT to adopt a hypervisor that is radically different from the previous platform. If there isn’t enough time for administrators to test and master the new hypervisor or if IT gives up features and capabilities that the old hypervisor provided, there may be a backlash that undermines the expected benefits.
Another risk is the potential for isolated silos to develop—for example, when the Windows side of the organization adopts one hypervisor and the Linux side adopts another. Organizations that manage the hypervisors separately often fail to optimize the use of technology and IT expertise. “Today people are really looking at moving toward a private cloud where things are abstracted and IT delivers a service,” Chen said. “Having isolated pools is counter to that.”