Is now the time for a multiple-hypervisor approach?
If one is good, then more must be better
Beware the hidden OpEx costs
Jason Helmick: I've been told that any consultant who recommends implementing multiple hypervisors as a solution should be pummeled with solid-state drives, but inside the IT shop, this is exactly what's happening -- and for a good economic reason. The downfall for IT pros is not recognizing this and refusing to learn how to manage a multi-hypervisor environment.
We all remember implementing physical server hardware in the past and the economic decision we had to make. It went something like this:
As IT pros, we all wanted the finest hardware and fastest redundant storage, but that doesn't make sense for every scenario. In fact, placing a rarely used Microsoft Access database on top-end hardware was somewhat of an embarrassment -- like Beluga caviar on top of Cheese Whiz. It was a waste of valuable resources and left a bad taste in your mouth.
So why should this be any different today? IT pros gladly consolidated their physical servers onto a virtualization platform to improve manageability while the company enjoyed the cost savings. VMware vSphere is clearly the current front-runner in virtualization, but many companies -- especially smaller ones -- find the expense of standardizing on vSphere much too high. IT pros have been sneaking in lower-cost hypervisors on cheaper hardware for those lower-tier applications.
The downfall for the IT pro is failing to understand that this is a growing trend rather than a mistake, and the key to surviving this heterogeneous virtualization world is understanding the different platforms and how to manage them cohesively.
Effectively managing a heterogeneous hypervisor shop requires both an understanding of the virtualization technologies and a good set of cross-platform management tools. Don't be the IT pro who gets labeled a one-trick pony in a circus of virtualization platforms.
Evaluate the cross-platform management capabilities built into Microsoft System Center Virtual Machine Manager and VMware vCenter, along with offerings from third-party companies such as HotLink. Consider how you can enhance management and automation with PowerShell, a tool that is also supported across multiple virtualization platforms.
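To make that concrete, here is a minimal PowerShell sketch of cross-platform management: pulling a single VM inventory from both a Hyper-V host and a vCenter server. It assumes the Hyper-V PowerShell module and VMware PowerCLI are installed, and the server names ('hv01', 'vc01') are placeholders for your own environment -- treat this as a starting point, not a finished tool.

```powershell
# A minimal sketch, assuming the Hyper-V module and VMware PowerCLI are installed.
# 'hv01' and 'vc01' are hypothetical server names -- substitute your own.

Import-Module Hyper-V
Import-Module VMware.VimAutomation.Core

# Hyper-V: query the host directly (module-qualified, because both modules
# export a cmdlet named Get-VM)
$hyperVms = Hyper-V\Get-VM -ComputerName 'hv01' |
    Select-Object @{n='Platform';e={'Hyper-V'}}, Name, @{n='State';e={$_.State}}

# vSphere: connect to vCenter first, then query with the PowerCLI Get-VM
Connect-VIServer -Server 'vc01' | Out-Null
$vmwareVms = VMware.VimAutomation.Core\Get-VM |
    Select-Object @{n='Platform';e={'VMware'}}, Name, @{n='State';e={$_.PowerState}}

# One combined view across both hypervisors
$hyperVms + $vmwareVms | Sort-Object Platform, Name | Format-Table -AutoSize
```

The point of the module-qualified calls is exactly the cross-platform skill gap described above: the same verb-noun command exists on both platforms, but the objects it returns differ, so a script that manages both has to know each platform's own model.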
The challenge of understanding and managing a multiple-hypervisor environment is not insurmountable. Choose your weapon or collection of weapons and embrace the reality of your environment. Don't be a one-trick pony -- your company can't afford it, and it's boring.
Christian Mohn: For large enterprises, deploying multiple hypervisors might make sense from a pure capital expenditure (Capex) standpoint, since the arguments for implementing two or more hypervisors seem to be largely related to licensing and support contract costs. Running lighter and less critical workloads on less expensive, or even free, hypervisors makes sense from that standpoint, but the waters get very muddy when you start to account for the operating expenditure (Opex) costs related to those implementations. Midmarket or enterprise customers might already have the in-house IT skills available to pull it off, but as far as I can see, the reality is that the Opex costs far outweigh the Capex savings when deploying multiple hypervisors.
Even when implementing a multi-hypervisor management tool, you still need someone who really knows the underlying infrastructure, which a bolted-on-top management application does not give you.
Managing and deploying two hypervisors, regardless of which two, requires two separate skill sets, and those come at a price -- a price that is often higher than the cost savings. Even in a test-and-development environment, using multiple hypervisors comes at a price. What do you do when you test on one hypervisor and the application or workload behaves differently when you move it into production? If you want a real test-and-development model, you need to test on the actual hypervisor you would use in production and avoid introducing any added complexity and configuration issues. Testing should be done on identical systems -- identical down to just about every single bit in the system.
For small enterprises, deploying multiple hypervisors is probably an even worse idea. Small businesses seldom have IT teams big enough to handle the complexities of running a multi-hypervisor infrastructure, let alone the free time to actually learn the required skills.
I don't see many use cases where multi-hypervisor deployments, in production or in test-and-development environments, make sense right now. Sure, deploy a secondary hypervisor and play around to get acquainted with it, but for anything production-related, I would still advise almost everyone to stick to one for the time being. Even if some people talk about virtualization and hypervisors as commodities, the reality is that we're still quite far away from that.
Standardizing on a single hypervisor reduces complexity and management headaches. Don't just look at the initial cost; look at the long-term costs involved with running it in production. Everything needs management, even if it is initially free to fire up.
This was first published in July 2013