If you compare today’s failure-resistant data center equipment and software with your company’s actual tolerance for downtime, you may find instances in which virtualization high availability is not necessary.
Many IT organizations that leaped on the virtualization bandwagon also embraced high-availability technologies such as VMware High Availability, which automatically restarts a VM on a surviving host after a failure, and vMotion and Distributed Resource Scheduler (DRS), which live migrate virtual machines to other hosts when performance degrades or load becomes unbalanced. The rationale stems directly from the increased potential for loss that comes with virtualization consolidation. A single host hardware failure, for example, could take down numerous virtual machines (VMs) and their workloads.
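For illustration only, here is a minimal Python sketch of the kind of decision an HA facility makes when a host fails: restart each affected VM on whichever surviving host has enough spare capacity. The classes and placement rule are hypothetical and simplified, not VMware's actual API or admission-control logic.

```python
from dataclasses import dataclass, field

@dataclass
class VM:
    name: str
    memory_gb: int

@dataclass
class Host:
    name: str
    capacity_gb: int
    vms: list = field(default_factory=list)

    def free_gb(self) -> int:
        # Spare memory left on this host after its current VMs.
        return self.capacity_gb - sum(vm.memory_gb for vm in self.vms)

def restart_after_host_failure(failed: Host, survivors: list) -> None:
    # Place each VM from the failed host on the survivor with the most headroom.
    for vm in failed.vms:
        target = max(survivors, key=lambda h: h.free_gb())
        if target.free_gb() >= vm.memory_gb:
            target.vms.append(vm)  # the VM is powered back on here
            print(f"{vm.name}: restarted on {target.name}")
        else:
            print(f"{vm.name}: no spare capacity, stays down")

# Hypothetical example: one failed host, two survivors.
failed = Host("esx-01", 64, [VM("web-01", 8), VM("db-01", 32)])
survivors = [Host("esx-02", 64, [VM("app-01", 16)]), Host("esx-03", 64)]
restart_after_host_failure(failed, survivors)
```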
This adoption of virtualization and high-availability technologies led to more storage area networks (SANs) and networking infrastructure as well as bigger clusters to hedge against downtime. But the equipment is expensive – especially if high-availability technologies are not necessary or will make only a marginal difference to the bottom line.
Here are five situations in which virtualization high-availability technologies may be unnecessary and not worth the expense.
1. Servers that don’t serve end users
Every data center has servers that do not directly touch end users, such as test or evaluation machines, or servers that simply store documents and databases for IT staff.
They might seem highly critical in the eyes of IT, but they don't directly affect business operations. So does virtualization high-availability protection really make sense for these machines?
2. Servers that end users won’t notice
At least once in our careers, we've all shut down that one server whose function no one is quite sure of, just to see who complains. Often, no one does. These low-criticality servers may not need virtualization high availability.
3. Servers that have a greater recovery time objective than their restore time
Good consultants can always tell which clients are technologically mature. Ask the immature ones how long their servers can go down, and they’ll give you a number. Ask the mature ones the same question, and you’ll find that their answer is far more complicated. Mature IT personnel understand that the server’s workload matters, not the server itself.
And servers with a long, well-understood recovery time objective (RTO) may not require high-availability technologies: if a routine restore from backup finishes comfortably within the RTO, automated failover buys very little.
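One way to make that comparison concrete is to line up each workload's RTO against its measured restore-from-backup time. The short Python sketch below does exactly that; the workload names and numbers are purely illustrative assumptions.

```python
def ha_worth_considering(rto_minutes: int, restore_minutes: int) -> bool:
    """HA only pays off when a plain rebuild/restore cannot meet the RTO."""
    return restore_minutes > rto_minutes

workloads = {
    # name: (RTO in minutes, measured restore-from-backup time in minutes)
    "payroll-db":    (30, 240),    # restore blows past the RTO: HA candidate
    "internal-wiki": (480, 90),    # an eight-hour RTO is met by a plain restore
    "build-server":  (1440, 180),  # next-business-day is fine: skip HA
}

for name, (rto, restore) in workloads.items():
    verdict = "consider HA" if ha_worth_considering(rto, restore) else "HA likely unnecessary"
    print(f"{name}: RTO {rto} min, restore {restore} min -> {verdict}")
```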
4. Highly available servers
Savvy IT administrators also understand that high availability doesn't come only from the hypervisor. Organizations with mature environments build it into multiple layers of the data center: redundant storage and networking, plus clustering within the guest VMs themselves, so that services stay up even when an individual server goes down. With that protection in place, they can focus on keeping the service available rather than any single server, and hypervisor-level high availability adds little.
5. Virtual desktops
Some virtual servers are actually virtual desktops. While there's nothing wrong with hosting these workloads in a highly available cluster, that decision often doesn't net you any benefit. This is particularly true when virtual desktops are configured in pools and users are simply handed the next available machine: when a virtual desktop comes back online after an HA event, it will probably be assigned to a different user anyway.
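A minimal sketch of that floating-pool behavior, with hypothetical desktop and user names: a desktop recovered by HA simply rejoins the pool and goes to whoever connects next, not necessarily to the user who lost it.

```python
import random

# A floating desktop pool: whoever connects next gets any free desktop.
available = ["vdi-01", "vdi-02", "vdi-03"]
assignments = {}  # user -> desktop currently in use

def connect(user: str) -> str:
    desktop = available.pop(random.randrange(len(available)))
    assignments[user] = desktop
    return desktop

def ha_restart(user: str) -> None:
    # The user's desktop failed and HA restarted it; it simply rejoins the
    # pool rather than being handed back to the same user.
    available.append(assignments.pop(user))

print("alice ->", connect("alice"))
ha_restart("alice")                 # alice's desktop comes back... into the pool
print("bob   ->", connect("bob"))   # bob may well receive the recovered machine
```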
Virtualization high availability is an incredible technology, and it greatly reduces the issues associated with hardware failures and software glitches. It can decrease recovery time from “how long does it take to drive into the office on a Saturday and fix the machine” to a matter of minutes.
But that capability also comes at a cost. So pay attention to where you genuinely need virtual machine high availability, and to the situations in which you may not need that functionality at all.