Sizing the right hardware for high-availability Hyper-V clusters

Because Hyper-V cannot oversubscribe RAM, high-availability clusters need not just powerful servers but enough hosts, so that virtualized environments waste less capacity and can still fail over when a host goes down.

At a recent TechMentor conference, I presented a session on Hyper-V. An attendee asked, "Since Hyper-V is still relatively new, what are the best practices for the hardware I should buy?" That question was thought-provoking; so far, the advice I've heard has always been, "Buy the most powerful servers you can afford."

Now, that isn't the answer the attendee was looking for. So during the session, we discussed more specific answers. We discovered that while the "bigger is better" mantra makes sense for individual Hyper-V hosts that aren't clustered, the model changes dramatically when you add high availability and Windows failover clustering to the mix. This tip explores how to choose the right hardware for Hyper-V high-availability clusters while also minimizing wasted RAM.

Individual Hyper-V host sizing
First, some guidance for buying Hyper-V hosts: Yes, buying the most powerful hardware you can afford helps ensure that you get the highest number of virtual machines (VMs) per host. But that isn't necessarily the best approach, because Hyper-V tends to be constrained by one resource in particular: RAM.

In contrast, VMware's vSphere today enjoys memory page-table sharing and memory balloon driver features. The combination of these features allows more virtual RAM to be assigned and running than is actually available on a system. As a simple example, with these two features, 17 virtual machines of 1 GB each can run on a 16 GB server. Perhaps this isn't optimal for a high-performance production scenario, but it is absolutely helpful during a host failure.

Neither the first release of Hyper-V with Windows Server 2008 RTM (release to manufacturing) nor the second release with Windows Server 2008 R2 supports this capability. So a Hyper-V host cannot oversubscribe RAM assigned to VMs beyond the physical RAM that's installed in the box. As a result -- and again, this is a simplistic example -- if you have 16 GB of RAM, you'll never be able to power on a 17th 1 GB virtual machine. The management interface simply won't allow it.
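
To make the constraint concrete, here is a minimal Python sketch of the admission check that a no-oversubscription hypervisor effectively performs. The function name and numbers are illustrative only, not Microsoft's actual logic:

    HOST_RAM_GB = 16

    def can_power_on(running_vm_ram_gb, new_vm_ram_gb, host_ram_gb=HOST_RAM_GB):
        """Return True only if the new VM's assigned RAM fits within the
        host's remaining physical RAM (no oversubscription allowed)."""
        return sum(running_vm_ram_gb) + new_vm_ram_gb <= host_ram_gb

    running = [1] * 16                  # sixteen 1 GB VMs already running
    print(can_power_on(running, 1))     # False -- the 17th VM is refused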

High-availability Hyper-V hosts tend to be bound by RAM more than by any other resource. Servers with 16 GB of RAM will have enough processing power in their four- or eight-way processors to handle most well-managed VM workloads. Obviously, high-processor-use workloads such as big Exchange or SQL servers will yield a different result. But for virtual machines that are good virtualization candidates, RAM runs out on a Hyper-V host long before processing power does.

Thus, for individual Hyper-V hosts, purchase server hardware with as much RAM as you can afford. When it comes to RAM these days, the inflection point between additional capacity and price hovers around the 32 GB mark. Above that level, each jump to more RAM grows disproportionately expensive and stops being a good buy. But get as much memory as is practical, and you'll be satisfied with the results.

Clustering complications
But when you join multiple Hyper-V hosts into a Windows failover cluster, your purchasing decisions become more complex. And again, the problem involves Hyper-V's RAM oversubscription limitation.

In short, clustered Hyper-V instances must be architected so that the hosts in a cluster can support the loss of at least one node. Otherwise, virtual machines could go down completely when a host's motherboard dies. If a host in a cluster goes down, every virtual machine on that host must be failed over and restarted on one of the remaining hosts. Because of Hyper-V's RAM limitation, the remaining hosts must have enough residual, unused RAM that the lost host's virtual machines can power on.
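
As a back-of-the-envelope test of that requirement, the Python sketch below uses a greedy first-fit placement to ask whether a failed host's VMs could be re-homed into the survivors' free RAM. It is a simplified model under assumed inputs; a real plan would also reserve RAM for each host's parent partition:

    def can_absorb_failure(survivor_free_gb, failed_host_vms_gb):
        """First-fit-decreasing: place each displaced VM on some surviving
        host with enough free RAM. Returns False if any VM cannot fit."""
        free = list(survivor_free_gb)
        for vm in sorted(failed_host_vms_gb, reverse=True):  # biggest VMs first
            for i, slack in enumerate(free):
                if slack >= vm:
                    free[i] -= vm
                    break
            else:
                return False  # no surviving host has room for this VM
        return True

    # Two 16 GB hosts with eight 1 GB VMs each: the survivor has 8 GB free.
    print(can_absorb_failure([8], [1] * 8))    # True -- failover succeeds
    # Ten 1 GB VMs per host leaves the survivor only 6 GB free.
    print(can_absorb_failure([6], [1] * 10))   # False -- failover fails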

The best way to explain this is with an example. Let's look at three potential clusters, where each host is configured with four processors and 16 GB of RAM. Cluster one is made up of two hosts, cluster two has four hosts, and cluster three contains six hosts.

In this example, assume that you've planned for complete failover capability with Hyper-V: the remaining hosts in the cluster must be able to successfully power on virtual machines after a host is lost. As a disclaimer, in each of these examples I recognize that some RAM needs to be reserved for host processing, and that VMs are often configured with more than 1 GB of RAM. But I'm using round numbers to make the math easy while illustrating my point.

In cluster one, only two hosts are available. This means that the maximum number of virtual machines that can be hosted by this cluster is 16, at 1 GB apiece. With two hosts that each support 16 GB of RAM, I have an effective waste percentage of 50%. Exactly half my cluster capacity must sit waiting for one of the cluster nodes to die and VMs to migrate over to a functioning host. This is true whether I host all 16 VMs on one host or evenly spread them -- eight and eight -- between the two nodes of the cluster. I must do so because I need sufficient capacity to support re-homing those virtual machines if a host is lost. That's a lot of waste.

In cluster two, four hosts are available. With four hosts, I have more locations where virtual machines could be re-homed in the case of a host loss. Specifically, I can support up to 48 VMs of 1 GB each, with 16 GB of RAM reserved across the entire cluster to support the loss of a single host. Whether I home all 48 VMs on three servers, leaving one completely empty, or balance them across the cluster, my percentage of waste is 25%. Our waste percentages are improving.

Cluster three increases the number of hosts to six, which further decreases the waste percentage. Across six hosts, I now can support 80 VMs of 1 GB each, again leaving 16 GB of RAM reserved for a loss. In this cluster, I have reduced my overall waste percentage to just 17%. Still not great, but minimal in comparison with the others.
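
The pattern generalizes: with N identical hosts and one host's worth of RAM held in reserve, the waste fraction is simply 1/N. A short Python sketch, under the same round-number assumptions as the examples above, reproduces all three results:

    def cluster_capacity(hosts, ram_per_host_gb, vm_ram_gb=1):
        """VM slots and waste % when one full host's RAM is held in reserve."""
        total_gb = hosts * ram_per_host_gb
        usable_gb = total_gb - ram_per_host_gb   # reserve one host's worth
        return usable_gb // vm_ram_gb, 100 * ram_per_host_gb / total_gb

    for n in (2, 4, 6):
        vms, waste = cluster_capacity(n, 16)
        print(f"{n} hosts: {vms} VMs of 1 GB, {waste:.0f}% waste")
    # 2 hosts: 16 VMs of 1 GB, 50% waste
    # 4 hosts: 48 VMs of 1 GB, 25% waste
    # 6 hosts: 80 VMs of 1 GB, 17% waste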

When it comes to Hyper-V clusters, count matters as much as size: the more hosts in a cluster, the smaller the share of RAM that must sit idle in reserve. All of these calculations are necessary because Hyper-V does not yet support RAM oversubscription. Microsoft will not support this feature set in Windows Server 2008 R2, nor has the company predicted when the capability will arrive, although this independent author believes we'll see it very soon.

So my guidance regarding Hyper-V hosts in a cluster is not only to buy as much hardware as you can but also to buy as many hosts as you can. If you can trade a few beefier systems for a larger number of more modest ones, you'll waste less RAM.

About the author

Greg Shields
Greg Shields is an independent author, instructor, Microsoft MVP and IT consultant based in Denver. He is a co-founder of Concentrated Technology LLC and has nearly 15 years of experience in IT architecture and enterprise administration. Shields specializes in Microsoft administration, systems management and monitoring, and virtualization. He is the author of several books, including Windows Server 2008: What's New/What's Changed, available from Sapien Press.


This was first published in August 2009
