
Four lesser-known mistakes that can kill Hyper-V environments

With each Hyper-V release, virtualization best practices evolve. Here are four lesser-known mistakes that can kill virtual machine performance in Hyper-V environments.

My article "Four mistakes that can kill VM performance" seems to have struck a chord with readers. Shortly after it was published, several readers visited my website's question-and-answer forum to offer up other common mistakes in Hyper-V environments.

While no less important, these four gaffes are probably less obvious to the casual observer. Here we review some lesser-known mistakes that can kill Hyper-V virtual machine (VM) performance.

Easy mistake one: Configuring Failback on Hyper-V virtual machines (VMs) 

Hyper-V's reliance on Windows Failover Clustering allows VMs to migrate from one host to another when a failure occurs. This traditional clustering support also enables VMs to remain running when you have to patch and reboot the host -- a task that can arise frequently.

While Windows Failover Clustering is a great tool for creating a failover framework, it was originally designed as a general-purpose clustering utility. Because of this, many of its management functions were designed for other workloads, and some of its resource settings don't make sense for a Hyper-V environment.

While one such function, failback, isn't a bad thing, it should be used with caution. When a failure occurs, failover relocates a VM's processing to a new host; failback then returns the VM to its original host once the issue is resolved.

Problems occur when failures happen repeatedly. If the original host flaps between alternating failure and success states, the cluster attempts to move VMs back and forth with each state change. This creates a condition known as "bounce" that can lead to a full failure of the VM resource.

In Hyper-V, failback is disabled by default for all cluster resources, and initially it's a good idea to leave it that way. If you do want to enable this feature, delay failback by a couple of hours. This limits the opportunities for bounce to occur in your virtual environment.
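That delay is usually expressed as a failback window: a range of hours during which the cluster is permitted to return a group to its preferred host. As a rough illustration of the logic only -- not code that talks to the cluster API, and with purely hypothetical hour values -- a minimal Python sketch might look like this:

    from datetime import datetime

    def failback_allowed(now, window_start, window_end):
        """Return True if failback may occur at 'now'.
        window_start / window_end are hours of the day (0-23), in the spirit of a
        cluster group's failback window; -1 for either value means no delay at all."""
        if window_start == -1 or window_end == -1:
            return True                        # no window configured: fail back immediately
        hour = now.hour
        if window_start <= window_end:         # window falls within one day, e.g. 01:00-04:00
            return window_start <= hour < window_end
        return hour >= window_start or hour < window_end   # window wraps past midnight

    # Only allow failback between 1 a.m. and 4 a.m., well outside business hours
    print(failback_allowed(datetime(2009, 10, 1, 14, 0), 1, 4))  # False -- hold the VM where it is
    print(failback_allowed(datetime(2009, 10, 1, 2, 30), 1, 4))  # True  -- quiet hours, move it back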

Easy mistake two: Juggling RAM availability

If you're in the process of architecting a clustered Hyper-V instance, you should read "Sizing the Right Hardware for High-Availability Hyper-V Clusters."

Unlike other virtualization platforms, Hyper-V does not come equipped with memory overcommit capabilities. As a result, you can never power on more VMs than the host's physical RAM can accommodate. Trying to power on even one more VM after the available RAM has been committed produces an error message instead of a successful boot.

This isn't a major problem in single-server scenarios, because a system administrator is usually the one powering on VMs and will see the error right away. In clustered situations with automated failover goodies, however, it can be a huge hindrance. It is therefore important to architect Hyper-V clusters with enough available RAM that a failed host's resources can successfully fail over to the surviving cluster nodes. For a two-node cluster, for example, this means half your total RAM must stay unused. In a four-node configuration, a quarter must remain available, and so on.

Obviously, this means that clusters with larger node counts are preferable. A cluster with 16 nodes, for instance, needs only 6.25% of the total RAM in reserve for a potential node failure. That's a much better buy than a two-node cluster, which must waste 50% of its total RAM in reserve.
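To put numbers on that sizing rule, here's a minimal Python sketch of the arithmetic (the 64 GB-per-node figure is just an illustrative assumption):

    def cluster_ram_budget(node_count, ram_per_node_gb, failures_to_tolerate=1):
        """Return (total, reserved, usable) RAM in GB for an N-node Hyper-V cluster.
        To survive the given number of node failures, the cluster must keep that many
        nodes' worth of RAM unused so survivors can absorb the failed-over VMs."""
        total = node_count * ram_per_node_gb
        reserved = failures_to_tolerate * ram_per_node_gb
        return total, reserved, total - reserved

    for nodes in (2, 4, 16):
        total, reserved, usable = cluster_ram_budget(nodes, ram_per_node_gb=64)
        print(f"{nodes:>2} nodes: {reserved / total:.2%} of {total:.0f} GB reserved, "
              f"{usable:.0f} GB usable for VMs")
    # 2 nodes: 50.00% reserved; 4 nodes: 25.00%; 16 nodes: 6.25% -- matching the figures above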

Easy mistake three: Lack of backup options for Cluster Shared Volumes (CSV)

In Windows Server 2008 R2, Microsoft implemented a new feature called Cluster Shared Volumes. This feature layers atop the existing NT file system to provide cluster awareness for Hyper-V VMs. You might know it best as the "R2 feature that makes it feasible to run multiple VMs on a single LUN across clustered servers." Whew.

This useful feature eliminates the restriction that forced cluster disk resources to fail over as a complete unit, enabling an individual VM to migrate within a logical unit number (LUN) when necessary.

CSV poses a hidden challenge, though: Few backup products support host-based backups of VMs stored on CSV. Today, even Microsoft's Windows Server Backup doesn't support it. There are products in the works, however, such as Data Protection Manager 2010, which is still in beta.

Easy mistake four: Assigning too many virtual processors

Nowadays, it is common to buy servers with more than one physical processor. Having additional CPUs allows multithreaded applications to load-balance workloads across several processors. Also, when processor utilization spikes for a single-threaded application, other programs can run on the remaining CPUs. That's why virtually every server today runs with at least two processors and sometimes four or more. Today's data center workloads demand extra processors for performance as well as availability.

In Hyper-V's virtual world, however, the assignment of virtual processors works quite differently. With Hyper-V VMs, the best practice is to assign only a single virtual processor to a VM at first, adding more only when necessary.

While adding virtual processors might seem like a good idea, it can reduce performance because of scheduling conflicts. When multiple virtual processors are configured for a virtual machine, those processors must be scheduled onto their physical counterparts at roughly the same time. If two virtual processors are attached to a VM, the hypervisor's scheduler must wait for two physical processors to become available before it can schedule both virtual processors. With multiple VMs vying for attention, each request can take longer than usual. The problem worsens when the virtual processors configured across all colocated VMs outnumber the physical processors (you can find more details on why this occurs on the Microsoft Developer Network under the Measuring Processor Performance heading).

Always remember that the hypervisor schedules virtual processor workloads across all of the server's available physical processors. As a result, a virtual processor can and will make use of whatever physical processors are available.
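To see why wide VMs spend more time waiting, here is a deliberately simplified Python model of that contention. It follows the article's simplification that all of a VM's virtual processors must be dispatched at the same time -- the real scheduler is more nuanced -- and the core and vCPU counts are purely illustrative:

    def scheduling_rounds(physical_cores, vm_vcpu_counts):
        """Count scheduling passes needed to give every VM one turn, assuming all of a
        VM's virtual processors must land on idle physical cores simultaneously."""
        rounds, free = 1, physical_cores
        for vcpus in sorted(vm_vcpu_counts, reverse=True):
            if vcpus > physical_cores:
                raise ValueError("a VM cannot have more vCPUs than the host has cores")
            if vcpus > free:        # not enough idle cores -- the VM waits for the next pass
                rounds += 1
                free = physical_cores
            free -= vcpus
        return rounds

    # The same total of 12 virtual processors on a 4-core host, packed two different ways:
    print(scheduling_rounds(4, [1] * 12))  # 3 passes: single-vCPU VMs slot into any free core
    print(scheduling_rounds(4, [3] * 4))   # 4 passes: each 3-vCPU VM waits for 3 idle cores at once

Even in this toy model, the VMs with more virtual processors leave cores idle while they wait for enough processors to free up at once, which is exactly the scheduling pressure that makes "start with one vCPU" the safer default.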

Greg Shields is an independent author, instructor, Microsoft MVP and IT consultant based in Denver. He is a co-founder of Concentrated Technology LLC and has nearly 15 years of experience in IT architecture and enterprise administration. Shields specializes in Microsoft administration, systems management and monitoring, and virtualization. He is the author of several books, including Windows Server 2008: What's New/What's Changed, available from Sapien Press.

This was last published in October 2009
