Top six server load balancing gotchas

Virtual machine configurations can affect server load balancing. If you want resources balanced among your hosts, consider disk drives, affinity, storage and other factors.

Server load balancing is essential to keep resources properly distributed in a virtual infrastructure. If you're expanding that infrastructure into a private cloud -- a highly automated environment -- virtual machine load balancing becomes even more critical.

With any virtualization platform, a private cloud requires virtual machines (VMs) that can live-migrate anywhere to balance resource loads. The most common load-balancing services are Microsoft System Center Virtual Machine Manager's Performance and Resource Optimization feature and VMware's Distributed Resource Scheduler (DRS).

Most virtualization administrators already rely on some degree of server load balancing in their infrastructure, so you're probably closer to private cloud computing than you may think.

But when server load balancing doesn't work correctly, a virtual infrastructure can suffer from painful performance problems. The following six VM configurations can cause server load balancing to fail in both private cloud and traditional virtual infrastructures.

Connected disk drives
Have you ever wondered why there's a checkbox with a Connected option next to the CD/DVD and floppy drives inside your VM configuration screen? It's rarely a good idea to select that box unless the drive holds data that you want quickly transferred to a VM.

But a connected drive creates a dependency between the VM and the host's physical device, which can in turn cause load balancing to fail. When you're not using these drives, disconnect them, or server loads may not be balanced.
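If you want to audit for stray connections, a minimal sketch follows. It assumes the open source pyVmomi SDK on the vSphere side, plus a placeholder vCenter hostname and credentials, and simply lists every VM whose virtual CD/DVD drive is still connected.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab-only: skips certificate checks
    si = SmartConnect(host='vcenter.example.com',  # placeholder vCenter
                      user='administrator@vsphere.local', pwd='changeme',
                      sslContext=ctx)
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)

    # Flag every VM whose virtual CD/DVD drive is still connected.
    for vm in view.view:
        if vm.config is None:
            continue
        for dev in vm.config.hardware.device:
            if isinstance(dev, vim.vm.device.VirtualCdrom) and dev.connectable.connected:
                print(f'{vm.name}: CD/DVD drive is still connected')

    view.Destroy()
    Disconnect(si)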

Affinity and anti-affinity
Affinity in the virtual world refers to how VMs can be configured to always (or never) colocate on the same virtual host. By configuring an anti-affinity rule, for example, you can prevent both of your domain controllers from residing on the same host, so a single host failure doesn't take them both down.

VMware and Microsoft allow you to configure VMs to follow (or not follow) one another as they live migrate. But these rules create dependencies between VMs that constrain server load balancing. My advice: Steer clear of affinity unless you absolutely need it.
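For the one case where a rule is usually worth it -- separating domain controllers -- a hedged pyVmomi sketch is below. The cluster and VM names are placeholders; once the rule is in place, DRS keeps the two VMs on different hosts.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host='vcenter.example.com', user='admin', pwd='changeme',
                      sslContext=ctx)
    content = si.RetrieveContent()

    def find_by_name(vimtype, name):
        # Return the first inventory object of the given type with this name.
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vimtype], True)
        obj = next(o for o in view.view if o.name == name)
        view.Destroy()
        return obj

    cluster = find_by_name(vim.ClusterComputeResource, 'Cluster01')  # placeholder
    dc1 = find_by_name(vim.VirtualMachine, 'DC01')                   # placeholder
    dc2 = find_by_name(vim.VirtualMachine, 'DC02')                   # placeholder

    # One anti-affinity rule: DRS keeps these two VMs on different hosts.
    rule = vim.cluster.AntiAffinityRuleSpec(
        name='separate-domain-controllers', enabled=True, vm=[dc1, dc2])
    spec = vim.cluster.ConfigSpecEx(
        rulesSpec=[vim.cluster.RuleSpec(info=rule, operation='add')])
    cluster.ReconfigureComputeResource_Task(spec, modify=True)

    Disconnect(si)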

Resource restrictions
Resource restrictions protect virtual machines from others that overuse resources. You can limit the resources that a VM is allowed to consume. You can also reserve a minimum quantity of resources that a VM must always have available. Both settings are great when resources are tight, but they also create dependencies that can cause server load balancing to fail -- or make it more difficult for a load-balancing service to do its job.
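To see how many of these dependencies you've already accumulated, a read-only pyVmomi sketch like this one (again with placeholder vCenter details) can report every VM that carries a CPU or memory limit or reservation.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host='vcenter.example.com', user='admin', pwd='changeme',
                      sslContext=ctx)
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)

    for vm in view.view:
        if vm.config is None:
            continue
        cpu, mem = vm.config.cpuAllocation, vm.config.memoryAllocation
        if cpu.limit != -1 or mem.limit != -1:  # -1 means unlimited
            print(f'{vm.name}: limit set (CPU {cpu.limit} MHz, memory {mem.limit} MB)')
        if cpu.reservation or mem.reservation:
            print(f'{vm.name}: reservation set '
                  f'(CPU {cpu.reservation} MHz, memory {mem.reservation} MB)')

    view.Destroy()
    Disconnect(si)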

Unnecessarily powerful VMs
This one's a rookie mistake. Most of us are used to the notion of nearly unlimited physical resources for Windows. It's been years since servers lacked the processing power or RAM to support a workload. The idea of "Just give it lots of RAM and plenty of processors" tends to seep into our virtual infrastructure as well.

The problem with this line of thinking is that unnecessarily powerful VMs consume resources they don't need. When a machine is assigned too many processors or too much RAM, potential target hosts lack the free capacity to accept its configuration. As a result, the machine can't migrate at all or is limited to the few targets that can accommodate it.

Start with one processor per virtual machine and as little RAM as possible, then work upward. That way your server load-balancing service can allocate resources only where they're most needed -- and none go to waste.
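An audit helps here, too. The pyVmomi sketch below flags VMs configured beyond an assumed baseline of one vCPU and 4 GB of RAM; the threshold is mine, so adjust it to your environment.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host='vcenter.example.com', user='admin', pwd='changeme',
                      sslContext=ctx)
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)

    # Arbitrary audit baseline: one vCPU and 4 GB of RAM. Adjust to taste.
    MAX_CPUS, MAX_MEM_MB = 1, 4096

    for vm in view.view:
        if vm.config is None:
            continue
        hw = vm.config.hardware
        if hw.numCPU > MAX_CPUS or hw.memoryMB > MAX_MEM_MB:
            print(f'{vm.name}: {hw.numCPU} vCPUs, {hw.memoryMB} MB RAM')

    view.Destroy()
    Disconnect(si)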

Unavailable storage at target host
I don't encounter this problem as frequently as I used to, but it is still out there. Remember that a cluster of virtual hosts represents potential targets for migrating virtual machines. For a VM to live-migrate to the target host, though, the machine's required storage must be available.

Most of us remember that it's necessary to have storage for VM files themselves, but we sometimes forget about the other storage requirements: Raw Device Mappings for a VMware virtual machine or pass-through drives for a Hyper-V machine. Storage connections are always on a per-host basis, which means that every host must be correctly masked and zoned so VMs can see their storage. If not, server load balancing suffers, because VMs and their resources can't migrate to the target host.
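Here's a pyVmomi sketch of that per-host check: for one cluster (the name is a placeholder), it confirms that every host can reach every datastore the cluster's VMs currently use. It covers ordinary datastores only; Raw Device Mappings and pass-through drives still need their own masking and zoning review.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host='vcenter.example.com', user='admin', pwd='changeme',
                      sslContext=ctx)
    content = si.RetrieveContent()

    # Find the cluster by name (placeholder).
    cl_view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in cl_view.view if c.name == 'Cluster01')
    cl_view.Destroy()

    # Every datastore any VM in the cluster currently uses.
    vm_view = content.viewManager.CreateContainerView(
        cluster, [vim.VirtualMachine], True)
    needed = {ds for vm in vm_view.view for ds in vm.datastore}
    vm_view.Destroy()

    # Each host must see all of them, or migrations to it will fail.
    for host in cluster.host:
        missing = needed - set(host.datastore)
        for ds in missing:
            print(f'{host.name} cannot see datastore {ds.name}')

    Disconnect(si)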

Disabling load balancing
You might laugh, but I see this problem more often than I care to admit. Some admins don't realize that VM load balancing is still considered an advanced capability. As a result, they haven't created a cluster in their vSphere data center or haven't enabled DRS.
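Verifying the vSphere side takes only a few lines. This pyVmomi sketch (placeholder vCenter details again) lists every cluster and whether DRS is actually enabled, along with its automation level.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host='vcenter.example.com', user='admin', pwd='changeme',
                      sslContext=ctx)
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)

    for cluster in view.view:
        drs = cluster.configuration.drsConfig
        print(f'{cluster.name}: DRS enabled={drs.enabled}, '
              f'automation level={drs.defaultVmBehavior}')

    view.Destroy()
    Disconnect(si)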

For a Hyper-V infrastructure, both System Center Virtual Machine Manager and System Center Operations Manager are required for automated server load balancing to work.
My final and somewhat tongue-in-cheek recommendation: If you intend to use server load balancing, don't forget to turn on the capability!

Greg Shields

About the expert
Greg Shields is an independent author, instructor, Microsoft MVP and IT consultant based in Denver. He is a co-founder of Concentrated Technology LLC and has nearly 15 years of experience in IT architecture and enterprise administration. Shields specializes in Microsoft administration, systems management and monitoring, and virtualization. He is the author of several books, including Windows Server 2008: What's New/What's Changed, available from Sapien Press.

This was first published in December 2010
