Virtualization administrators must strike a delicate balance between achieving the highest practical virtual machine density and ensuring that each virtual machine delivers an acceptable level of performance. That balance is not always easy to attain, but it is usually straightforward to determine why VM performance is suffering. Take a look at five of the most common causes of performance bottlenecks.
Hardware resource contention
You can trace the vast majority of VM performance problems to hardware resource contention, though hardware emulation can play a part as well.
When a VM experiences performance problems, you should first make sure that it is not using hardware emulation. Ideally, you should assign physical hardware resources to a VM; however, hypervisors such as Microsoft Hyper-V and VMware vSphere provide emulation features that offer support for older operating systems.
VMware and Hyper-V also offer a collection of services that allow the hypervisor to interact with guest operating systems. In VMware, this collection of services is known as VMware Tools, while Microsoft calls its version Hyper-V Integration Services. These services are not performance tools per se, but a VM's performance may suffer if it does not have them installed or if it runs the wrong version.
Resource contention-related performance problems often result from disk I/O complications. In my experience, issues have occurred when numerous VMs were configured to share a common storage array, but collectively, the virtual machines required a higher rate of disk I/O than the storage array could deliver.
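As a rough illustration of that kind of oversubscription, you can tally each VM's average IOPS demand against the array's rated capacity. This is a minimal sketch; the VM names and all of the IOPS figures are hypothetical examples, not measurements from any real environment.

```python
# Back-of-the-envelope check: does the combined IOPS demand of the VMs
# sharing one storage array exceed what the array can deliver?
# All VM names and figures below are hypothetical.

def storage_oversubscribed(vm_iops_demand, array_iops_capacity):
    """Return (total_demand, True if demand exceeds the array's capacity)."""
    total = sum(vm_iops_demand.values())
    return total, total > array_iops_capacity

# Example: four VMs sharing an array rated for roughly 5,000 IOPS.
demand = {"web01": 1200, "db01": 2500, "mail01": 1400, "file01": 600}
total, over = storage_oversubscribed(demand, array_iops_capacity=5000)
print(total, over)  # 5700 True -> collective demand exceeds the array
```

In practice you would feed this from real counters (such as disk transfers per second per VM) rather than static estimates, but the arithmetic is the same: when the total demanded I/O rate exceeds what the array can sustain, every VM on that array suffers.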
Reducing the storage I/O burden might mean purchasing a higher-performance storage array or limiting the number of VMs sharing the array. That might seem like a tall order, but the following two aspects of virtual server storage are easy to overlook.
Virtual server clustering
Production VMs are almost always part of a cluster. Both VMware and Microsoft used to require that cluster nodes be connected to a shared storage device. As such, you may be inclined to assume that the cluster's limitations are directly tied to the Cluster Shared Volume's limitations. Hyper-V clusters, however, can be attached to multiple Cluster Shared Volumes, which means a single storage array does not have to support the entire cluster. Windows Server 2012 Hyper-V completely eliminates the need for a Cluster Shared Volume, but Microsoft still recommends using one when possible.
The other easy-to-overlook part of virtual server storage is that you are not limited to a single host server cluster. VMware environments commonly have multiple clusters as a way to isolate workloads and reduce resource contention.
In addition to storage I/O, contention for memory, CPU cores and network bandwidth can also cause performance problems. Performance monitoring will help you determine the specific cause of the bottleneck.
Issues related to hardware emulation and resource contention are the most common causes of virtual server performance problems, but they are not the only ones. Simple configuration mistakes can also cause major slowdowns.
A few months ago, I encountered a virtualized Exchange 2010 mailbox server that was painfully slow to the point of being unusable. The virtual server took 10 to 20 seconds to respond to a simple mouse click.
In this case, the VM's virtual network adapter had accidentally been connected to the wrong virtual switch, which placed it on the wrong virtual network. Exchange Server was then unable to contact a domain controller. Exchange mailbox servers depend heavily on Active Directory, and losing access to it caused the performance problems.
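A quick way to rule out that kind of misconfiguration is to probe whether the VM can even open a TCP connection to a domain controller's LDAP port (389). This is a minimal sketch; the host name is a hypothetical placeholder you would replace with a real domain controller:

```python
# Reachability probe: can this server open a TCP connection to a domain
# controller's LDAP port (389)? A failed connection is consistent with the
# misplaced-virtual-switch scenario described above.
import socket

def can_reach_dc(host, port=389, timeout=3):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example usage (replace with a real domain controller name):
# can_reach_dc("dc01.example.com")
```

A successful TCP connection only proves basic network reachability, not healthy directory services, but a failure narrows the problem to networking rather than Exchange itself.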
If you monitor configurations and watch for hardware resource contention, then you'll avoid most VM performance problems.
This was first published in June 2013