Optimizing Hyper-V performance: Advanced fine-tuning

Hyper-V integration services and Microsoft's new synthetic network drivers require administrators to be proactive with hardware and network settings. In this tip, we outline how to fine-tune these settings to improve network and multicore processor performance.


When it comes to optimizing virtualization performance, there's a long list of best practices that can squeeze the most value out of your servers. Microsoft's virtualization platform, Hyper-V, is no exception.

In my previous tip on optimizing Hyper-V, I discussed the importance of understanding the requirements of applications and services in Hyper-V, as well as how to monitor virtual machines (VMs) and manage CPU resource allocations. In this tip, I'll share more methods for optimizing Hyper-V performance, with an emphasis on hardware and network fine-tuning.

Hyper-V Integration Services
Let's start with a simple, common-sense practice: Ensure that you use the latest version of Hyper-V's integration services. This simple setup program installs the latest available drivers for supported guest OSes (and some that are not officially supported). The result is improved performance when VMs make calls to hardware. Installing integration services should generally be the first step after installing a guest OS. Keep in mind that updated versions of integration services might be released between major releases of Hyper-V to improve performance.

Use synthetic network drivers
Hyper-V supports two types of virtual network drivers: emulated and synthetic. Emulated drivers provide the highest level of compatibility. Synthetic drivers are far more efficient because they use a dedicated VMBus to communicate between the virtual network interface card (NIC) and the root/parent partition's physical NIC. To verify which drivers a Windows guest OS is using, you can check Device Manager from within the guest.

You can change the type of network adapter by adjusting the properties of the VM. In some cases, the VM will need to be shut down or rebooted for the change to take effect. The payoff is usually worth it, though: If synthetic drivers are compatible with the guest OS, you'll likely see lower CPU utilization and lower network latency.

Increasing network capacity
Network performance is important for many types of applications and services. If you're running only one or a few VMs, you can often get by with a single physical NIC on the host server. But if many VMs compete for network resources, or if you need to segregate traffic at the physical layer for security, consider adding multiple gigabit Ethernet NICs to the host server. Some NICs support port teaming, which provides load balancing and/or automatic failover. NICs that support features such as TCP offloading can also improve performance by handling protocol overhead at the network interface level. Just be sure that this feature is enabled in the adapter's drivers in the root/parent partition.

Another key is, whenever possible, to segregate VMs onto separate virtual switches. Each virtual switch can be bound to a different physical NIC port on the host, allowing for compartmentalization of VMs for security and performance reasons. VLAN tagging can also be used to segregate traffic for different groups of VMs that use the same virtual switch.

Minimize OS overhead
A potential drawback of running a full operating system on virtual host servers is OS overhead. You can deploy Hyper-V on a minimal, slimmed-down version of Windows Server 2008 by using the Server Core installation option. This configuration lacks the standard local administrative tools, but it avoids a lot of OS overhead, lowers the security "surface area" of the server and removes many services and processes that might compete for resources. You'll need to use remote management tools from another Windows machine to manage Hyper-V, but the performance benefits often make it worth the effort.

Virtual CPUs and multiprocessor cores
Hyper-V supports up to four virtual CPUs for Windows Server 2008 guest OSes and up to two virtual CPUs for various other supported OSes. That raises the question: When should you use this feature? Many applications and services are designed to run in a single-threaded manner, which leads to the common sight of two CPUs on a server each running at 50% utilization while a single application is working at full tilt. At the level of both the guest OS and the hypervisor, spreading CPU calls across processor cores can be expensive and complicated. The bottom line is that you should assign multiple virtual CPUs only to VMs running applications and services that can actually take advantage of them.
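The arithmetic behind that 50% figure can be sketched in a few lines of Python. This is a simplified illustration (in reality the scheduler bounces the busy thread between cores, but the aggregate utilization works out the same): a single-threaded workload can saturate at most one core, so adding vCPUs only dilutes the reported utilization without making the application any faster.

```python
def total_utilization(busy_threads: int, vcpus: int) -> float:
    """Aggregate CPU utilization (percent) reported by a VM when each
    busy thread can saturate at most one virtual CPU."""
    return min(busy_threads, vcpus) / vcpus * 100

# One single-threaded, CPU-bound app on a 2-vCPU VM: the guest
# reports 50% overall utilization even though the app is maxed out.
print(total_utilization(1, 2))  # 50.0
print(total_utilization(1, 1))  # 100.0
print(total_utilization(2, 2))  # 100.0 -- a genuinely multithreaded workload
```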

Memory matters
A good rule of thumb is to allocate as much memory to a VM as you would for the same workload running on a physical machine, but that doesn't mean you should waste physical memory. If you have a good idea of how much RAM the guest OS and all of its applications and services require, start there. Then add a small amount of memory for virtualization-related overhead (an additional 64 MB is usually plenty).
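The sizing rule above is simple enough to capture as a small helper. This is an illustrative sketch, not a Microsoft-supplied formula; the function name and the 64 MB default cushion simply encode the rule of thumb from this tip:

```python
def recommended_vm_memory_mb(workload_mb: int, overhead_mb: int = 64) -> int:
    """Suggest a Hyper-V VM memory allocation, in MB.

    workload_mb: the RAM the guest OS plus its applications and
    services would need on a physical machine.
    overhead_mb: a small cushion for virtualization-related
    overhead (an extra 64 MB is usually plenty).
    """
    return workload_mb + overhead_mb

# A workload sized at 2048 MB on physical hardware:
print(recommended_vm_memory_mb(2048))  # 2112
```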

A lack of available memory can create numerous problems, such as excessive paging within a guest OS. Paging can be confusing to diagnose, because it might initially look like a disk I/O performance problem; the root cause is often that too little memory has been assigned to the VM. It's important to monitor the needs of your applications and services, which is most easily done from within a VM, before you make sweeping changes throughout a data center.

SCSI and disk performance
Disk I/O performance is a common bottleneck for many types of VMs. You can attach virtual hard disks (VHDs) to a Hyper-V VM using either virtual IDE (Integrated Drive Electronics) or virtual SCSI controllers. IDE controllers are the default because they provide the highest level of compatibility for a broad range of guest OSes, but SCSI controllers reduce CPU overhead, and a virtual SCSI bus can process multiple transactions simultaneously. If your workload is disk-intensive, consider using only virtual SCSI controllers if the guest OS supports that configuration. If that's not possible, add additional SCSI-attached VHDs (preferably stored on separate physical spindles or arrays on the host server).

Snapshot management
Hyper-V's snapshot architecture is easy to use and convenient: It doesn't require any initial setup, and a new snapshot is just a couple of mouse clicks away. But there's a downside to keeping too many snapshots. When you create a large hierarchy of snapshots, Hyper-V has to do extra work on every read operation: The hypervisor may have to check multiple physical disk files to find the latest version of the data, and this can create a lot of physical I/O overhead. The problem is compounded if you have many VMs, each with multiple snapshots. Read my tip on Hyper-V snapshots for more information.
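The read-overhead problem described above can be modeled as walking a chain of differencing disks. This Python sketch is a simplification (real snapshots track blocks in AVHD differencing files, not Python dictionaries), but it shows why deep chains hurt: a read must probe each layer, newest first, until it finds the requested data, and the layer count is the number of files the host may have to touch.

```python
def read_sector(chain, sector):
    """Resolve a read against a snapshot chain.

    chain: list of dicts mapping sector number -> data, ordered
    newest differencing layer first, base disk last.
    Returns (data, layers_probed): each probe of an earlier layer
    stands in for extra physical I/O against another disk file.
    """
    probed = 0
    for layer in chain:
        probed += 1
        if sector in layer:
            return layer[sector], probed
    raise KeyError(f"sector {sector} not found in any layer")

base = {0: "base-data", 1: "base-data"}   # original VHD
snap1 = {1: "snap1-data"}                 # sector 1 rewritten after snapshot 1
snap2 = {}                                # nothing changed since snapshot 2
chain = [snap2, snap1, base]

print(read_sector(chain, 1))  # ('snap1-data', 2)
print(read_sector(chain, 0))  # ('base-data', 3) -- all three layers probed
```

Deleting (merging) stale snapshots shortens the chain, which is exactly why cleanup restores read performance.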

The solution to this problem is fairly easy: Just as you might get rid of that blurry picture of you with red eyes in a dark room, delete any snapshots that are no longer needed for your guest VMs.

This list is admittedly far from complete, but hopefully the discussion will help increase the efficiency of your Hyper-V host servers. While it might take some time and effort to ensure that you're following these performance best practices, they'll quickly become second nature.

About the author: Anil Desai is a Microsoft MVP and a Microsoft Certified Professional with numerous credentials including MCITP, MCSE, MCSD, and MCDBA. He is the author or coauthor of nearly 20 technical books, including several study guides for Microsoft Certifications.
