Configuration tweaks to boost Hyper-V networking performance

Hyper-V networking changes can benefit workload performance, but only when they're applied in a careful and systematic manner.

Virtualization and consolidation focus on core computing resources like processors, memory and storage, but the role of available network I/O is often overlooked. Network bandwidth and device configuration are also important in ensuring that client/server workloads operate efficiently -- especially with the proliferation of network-related technologies appearing in modern servers and network adapters. Let's examine some of the tactics that can boost Hyper-V networking performance.

Choose the correct networking technology

Most networks rely on the Dynamic Host Configuration Protocol (DHCP) to automatically assign IP addresses to network clients on a subnet, and DHCP depends on an available DHCP server. In a traditional network, the loss of a DHCP server will disable automatic IP addressing for new devices, and existing devices will lose connectivity once their current IP leases expire. Automatic Private IP Addressing (APIPA) allows DHCP clients to self-assign an IP address and subnet mask even when a DHCP server is unavailable. By default, APIPA uses the reserved IP address range from 169.254.0.1 through 169.254.255.254 with a subnet mask of 255.255.0.0. APIPA normally checks for a DHCP server every few minutes and hands control back to DHCP when one becomes available. Generally speaking, APIPA is intended for small organizations with just a few clients. APIPA addresses are not routable and aren't registered with the Domain Name System, so the feature normally should be disabled on Windows Server platforms running Hyper-V. Enterprise-class data centers will operate redundant DHCP servers to ensure continued operation.
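
One common way to switch APIPA off on a Hyper-V host is through the TCP/IP parameters registry key. The following is a minimal sketch, assuming the standard IPAutoconfigurationEnabled value is honored by the host's network stack and that a reboot follows for the change to take effect:

# Disable APIPA for all interfaces on this host (assumption: the global
# IPAutoconfigurationEnabled value; a reboot is required for it to apply)
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters' `
    -Name 'IPAutoconfigurationEnabled' -Value 0 -Type DWord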

Virtual machine queue (VMQ) is a Windows Server networking feature that relies on NIC hardware support (such as Intel's Virtual Machine Device Queues) to transfer incoming frames directly into the receive buffers of the destination VM using direct memory access. This reduces the dependence on driver-based traffic exchanges and improves the transfer of common network traffic types (including TCP/IP, iSCSI and Fibre Channel over Ethernet) to a virtualized host system. Part of this improvement is due to the fact that different processors can process packets for different VMs -- rather than one processor handling all of the network data exchanges. In most cases, VMQ should be enabled if available on the NIC, and the adapter should also be bound to an external virtual switch for best results.
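
The built-in NetAdapter cmdlets offer a quick way to confirm VMQ support and enable it. The sketch below assumes a physical adapter named "NIC1" and a virtual switch named "External"; both names are placeholders for this example:

# Check which adapters support VMQ and whether it is currently enabled
Get-NetAdapterVmq

# Enable VMQ on the physical adapter (placeholder name "NIC1")
Enable-NetAdapterVmq -Name "NIC1"

# Bind the adapter to an external virtual switch so guest traffic can use it
New-VMSwitch -Name "External" -NetAdapterName "NIC1" -AllowManagementOS $true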

TCP offload engine (TOE) is designed to improve network performance by implementing the entire TCP/IP stack in hardware rather than in drivers or software. This reduces the amount of processing effort needed to prepare, form, transmit, receive, unpack and collect packets of network data. Offloading also reduces Peripheral Component Interconnect, or PCI, bus traffic, which can be inefficient for moving small bursts of network data to and from the host system. TCP chimney offload is similar, but allows control to remain with the OS while leaving the actual grunt work of data exchanges to the NIC. In general, offload features can be enabled on a virtualized system, though offload hardware may not be supported by software-based NIC teaming. If the virtualized server uses NIC teaming, disable offload features. Otherwise it is usually acceptable to leave offloads enabled.
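
The host's global offload state can be checked and adjusted from PowerShell. The sketch below assumes Windows Server 2012 or later and an adapter named "NIC1" (a placeholder); per-adapter offload property names vary by NIC vendor and driver:

# Inspect the host's global offload settings (chimney, RSC, NetworkDirect and so on)
Get-NetOffloadGlobalSetting

# If this host uses software NIC teaming, disable TCP chimney offload
Set-NetOffloadGlobalSetting -Chimney Disabled

# Review per-adapter offload properties (display names vary by vendor)
Get-NetAdapterAdvancedProperty -Name "NIC1" | Where-Object DisplayName -like "*Offload*"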

Another popular tweak for virtualized servers is to enable jumbo frames for networks that carry Cluster Shared Volumes, iSCSI and Live Migration traffic. Jumbo frames carry roughly 9,000 bytes of data payload per frame (often expressed as a 9,014-byte frame size once the Ethernet header is counted) rather than the standard 1,500 bytes. By moving more data in each packet, a file can be exchanged in fewer packets, requiring less work from the NIC and the host system. However, it also means that every network element between both ends -- NICs, switches and SANs -- must be configured for the same jumbo frame size.
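
A rough sketch of enabling and verifying jumbo frames follows; it assumes an adapter named "NIC1", a driver that exposes the setting as "Jumbo Packet" (display names and accepted values differ by vendor) and a placeholder target address of 192.168.1.20:

# Set the jumbo frame size on the adapter (property name and value vary by vendor)
Set-NetAdapterAdvancedProperty -Name "NIC1" -DisplayName "Jumbo Packet" -DisplayValue "9014 Bytes"

# Verify the path end to end: 8,972-byte payload + 28 bytes of IP/ICMP headers = 9,000
ping -f -l 8972 192.168.1.20

If any device along the path drops the oversized, don't-fragment ping, that element still needs its jumbo frame setting adjusted.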

When (and when not) to update NIC firmware and drivers

Computing devices are typically built as a stack: hardware (the chips and connections) sits at the bottom, firmware (such as the BIOS) initializes and configures that hardware, and drivers connect the hardware to the OS. Bugs and cumbersome coding in the firmware or drivers can therefore lead to performance problems and errors. This happens more often than you might imagine, and the fix is almost always to update the firmware and drivers.

However, the interrelationships among hardware, firmware, drivers and operating systems can be fragile and error-prone. Updates sometimes introduce unexpected new problems or bugs, so firmware and driver updates can potentially cause more problems than they solve. Therefore, updates (assuming they are prepared and available in the first place) should not be applied indiscriminately.

First, it's important to verify that the updates will actually address a problem (or problems) that you can quantify. If they don't specifically address your particular problem, it's rarely worth applying them. For example, if a firmware upgrade resolves a bug in the TOE that has kept the feature disabled on your particular NIC model, applying it might be entirely appropriate in order to enable TOE and boost NIC performance. By comparison, if the firmware update fixes a bug in a chip that your NIC doesn't use, the upgrade can probably be skipped.

Second, test the updates in a lab environment before deploying them to production. Testing will help streamline the update process, identify unforeseen consequences and avoid potentially disruptive situations in a production environment.

How NIC teaming affects VM performance

NIC teaming can be an enormous benefit to virtualized servers. Teaming allows multiple network adapters on the same server to work cooperatively to aggregate bandwidth and handle traffic failover. For example, two independent Gigabit Ethernet ports can be teamed to provide a workload with as much as twice the bandwidth, or to ensure that data can still be transferred if one of the two ports fails.

In general, NIC teaming works quite well with management traffic, production VM traffic and VM migration tasks, so it can certainly be enabled and configured wherever it is appropriate for the enterprise. One recommendation is to establish NIC teams before assigning workloads, as shown in the sketch below. Another popular tactic is to deploy single-root I/O virtualization, or SR-IOV, NICs for the guest VMs.
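
Establishing a team ahead of the workloads can be done with the built-in LBFO cmdlets in Windows Server 2012 and later. The team, switch and adapter names below are placeholders, and the Dynamic load-balancing mode requires Windows Server 2012 R2 (HyperVPort is an option on 2012):

# Create a switch-independent team from two physical adapters (placeholder names)
New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Bind the Hyper-V external switch to the new team interface
New-VMSwitch -Name "TeamedSwitch" -NetAdapterName "VMTeam" -AllowManagementOS $false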

However, NIC teaming is not recommended for iSCSI storage traffic. Multi-path I/O, or MPIO, is the preferred technique for handling iSCSI storage traffic across the enterprise network under such operating systems as Windows Server 2012 and later.
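
Setting up MPIO for iSCSI is a short exercise on Windows Server 2012 and later; the sketch below assumes the Microsoft DSM is used to claim iSCSI-attached devices, and a reboot may be required after installing the feature:

# Install the multipath I/O feature (a reboot may be required afterward)
Install-WindowsFeature -Name Multipath-IO

# Tell the Microsoft DSM to claim iSCSI-attached devices for MPIO handling
Enable-MSDSMAutomaticClaim -BusType iSCSI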

Network resources and configurations can play a huge role in VM performance, so IT professionals should consider bandwidth, interface types, drivers and other factors that shape network performance. But networks can pose challenging configuration problems and difficult interrelationships in complex environments. Benchmark network behavior before making any changes, alter only one factor at a time, and then establish new performance benchmarks to evaluate the effects of each change. Doing so helps to identify and resolve unexpected results and measure the impact of changes objectively.
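
As a simple way to capture a before-and-after baseline, the standard Windows performance counters can be sampled from PowerShell. The counter path below is the built-in Network Interface set; the sampling interval, sample count and output path are arbitrary choices for this example:

# Sample network throughput every 5 seconds for 5 minutes (60 samples)
# and save the results to a performance log for later comparison
Get-Counter -Counter "\Network Interface(*)\Bytes Total/sec" `
    -SampleInterval 5 -MaxSamples 60 |
    Export-Counter -Path "C:\Perf\net-baseline.blg" -FileFormat BLG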

This was first published in May 2014
