VMs don't operate in a vacuum: they only have value when they interact with storage, other VMs and users, and that interaction requires fast, efficient network connectivity. Administrators can exercise broad control over how network capabilities are configured and provisioned, and several important considerations and technologies can help organizations get the best VM networking performance.
Use multiple NICs
When more than one VM shares the same network interface card, those VMs split the available NIC bandwidth, and performance can suffer when they compete for it. It's certainly possible for two or more VMs to share a NIC, but VMs with latency-sensitive or heavy network traffic demands are often best provisioned to a separate NIC port where their traffic won't conflict with other VMs. Extremely fast NICs, such as 10 Gigabit Ethernet (GbE) NICs, are generally designed to support multiple simultaneous network consumers -- such as VMs -- with independent queues. This means that a host server might not need multiple fast NICs.
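The bandwidth math behind this advice is simple to illustrate. The sketch below is a hypothetical best-case model -- it assumes an even split among VMs and ignores protocol overhead and queuing effects -- but it shows why packing latency-sensitive VMs onto one shared port is risky.

```python
# Hypothetical illustration: how per-VM bandwidth degrades as VMs share one NIC.
# Assumes a best-case even split; real contention is usually less fair than this.
def per_vm_bandwidth_gbps(nic_capacity_gbps: float, vm_count: int) -> float:
    """Best-case bandwidth each VM sees when vm_count VMs share one NIC port."""
    if vm_count < 1:
        raise ValueError("at least one VM required")
    return nic_capacity_gbps / vm_count

# Four VMs sharing a single 10 GbE port each see at most 2.5 Gbps.
print(per_vm_bandwidth_gbps(10.0, 4))  # 2.5
# A VM with a dedicated port keeps the full 10 Gbps to itself.
print(per_vm_bandwidth_gbps(10.0, 1))  # 10.0
```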
Organize complementary VMs on the same host
VMs often exchange traffic with one another, such as a workload VM accessing a database VM. When these VMs are on different host servers, that traffic must cross the physical LAN and consume LAN bandwidth. If possible -- or practical -- consider locating or migrating those VMs to the same physical host server so they can exchange traffic across the same virtual switch, keeping the traffic within the host system rather than sending it out onto the physical network wire.
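The placement effect can be modeled directly. This is a minimal sketch with made-up VM and host names, assuming the simplification that traffic between two VMs on the same host consumes no physical LAN bandwidth at all, while traffic between hosts consumes its full rate on the wire:

```python
# Hypothetical sketch: estimate how much inter-VM traffic crosses the physical
# LAN for a given VM-to-host placement. Traffic between VMs on the same host
# stays on that host's virtual switch and is counted as zero LAN bandwidth.
def lan_traffic_gbps(placement: dict, flows: list) -> float:
    """placement maps VM name -> host name; flows are (src_vm, dst_vm, gbps)."""
    return sum(gbps for src, dst, gbps in flows
               if placement[src] != placement[dst])

placement = {"app": "host1", "db": "host2"}
flows = [("app", "db", 3.0)]
print(lan_traffic_gbps(placement, flows))  # 3.0 -- crosses the physical wire

placement["db"] = "host1"  # migrate the database VM next to the app VM
print(lan_traffic_gbps(placement, flows))  # 0.0 -- stays on the virtual switch
```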
Reserve network traffic by type
All network traffic isn't created equal, and one emerging means of enhancing workload performance is to allocate or reserve network bandwidth based on traffic type. Technologies such as VMware's Network I/O Control (NetIOC) allow administrators to divide network bandwidth into pools or shares -- for management traffic, Network File System traffic, vSAN traffic, replication traffic and so on -- and then let workloads access the bandwidth reserved for the corresponding pool. This segregates bandwidth and prevents one traffic type from starving other workloads. If a VM or pool doesn't use all its shares, the unused bandwidth is available to other workloads on the same physical NIC. NetIOC is most useful for high-bandwidth NICs -- 10 GbE or faster -- where many VMs share a limited number of physical NICs.
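The two behaviors described above -- proportional shares plus redistribution of unused bandwidth -- can be captured in a short simulation. This is a simplified model in the spirit of share-based allocation, not VMware's actual NetIOC algorithm, and the pool names and numbers are invented for illustration:

```python
# Simplified share-based bandwidth model (NOT VMware's actual NetIOC algorithm):
# each pool is entitled to bandwidth in proportion to its shares, and bandwidth
# a pool doesn't demand is redistributed to the remaining, busier pools.
def allocate(capacity_gbps: float, pools: dict) -> dict:
    """pools: {name: (shares, demand_gbps)} -> {name: allocated_gbps}"""
    alloc = {name: 0.0 for name in pools}
    active = set(pools)            # pools whose demand isn't yet satisfied
    remaining = capacity_gbps
    while active and remaining > 1e-9:
        total_shares = sum(pools[n][0] for n in active)
        satisfied, granted = set(), 0.0
        for n in active:
            shares, demand = pools[n]
            entitlement = remaining * shares / total_shares
            take = min(entitlement, demand - alloc[n])
            alloc[n] += take
            granted += take
            if alloc[n] >= demand - 1e-9:
                satisfied.add(n)
        remaining -= granted
        if not satisfied:          # everyone is capped by shares; stop
            break
        active -= satisfied        # leftover is re-split among busy pools
    return alloc

# Management traffic only needs 0.5 Gbps of a 10 Gbps NIC; its unused
# entitlement flows to the vSAN and VM pools in proportion to their shares.
pools = {"mgmt": (10, 0.5), "vsan": (50, 8.0), "vm": (40, 6.0)}
print({k: round(v, 2) for k, v in allocate(10.0, pools).items()})
# {'mgmt': 0.5, 'vsan': 5.28, 'vm': 4.22}
```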
Use hardware support for virtualization
The Intel Virtualization Technology (VT) and Advanced Micro Devices Virtualization (AMD-V) instruction sets allow the hypervisor to use hardware to accelerate virtualization performance. But hypervisors increasingly use the newer Intel VT for Directed I/O and AMD I/O Virtualization Technology instruction sets to let guest OSes access hardware directly -- including direct access to physical NICs -- rather than relying on more traditional emulation. VMware, for example, dubs this DirectPath I/O. Although the benefits to network throughput are small, network-intensive workloads need less CPU overhead.
This kind of technology isn't always compatible with the full spectrum of hypervisor features, such as snapshots and live migration. So it's important to test and evaluate the effect of this technology on CPU usage and LAN throughput versus its compatibility with vital hypervisor feature sets. A more popular and established performance enhancement is single root I/O virtualization (SR-IOV), which also allows VM guests to access hardware directly.
Evaluate other VM networking performance features
It's often worthwhile for IT leaders to test and evaluate the effects of other technologies designed to enhance VM networking performance. One example is receive side scaling, which processes incoming network traffic in parallel across multiple CPUs. This can sometimes boost network throughput but can increase CPU overhead. Another example is virtual network interrupt coalescing, which reduces the number of interrupts generated by networking activity by gathering network events before interrupting the CPU. This can increase network latency slightly -- packets wait longer before an interrupt is produced, so traffic waits longer for a CPU to process it -- but it decreases the CPU overhead of networking because the CPU is disrupted less often.
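The interrupt-coalescing trade-off is easy to quantify. The sketch below is a hypothetical model -- real drivers also use timers and adaptive thresholds -- but it shows why batching packets before interrupting the CPU cuts interrupt volume so sharply:

```python
# Hypothetical model of virtual network interrupt coalescing: instead of one
# interrupt per packet, packets are batched and one interrupt covers the batch.
# Fewer interrupts mean less CPU overhead; waiting to fill a batch adds latency.
def interrupt_count(packets: int, batch_size: int) -> int:
    """Interrupts needed to deliver `packets` when coalescing into batches."""
    return -(-packets // batch_size)  # ceiling division

print(interrupt_count(10_000, 1))   # 10000 -- no coalescing: one per packet
print(interrupt_count(10_000, 32))  # 313 -- roughly 32x fewer CPU disruptions
```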
Not every feature will benefit workload or VM networking performance, and some can pose compatibility problems with important capabilities, such as snapshots. So, objective performance testing should take place before invoking any network feature.