KVM virtual machines generally offer good network performance, but every admin knows that sometimes good just doesn't cut it. To optimize performance, you have two choices: VirtIO drivers or PCI pass-through. The method you choose will depend on the level of network performance you need and the version of Red Hat Enterprise Linux you run.
Optimizing with VirtIO drivers
Network performance begins with the virtual network card itself, but whether you use VirtIO drivers makes a significant difference. The VirtIO drivers offer paravirtualization at different levels, including networking. If you installed a Linux virtual machine (VM), you use VirtIO drivers by default. For other operating systems, you will need to install the VirtIO drivers yourself.
To verify that your VM is using VirtIO drivers, run the lspci -v command from within the VM. Then browse the output and look for the Ethernet controller. It should show the virtio-pci kernel module and kernel driver in use, as shown in Listing 1.
Listing 1. Good KVM network performance starts with the VirtIO driver
Ethernet controller: Red Hat, Inc VirtIO network device
Subsystem: Red Hat, Inc Device 0001
Physical Slot: 3
Flags: fast devsel, IRQ 10
I/O ports at c040 [size=32]
Memory at f2020000 (32-bit, non-prefetchable) [size=4K]
Expansion ROM at f2030000 [disabled] [size=64K]
Capabilities:  MSI-X: Enable+ Count=3 Masked-
Kernel driver in use: virtio-pci
Kernel modules: virtio_pci
In older versions of KVM, even with a VirtIO driver, networking was handled by QEMU, the emulation layer that sits between the host and the VM. All recent versions of KVM use vhost-net instead. Red Hat began shipping this functionality with RHEL 6.1. It ensures that network packets are routed between the guest and the host by the Linux kernel rather than by QEMU. On RHEL 6.1 and later, this functionality is enabled automatically. On older host platforms, be sure to update your software packages or your network performance may suffer.
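To confirm that vhost-net is actually available on your host, you can check for the kernel module and its character device. A minimal sketch, run on the KVM host:

```shell
# Check whether the vhost_net kernel module is loaded on the host
if lsmod | grep -q '^vhost_net'; then
  vhost_state="loaded"
else
  vhost_state="missing"   # try loading it with: modprobe vhost_net
fi
echo "vhost_net module: $vhost_state"

# When vhost-net is active, QEMU talks to this character device
[ -c /dev/vhost-net ] && echo "/dev/vhost-net present" || echo "/dev/vhost-net not present"
```

If the module is missing on a RHEL 6.1 or later host, updating the KVM packages should bring it in.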
Using dedicated network interfaces
If you already use the VirtIO network driver, and still suffer from poor performance, consider using PCI pass-through. With PCI pass-through you dedicate a physical network card to a VM. Only the VM will have direct access to this physical network card.
To set up PCI pass-through, you first need to disconnect the network device from the host machine. To find the ID of the network device, use lspci -nn and look for the definition of the network card:
02:00.0 Network controller : Intel Corporation Centrino Advanced-N 6205 [8086:0082] (rev 34)
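Disconnecting the device from the host can be done with virsh nodedev-detach, which expects the PCI address in libvirt's underscore-separated node-device form. A short sketch, assuming the 02:00.0 address from the example above (substitute your own card's address):

```shell
# PCI address as reported by lspci, with the PCI domain prepended
addr="0000:02:00.0"

# libvirt names the node device pci_0000_02_00_0:
# the colons and the dot all become underscores
dev="pci_$(echo "$addr" | tr ':.' '__')"
echo "$dev"

# Detach the device from the host so the guest can claim it
# (guarded so the sketch is harmless on machines without libvirt)
if command -v virsh >/dev/null; then
  virsh nodedev-detach "$dev"
fi
```

You can hand the device back to the host later with virsh nodedev-reattach.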
You now need to shut down the guest OS and edit the guest XML definition, using virsh edit. In the <devices> section of the guest XML code, make sure the PCI device is defined. The example in Listing 2 shows how the definition should look. The important line is the one where the domain, bus, slot and function are defined to match the PCI ID (02:00.0) that you found using the lspci -nn command:
Listing 2. Adding a PCI pass-through device to a KVM virtual machine
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
  </source>
</hostdev>
Now you can restart the VM and verify that the pass-through device is available.
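One way to verify the result: on the host, check that the hostdev entry made it into the guest definition; inside the guest, the physical card should now show up directly in lspci. A sketch ("myguest" is a placeholder for your VM's name):

```shell
# On the host: confirm the hostdev entry is in the guest definition
# ("myguest" is a placeholder; substitute your VM's name)
if command -v virsh >/dev/null; then
  virsh dumpxml myguest | grep -A 3 '<hostdev' || true
fi

# Inside the guest: the passed-through card appears as real hardware
if command -v lspci >/dev/null; then
  lspci -nn | grep -i -e network -e ethernet || true
fi
checked="yes"
```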
Never feel that you need to suffer through poor KVM network performance. VirtIO drivers and PCI pass-through are two ways to achieve better performance.
Sander van Vugt asks:
Which method do you prefer to optimize KVM network performance?