
There's still room for improvement during a live VM migration

Live VM migration is a key part of virtualization, and it should continue to get better and become a bigger part of an IT admin's everyday operations.

With the latest hypervisor features, virtual machines can be easily moved from one server to another, but performance remains a challenge.

Now that virtual machine migration features have been around for a while, the discussion around the migration of running virtual machines (VMs) centers on performance. Following some best practices can help optimize live migrations of VMs.

Both VMware and Microsoft offer features that allow running virtual machines to be migrated from one host server to another without disruption. However, it should come as no surprise that VMware and Microsoft have different approaches -- there's no established list of best practices that both vendors accept.

Even so, VMware's vMotion feature and Hyper-V's Live Migration feature work similarly from an architectural standpoint. This means that there are general best practices that are relevant to both platforms.

How does virtual machine migration work?


The key to optimizing the performance of virtual machine migrations is to understand how the migration process works. With that knowledge, it becomes much easier to make adjustments to improve performance.

VMware and Microsoft both offer several advanced forms of VM migration. For instance, both platforms support migrations without shared storage, and both allow long-distance migrations. In this discussion, though, we'll focus on basic virtual machine migration in which shared storage is used. VMware and Microsoft each have their own nuances when it comes to the mechanics of VM migration, but the process is quite similar on both platforms.

In both environments, the migration process is based around copying VM memory pages from one host server to another. Because the migration occurs while the VM is in use, memory pages are modified while the migration is taking place.

The hypervisor keeps track of which memory pages are modified while the copy process is occurring, and it makes sure that any modified memory pages are re-copied. Once the two host servers reach a point at which they have identical copies of the virtual machine's memory, control of the VM is handed over to the destination host.

Although this explanation is a generalization, it reveals that the key to achieving optimal performance for virtual machine migrations is to speed up the rate at which memory pages are copied.
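The iterative pre-copy loop described above can be sketched in a few lines of Python. This is a simplified illustration of the general technique, not either vendor's actual implementation; the copy rate, dirty-page rate, and stopping thresholds are all assumed values chosen for the example.

```python
def precopy_migrate(total_pages, copy_rate, dirty_rate,
                    max_rounds=30, stop_threshold=50):
    """Simulate the iterative pre-copy phase of a live migration.

    copy_rate: pages copied to the destination per round.
    dirty_rate: fraction of just-copied pages the running VM re-dirties
    in the same round. Both are illustrative assumptions, not
    hypervisor defaults.
    """
    pending = total_pages  # pages not yet (re)copied to the destination
    rounds = 0
    while pending > stop_threshold and rounds < max_rounds:
        copied = min(pending, copy_rate)
        # While we copy, the guest keeps writing: some pages get dirtied
        # again and must be re-sent in a later round.
        redirtied = int(copied * dirty_rate)
        pending = pending - copied + redirtied
        rounds += 1
    # The few remaining pages are copied during a brief stop-and-copy
    # pause, after which control of the VM switches to the destination.
    return rounds, pending

# A 1 GB VM (262,144 4 KB pages), copying 50,000 pages per round with
# 20% of copied pages re-dirtied each round:
rounds, remaining = precopy_migrate(262144, 50000, 0.2)
```

Notice that the loop converges only because the copy rate outpaces the dirty rate; if the VM dirties memory faster than the network can copy it, the pending set never shrinks, which is exactly why copy speed matters.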

Physical memory considerations

Regardless of which hypervisor you use, it is a good idea to begin optimization efforts by evaluating the physical memory in your virtualization host servers. First, use physical memory that has error-correcting capabilities. Occasionally, a memory error can corrupt a small amount of data during the page copy process.

Error-correcting memory helps protect virtual machines against these inconsistencies. In fact, when you run Microsoft's Best Practices Analyzer against a Hyper-V server, it checks whether error-correcting RAM is being used.

Next, make sure your virtualization host servers have matching NUMA (non-uniform memory access) architectures. Both Hyper-V and VMware are NUMA-aware. In fact, vSphere 5 fully exposes the host's NUMA topology to virtual machines running on the host. This means that when a virtual machine is powered up, it adopts a topology that is partially based on the host's hardware. This topology does not change as a result of the vMotion process. It is therefore very important for the destination host to have a matching physical NUMA topology.

Hyper-V offers an option that allows individual virtual machines to span multiple physical NUMA nodes. This setting can help an administrator achieve a higher overall VM density, and it can also be used to allocate more physical memory to a virtual machine than would be possible if the VM were limited to a single NUMA node. However, this option may decrease the virtual machine's performance.
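As a sanity check before migrating, you could compare source and destination topologies. A minimal sketch of the idea follows; the topology data here is hand-entered for illustration, not queried from a real host, and the (cores, memory) tuple model is an assumption.

```python
def numa_topologies_match(src, dst):
    """Return True if two hosts expose the same NUMA layout.

    Each topology is modeled as a list of (cores, memory_gb) tuples,
    one tuple per NUMA node -- a simplification for illustration only.
    """
    return sorted(src) == sorted(dst)

# Hypothetical hosts: two nodes of 8 cores / 64 GB each.
host_a = [(8, 64), (8, 64)]
host_b = [(8, 64), (8, 64)]
host_c = [(12, 96), (4, 32)]  # different layout despite similar totals

assert numa_topologies_match(host_a, host_b)
assert not numa_topologies_match(host_a, host_c)
```

The third host illustrates the pitfall: two servers can carry comparable total cores and memory yet still mismatch at the NUMA-node level, which is what matters to a migrated VM.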

Network considerations

In most situations, network throughput has the largest effect on the speed of virtual machine migrations. Increasing the speed at which memory pages are sent across the network can improve the overall efficiency of the process.

Both VMware and Microsoft recommend using a dedicated physical network connection for migration traffic, and they generally recommend using high-speed network adapters (at least 1 Gbps, but preferably 10 Gbps).
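A rough back-of-the-envelope estimate shows why link speed matters. The figures below are illustrative: the efficiency factor is an assumed fraction of line rate lost to protocol overhead, not a vendor-published number, and a real migration also re-copies dirtied pages.

```python
def naive_transfer_seconds(vm_memory_gb, link_gbps, efficiency=0.7):
    """Estimate the time to push a VM's memory once over the
    migration link.

    efficiency is an assumed usable fraction of line rate after
    protocol overhead (an illustrative value, not a measured one).
    """
    bits = vm_memory_gb * 8 * 10**9            # memory size in bits (decimal GB)
    usable_bps = link_gbps * 10**9 * efficiency
    return bits / usable_bps

# Single pass for a 16 GB VM: roughly three minutes on 1 Gbps versus
# under twenty seconds on 10 Gbps.
one_gig = naive_transfer_seconds(16, 1)
ten_gig = naive_transfer_seconds(16, 10)
```

Because every extra second of copying gives the guest more time to dirty pages that must be re-sent, a faster link shortens the migration twice over: each pass finishes sooner, and fewer passes are needed.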
