Live Migration vs. vMotion: A guide to VM migration
Live Migration adds a new dynamic to Microsoft Hyper-V environments by allowing virtual machines (VMs) to migrate to different hosts at will. Adding System Center Virtual Machine Manager's PRO Tips further enhances Live Migration by automating the process of moving VMs to new hosts.
But as your VMs roam from host to host, pay careful attention to their storage and network configurations. VMs and their configuration files can seamlessly migrate among cluster hosts, but the same doesn't necessarily hold true for their storage and network configurations -- which can create future problems.
Problems with Live Migration network configurations
Consider a situation, for example, in which you've created a new VM with a special network configuration that needs to connect to a particular virtual local area network (VLAN). Creating this virtual network on a VM's host ensures that the VM connects to the right location.
The problem occurs down the road, when a VM needs to migrate to a new host or restart on a new host for high-availability reasons. If the new host doesn't have the same virtual network configuration as the VM's original host, the VM can become orphaned from the network after the migration -- or may not be able to migrate at all.
This situation may seem like a minor problem until your environment begins to scale outward. With a Hyper-V cluster containing two or four hosts, only a few locations can suffer from an omission or misconfiguration. But as the number of cluster hosts increases, so does the number of areas that must be perfectly managed.
In this regard, VMware vSphere 4 holds a networking advantage over other virtualization platforms. A new vSphere 4 feature, the vNetwork Distributed Switch, acts as a "meta-switch" that layers atop hosts' virtual network configurations. Using a meta-switch, vSphere can manage the hosts' network configurations through a single interface, reducing the chance of an omission or error.
As Hyper-V adoption increases, the hope is that Microsoft or a third party will create something similar for Hyper-V hosts. Until then, pay careful attention to your network configurations and ensure that they're consistently applied to prevent future problems.
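The consistency requirement above can be expressed as a simple audit: given an inventory of the virtual network names configured on each cluster host, flag any network that is missing from some hosts, since a VM bound to that network could become orphaned if it lands on one of them. The sketch below is illustrative Python, not a Hyper-V tool; the host names and network names are hypothetical, and in a real environment the inventory would be gathered from the hosts themselves.

```python
# Illustrative audit: find virtual networks that are not configured on
# every Hyper-V cluster host. The inventory data below is hypothetical;
# in practice it would be collected from each host's configuration.

def missing_networks(host_networks):
    """Map each virtual network name to the hosts that lack it."""
    all_networks = set().union(*host_networks.values())
    gaps = {}
    for net in sorted(all_networks):
        missing = [h for h, nets in host_networks.items() if net not in nets]
        if missing:
            gaps[net] = sorted(missing)
    return gaps

inventory = {
    "HV-HOST1": {"Production", "VLAN20-Finance", "iSCSI"},
    "HV-HOST2": {"Production", "iSCSI"},            # VLAN20-Finance omitted
    "HV-HOST3": {"Production", "VLAN20-Finance", "iSCSI"},
}

for net, hosts in missing_networks(inventory).items():
    print(f"Virtual network '{net}' is missing on: {', '.join(hosts)}")
```

Run against a real inventory, any output from a check like this is a migration failure waiting to happen: a VM attached to the flagged network cannot cleanly move to, or restart on, the listed hosts.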
Live Migration options for storage configurations
In addition to network settings, certain storage connection types must also be carefully configured on Hyper-V hosts for Live Migration to run properly.
In my last article, I discussed three VM storage configurations. A Virtual Hard Disk (VHD) attachment, for instance, is arguably the simplest for Live Migration purposes. When VHDs are attached to a highly available VM, they must also exist on shared storage. This setup ensures that every cluster node can automatically access the disk when a VM migrates.
For pass-through disks, another storage configuration, additional care is necessary. These disks have a direct relationship with both VMs and their hosts, which must be considered before performing Live Migration.
A pass-through disk must be exposed to the host and then passed through to the VM. Pass-through disks are supported in a clustered configuration, but the cluster must be informed of any new pass-through disk by refreshing the VM configuration after the disk has been attached.
Pass-through disks must be managed like other cluster resources. The storage area network connections to the cluster must be exposed to every potential cluster host.
Also, beware of Windows failover clustering, which has a tendency to unexpectedly add "dependencies." Chuck Timon, senior support escalation engineer in the Microsoft Enterprise Platform Support division, explains this behavior on the Windows Server Setup/Core Team Blog:
After you modify a VM … and if that VM is not the only VM on a [logical unit number](LUN), if you were to add another VM to the same LUN and make it [highly available], when the operation completes, the disk corresponding to the pass-through disk will also be added as a "dependency" to the new VM simply because it is already in the group. The dependencies will have to be manually modified by editing the property of the Virtual Machine resource. This is a known issue and will not be fixed.
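The cleanup Timon describes amounts to pruning a VM resource's dependency list back to the disks that VM actually owns. The actual fix is made by editing the Virtual Machine resource's properties in the cluster management tools; the Python below is only an abstract model of that pruning step, with hypothetical resource names.

```python
# Abstract model of the known issue: a new VM made highly available in a
# group that already contains another VM's pass-through disk inherits
# that disk as a dependency. The fix is to keep only the disks the VM
# actually uses.

def prune_dependencies(vm_dependencies, attached_disks):
    """Return the dependency list restricted to disks the VM really owns."""
    return [d for d in vm_dependencies if d in attached_disks]

# Hypothetical state after making "VM2" highly available on a shared LUN:
vm2_dependencies = ["Cluster Disk 1", "Cluster Disk 2"]  # Disk 1 belongs to VM1
vm2_attached = {"Cluster Disk 2"}

print(prune_dependencies(vm2_dependencies, vm2_attached))
```

The model's point is the invariant, not the code: after any change to VMs sharing a LUN, each Virtual Machine resource should depend only on its own disks.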
The third option, iSCSI direct attachment, is easier to use than pass-through disks are. These disks operate completely within a VM and don't require a special configuration on a Hyper-V cluster. In fact, a Hyper-V cluster doesn't even recognize when these types of disks are in use.
Such isolation from a cluster makes iSCSI direct-attached disks easy to work with, as long as their networking needs are met on each cluster host. Typically, iSCSI connections are segregated on their own special VLANs. As with the earlier networking examples, those VLANs must be configured and available on every cluster host.
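That VLAN requirement can be framed as a per-VM eligibility question: which hosts expose every VLAN the VM's iSCSI traffic rides on? The sketch below is illustrative Python with hypothetical host names and VLAN IDs; the real data would come from your switch trunking and host network configuration.

```python
# Illustrative check: which cluster hosts can accept a VM whose iSCSI
# direct-attached disks ride on a dedicated VLAN? All host and VLAN
# data here is hypothetical.

def eligible_hosts(required_vlans, host_vlans):
    """Hosts that expose every VLAN the VM's iSCSI traffic needs."""
    return sorted(h for h, vlans in host_vlans.items()
                  if required_vlans <= vlans)

vm_iscsi_vlans = {30}          # the VM's iSCSI traffic uses VLAN 30
hosts = {
    "HV-HOST1": {10, 20, 30},
    "HV-HOST2": {10, 20},      # VLAN 30 not available here
    "HV-HOST3": {10, 30},
}

print(eligible_hosts(vm_iscsi_vlans, hosts))  # ['HV-HOST1', 'HV-HOST3']
```

If a host drops out of the eligible list, a VM with iSCSI direct-attached storage may migrate there successfully as far as the cluster is concerned, yet lose access to its disks.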
So what's the moral of this story? If you're using Live Migration in your Hyper-V environment, don't forget the storage and network configurations!
Greg Shields is an independent author, instructor, Microsoft MVP and IT consultant based in Denver. He is a co-founder of Concentrated Technology LLC and has nearly 15 years of experience in IT architecture and enterprise administration. Shields specializes in Microsoft administration, systems management and monitoring, and virtualization. He is the author of several books, including Windows Server 2008: What's New/What's Changed, available from Sapien Press.