Guidelines for moving multiple VMs with Hyper-V Live Storage Migration
Live Storage Migration is an important new feature that will help many Hyper-V users move
storage files of running VMs without downtime. In a previous tip, I explained when and how to
perform a Live Storage Migration. You can speed up the process with Hyper-V in Windows Server
2012 by performing multiple storage migrations simultaneously, but be cognizant of network and
hardware limitations before scheduling too many at once.
From within the Hyper-V Manager console you can now set the number of storage migrations you want
to perform simultaneously. Don't assume that all you need to do is raise this setting to push a
large number of storage migrations through at the same time. Knowing the capabilities of a few key
components of your infrastructure will guide you to the right configuration option and help you
avoid potential performance problems.
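The same setting is exposed through the Hyper-V PowerShell module on Windows Server 2012. A minimal sketch, assuming you run it on the Hyper-V host itself (the value of 2 is an illustrative, conservative starting point, not a recommendation for every environment):

```powershell
# Check the current cap on simultaneous storage migrations for this host
Get-VMHost | Select-Object Name, MaximumStorageMigrations

# Raise or lower the cap; 2 is an illustrative, conservative value
Set-VMHost -MaximumStorageMigrations 2
```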
Here is what to watch for when determining how many VMs to live storage migrate at the same time
and what may happen if you perform too many migrations at once.
- Watch I/O on destination volume(s): As you increase the number of VMs you are moving to
a particular volume, the destination could experience heavy writes. This would slow down migrations
if the destination controller or disks could not keep up with incoming I/O write requests. This
would also affect other VMs that might already be running on the destination volume.
- Watch for movement between volumes on the same SAN/controller: Moving from one volume to
another volume on the same SAN or different volumes that share the same controller can exacerbate
problems. If too many VM storage migrations are happening at the same time, the controller's
cache will fill up and the whole SAN or disk subsystem will experience high queue lengths. I have
seen this even on robust HP EVA 8400 SANs if the Live Storage Migration configuration options were
set too aggressively.
- Watch network bandwidth: With the new ability to move VMs from traditional attached storage to
SMB 3.0 file server locations, there are new opportunities to introduce bottlenecks. Moving
multiple VMs from local storage to an SMB 3.0 location means the data has to travel over the
network to the new location. This might not cause the process to fail outright or degrade
performance severely, but being overly aggressive with the number of VMs moved at one time can
slow the process to a crawl and may even take longer than moving each VM individually, due to the
change sampling process that goes on during the migration.
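While migrations are running, the built-in Windows performance counters can tell you whether the disk subsystem is keeping up with the points above. A minimal check from PowerShell (the counter path is a standard Windows counter; the "spindle count" rule of thumb is general storage guidance, not a Hyper-V requirement):

```powershell
# Sample the average disk queue length every 5 seconds for one minute.
# Sustained values well above the number of spindles backing the volume
# suggest the destination cannot keep up with concurrent migration writes.
Get-Counter -Counter '\PhysicalDisk(_Total)\Avg. Disk Queue Length' `
            -SampleInterval 5 -MaxSamples 12
```

If queue lengths stay high for the whole sample window, reduce the number of concurrent migrations before trying again.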
If you're using SCVMM
to perform your Live Storage Migrations, turn
off encrypted file transfers if your internal security policies allow. This can cut transfer
times in half.
Here are my recommendations for simultaneous Live Storage Migration settings based on my
experience with a basic VM under low to average disk I/O load and about 75 GB of storage.
Volume to volume migrations:
- Active/Passive Controller Architecture SAN: Four concurrent (two if reading and writing to the
same SAN)
- Active/Active Controller Architecture SAN: Four to eight (two to four if reading and writing to
the same SAN depending on the number of controllers)
- Locally attached disk shelf: Two concurrent (one or two if reading and writing to the same
shelf, depending on the number of controllers)
Volume to SMB 3.0 storage migrations:
- Two to four concurrent, depending on size of VMs and network bandwidth availability between
source and destination
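Applying the volume-to-SMB guidance from PowerShell might look like the following sketch. The VM names and the \\FS01\VMStore share path are illustrative placeholders; Move-VMStorage blocks until its move completes, so each move is started as a background job to get concurrent migrations up to the host's cap:

```powershell
# Cap the host at two simultaneous storage migrations (volume-to-SMB guidance)
Set-VMHost -MaximumStorageMigrations 2

# Start each storage move as a background job so they run concurrently;
# VM names and the SMB share path are placeholders for your environment
'VM01', 'VM02' | ForEach-Object {
    Start-Job -ScriptBlock {
        param($vm)
        Move-VMStorage -VMName $vm -DestinationStoragePath "\\FS01\VMStore\$vm"
    } -ArgumentList $_
}
```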
We all want to finish basic administrative tasks as soon as possible, but with the new power of
Live Storage Migration, knowing your infrastructure's capabilities and using a little common sense
will help you avoid performance bottlenecks. Remember, most SAN
architectures are shared by many different workloads simultaneously. In most cases, VMs, database
servers and file servers work well together and have complementary disk I/O profiles, but if you attempt to
move too much at one time, you could experience a problem. Start conservatively with your
concurrent Live Storage Migrations and move up. Also remember that different times of day will
affect your performance. In many cases, disk infrastructures work hardest at night when many
organizations run backup routines. Look to exploit the new live nature of storage migrations and
schedule daytime migrations so you can finally sleep better at night.
This was first published in June 2013