Mounting Xen virtual machines with image-based storage back ends

By mounting Xen virtual machine disks located in image files, you can reactivate a failing VM with greater ease. In this tip, an expert explains how to do this for both Windows- and Linux-based virtual host operating systems.

In my previous article on Xen virtual machine (VM) storage, you learned how to perform maintenance on a Xen virtual machine disk that uses a physical device as its storage back end. You also learned how to mount the partitions in the virtual disk, which makes it possible to change configuration files and parameters. In this article, you'll learn how to perform this same operation for a VM that uses a file as its storage back end.

Mounting an image-based virtual disk
Mounting a virtual disk that's in an image file is more involved than mounting one that's located on a physical storage device. Unlike a physical device, an image file requires what is known as a loop device. Loop device support is provided by a kernel module (named loop), which creates a loop device for every file you want to mount. These loop devices are numbered sequentially, starting with /dev/loop0.

You may already be familiar with using loop devices to mount a file on a file system. For instance, you can use this technique to mount an .iso file with the following command:

mount -o loop /bestand.iso /mnt

Unfortunately, if you need access to the partitions that are in a Xen virtual disk file, this procedure doesn't help you: you don't want to mount the file itself, you want access to the partitions inside it first. To do this, you need the losetup command to make a connection between a loop device and the image file that you want to access. Before making that connection, enter the following command to find out which loop devices are already in use:

losetup -a

Assuming that no loop devices are in use, you can use /dev/loop0 as the loop device that you can connect the Xen image file to. If the name of the image file is /var/lib/xen/images/vm1/disk0, the following command will make the connection for you:

losetup /dev/loop0 /var/lib/xen/images/vm1/disk0

If you use losetup again, you'll see that a loop device has been created and that there is a connection between the loop device and the image file. You can now start analyzing the partitions that exist in the image file by using the following command:

fdisk -l /dev/loop0

Based on the information that fdisk -l shows you, you should be able to discern which device houses the root file system. Once you find it, you'll need to make sure that you have some device files for the partitions in the image file in order to mount them. If the multipath-tools package is installed, the following command will do this for you:

kpartx -a /dev/loop0

As the image file uses the loop0 device, the device files that have been created are named /dev/mapper/loop0p1, /dev/mapper/loop0p2 and so on. You can now use these files to mount the file system that holds the root of the virtualized operating system (OS). This can be either a Linux or a Windows OS, because you can mount both on the virtual host operating system. Once you have made all of the necessary modifications to this file system, unmount everything properly by entering the following commands:
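As a small illustration of the naming scheme, the following sketch derives the /dev/mapper names that kpartx would create for a given loop device. It is pure string handling, needs no root privileges, and the partition count of two is a hypothetical example:

```shell
#!/bin/sh
# Sketch: derive the /dev/mapper partition names that kpartx creates
# for a given loop device. No root required; the partition numbers
# (1 and 2) are hypothetical examples.
loopdev=/dev/loop0
base=$(basename "$loopdev")          # -> loop0
for n in 1 2; do
    echo "/dev/mapper/${base}p${n}"
done
```

In practice you would mount one of these, for example `mount /dev/mapper/loop0p1 /mnt`, once fdisk has told you which partition holds the root file system.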

umount /mnt

kpartx -d /dev/loop0

losetup -d /dev/loop0
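The whole attach/mount/unmount cycle can be sketched as a single script. This is a dry run: the run() helper only prints each command, so it is safe to execute as-is; change it to actually run "$@" (as root) to perform the real operations. The mount point /mnt and the choice of partition 1 as the root file system are assumptions:

```shell
#!/bin/sh
# Dry-run sketch of the full cycle described above. run() only prints
# each command; swap its body for "$@" to execute for real (as root).
IMG=/var/lib/xen/images/vm1/disk0
MNT=/mnt
run() { echo "$@"; }

run losetup /dev/loop0 "$IMG"         # attach image to loop device
run kpartx -a /dev/loop0              # create partition device files
run mount /dev/mapper/loop0p1 "$MNT"  # mount the root file system
# ... perform maintenance on the files under $MNT ...
run umount "$MNT"                     # then tear everything down again
run kpartx -d /dev/loop0
run losetup -d /dev/loop0
```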

Handling logical volumes in Linux-based virtual hosts
In the procedure described above, I've assumed that your virtualized OS uses normal partitions. However, if the VM uses Linux this will not always be the case: it may use the logical volume manager (LVM) instead of partitions. This makes the situation slightly more complex, because activating the partitions does not automatically activate the logical volumes inside them.

Normally logical volumes are scanned for on the available devices when your server boots. But as the devices in the VM disk file were not available while booting, you need to scan for the logical volumes yourself.

If the fdisk -l command on the storage back end file shows you a partition that is a type 8e, you will need to perform a specific procedure to proceed. For the remainder of this tip, I'll assume that this partition is available via the /dev/mapper/loop0p2 device.
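In fdisk -l output, the partition type appears in the Id column; a value of 8e marks a Linux LVM partition. The following sketch shows how you might pick it out with awk. The sample line is illustrative, standing in for the real output of `fdisk -l /dev/loop0`:

```shell
#!/bin/sh
# Sketch: spot an LVM partition (Id 8e) in fdisk -l output. The sample
# line below is illustrative; in practice, pipe the real fdisk output.
# Fields: device, start, end, blocks, Id, system.
sample='/dev/loop0p2      208845  4194303  1992729+  8e  Linux LVM'
echo "$sample" | awk '$5 == "8e" { print $1, "holds LVM" }'
```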

You will need to make sure that the partition is known by the LVM subsystem as a physical device. Knowing that the partition is a type 8e is not enough; you need to tell the LVM subsystem that it is available as a physical device and that the LVM can use it. Use the following command to do this:

pvscan /dev/mapper/loop0p2

Next, you will be told that an LVM volume group has been found within the physical device, but you have to initialize this volume group manually by using this command:

vgscan
To complete the reconfiguration of the LVM structure, you need to do the same for the logical volumes in the volume group, which can be done with this command:

lvscan
Although you now have access to the logical volumes again, you'll see that all of the logical volumes are inactive. You need to fix this before the logical volumes can be mounted. To do this, change the status of the volume group by using the vgchange command. This command will change the status of all volumes in the volume group vm1vg to active:

vgchange -a y vm1vg

The LVM logical volumes are now active and ready to be mounted. For example, if you want to mount the logical volume named /dev/vm1vg/root, you would use the following command:

mount /dev/vm1vg/root /mnt
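The LVM reactivation steps above can be sketched as one dry-run sequence. As before, run() only prints the commands; swap its body for "$@" to execute them for real. The volume group name vm1vg comes from the article's example:

```shell
#!/bin/sh
# Dry-run sketch of the LVM reactivation sequence described above.
# run() only prints; change its body to "$@" to execute (as root).
run() { echo "$@"; }

run pvscan                      # register physical volumes with LVM
run vgscan                      # detect the volume group on them
run lvscan                      # detect its logical volumes
run vgchange -a y vm1vg         # activate all volumes in vm1vg
run mount /dev/vm1vg/root /mnt  # mount the root logical volume
```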

At this point you have full access to all of the files in the logical volume. You now can make all of the changes that you need to make.

In this article we've covered how you can reach all of the files in a Xen virtual machine if the VM itself doesn't start up. This can help you to fix problems occurring in a virtual machine and, in the worst of scenarios, help you reactivate a failing VM.

ABOUT THE AUTHOR: Sander van Vugt is an author and independent technical trainer, specializing in Linux since 1994. Van Vugt is also a technical consultant for high-availability (HA) clustering and performance optimization, as well as an expert on SUSE Linux Enterprise Desktop 10 (SLED 10) administration.