Server downtime is a major disruption that can result in data loss and long delays for users. High availability (HA) minimizes the effects of server downtime, as well as the restart time for virtual machines (VMs).
Earlier versions of the Xen hypervisor lacked a solid high-availability solution. But Xen 4.0, which was released last year, includes Project Remus, which enables admins to build an HA infrastructure in which VMs stay up and running when server downtime occurs.
Project Remus is a high-availability solution from the Department of Computer Science at the University of British Columbia. It provides high availability for VMs running on the Xen hypervisor by keeping an exact, up-to-date copy of each VM on a backup server.
Project Remus works only with Xen, because the technology is engineered on top of Xen's live migration feature. Integration with other virtualization platforms is still a work in progress.
Protecting against server downtime
To protect your infrastructure from server downtime, you need at least two copies of each VM, and you need to keep the backup synchronized. That's the major challenge with high-availability technologies: How do you ensure that you don't lose data during synchronization?
There are two basic solutions to this problem. You can use CPU lock-stepping to ensure that both CPUs run exactly the same commands, or you can assume that the machines involved don't need the exact same state. Project Remus is based on the second premise.
At the file-system level, Project Remus creates an exact copy of a VM. Once a VM has been activated, however, its complete runtime state -- CPU instructions, disk cache requests, memory events, network packets and so on -- is not copied exactly. The backup of a VM's state is almost exact, but the slight delay in copying means that the backup is not identical in real time.
What's important, though, is that the user doesn't notice any difference between the active VM and its shadow, because Project Remus migrates data from the active VM to its backup in 200-millisecond intervals. To make this approach as efficient as possible, Project Remus synchronizes only dirty disk pages. That means that only recently modified data blocks are synchronized, not the entire VM.
Network traffic is another challenge for high-availability solutions. The state of the packets to be moved across the network has to be identical on the active and backup VMs, which Project Remus accomplishes by buffering all network connections at 200-millisecond intervals. If that interval is too long for you, you can decrease it to as low as 25 milliseconds for an even shorter delay.
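As a sketch of how tuning that interval might look on the command line: in the Xen 4.0 tools, the remus script accepts a checkpoint interval in milliseconds via the -i option (the default is 200 ms). The flag spelling and the backup-host IP address below are assumptions that may vary by release:

```shell
# Replicate domain "vm" to the backup host at 192.168.1.20 (hypothetical
# IP address), checkpointing every 100 ms instead of the default 200 ms.
remus -i 100 vm 192.168.1.20
```

A shorter interval reduces the buffering delay that clients see, at the cost of more frequent synchronization traffic between the hosts.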
Using these approaches, Project Remus ensures that, from both the VM perspective and the network perspective, the active and backup VMs are in the same state.
Testing Project Remus
The Project Remus Xen high-availability solution is already available for testing. To create a test setup, all you need is identical paths to the disk files on both machines. Because synchronization does not happen in real time, the active VM and its shadow copy should not access the same virtual disk. That means that you don't need to invest in a shared-storage solution, which is a requirement of most other high-availability solutions.
To test-drive Xen high availability with Project Remus, the recommended approach is to copy the virtual disk file to both hosts. Then, you can start the VM with the Xen command xm create vm. Next, start Project Remus with the following command, replacing dom0-IP with the IP address of the backup host's dom0:
remus vm dom0-IP
This command should activate a VM and its shadow state on both hosts. You can verify that this worked by using the xm list command on both machines.
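Putting those steps together, a minimal test run might look like the following. The host name, IP address and disk path are hypothetical, and the commands assume the Xen 4.0 xm toolstack with Remus installed:

```shell
# On the primary host: copy the virtual disk to the backup host so that
# both machines have an identical path to the disk file.
scp /var/lib/xen/images/vm.img backup-host:/var/lib/xen/images/vm.img

# Start the VM from its configuration file, then begin replication to
# the backup host's dom0 (hypothetical IP address 192.168.1.20).
xm create vm
remus vm 192.168.1.20

# On both hosts: confirm that the domain appears in the list.
xm list
```

If replication is working, xm list on the backup host should show the shadow copy of the domain alongside dom0.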
Project Remus limitations
Note that Project Remus is a work in progress, so you may have difficulty getting it to work properly during startup. Some limitations, such as incomplete integration with the Xen management tools, will be improved over time.
But some problems are, by design, harder to fix. The most important is network performance loss, which makes Project Remus inappropriate for network-intensive virtual workloads. Project Remus uses frequent heartbeat monitoring to check the availability of the VM's copy on the other host, but that high frequency comes at a price: you'll notice obvious network performance issues, because the current state of a VM is synchronized at least five times per second.
According to Project Remus developers, the overhead for a kernel-compilation job is approximately 50%, and network performance can drop to about 25% of native speed. This performance loss, especially at the network level, is rather high.
In its current version at least, Project Remus is not ideal for applications that rely heavily on the network. To limit performance loss, you can use dedicated network cards for synchronization. Developers are investigating other options for optimization, such as compression of the replication stream, which would reduce the amount of data to synchronize.
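One way to confine the replication traffic, as a sketch: give the backup host's dedicated synchronization NIC an address on a separate subnet and point Remus at that address. The interface name and the addresses below are assumptions for illustration:

```shell
# On the backup host: put the dedicated sync NIC (eth1, hypothetical)
# on its own subnet, away from production traffic.
ip addr add 10.0.0.2/24 dev eth1

# On the primary host: replicate over the dedicated link by targeting
# the sync-network address instead of the production IP address.
remus vm 10.0.0.2
```

Because Remus simply connects to whatever address you give it, routing the checkpoint stream over a dedicated link keeps the synchronization load off the interfaces your VMs use.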
Still, for a Xen infrastructure where server downtime is unacceptable and where the problem of network bandwidth can be confined, Project Remus is an important high-availability solution.
About the expert
Sander van Vugt is an independent trainer and consultant based in the Netherlands. Van Vugt is an expert in Linux high availability, virtualization and performance and has completed several projects that implement all three. He is also the writer of various Linux-related books, such as Beginning the Linux Command Line, Beginning Ubuntu Server Administration and Pro Ubuntu Server Administration.
This was first published in November 2010