The advantages of a high availability KVM environment are obvious: If one host in the virtual platform goes down, all virtual machines running on it are automatically moved to one of the remaining hosts. High availability does not always work as intended, however; a critical error in the HA setup can lead to a situation where nothing is available anymore. A manually controlled HA environment can help you avoid this scenario.
Typically, HA for KVM virtual machines includes shared storage with a cluster solution, such as Pacemaker. To enable live migration and other advanced features, the environment requires an active/active cluster file system, which is exactly where things can go wrong. When the cluster file system underpins the entire environment, it also becomes a single point of failure. To increase stability, the first item you should remove is the cluster-enabled file system.
Without an active/active cluster-enabled file system, you can still have automatic failover of VMs. When a host or a VM goes down, the cluster starts the VMs as quickly as possible on another cluster node. To maintain its integrity, a typical HA cluster uses STONITH (Shoot the Other Node in the Head), a mechanism that automatically terminates host computers the cluster can no longer control. Normally, STONITH prevents corruption within the cluster. If configured incorrectly, however, STONITH may auto-terminate every machine in the entire cluster, leaving you with nothing.
Manually controlled high availability requirements
To avoid incorrect configurations, some admins opt for a manually controlled high availability solution. These setups are based on two key principles. First, you'll need shared storage in the form of a storage area network (SAN) that has its LUNs configured in a way that only allows one VM to access them at a time. Two types of data need to reside on the SAN: the VM configuration files, which in KVM are typically stored in the /etc/libvirt/qemu directory, and the back-end disk storage for the VMs themselves. You can use either files or logical volume management as the storage back end for the VMs, provided the configuration file tells KVM how to access them.
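As an illustration, a libvirt domain definition of the kind stored in /etc/libvirt/qemu might look like the following minimal sketch. The VM name and the LVM volume path are hypothetical, and a real file contains many more elements:

```xml
<domain type='kvm'>
  <name>webvm01</name>                     <!-- hypothetical VM name -->
  <memory unit='KiB'>1048576</memory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <devices>
    <!-- disk back end on the SAN: either a file or an LVM logical volume -->
    <disk type='block' device='disk'>
      <source dev='/dev/vgsan/webvm01'/>   <!-- hypothetical LVM volume -->
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
```

Because this file lives on the shared LUN, any host that mounts the LUN can create the VM from the same definition.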
The second requirement for a manually controlled HA environment is management software, such as Nagios or Zabbix. The management software must simply be capable of alerting you when a VM or a host machine is down.
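In Nagios, for example, a basic reachability alert for a KVM host could be defined along these lines. The template, host name, and address are hypothetical placeholders; Zabbix offers equivalent triggers:

```
define host {
    use                  linux-server     ; hypothetical host template
    host_name            kvmhost1         ; hypothetical KVM host
    address              192.0.2.10       ; placeholder address
    max_check_attempts   3
    notification_options d,u,r            ; notify on down, unreachable, recovery
}
```

A host definition like this is enough for the manual HA approach: the monitoring system only needs to tell you that a host or VM is down, not act on it.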
Manually restarting and load balancing VMs
When disaster strikes and machines go down, the software sends a message to the administrator, who must then perform the following steps to get everything up and running again:
- Log in on the KVM host where you want to start the unavailable VMs.
- Verify the contents of the /etc/libvirt/qemu directory, which should be mapped to a LUN on the SAN that all hosts can access. The directory should list all defined VMs.
- Use the command virsh list --all to see whether the VMs you want to start are already defined on this host. (Without --all, virsh lists only running VMs.)
- If the VMs are listed, use virsh start vmname. If not, use virsh create vmname.xml, where the argument refers to the exact location of the VM configuration file.
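The decision in the last two steps can be sketched as a small shell helper. This is a dry run that only prints the virsh command it would execute; the VM name and XML path are hypothetical:

```shell
#!/bin/sh
# Dry-run helper: print the virsh command that would bring a VM up on this host.
# If the VM is already defined here (visible in `virsh list --all`), start it;
# otherwise, create it from its XML file in the SAN-backed directory.
plan_vm_start() {
    vm="$1"
    xml="/etc/libvirt/qemu/${vm}.xml"   # hypothetical path on the shared LUN
    if virsh list --all 2>/dev/null | grep -qw "$vm"; then
        echo "virsh start $vm"
    else
        echo "virsh create $xml"
    fi
}

plan_vm_start webvm01   # hypothetical VM name; prints the command, does not run it
```

In practice you would run the printed command by hand after double-checking that the VM is not active on any other host.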
At this point the VMs will restart and you will have restored the required functionality. Before restarting the host that was originally running those VMs, disconnect its network cable and its SAN connection. This prevents the VMs from starting twice on different hosts.
After starting the failed host, log in as root and use the command virsh list --all. You should see no VMs, because the SAN where they reside is still disconnected. If you still see some VMs listed as shut off, use virsh undefine vmname to remove them from the database. This ensures the host will start without automatically activating these VMs. You can now reconnect the original host to the network and the SAN.
To redistribute the VM workload, use virsh shutdown vmname to shut down a VM on the host where it is currently running, and virsh undefine vmname to remove it from that machine's VM database. The command virsh create vmname.xml will then start the VM on the new host and register it there.
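The rebalancing sequence can likewise be scripted as a dry run that prints the commands in order. The VM and host names are hypothetical, and on a real system you would confirm the VM has fully stopped on the source host before creating it on the target:

```shell
#!/bin/sh
# Dry-run sketch of moving a VM between hosts: shut it down and remove it
# from the database on the source host, then create it on the target host
# from the shared configuration file.
plan_vm_move() {
    vm="$1"; src="$2"; dst="$3"
    echo "[$src] virsh shutdown $vm"
    echo "[$src] virsh undefine $vm"
    echo "[$dst] virsh create /etc/libvirt/qemu/${vm}.xml"
}

plan_vm_move webvm01 kvmhost1 kvmhost2   # hypothetical VM and host names
```

Printing the plan first is a deliberate safety measure: in a manually controlled setup, the admin, not the cluster software, is the last line of defense against starting a VM twice.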
A manually controlled high availability environment does not enable complete data center automation. But for data centers that can tolerate a small amount of VM downtime, manual high availability can provide admins with better control.