Setting up a high availability cluster using basic KVM virtualization is a low-cost approach to ensuring your workloads remain running if a host fails. However, setting up a high availability cluster can be a difficult procedure for someone unfamiliar with the process.
In a previous article, I explained how to create a base cluster and set up an OCFS2 shared file system. In this article, I will show you how to install virtual machines (VMs), integrate those VMs into the cluster and ensure the cluster configuration is working correctly.
Installing KVM VMs
To install KVM VMs, the virtualization host needs to run the libvirt service. Use the command
systemctl start libvirtd; systemctl enable libvirtd to start the service now and enable it at boot.
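Before creating any VMs, it is worth confirming that the service actually came up; a quick check might look like this (exact output varies by distribution):

```shell
# Confirm libvirtd is running and enabled
systemctl is-active libvirtd
systemctl is-enabled libvirtd

# Confirm libvirt is reachable; this prints library and hypervisor versions
virsh version
```

If virsh cannot connect, fix that before proceeding, because both virt-install and the cluster resource agent depend on a working libvirt connection.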
There are two ways to start the installation. You can use either the Virtual Machine Manager graphical tool (virt-manager) or the
virt-install command. The virt-install utility is useful if no graphical environment is available, and allows you to create VMs from a scripted environment.
To start the VM installation using
virt-install, use a command similar to this:
virt-install --name smallcent --ram 512 --disk path=/shared/smallcent.img,size=4 --network network:default --vnc --cdrom /isos/CentOS-6.5-x86_64-bin-DVD1.iso
This command specifies all the properties of the new VM. The name of the VM is set to smallcent. This name is important, because it needs to be used when you create the cluster resource for the VM. The VM is allocated 512 MB of RAM, and a 4 GB hard disk is created in the /shared directory. Remember, this directory is supposed to be located on the OCFS2 volume we created in the previous step.
In this setup, an interactive installation is used. In cases where no terminal is connected to the virtualization host, this type of installation is not feasible, and an automated installation needs to be used instead. Consult your distribution's documentation for directions on how to set up an AutoYaST (SUSE) or Kickstart (Red Hat) server that can help you with this.
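As an illustration, an unattended installation on a Red Hat-style host might look like the following sketch. The install server URL and the Kickstart file are hypothetical placeholders; substitute the ones from your own environment:

```shell
# Unattended installation: boot from an install tree rather than a CD image,
# and pass the installer a Kickstart file (both URLs are placeholders)
virt-install --name smallcent --ram 512 \
  --disk path=/shared/smallcent.img,size=4 \
  --network network:default --graphics none \
  --location http://installserver/centos/os/x86_64 \
  --extra-args "ks=http://installserver/ks/smallcent.cfg console=ttyS0"
```

The --location option points virt-install at an installation tree instead of an ISO, and --extra-args hands kernel arguments (here, the Kickstart location and a serial console) to the installer.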
Setting up cluster resources for the KVM VM
To integrate the VM into the cluster, you need to make the configuration of the VM available to the cluster. To do this, you have to dump the XML configuration of the VM to a text file. First, use the
virsh list --all command to verify the name of the VM. In this example, the name of the VM is smallcent. Because the cluster needs access to the XML file containing the definition of the VM, you have to dump it to a file on the shared storage device that you set up earlier. To do this, type
virsh dumpxml smallcent > /shared/smallcent.xml.
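Before handing the file to the cluster, it is worth checking that the dump succeeded and that the VM name inside it matches the resource name you plan to use:

```shell
# The <name> element in the dumped XML must match the cluster resource name
grep "<name>" /shared/smallcent.xml
# expected output: <name>smallcent</name>

# Confirm every cluster node can read the file from shared storage
ls -l /shared/smallcent.xml
```

A mismatch between the name in the XML file and the cluster resource name is a common cause of a VirtualDomain resource failing to start.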
At this point, you can create the resource for the VM in the cluster. The VirtualDomain Resource Agent is used for this purpose. Use
crm configure edit and include a configuration that looks like the following:
primitive smallcent ocf:pacemaker:VirtualDomain \
    params hypervisor="qemu:///system" migration_transport="ssh" config="/shared/smallcent.xml" \
    meta allow-migrate="true" \
    op stop timeout="120" interval="0" \
    op start timeout="120" interval="0" \
    op monitor interval="20" timeout="20"
For the cluster to be able to manage the resource, it is essential that all nodes in the cluster can access the XML file with the configuration. Therefore, you need to make sure to put it on the shared storage device. In the preceding command, you created a resource with the name smallcent using the VirtualDomain resource agent. To tell this resource agent where it can find the hypervisor, we included hypervisor="qemu:///system" in the resource definition. To allow this VM to migrate, the migration_transport mechanism is defined as "ssh." This works only if the hosts are configured with keys that allow for automated login from one host to the other. Finally, the config parameter tells the cluster where it can find the XML configuration that is used to manage the resource.
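Setting up the passwordless root login that migration_transport="ssh" relies on might look like the following sketch; the hostnames suse1 and suse2 are the two nodes from the example cluster, and the equivalent steps need to be run in both directions:

```shell
# On suse1: generate a key pair without a passphrase (if none exists yet)
ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa

# Copy the public key to the peer node
ssh-copy-id root@suse2

# Verify that login now works without a password prompt
ssh root@suse2 true && echo "passwordless login OK"
```

Without working key-based login in both directions, a live migration attempt will hang or fail, and the cluster will fall back to stopping and restarting the VM.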
At this point the configuration as shown with crm configure edit should look like the following:
node $id="3232236745" suse1
node $id="3232236746" suse2
primitive dlm ocf:pacemaker:controld \
    op start interval="0" timeout="90" \
    op stop interval="0" timeout="100" \
    op monitor interval="10" timeout="20" start-delay="0"
primitive o2cb ocf:ocfs2:o2cb \
    op stop interval="0" timeout="100" \
    op start interval="0" timeout="90" \
    op monitor interval="20" timeout="20"
primitive smallcent ocf:pacemaker:VirtualDomain \
    params hypervisor="qemu:///system" migration_transport="ssh" config="/shared/smallcent.xml" \
    meta allow-migrate="true" \
    op stop timeout="120" interval="0" \
    op start timeout="120" interval="0" \
    op monitor interval="20" timeout="20"
group ocfs2-base-group dlm o2cb
clone ocfs2-base-clone ocfs2-base-group \
    meta ordered="true" clone-max="2" clone-node-max="1"
property $id="cib-bootstrap-options" \
    dc-version="1.1.10-1.2-d9bb763" \
    cluster-infrastructure="corosync" \
    stonith-enabled="false" \
    last-lrm-refresh="1399852426"
#vim:set syntax=pcmk
You can now verify that the configuration is working using the
crm_mon command. If everything is configured correctly, you should now have an operational KVM high availability cluster.
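One way to exercise the configuration is to migrate the VM resource between nodes by hand and watch the cluster follow along; the node names are the suse1/suse2 pair from the cluster configuration:

```shell
# Show the cluster status once and exit (omit -1 for continuous monitoring)
crm_mon -1

# Move the VM to the other node; with allow-migrate="true" this should
# trigger a live migration rather than a stop/start cycle
crm resource migrate smallcent suse2

# Remove the location constraint the migrate command created,
# so the cluster is free to place the resource again
crm resource unmigrate smallcent
```

The unmigrate step matters: crm resource migrate works by adding a location constraint, and leaving it in place would pin the VM to one node and defeat the purpose of the high availability setup.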