Build a KVM high availability cluster on a budget

Most Linux distributions include everything you need to build a KVM high availability cluster and protect your environment.

Many small shops use plain KVM virtualization, but often they have no measures in place to keep VMs available if a host fails. In this article, you'll learn how to build a simple setup that ensures the availability of your VMs.

You can use KVM with any Linux distribution, but you will find differences in how clustering is handled across distributions. The Pacemaker stack originated in the SUSE distribution, and Red Hat has only recently finalized its approach. So, in this article, I'll explain how to set up clustering on OpenSUSE 13.1.

Figure 1: KVM high availability architecture overview

The procedure I describe assumes that the nodes in your cluster are already connected to a storage area network (SAN). If this is not the case, it is relatively easy to connect virtualization hosts to a Linux SAN. You can, of course, also use a SAN appliance if your environment has one. However, the approach we'll take here -- building the cluster using the OCFS2 shared file system -- will work only if you're using a SAN.
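
If your hosts are not yet connected to the SAN, an iSCSI connection is usually the quickest way to get there. The commands below are a minimal sketch that assumes the open-iscsi initiator package and uses the target portal and IQN that appear later in this article's configuration; substitute the values for your own SAN.

# install the iSCSI initiator tools
zypper in open-iscsi

# discover targets on the SAN portal (address taken from the configuration below)
iscsiadm -m discovery -t sendtargets -p 192.168.1.125

# log in to the target and make the login persistent across reboots
iscsiadm -m node -T iqn.2014-01.com.example:kiabi -p 192.168.1.125 --login
iscsiadm -m node -T iqn.2014-01.com.example:kiabi -p 192.168.1.125 --op update -n node.startup -v automatic

Run these on both nodes so that each virtualization host sees the same SAN disk (for example, as /dev/sdb).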

Here's a quick overview of the steps we'll take to configure a KVM high availability cluster.

  1. Create the basic cluster
  2. Configure an OCFS2 cluster file system on the SAN for shared storage
  3. Install a VM using the SAN disk as the storage back end
  4. Set up Pacemaker cluster resources for the VM
  5. Verify the cluster configuration

Create the basic cluster

Start by creating a basic cluster on OpenSUSE 13.1. Use the command zypper in pacemaker ocfs2-tools lvm2-clvm to install the packages required to build the cluster. The cluster consists of two layers: the lower layer, Corosync, takes care of communication within the cluster, and the upper layer, Pacemaker, takes care of resource management. To configure the lower layer, start from the example configuration file /etc/corosync/corosync.conf.example. Copy this file to /etc/corosync/corosync.conf and make sure to modify the following lines:

bindnetaddr: 192.168.4.0
quorum {
        # Enable and configure quorum subsystem (default: off)
        # see also corosync.conf.5 and votequorum.5
        provider: corosync_votequorum
        expected_votes: 2
}

The bindnetaddr line should reflect the network address (not a specific host address) of the IP network your nodes use for cluster communication. The quorum section enables the votequorum subsystem and tells the cluster how many votes, that is, nodes, to expect.
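
To put those lines in context, the totem section of corosync.conf might end up looking roughly like the sketch below. The cluster name and multicast settings here are assumptions based on a typical example file; keep the values your distribution ships and only change what is mentioned above. The quorum section shown earlier sits below it in the same file.

totem {
        version: 2
        # name used by cluster services such as DLM
        cluster_name: kvm-cluster
        secauth: off
        interface {
                ringnumber: 0
                # network address of the cluster communication network
                bindnetaddr: 192.168.4.0
                mcastaddr: 239.255.1.1
                mcastport: 5405
        }
}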

On both nodes, start and enable the Corosync and Pacemaker services using systemctl start corosync; systemctl start pacemaker; systemctl enable corosync; systemctl enable pacemaker. Then type crm_mon. This should give you output similar to the listing below, which verifies that the cluster is operational.

Last updated: Sun May 11 19:38:24 2014
Last change: Sun May 11 19:38:24 2014 by root via cibadmin on suse1
Stack: corosync
Current DC: suse1 (3232236745) - partition with quorum
Version: 1.1.10-1.2-d9bb763
2 Nodes configured
0 Resources configured
Online: [ suse1 suse2 ]

Configure the SAN for shared storage

To set up the OCFS2 shared file system, you first need to start some supporting services in the cluster. Type crm configure edit and add the following lines to the configuration:

primitive dlm ocf:pacemaker:controld \
        op start interval="0" timeout="90" \
        op stop interval="0" timeout="100" \
        op monitor interval="10" timeout="20" start-delay="0"
primitive o2cb ocf:ocfs2:o2cb \
        op stop interval="0" timeout="100" \
        op start interval="0" timeout="90" \
        op monitor interval="20" timeout="20"
group ocfs2-base-group dlm o2cb
clone ocfs2-base-clone ocfs2-base-group \
        meta ordered="true" clone-max="2" clone-node-max="1"
property $id="cib-bootstrap-options" \
        cluster-infrastructure="corosync" \
        stonith-enabled="false"
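
Before creating the file system, it is worth checking that the dlm and o2cb resources have started on both nodes, for example with:

crm_mon -1

The ocfs2-base-clone set should show both of its instances as Started on suse1 and suse2 before you continue.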

After these supporting services have started, you can create the OCFS2 file system. To do this, type mkfs.ocfs2 /dev/sdb (replace /dev/sdb with the device name of your SAN disk). Next, create a directory named /shared on both nodes and type crm configure edit again. Now, add the following to the cluster configuration:

primitive ocfs-fs ocf:heartbeat:Filesystem \
	params fstype="ocfs2" device="/dev/disk/by-path/ip-192.168.1.125:3260-iscsi-iqn.2014-01.com.example:kiabi" directory="/shared" \
	op stop interval="0" timeout="60" \
	op start interval="0" timeout="60" \
	op monitor interval="20" timeout="40"
clone ocfs-fs-clone ocfs-fs \
	meta clone-max="2" clone-node-max="1"
order ocfs2-fs-after-ocfs-base 1000: ocfs2-base-clone ocfs-fs-clone

At this point, you should have a shared file system available at both nodes and mounted on the /shared directory. Files that are written to one node will be immediately visible and accessible at the other node, which is exactly what you need to set up a KVM high availability environment.
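
A quick way to confirm this is to write a test file on one node and check for it on the other. The file name here is just an example:

suse1:~ # touch /shared/cluster-test
suse2:~ # ls -l /shared/cluster-test

If the file shows up on the second node, the shared file system is working as intended.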

In the second part of this two-part series, I will show you how to install VMs, integrate those VMs into the cluster and ensure the cluster configuration is working correctly.

This was first published in May 2014
