
A straightforward approach to Linux virtual clustering

Virtual clustering has a reputation for being a difficult and involved process, but it doesn't have to be. By starting small, you can greatly reduce virtual cluster complexity.

Setting up a virtual cluster can be a complex, time-consuming process, so the best way to get your footing is to start small. In this article, I will show you how to create a virtual cluster by first creating an example test cluster. Ideally, this article will give you a better understanding of virtual clustering on Red Hat.

In order to proceed, you should have two Red Hat Enterprise Linux (RHEL) or CentOS 7 nodes set up with correct forward and reverse domain name system (DNS) records. Make sure you are on version 7, as there are major changes between Red Hat versions 6 and 7 that drastically reduce cluster management complexity.
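As a quick sanity check, you can confirm that each node resolves in both directions before going any further. A minimal example, assuming the hostnames node1 and node2 and the address 192.168.0.101 (substitute your own names and addresses):

    getent hosts node1            # forward lookup should return node1's address
    getent hosts 192.168.0.101    # reverse lookup should return the hostname

Run the same checks from both nodes so each one can resolve itself and its partner.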

There are essentially two technologies underneath the clustering setup: Pacemaker and Corosync. Pacemaker handles the cluster resource management side, whereas Corosync handles the lower-level clustering work -- node membership and messaging -- to ensure connectivity.

Install them both using the command shown below:

    yum install -y pcs fence-agents-all

This command will download all the dependencies and requirements for the clustering setup. It must be repeated on both nodes.
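If you want to verify that the packages landed before moving on, a quick query on each node should report the installed versions:

    rpm -q pcs fence-agents-all
    pcs --version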

Out of the box, you will also need to add firewall rules to allow the required traffic. Doing this is as simple as entering the following commands on both nodes:

    firewall-cmd --permanent --add-service=high-availability
    firewall-cmd --permanent --add-service=http
    firewall-cmd --reload
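To confirm the rules took effect, you can list the services allowed in the active zone; after the reload you should see high-availability and http in the output:

    firewall-cmd --list-services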

Starting the cluster service

The next step is to enable and start the cluster service. Use the commands below to configure it:

    systemctl enable pcsd.service
    systemctl start pcsd.service

The first line enables the clustering service at boot up; the second starts it up for the current session.
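A quick way to confirm the daemon is both enabled and running:

    systemctl is-enabled pcsd.service    # should print "enabled"
    systemctl is-active pcsd.service     # should print "active"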

Figure A. Enabling and starting up a clustering service.

If you're interested, you can check the logs in /var/log/cluster/ to see what is happening. At this point, we have started the cluster services, but we haven't actually built a cluster.
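For example, the following will follow the Corosync log and the pcsd journal -- note that corosync.log only appears once Corosync is actually running, which happens later when the cluster is created:

    tail -f /var/log/cluster/corosync.log
    journalctl -u pcsd.service --since today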

With the latest versions of RHEL or CentOS, you can configure the cluster directly from the command line with the pcs command -- short for "Pacemaker/Corosync configuration system" -- making it much easier than in previous releases.

When the cluster software is installed, it creates a user called hacluster that manages the virtual clustering. In order to use this account, you must set its password, so reset the password to something you know -- on both nodes -- using this command:

    passwd hacluster

Managing the virtual cluster

Once that is done, we can start to manage the virtual cluster and its nodes. The first step is to authorize the nodes that will be in the virtual cluster. The command is as follows:

    pcs cluster auth node1 node2

If everything goes according to plan, your screen should look similar to Figure B.

Figure B. Authorizing cluster nodes.
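The auth command prompts for a username and password; use the hacluster account and the password you set earlier. If you would rather supply the credentials non-interactively -- in a provisioning script, for instance -- pcs accepts them as options (shown here with a placeholder password):

    pcs cluster auth node1 node2 -u hacluster -p 'YourPasswordHere'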

Creating cluster resources

The next step is to create the cluster itself on top of the authorized nodes. This can be accomplished using the pcs cluster setup command, substituting your own cluster name and node names.

    pcs cluster setup --start --name myapp node1 node2

At this point, the cluster resource controller will flush the existing configuration, sync new configuration data and build a new configuration incorporating the two nodes specified. All that's left to get the virtual cluster up and running is to enable it:

    pcs cluster enable --all

This should show both of your nodes as enabled. To check the full cluster status, we can use the command pcs cluster status, which gives an overview of the current state of the cluster.

Figure C. Checking the current cluster status.
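Two related commands are useful here: pcs cluster status summarizes the state of the cluster itself, while pcs status also lists any resources and fence devices once they exist:

    pcs cluster status
    pcs status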

Understanding the role of shared storage

Let's take a moment to talk about shared storage. There are many approaches to shared storage -- Network File System and iSCSI, among others -- but in our case, we won't be setting up any storage.

One important item to remember when dealing with virtualized cluster nodes is to ensure any file locking system at the hypervisor level is turned off. Not doing so can create many issues, including causing the disk in question to go read-only. Ignore this at your peril. The locking should be done at the OS level. To override this setting, check the documentation for your hypervisor.

In this example, we'll just be setting up the configuration.

One thing we need to do is manage how the servers decide what is "alive" and what is "dead," so to speak, within the virtual cluster. This is done through a process called fencing. Fencing gives each node the power to stop the other in order to preserve the integrity of the cluster when a node fails only partially and starts causing issues.

Without a valid fencing configuration, no client-facing services will start. The fencing command is pcs stonith. STONITH is an acronym for "Shoot The Other Node In The Head"; it is the Linux cluster service capable of shutting down a node that is not working correctly.
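You can see which fence agents are available on your system -- they come from the fence-agents-all package installed earlier -- and read the options a particular agent accepts:

    pcs stonith list
    pcs stonith describe fence_virt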

There are several fence methods available, but in our basic example we will use the built-in virtual fencing agent, fence_virt. From either node in our example, use the following command, substituting in pcmk_host_list the name of the node this device should be able to fence:

    pcs stonith create newfence fence_virt pcmk_host_list=f1
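To confirm the fence device was created, have pcs show it back to you:

    pcs stonith show newfence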

At this point, we can bring the cluster components back up by using the command pcs cluster start --all.

If you use the pcs status command, it should show that the services are now running properly.

Enabling the virtual IP

Once this is complete, we need to enable the virtual IP. The virtual IP is an address that isn't tied to a single physical machine. Its purpose is to act as a fault-tolerant front end for the service that the virtual cluster provides. For example, if one node fails, all traffic is routed to the alternative cluster node without any manual reconfiguration or noticeable downtime.

Use the command below, substituting your IP address as needed:

    pcs resource create Cluster_VIP ocf:heartbeat:IPaddr2 ip=192.168.0.100 cidr_netmask=24 op monitor interval=20s

The cluster should now be live and active. Before you begin working with your cluster in earnest, you will still need to add cluster resources for the application you actually want to make highly available.
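As a final check, you can confirm that the virtual IP resource is running and answering, assuming the 192.168.0.100 address used above:

    pcs resource show Cluster_VIP
    ping -c 3 192.168.0.100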

Next Steps

Setting up virtual clusters in ESXi with shared disks

Tips for resolving common clustering problems

Exploring the relationship between VM clustering and HA
