Once you've learned how to configure a single Proxmox host and Linux and Windows guests, the next step is to expand the single node into a cluster with shared storage. This makes it easier to manage the underlying hosts without incurring downtime on the virtual guests.
Creating a second Proxmox node
The first step in this process is to create a second Proxmox node. We've already discussed how to create a basic Proxmox host in a previous article, so refer to that article in order to get a second host up and running. Unfortunately, most Proxmox cluster configurations require a Secure Shell (SSH) terminal. Log in to the first host using PuTTY -- or your chosen SSH client -- as the root user with the password you set when you first configured the host.
Create the cluster by using the command shown below, replacing "mycluster" with an appropriate cluster name:
pvecm create mycluster
This command creates the cluster and adds the first node to it. At this point, the administrator needs to add the second node. Start by logging in to the second Proxmox host. Unlike in some other virtualization products, the joining node pulls the cluster configuration from the existing node rather than the other way around.
Run the command shown below, replacing "mainhostip" with the IP address of the first host that we just configured:
pvecm add mainhostip
Barring any errors, the cluster should now be up and running. Use the pvecm status command to see cluster configuration details. You can also view these details by logging out of the web interface, logging back in and then using the server layout tab.
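As a quick sanity check from the shell, the pvecm tool can report membership and quorum state. This is a minimal sketch to be run on either node:

```shell
# Confirm the cluster is quorate and both nodes have joined.
pvecm status   # prints the cluster name, quorum information and vote counts
pvecm nodes    # lists each member node with its node ID and name
```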
Adding shared storage to a cluster
At this point, the only thing that is missing is shared storage, which prevents VM migration. I recommend adding storage to the servers after the cluster has been created, as it bypasses many issues, ensures the shared storage is added to the nodes correctly and also saves you some typing.
It's beyond the purview of this guide to show you how to set up the storage back end. Suffice it to say, there are several different shared disk setups, including iSCSI and Network File System (NFS). In this example, we'll be using an NFS share served from network-attached storage.
To add NFS storage to the cluster nodes, make sure you are at the root level of the cluster and navigate to the Storage tab. Within the tab, go to the Add tab and select NFS from the drop-down menu, as shown in Figure A.
You'll notice that this brings up dialog boxes to fill in for the storage addition, as seen in Figure B. The system requires some information to add the NFS share. The ID box refers to a human-readable description -- take note that no spaces are allowed. The Server box refers to the NFS storage provider. Once you've filled in these boxes correctly, click on the down button next to the Export box to see a list of NFS mounts that the NFS server is sharing. This provides a sanity check to ensure your NFS export is working properly and that the configuration details are correct.
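The same storage addition can also be done from the shell with the pvesm tool. This is a sketch only; the server IP (192.168.1.50), export path (/export/vms) and storage ID (vm-store) are hypothetical placeholders for your own values:

```shell
# List the exports the NFS server offers -- the same sanity check
# the Export drop-down performs in the web interface.
pvesm nfsscan 192.168.1.50

# Add the share cluster-wide. The ID ("vm-store" here) must not
# contain spaces; --content images marks it for VM disk images.
pvesm add nfs vm-store --server 192.168.1.50 \
    --export /export/vms --content images
```

Because pvesm operates at the data center level, the storage definition propagates to every node in the cluster automatically.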
The Content tab, shown in Figure C, indicates which type of data will be stored and shared on the disk. The drop-down menu offers a rather long list of items to select from, but don't be intimidated. As a best practice, keep your VMs and other content types on separate storage.
I can't stress enough the importance of keeping VMs on shared storage. This makes maintenance less problematic. In the event of a host failure, you won't lose access to the storage volume, which will make restarting VMs much easier.
You should also have a single shared ISO installation media folder to keep all of your installation media up to date. Creating an ISO media share is as simple as creating an NFS share and exporting it by selecting ISO image from the Content drop-down menu.
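An ISO share follows the same pattern, with the content type set to iso instead of images. Again, the storage ID, server and export path below are hypothetical:

```shell
# Hypothetical ISO media share, marked for ISO images only.
pvesm add nfs iso-store --server 192.168.1.50 \
    --export /export/iso --content iso
```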
Deploying VMs for maximum functionality
Now that we've configured the shared storage, we need to revisit how to deploy VMs to make sure that we've maximized functionality and that our shared storage works.
Before we can create the VM, we will need to upload ISO installation media to the ISO storage volume file we created previously. Doing so is simple: Select Storage View from the Context menu and expand one of the hosts. Now, select the ISO storage folder. There is an upload button under the Content tab; use this to navigate to the ISO media and upload it. The ISO media will then become available to any Proxmox nodes that have access to that chunk of storage.
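If you prefer the command line to the upload button, NFS storage is mounted under /mnt/pve/<storage ID> on each node, and ISO content lives in its template/iso subdirectory. A sketch, assuming the hypothetical "iso-store" ID from earlier:

```shell
# From a workstation, copy an ISO straight into the shared store.
# Any node with access to the storage will see it immediately.
scp debian-12.iso root@mainhostip:/mnt/pve/iso-store/template/iso/
```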
Now we must create the VM on shared storage using the shared ISO store.
Click the Create VM button. This should bring up an identical Create VM setup to what we have seen before when we created a Linux and a Windows VM.
Clicking the down arrow on the Node box should show both nodes. You can select which host the VM resides on or leave the default, which is the node whose web interface you're currently logged in to.
Go through the menu as we did before, this time clicking the Storage drop-down menu in the CD/DVD tab, and select your ISO storage folder. Next, click the drop-down menu for ISO Image; all of your ISO install media should be available there, as seen in Figure D.
The last item to set up is the hard disk. All you'll need to do is select the storage location from the Storage drop-down menu, as seen in Figure E.
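The whole VM creation can also be scripted with the qm tool. This is a sketch under the assumptions used earlier: VM ID 100, the hypothetical "vm-store" and "iso-store" storage IDs, and a debian-12.iso image already uploaded:

```shell
# Create a VM whose disk and install media both live on shared storage.
qm create 100 --name testvm --memory 2048 \
    --net0 virtio,bridge=vmbr0 \
    --scsi0 vm-store:32 \
    --ide2 iso-store:iso/debian-12.iso,media=cdrom
```

Because both the 32 GB disk (scsi0) and the CD-ROM image sit on shared storage, the VM remains migratable between nodes.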
You now have a fully configured cluster that can use shared storage and that allows live migration between hosts and different storage systems. Migrating a VM to another node is as simple as right-clicking the VM and selecting Migrate.
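The same migration is available from the shell. A sketch, assuming VM ID 100 and a second node named "node2":

```shell
# Live-migrate VM 100 to the other node; --online keeps the guest
# running during the move (requires shared storage, as set up above).
qm migrate 100 node2 --online
```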
In the next part of this series, we will set up integration for Proxmox with Active Directory and cover roles and rights within the system and how to configure them.