VMware vSphere clustering requires a minimum of two physical servers. Although Windows Server 2003 SP1 and SP2 are limited to only two server nodes, Windows Server 2008 SP2 and later (up through Windows Server 2012 R2) can support up to five server nodes in the same cluster. Every guest operating system within the cluster should use the default virtual network interface card (vNIC) to avoid unexpected communication issues between nodes. Also, watch for time synchronization problems caused by synchronizing time against a local server within the cluster. Disable host-based time synchronization and point every node at a common Network Time Protocol (NTP) server instead.
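As a sketch, the guest-side and VM-side settings might look like the following; the NTP server name is a placeholder, and the exact VMware Tools option name can vary by version:

```shell
# Inside each Windows guest: point the Windows Time service at a common
# NTP server (time.example.com is a placeholder for your NTP source).
w32tm /config /manualpeerlist:"time.example.com" /syncfromflags:manual /update
w32tm /resync

# In each VM's .vmx file: disable periodic VMware Tools time sync with
# the ESXi host, so the guest relies solely on NTP.
# tools.syncTime = "0"
```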
Storage configuration is another critical aspect of virtualized server clusters. vSphere clustering can accommodate two types of virtual SCSI adapters: use the LSI Logic Parallel adapter for Windows Server 2003, and use LSI Logic SAS for Windows Server 2008 and later operating systems. As with vNIC selection, using the preferred or default SCSI adapter ensures that storage drivers remain interoperable across the cluster nodes.
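For reference, the adapter choice shows up in each VM's .vmx configuration file; a minimal excerpt using the standard vSphere device identifiers might look like this:

```shell
# Excerpt from a VM's .vmx configuration file.

# Windows Server 2008 and later: LSI Logic SAS
scsi0.present = "TRUE"
scsi0.virtualDev = "lsisas1068"

# Windows Server 2003: LSI Logic Parallel
# scsi0.virtualDev = "lsilogic"
```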
Also set the I/O timeout period (the disk timeout value) to a minimum of 60 seconds to allow nodes to share storage data across a busy network. This setting can often be adjusted through each server's registry, though it's important to verify that the timeout remains set properly if you reset or recreate the cluster later on.
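On Windows guests, the disk timeout is held in the registry under the standard disk-class key; a minimal sketch of setting it to 60 seconds from an elevated command prompt:

```shell
# Set the disk I/O timeout to 60 seconds (value is in seconds).
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Disk" /v TimeOutValue /t REG_DWORD /d 60 /f

# Verify the value -- worth repeating after any cluster reset or rebuild.
reg query "HKLM\SYSTEM\CurrentControlSet\Services\Disk" /v TimeOutValue
```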
When provisioning disk storage within cluster servers, use thick provisioning with an "eagerzeroedthick" format. This approach allocates all storage space upfront and pre-zeroes the LUN before use. This is often considered the best-performing storage allocation for a high-availability deployment because there is no chance that physical storage will run short of the provisioned space (which can happen in thin provisioning) and the entire space is cleared so the server doesn't need to waste time zeroing disk space on the fly. However, any disks created for raw device mapping (RDM) files need not be thick provisioned.
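For example, an eager-zeroed thick virtual disk can be created from the ESXi shell with vmkfstools; the size and datastore path below are placeholders for your environment:

```shell
# Create a 40 GB eager-zeroed thick virtual disk: all blocks are
# allocated and zeroed upfront, before the disk is first used.
vmkfstools -c 40G -d eagerzeroedthick /vmfs/volumes/datastore1/node1/node1_data.vmdk
```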
One other note: Avoid overcommitting memory on VMs across clustered servers. Overcommitting can allow greater consolidation -- especially for idle workloads -- but it can impose a serious performance hit when VMs demand more memory than is physically available. The server will rely on page swapping to avoid a hard crash, but swapping to disk creates major performance problems for the server. Run fewer VMs within the cluster, add memory to the servers or, if you must overcommit memory, place the swap file on local disk within each server rather than on the shared SAN. That way, any page swapping that does occur won't generate excess SAN traffic.
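One way to keep swap traffic off the SAN is to redirect a VM's swap file to a host-local datastore; a sketch using a per-VM .vmx override (the datastore name is a placeholder, and the same result can be achieved through host- or cluster-level swap file settings in vSphere):

```shell
# Excerpt from a VM's .vmx file: place this VM's swap file on a
# host-local datastore instead of the shared SAN datastore.
sched.swap.dir = "/vmfs/volumes/local-datastore/swap/"
```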