VMware vSphere clustering requires a minimum of two physical servers. Although Windows Server 2003 SP1 and SP2 are limited to two server nodes, Windows Server 2008 SP2 and later (up through Windows Server 2012 R2) can support up to five server nodes in the same cluster. Every guest operating system within the cluster should use the default virtual network interface card (vNIC) to avoid unexpected communication issues between nodes. Also watch for time synchronization problems caused by nodes synchronizing time against a local server within the cluster: disable local host-based time synchronization and point every node at a common Network Time Protocol (NTP) server instead.
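As a sketch of that last step, a Windows guest can be pointed at an external NTP source with the built-in w32tm utility (the "pool.ntp.org" peer below is only a placeholder; substitute your organization's own time server):

```shell
:: Point the Windows Time service at a common NTP server
:: ("pool.ntp.org" is a placeholder -- use your own time source)
w32tm /config /manualpeerlist:"pool.ntp.org" /syncfromflags:manual /update

:: Restart the time service and force an immediate resync
net stop w32time && net start w32time
w32tm /resync
```

Run the same configuration on every node so the whole cluster agrees on a single time source.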
Storage configuration presents another critical aspect of virtualized server clusters. vSphere clustering can accommodate two types of virtual SCSI adapters: use the LSI Logic Parallel adapter for Windows Server 2003, and use LSI Logic SAS for Windows Server 2008 and later operating systems. As with vNIC selection, sticking to the preferred or default SCSI adapter ensures that storage drivers remain interoperable across the cluster nodes.
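For reference, the virtual SCSI adapter type is recorded in each VM's .vmx configuration file. A Windows Server 2008 (or later) node would carry entries like the following, where "lsisas1068" is vSphere's device name for LSI Logic SAS and "lsilogic" would select LSI Logic Parallel:

```
scsi0.present = "TRUE"
scsi0.virtualDev = "lsisas1068"
```

Normally you choose the adapter type in the vSphere Client when creating the VM rather than editing the file by hand, but checking this entry is a quick way to confirm all nodes match.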
Also set the I/O timeout period (the disk timeout value) to a minimum of 60 seconds to give nodes time to reach shared storage across a busy network. On Windows guests this setting is adjusted through each server's registry, and it's important to verify that the timeout remains set properly if you reset or recreate the cluster later on.
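As a concrete illustration, on Windows Server guests the disk timeout is the TimeOutValue entry under the Disk service key, and it can be set and checked from an elevated command prompt:

```shell
:: Set the disk I/O timeout to 60 seconds (REG_DWORD, value in seconds)
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Disk" /v TimeOutValue /t REG_DWORD /d 60 /f

:: Confirm the value, e.g. after a cluster reset or rebuild
reg query "HKLM\SYSTEM\CurrentControlSet\Services\Disk" /v TimeOutValue
```

A reboot is required for the new timeout to take effect.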
When provisioning disk storage within cluster servers, use thick provisioning with an "eagerzeroedthick" format. This approach allocates all of the storage space upfront and zeroes every block before use. It is often considered the best-performing storage allocation for a high-availability deployment: physical storage can never run short of the provisioned space (as it can with thin provisioning), and because the entire space is pre-cleared, the server never wastes time zeroing disk blocks on the fly. However, any disks created for raw device mapping (RDM) files need not be thick provisioned.
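When creating such a disk from the ESXi command line, vmkfstools can allocate an eager-zeroed thick VMDK directly (the size, datastore, and file names below are placeholders for your own environment):

```shell
# Create a 40 GB eager-zeroed thick virtual disk on a shared datastore
# (datastore and path names are examples only)
vmkfstools -c 40G -d eagerzeroedthick /vmfs/volumes/shared-ds/node1/quorum.vmdk
```

The same format can be chosen in the vSphere Client by selecting "Thick Provision Eager Zeroed" when adding the disk.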
One other note: Avoid overcommitting memory on VMs across clustered servers. Overcommitting allows greater consolidation -- especially for idle workloads -- but it can exact a serious performance hit when VMs demand more memory than is physically available. The server will fall back on page swapping to avoid a hard crash, and swapping to disk creates major performance problems. Run fewer VMs within the cluster, add memory to the servers, or -- if you must overcommit memory -- place the swap file on local disk within each server rather than on the shared SAN. That way, any page swapping that does occur won't add extra traffic to the SAN.
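One way to keep a VM's swap file off the SAN is a per-VM override in the .vmx file using vSphere's standard swap-placement option (the datastore path below is only an example; it should point at a datastore backed by the host's local disk):

```
sched.swap.dir = "/vmfs/volumes/local-datastore/swap"
```

The same result can be achieved cluster-wide in the vSphere Client by changing the cluster's swap file policy to store swap files in a host-specified datastore instead of alongside the VM.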
Related Q&A from Stephen J. Bigelow