Q

What are the vSphere clustering requirements?

What are the hardware and software requirements for server clustering with vSphere 5.5?

VMware vSphere clustering requires a minimum of two physical servers. Although Windows Server 2003 SP1 and SP2 are limited to only two server nodes, Windows Server 2008 SP2 and later (up through Windows Server 2012 R2) can support up to five server nodes in the same cluster. Every guest operating system within the cluster should use the default virtual network interface card (vNIC) to avoid unexpected communication issues between nodes. Also, be careful of time synchronization issues caused by synchronizing guests against a local server within the cluster: disable local host-based time synchronization and use a common network time protocol (NTP) server instead.
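As a rough illustration of the time-sync advice, the following is a minimal pyVmomi sketch (not the article's own procedure) that turns off VMware Tools host time synchronization for a clustered VM so the guest relies on your common NTP source instead. It assumes `vm` is a vim.VirtualMachine object obtained from an existing vCenter or ESXi connection.

```python
# Hedged sketch: disable VMware Tools time sync with the host for one VM.
# Assumes `vm` is a vim.VirtualMachine from an already-established pyVmomi session.
from pyVmomi import vim

def disable_host_time_sync(vm):
    spec = vim.vm.ConfigSpec()
    spec.tools = vim.vm.ToolsConfigInfo(syncTimeWithHost=False)
    # Returns a vCenter task; wait on it with your usual task-waiting helper.
    return vm.ReconfigVM_Task(spec=spec)
```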

Storage configuration is another critical aspect of virtualized server clusters. vSphere clustering can accommodate two types of virtual SCSI adapters: use the LSI Logic Parallel adapter for Windows Server 2003, and use LSI Logic SAS for Windows Server 2008 and later operating systems. As with vNIC selection, sticking to the preferred or default SCSI adapters ensures storage driver interoperability across the cluster nodes.
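For readers who script their builds, here is a hedged pyVmomi sketch that adds an LSI Logic SAS controller (the default for Windows Server 2008 and later) to a VM. The physical bus sharing setting shown is an assumption based on a cluster-across-boxes layout; adjust it for your own topology. `vm` and `bus_number` are illustrative inputs, not values from the article.

```python
# Sketch: add an LSI Logic SAS controller to a clustered VM.
# Assumes `vm` is a vim.VirtualMachine from an open pyVmomi session.
from pyVmomi import vim

def add_lsi_sas_controller(vm, bus_number=1):
    controller = vim.vm.device.VirtualLsiLogicSASController()
    controller.busNumber = bus_number
    # Physical bus sharing is typical for cluster-across-boxes; change as needed.
    controller.sharedBus = vim.vm.device.VirtualSCSIController.Sharing.physicalSharing

    dev_spec = vim.vm.device.VirtualDeviceSpec()
    dev_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    dev_spec.device = controller

    return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[dev_spec]))
```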

Also set the I/O timeout period (the disk timeout value) to a minimum of 60 seconds so nodes can share storage data across a busy network. This setting is adjusted through each server's Windows registry, and it's important to verify that the timeout remains set properly if you reset or recreate the cluster later on.
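The registry value in question is TimeOutValue (a REG_DWORD, in seconds) under the Disk service key. A small sketch using Python's standard winreg module, run with administrative rights inside each guest, would look like this; re-run or re-check it after recreating the cluster.

```python
# Sketch: set the Windows disk I/O timeout to 60 seconds from inside the guest.
# Requires administrative rights; the value is read by the Windows disk driver.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Services\Disk"

def set_disk_timeout(seconds=60):
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "TimeOutValue", 0, winreg.REG_DWORD, seconds)
```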

When provisioning disk storage within cluster servers, use thick provisioning with an "eagerzeroedthick" format. This approach allocates all storage space upfront and pre-zeroes the virtual disk before use. It is often considered the best-performing storage allocation for a high-availability deployment: there is no chance that physical storage will run short of the provisioned space (which can happen with thin provisioning), and because the space is already cleared, the server doesn't waste time zeroing disk blocks on the fly. However, any disks created as raw device mappings (RDMs) need not be thick provisioned.
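In pyVmomi terms, eagerzeroedthick corresponds to a flat disk backing with thin provisioning disabled and eager scrubbing enabled. The sketch below builds such a device spec; `controller_key`, `unit_number` and `size_gb` are illustrative inputs you would take from your own environment, and the disk mode shown is an assumption.

```python
# Sketch: build a device spec for a new eagerzeroedthick virtual disk.
from pyVmomi import vim

def eager_zeroed_disk_spec(controller_key, unit_number, size_gb):
    backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
    backing.diskMode = "persistent"
    backing.thinProvisioned = False   # thick provisioning
    backing.eagerlyScrub = True       # zero all blocks up front (eagerzeroedthick)

    disk = vim.vm.device.VirtualDisk()
    disk.backing = backing
    disk.controllerKey = controller_key
    disk.unitNumber = unit_number
    disk.capacityInKB = size_gb * 1024 * 1024

    dev_spec = vim.vm.device.VirtualDeviceSpec()
    dev_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    dev_spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
    dev_spec.device = disk
    return dev_spec
```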

One other note: Avoid overcommitting memory on VMs across clustered servers. Overcommitting can allow greater consolidation -- especially for idle workloads -- but it exacts a serious performance hit when VMs demand more memory than is physically available. The server will rely on page swapping to avoid a hard crash, but swapping to disk creates major performance problems. Use fewer VMs within the cluster, add memory to the servers, or, if you must overcommit memory, place the swap file on local disk within each server rather than on the shared SAN. That way, any page swapping that does occur won't generate excess SAN traffic.
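The swap file placement can be set per VM. Here is a minimal pyVmomi sketch, assuming the hosts have a local swap datastore configured and `vm` comes from an existing connection, that keeps the VM's swap file on host-local storage rather than the shared SAN datastore.

```python
# Sketch: place a VM's swap file on host-local storage so page swapping
# stays off the SAN. Assumes `vm` is a vim.VirtualMachine from an open session.
from pyVmomi import vim

def use_host_local_swap(vm):
    spec = vim.vm.ConfigSpec()
    spec.swapPlacement = "hostLocal"  # other options: "inherit", "vmDirectory"
    return vm.ReconfigVM_Task(spec=spec)
```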

This was last published in January 2015
