Q. What hardware specifications should I consider when selecting a server for virtualization? And is it better to replace all my servers at the same time or stagger the hardware refresh?
Three key elements in selecting a server for virtualization are CPU, memory and network I/O capacity, all of which are important for workload consolidation. CPU considerations include clock speed and the number of cores. Resist the urge to buy the fastest possible CPUs; it is often more cost-effective to opt for more modest (and less expensive) clock speeds and a larger number of cores instead. You'll get better consolidation from two 2.4 GHz, 10-core CPUs than from two four-core CPUs operating at 3+ GHz. Only invest in faster CPUs when anticipated workload performance demands it. If you really want peak performance from a server for virtualization, invest in CPUs with larger internal caches.
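The consolidation comparison above comes down to counting hardware threads. A minimal sketch, assuming two hardware threads per core (simultaneous multithreading) and one potential workload per thread:

```python
# Hypothetical consolidation comparison for the two CPU options above.
# Assumes 2 hardware threads per core (SMT) -- an assumption, since
# thread count per core varies by processor family.
def total_threads(sockets, cores_per_cpu, threads_per_core=2):
    """Total hardware threads available for scheduling virtual CPUs."""
    return sockets * cores_per_cpu * threads_per_core

# Two 10-core 2.4 GHz CPUs vs. two four-core 3+ GHz CPUs:
many_cores = total_threads(sockets=2, cores_per_cpu=10)  # 40 threads
fast_cores = total_threads(sockets=2, cores_per_cpu=4)   # 16 threads
print(many_cores, fast_cores)  # 40 16
```

Even with a lower clock speed, the higher-core-count option supports two and a half times as many potential workloads.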
Virtual machines reside in memory, so more memory supports additional consolidation. There should be at least enough DDR3 memory to support the number of workloads you expect to run on the system. For example, the two 10-core CPUs above would support 40 threads, or potential workloads (20 cores with two threads each). If each workload uses an average of 2 GB, the server would need at least 80 GB, though many organizations would select the next closest "binary" amount of 96 GB, or even 128 GB. More memory would simply waste money, while less memory would compromise consolidation or performance. Remember that memory resilience features, such as memory sparing or memory mirroring, require additional memory modules that do not add to the available memory pool, so reserve these features for servers running mission-critical workloads.
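The memory-sizing arithmetic above can be sketched as a simple calculation: workloads times average footprint, rounded up to the next common configuration. The list of candidate sizes here is illustrative, not exhaustive:

```python
# Sketch of the memory-sizing rule of thumb above.
# The candidate configurations are examples; real options depend on
# the server's DIMM slots and module sizes.
def size_memory_gb(workloads, avg_gb_per_workload,
                   configs=(64, 96, 128, 192, 256)):
    """Return (raw requirement, next standard configuration) in GB."""
    needed = workloads * avg_gb_per_workload
    for config in configs:
        if config >= needed:
            return needed, config
    return needed, needed  # larger than any listed configuration

raw, rounded = size_memory_gb(workloads=40, avg_gb_per_workload=2)
print(raw, rounded)  # 80 96 -- 80 GB needed, 96 GB next standard size
```

This matches the worked example: 40 workloads at 2 GB each need 80 GB, and 96 GB is the next closest "binary" amount.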
Every workload needs network access, so be sure there is adequate bandwidth available on any server for virtualization. For example, the single Gigabit Ethernet (GbE) network interface card (NIC) common on stock servers will almost certainly be inadequate for a modern virtualized server. Consider upgrading the network interface with a dual-port or quad-port NIC, or even a 10 GbE NIC if workload demands justify it.
There are other hardware considerations as well. For example, adding a graphics processing unit (GPU) may demand at least one PCIe x8 slot. An expansion NIC adapter will need a slot, and a storage network interface, such as a Fibre Channel host bus adapter, will also require a PCIe slot. Be sure that the server can accommodate all of the upgrades you intend to add.
Servers represent a significant capital investment, so bargain hard with your prospective server vendors. Be sure to bring a server into your data center for evaluation, where you’ll be able to test performance at full consolidation. This is an excellent opportunity to identify possible oversights in system requirements and refine your specifications before making the actual purchase. Vendors recognize it is in their best interest to assist customers with specifications and evaluation units.
When it comes to timing your purchase, the choice is usually more a business decision than a technical one. It is certainly possible to purchase and upgrade the entire server fleet at the same time, and the biggest purchases often net volume discounts. However, this requires the biggest capital outlay and poses the greatest risk of disruption to the production environment.
Instead, many organizations opt to stagger server purchases across the system lifecycle. This results in less costly and less disruptive routine purchases. In addition, buyers can leverage the continuing evolution of server hardware. For example, high-end CPUs that may be too expensive today might be affordable in a year or two.