
Solve the virtual server farm sizing equation

Once you know how many servers your infrastructure will need, the next challenge is sizing those host servers to meet VM and application needs.

Determining the number of hosts and VMs you will need to support your applications is just the first step in designing your virtual server farm -- you'll also need to figure out how much memory and CPU capacity each host should contain. In a previous article, I explained how to use simple math to calculate the number of hosts you'll need. Using those calculations, you can now determine the resources needed in each host.

Since we are adding to our existing infrastructure in this example, we will assume the existing storage and networking infrastructure is already up to the task, leaving us to focus on host CPU and memory configurations.

Determining host CPU configuration

Most rack servers today come in dual- or quad-socket configurations, so a single-socket server platform is typically not an option when designing a virtual server farm. This leaves the choice between dual- and quad-socket hosts, though you also have the option of populating only one socket of a dual-socket server. While it's tempting to think more is better, each time we double the sockets we double the software licenses from companies like VMware and Microsoft, so this decision is important.
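To make the licensing impact concrete, here is a minimal sketch; the per-socket price is a hypothetical placeholder, since licensing terms and prices vary by vendor and edition:

```python
# Illustrative only: with per-socket licensing, cost scales linearly with
# socket count. The price below is an assumed placeholder, not a real quote.
LICENSE_COST_PER_SOCKET = 3000  # hypothetical cost per socket, in dollars

for sockets in (1, 2, 4):
    print(f"{sockets} socket(s): ${sockets * LICENSE_COST_PER_SOCKET:,} in licenses")
```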

A critical factor to consider is the core density of modern processors: processor contention isn't as much of a concern today as it was a few years ago. If you're using Intel processors, you also get the benefit of hyperthreading, which lets a single core present two logical processors to the hypervisor. In a dual-socket server, you could be looking at 30 cores per socket, or up to 60 cores with both sockets populated. The number of installed processors will depend on how many virtual CPUs your applications need.

With Windows Server 2003 and even 2008, one or two virtual CPUs per VM was the norm, but as workloads have grown, two to four vCPUs is becoming the standard. Your application needs will determine whether your hosts require one, two or four CPUs. With our example of 30 virtual machines, a single CPU with 30 cores may be underpowered, since many VMs require two or four vCPUs. On the other hand, a four-socket server would most likely be underutilized and create an expensive licensing scenario, leaving the dual-socket host as a good balance between cost and scalability. In our example, a dual-socket server with 20 to 30 cores per socket would give us 40 to 60 physical cores. That supports a 1:1 vCPU-to-core ratio for dual-vCPU VMs, or a 3:1 ratio for quad-vCPU VMs.
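As a sanity check on that arithmetic, here is a minimal sketch of the vCPU-to-core math, assuming the example figures above (30 VMs on a dual-socket host with 20 to 30 cores per socket); none of these numbers are fixed rules:

```python
def vcpu_to_core_ratio(num_vms, vcpus_per_vm, sockets, cores_per_socket):
    """Ratio of virtual CPUs requested to physical cores available."""
    total_vcpus = num_vms * vcpus_per_vm
    total_cores = sockets * cores_per_socket
    return total_vcpus / total_cores

# 30 dual-vCPU VMs on a dual-socket host with 30 cores per socket (60 cores):
print(vcpu_to_core_ratio(30, 2, 2, 30))  # 1.0 -> a 1:1 ratio

# 30 quad-vCPU VMs on a dual-socket host with 20 cores per socket (40 cores):
print(vcpu_to_core_ratio(30, 4, 2, 20))  # 3.0 -> a 3:1 ratio
```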

Sizing host memory needs

Memory is normally the most expensive part of a server build, so careful selection can save you considerable costs. The base operating system is where your initial calculations come in. Windows Server 2003 averaged 4 GB of base memory, 2008 averaged 6 GB, and 2012 is close to 8 GB. Your memory values will vary, but these averages lean slightly high to allow for performance overhead. With our density of 30 VMs multiplied by 8 GB of memory, that gives us a base of 240 GB per host.
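The base-memory estimate is simple multiplication; a quick sketch, using the 8 GB Windows Server 2012 average and the 30-VM density from the earlier host-count exercise:

```python
# Base memory estimate per host: VM density times average base OS memory.
# Both figures are the article's example averages; substitute your own.
VM_DENSITY = 30              # VMs per host
BASE_MEMORY_PER_VM_GB = 8    # Windows Server 2012 average

print(VM_DENSITY * BASE_MEMORY_PER_VM_GB)  # 240 GB of base memory per host
```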

Not all of that 8 GB per VM will be in use at all times, but we have not yet accounted for application needs, and this baseline will help us make a selection. Per socket, our server can support several options, including 96 GB, 128 GB, 192 GB and 256 GB. In our dual-socket configuration, 128 GB per socket becomes 256 GB per host, which leaves 16 GB of spare memory based on the 8 GB average per operating system. However, we come up short once we take the N+1 formula into account, which would require more than 25 GB of headroom.
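Choosing from the per-socket options then comes down to finding the smallest one whose host total covers the base estimate plus failover headroom. A minimal sketch, assuming the 25 GB headroom figure from the N+1 discussion above:

```python
def pick_memory_option(per_socket_options_gb, sockets, base_gb, headroom_gb):
    """Return the smallest per-socket option whose host total meets the need."""
    for per_socket in sorted(per_socket_options_gb):
        total = per_socket * sockets
        if total >= base_gb + headroom_gb:
            return per_socket, total
    return None  # no listed option is large enough

# 240 GB base plus more than 25 GB of N+1 headroom rules out 128 GB per socket:
print(pick_memory_option([96, 128, 192, 256], sockets=2,
                         base_gb=240, headroom_gb=25))  # (192, 384)
```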

While we have not finalized our application needs, 128 GB per socket is likely cutting it too close. Our second choice would be 192 GB per socket, giving us 384 GB per host. While this might seem high, that configuration works out to about 12.8 GB of memory per VM. Based on our application use, 192 GB sounds better as a starting point, and based on average use, we can evaluate whether we need to move to 256 GB.
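The per-VM figure is just the host total divided by VM density:

```python
# 192 GB per socket in a dual-socket host, spread across 30 VMs:
print(192 * 2 / 30)  # 12.8 GB of memory available per VM
```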

Configuring a new virtual server farm is not always easy, but a good understanding of the types of applications and workloads you have, coupled with a solid grasp of the steps and math involved, can help ensure your next virtual infrastructure fits not only your needs, but also your budget.
