
How CPU and memory affect application performance

Application performance will change depending on whether you opt for more cores or a higher clock speed, and on how you size memory.

As the data center becomes more dependent on virtualization, we need to ensure the virtualization infrastructure is prepared for that dependence. With so many choices in the underlying hardware, such as spinning disk, flash, converged and hyper-converged infrastructure, the data center has become an ever-changing landscape. Unfortunately, in most cases, there is no one-size-fits-all answer.

While some vendors may disagree, the reality is that the virtual infrastructure exists to support applications. Many organizations have hundreds of applications, and each one has its own performance profile that needs to be evaluated. Virtualization abstracts the hardware and enables portability, scalability and flexibility like no technology before it. However, as with any abstraction of the hardware, limitations and performance constraints still exist -- even if the virtual machines don't see them directly.

Years ago, increasing an application's performance was about buying the fastest server possible, but that approach is no longer cost-effective. Virtualized resources don't match up with applications one for one; the environment more closely resembles a shared sandbox where everyone needs to play well together. To balance how many VMs can run on a single hardware platform without performance issues while still preserving the cost savings, you need to know your applications' profiles. Each application's profile is unique, but they typically share four key metrics: network, I/O, CPU and memory. With virtualization, you can control and even limit each of those metrics thanks to the abstraction the hypervisor provides. However, you can't allocate more resources than the hardware can supply. Abstraction does not create hardware; it only divides up the hardware that exists.
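As a rough illustration of that last point, the Python sketch below estimates how many VMs of a given profile fit on one host by checking which of the four metrics runs out first. The host capacity and per-VM figures are hypothetical examples, not measurements from any real environment.

# Minimal sketch: rough VM-per-host estimate from the four key metrics.
# All figures below (host capacity, per-VM profile) are hypothetical.
host = {"cores": 32, "memory_gb": 512, "iops": 50_000, "network_gbps": 20}
vm_profile = {"cores": 2, "memory_gb": 8, "iops": 800, "network_gbps": 0.25}

# The hypervisor can only divide what the hardware provides, so the
# binding constraint is whichever resource runs out first.
fits = {res: int(host[res] / vm_profile[res]) for res in host}
limit = min(fits, key=fits.get)
print(f"Per-resource VM counts: {fits}")
print(f"Binding constraint: {limit} -> about {fits[limit]} VMs per host")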

Managing CPU capacity

The common wisdom about processors used to be that the higher the clock speed, the better. Multicore CPUs have changed that thinking -- but not always for the better. CPU selection is tricky because the options can have a wide range of impact. You can choose more cores at a lower clock speed, fewer cores at a higher clock speed, or a middle ground between the two. The decision often comes down to your applications. Even today, you cannot assume every application is multithreaded and able to take advantage of multiple cores; many still can't use more than one or two. And even for an application that is multithreaded, you have to examine whether its profile is monolithic or transaction-based. That answer determines whether you need higher clock speeds or as many cores as possible.
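The following Python sketch illustrates that cores-versus-GHz trade-off. The two CPU options and the thread counts are hypothetical placeholders; the point is simply that a single-threaded application benefits only from clock speed, while a well-threaded one benefits from core count.

# Minimal sketch of the cores-vs-GHz trade-off.
# CPU models and per-application thread counts are hypothetical.
cpus = {
    "fewer-faster": {"cores": 16, "ghz": 3.6},
    "more-slower":  {"cores": 32, "ghz": 2.4},
}

def effective_ghz(cpu, usable_threads):
    """Rough throughput proxy: clock speed times the cores the app can use."""
    return min(cpu["cores"], usable_threads) * cpu["ghz"]

for name, cpu in cpus.items():
    single = effective_ghz(cpu, 1)      # monolithic, single-threaded app
    parallel = effective_ghz(cpu, 64)   # well-threaded, transactional app
    print(f"{name}: single-threaded ~{single} GHz, multithreaded ~{parallel} GHz")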

The application profile can also be a basis for separating your virtualized workloads across different farms, so each runs on hardware suited to its profile. Mixing in an environment such as VDI, which often needs a large number of cores but usually at a lower clock speed, can drastically affect production servers in the same environment that need the higher clock speed. This is one of the reasons VDI usually does not share the same infrastructure as production workloads. So bigger is not always better; look at the application profile to find out whether the CPU demand is multithreaded and transaction-based or monolithic. With that data, you can evaluate CPU options and make your selection based on cores and clock speed. This gives you a base value to work with, but you aren't finished until you also account for over-commitment and failures.
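As a simple illustration of factoring in over-commitment and failures, the sketch below checks a vCPU-to-core ratio as if one host in the cluster were already down. The cluster size, core counts and target ratio are assumptions for illustration only, not recommendations.

# Minimal sketch: vCPU over-commitment with failover headroom.
# Cluster size, cores per host and the target ratio are assumptions.
hosts = 4
cores_per_host = 32
vcpus_allocated = 300
target_ratio = 3.0          # e.g., 3 vCPUs per physical core

# Plan capacity as if one host has failed (N+1); over-commitment that only
# works when every host is healthy is not real headroom.
usable_cores = (hosts - 1) * cores_per_host
ratio = vcpus_allocated / usable_cores
print(f"Effective ratio with one host down: {ratio:.1f}:1 "
      f"({'within' if ratio <= target_ratio else 'over'} target {target_ratio}:1)")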

Memory sizing and performance concerns

The second and typically most expensive piece to look at is memory. Memory was one of the most confusing values to work with before virtualization, because few tools existed to really tell us what was going on inside our servers. Unlike other resources in a server, memory is often cached for future use. While this can provide a performance boost in some cases, it skews usage data because the memory is held by the operating system but not active. Fortunately, with monitoring at the hypervisor layer, virtualization now lets the administrator see what is active in memory compared with what the OS has simply cached. With this level of insight, we have a true understanding of what is needed, which can help eliminate costly overallocations that bring little or no benefit to the application.
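To make that concrete, here is a small Python sketch that compares granted memory against active memory and suggests a right-sized allocation. The VM names, the memory figures and the 25% headroom factor are made-up sample values, not vendor guidance.

# Minimal sketch: right-sizing from hypervisor-level memory stats.
# VM names, active/granted figures and the headroom factor are assumptions.
vms = {
    "db01":  {"granted_gb": 64, "active_gb": 22},
    "app01": {"granted_gb": 16, "active_gb": 11},
    "web01": {"granted_gb": 8,  "active_gb": 2},
}

headroom = 1.25   # keep 25% above observed active memory
for name, m in vms.items():
    suggested = round(m["active_gb"] * headroom)
    reclaim = m["granted_gb"] - suggested
    print(f"{name}: granted {m['granted_gb']} GB, active {m['active_gb']} GB "
          f"-> suggest ~{suggested} GB (reclaim ~{reclaim} GB)")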

Memory also has a unique caveat that the other resources do not. It can be supplemented with disk -- both solid-state drives (SSDs) and spinning disks -- to allow a greater level of over-commitment. Overcommitting to spinning disk usually has a substantial performance impact and should be avoided at all costs. Swapping to SSD is a bit different, because many believe SSD speed is nearly the same as memory. Unfortunately, that is not accurate: an SSD accessed through a disk subsystem delivers roughly 600 MB/s, while memory does not share that limitation and transfers data at closer to 15,000 MB/s, depending on the modules. Using SSD as a memory supplement should not be ruled out, but it is better suited to emergency swapping during a failure than to planned, everyday use.
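A quick back-of-the-envelope calculation shows why that matters. Using the rough throughput figures above, the sketch below estimates how long it takes to move a working set through SSD swap versus memory; the working-set size is an arbitrary example.

# Minimal sketch: time to move a working set at the article's rough
# throughput figures (600 MB/s SSD swap vs. ~15,000 MB/s memory).
working_set_gb = 32   # arbitrary example size
for medium, mb_per_s in {"SSD swap": 600, "RAM": 15_000}.items():
    seconds = working_set_gb * 1024 / mb_per_s
    print(f"{medium}: ~{seconds:.0f} s to move {working_set_gb} GB")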

Next Steps

Selecting CPU, processors and memory for virtualized environments

Monitoring vSphere CPU and memory usage

The relationship between cache technology, CPU and RAM
