In the world of computers, bigger is often better. More CPUs and more memory are the common solution to the challenges posed by today's bigger applications. The trend of bigger is better is so ingrained in how we approach things that we often don't even notice it anymore.
In a physical server environment, having excess CPU and memory does not adversely affect the application, but it does hurt the return on investment (ROI) in the form of wasted compute resources. In fact, that excess capacity helped give rise to virtualization in the first place. However, when we carry the same sizing mentality into the virtual world, the consequences can be more severe than reduced ROI.
A virtual environment is a shared sandbox where all VMs have to play nice with each other. If one VM takes all the resources -- whether or not it actually uses them -- other VMs can't have them. In a virtual infrastructure with dozens or hundreds of VMs, a couple of misconfigured VMs can adversely affect the entire environment. Even though we know we need to be cautious about adding resources, the old temptation to throw hardware at problems still carries weight in our thought process. In critical troubleshooting situations, where getting systems back online takes precedence over everything else, throwing resources at a problem is still an accepted practice. Unfortunately, adding more resources becomes the all-too-easy answer and can lead to more problems.
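To make the shared-sandbox point concrete, here is a minimal sketch -- with made-up host capacity and VM sizes -- showing how a couple of oversized VMs can claim most of a host's memory on their own:

```python
# Illustrative numbers only: a hypothetical host and its VM allocations.
host_memory_gb = 256

# Memory allocated to each VM (GB); the last two are oversized "monster" VMs.
vm_allocations_gb = [4, 4, 8, 8, 16, 64, 96]

allocated = sum(vm_allocations_gb)
monster_share = (64 + 96) / allocated  # share claimed by the two largest VMs

print(f"Allocated: {allocated} GB of {host_memory_gb} GB "
      f"({allocated / host_memory_gb:.0%} of host memory)")
print(f"Two oversized VMs claim {monster_share:.0%} of all allocated memory")
```

Here the two largest VMs reserve 80% of everything allocated, whether they use it or not; that is capacity the other VMs can never get.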
Before making that leap and simply adding resources, we need to consider whether the workload really needs more resources or if the problem runs deeper. What we want to do is look at the source of the problem rather than trying to address it with a quick fix. Administrators can't do this alone, as they may not know all of the application's functions. It will involve both the administrator and application owner working together.
This should not involve the application owner telling the system administrator what resources are needed or simply quoting system specs. It requires detective work on both sides to find out what is truly going on behind the scenes. Two mistakes often come into play when sizing up resources. The first is relying too heavily on Windows Task Manager -- an often-used tool that gives a quick look at resource use. It takes a very high-level view that is often misused to justify requests for additional resources. It is very limited, simply graphing resource consumption without providing details on active, swapped or cached resources.
The second mistake is relying too much on the vendor's guidelines. Unfortunately, these specifications are rarely a one-size-fits-all solution. Many factors outside the vendor's control affect how an application will perform. For that reason, installation requirements are often inflated to account for the fudge factor of the customer's environment.
As you start the detective work to uncover the root of a performance problem, hypervisor-level performance statistics will become your best source of information. These statistics will show if you have a restriction in VM performance and provide insight as to why. This information gives the virtual administrator and application owners the ability to look at active memory and CPU use. This insight into what is truly in use on the VM is beyond what Task Manager could ever give administrators, and can indicate whether resources are strained.
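The sizing logic behind that comparison can be sketched in a few lines. This is a hypothetical example -- the VM names, numbers and thresholds are illustrative, not taken from any real hypervisor -- comparing configured memory against hypervisor-reported active memory:

```python
# Hypothetical sketch: classify VMs by comparing configured memory with
# the active memory a hypervisor reports. Thresholds are illustrative.

def size_verdict(configured_gb: float, active_gb: float) -> str:
    """Classify a VM by the share of its configured memory that is active."""
    usage = active_gb / configured_gb
    if usage > 0.90:
        return "strained: consider growing"
    if usage < 0.25:
        return "oversized: consider shrinking"
    return "reasonably sized"

# Made-up VMs: (configured GB, active GB) as a hypervisor might report them.
vms = {"app01": (16, 15.2), "db01": (64, 10.1), "web01": (4, 2.0)}
for name, (configured, active) in vms.items():
    print(name, "->", size_verdict(configured, active))
```

In this sketch, db01 was requested at 64 GB but only keeps about 10 GB active -- exactly the kind of gap Task Manager's simple graphs tend to hide.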
This level of information has been available for some time, but it was often not shared because of the data's complexity and the silos that often exist between roles and responsibilities. While not everyone will understand every technical aspect of virtualization, the concepts are no longer a mystery. A better understanding of what is truly going on in the virtual infrastructure can help application owners see that throwing hardware at the problem is not always the answer.
Now that everyone can see and understand the performance data, we can identify when application problems are not caused by insufficient resources. Patches and misconfigurations can play just as large a role in performance issues.
Now, we begin to see the advantages of creating smaller VMs and growing them rather than creating monster VMs from the start. However, we're still left with the nagging question: "What happens if you don't allocate enough resources?"
Previous versions of VMware vSphere and Windows Server required a VM to be shut down before you could add virtual resources. But with VMware vSphere 5, Windows Server 2008 Datacenter and Windows Server 2012, administrators now have the ability to "hot" add virtual CPUs and memory. Unfortunately, this ability does not exist for Hyper-V guests. This flexibility -- coupled with the existing ability to extend and add disks -- leaves very little within a modern Windows installation on VMware that cannot be extended or grown without a reboot. The biggest downside to extending resources on the fly is that the VM must be shut down to remove resources, so small incremental increases are better than large jumps.
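The "grow in small steps, never shrink on the fly" policy can be expressed as a simple rule. The function below is a hypothetical sketch -- the step size and high-water mark are assumptions, not vendor guidance -- of how an administrator might decide when a hot add is warranted:

```python
# Hypothetical sketch of the "start small, grow incrementally" policy.
# Hot add can grow a running VM, but shrinking requires a shutdown, so we
# grow in small steps rather than one large jump.

def next_memory_gb(current_gb: int, active_gb: float,
                   step_gb: int = 2, high_water: float = 0.85) -> int:
    """Return the new memory size: one small step up only when active use
    crosses the high-water mark; never shrink automatically."""
    if active_gb / current_gb >= high_water:
        return current_gb + step_gb
    return current_gb

print(next_memory_gb(8, 7.5))  # strained VM: grow by one small step
print(next_memory_gb(8, 3.0))  # comfortable VM: leave it alone
```

Because removing memory still means downtime, the rule only ever moves upward, and only by one modest step at a time.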
Having this flexibility is not an excuse to starve your VMs of resources and only increase them based on who complains the loudest. Due diligence is still needed to size VMs properly based on previous experience and reasonable vendor recommendations.
Virtualization is a shared sandbox of resources, and starting a bit small while retaining the flexibility to grow with a few clicks is a win-win for everyone -- all while saving money in the process.