With virtualization, the general approach to capacity planning is the same as for traditional physical environments. First, computing resources are monitored over time. Monitoring tools may determine CPU usage, memory usage, I/O loading, disk storage demands, network bandwidth and myriad other factors. Utilization trends are evaluated in the context of business goals to identify specific needs, which can then be translated into action items like server upgrades or additional server purchases.
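Trend evaluation like this can be as simple as fitting a line to utilization samples and projecting when a planning ceiling will be reached. The sketch below is illustrative only: the sample data, the weekly interval and the 80% ceiling are assumptions, not values from any particular monitoring tool.

```python
# Hypothetical weekly CPU-utilization samples (percent) for one host;
# in practice these would come from a monitoring tool's history.
samples = [42.0, 45.5, 47.0, 51.5, 53.0, 57.5, 60.0, 63.5]

def trend_slope(values):
    """Least-squares slope: average utilization change per sample interval."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

slope = trend_slope(samples)                # percent growth per week
current = samples[-1]
weeks_left = (80.0 - current) / slope       # weeks until an assumed 80% ceiling
print(f"growth: {slope:.1f}%/week, ~{weeks_left:.0f} weeks to the 80% ceiling")
```

A steadily positive slope with only a few weeks of headroom left is the kind of finding that gets translated into an action item such as a server upgrade.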
For example, consider a transactional database for online orders running on a virtual machine (VM) hosted on a physical server. Suppose that monitoring reveals ongoing increases in CPU utilization and network latency at the server. This may reflect business growth -- more orders -- but it also signals an eventual need for greater processing power to handle the rising volume, keep latency low and maintain a smooth user experience. That, in turn, may justify a server upgrade or replacement.
Capacity planning in a virtual data center is also driven by the need to manage new VM workloads.
An excessive number of VM workloads (and poor VM workload distribution among servers) can easily choke off a server's performance. This not only compromises the performance of every VM on the server but can also cause stability problems and crash a VM -- or even the entire server and all of its VMs. In virtualization, capacity planning is often coupled with strong policies to ensure that each new VM is indeed necessary for the business and that adequate computing capacity is available to accommodate it in advance.
Furthermore, overloading a server may leave inadequate computing capacity in reserve, so the server can no longer accept VMs failed over from other faulty servers. The result is poor application availability. It's a common practice to load virtual servers to between 50% and 80% of their total computing capacity, leaving the remaining capacity free for failovers.
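The 50%-to-80% rule of thumb can be checked directly: the headroom below the planning ceiling is what remains to absorb failed-over VMs. The hosts, capacity units and VM sizes below are hypothetical, chosen only to show the arithmetic.

```python
def failover_headroom(capacity, used, ceiling=0.80):
    """Capacity still available on a host before it hits the planning ceiling."""
    return max(0.0, capacity * ceiling - used)

# Hypothetical pair of hosts, each with 100 units of compute capacity.
cap = 100.0
host_a_used = 65.0              # within the 50%-80% band
host_b_vms = [10.0, 8.0, 6.0]   # VMs that would fail over if host B died

headroom = failover_headroom(cap, host_a_used)   # 80 - 65 = 15 units
absorbable = sum(host_b_vms) <= headroom         # 24 > 15 -> cannot take them all
print(headroom, absorbable)
```

Here host A, though loaded within the recommended band, still cannot absorb all of host B's workloads alone -- which is why failover planning spreads reserve capacity across several hosts rather than relying on one.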
This was first published in October 2009