Capacity planning is a crucial practice in any corporate data center. Administrators must forecast future computing loads based on analysis of trends monitored over time and coupled with a clear understanding of business goals.
In effect, capacity planning allows organizations to chart a course for their data centers. It allows them to make cost-efficient purchases that meet the needs of business applications across the anticipated user population.
Capacity planning is just as important in a virtual setting. Although virtualization brings flexibility to the data center, it can also waste computing resources and create performance bottlenecks if not managed properly. So virtualization only increases the need for well-researched capacity planning.
Here are some common questions and answers about capacity planning as it relates to virtualization technology:
How does virtualization affect capacity planning?
Virtualization does not change the underlying goals or benefits of capacity planning. But the major point of virtualization is to improve the utilization of computing resources, so the need for planning can actually be more acute in virtualized data centers, a sentiment echoed by IT professionals active in virtualization.
Virtualization also presents additional planning considerations such as server workload balancing and failover—ensuring that virtual machines, or VMs, are distributed in a way that makes the most efficient use of a server's computing resources. It also means ensuring that adequate computing capacity remains available to accept VMs migrated from other host servers as the need arises.
What are the biggest mistakes or issues overlooked with capacity planning in a virtual environment?
One of the biggest errors with virtualization is packing as many VMs onto each host server as its computing resources allow. This is technically feasible but generally discouraged because it's impossible to migrate or fail over a VM to a host server that is already at full capacity. Rather than trying to wring 100% utilization from every server, most IT professionals will shoot for about 50% to 80% utilization and leave the remaining capacity available for VM failovers.
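The headroom rule above can be sketched as a simple admission check. This is a hypothetical illustration, not a vendor tool: the function name, the GHz figures and the 80% ceiling are all assumptions chosen for the example.

```python
# Hypothetical sketch: check whether a host can accept a migrated VM
# while staying at or below a target utilization ceiling (80% here),
# leaving the rest of the capacity free for failovers.

def can_accept_vm(host_capacity_ghz, host_used_ghz, vm_demand_ghz, ceiling=0.80):
    """Return True if adding the VM keeps CPU utilization within the ceiling."""
    return (host_used_ghz + vm_demand_ghz) <= host_capacity_ghz * ceiling

# A 24 GHz host already running 14 GHz of load cannot take a 6 GHz VM
# under an 80% ceiling: 14 + 6 = 20 GHz exceeds the 19.2 GHz budget.
print(can_accept_vm(24.0, 14.0, 6.0))
```

The same test would be applied per resource (memory, I/O) in practice; CPU alone is used here to keep the sketch short.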
Virtualization sprawl, in which the number of VMs grows uncontrollably until they choke off vital computing resources, is another error that is often ignored until additional computing resources become unaffordable. New VMs are routinely introduced to meet important business needs, but noncritical VMs must be kept out of the data center—especially off production servers. VM lifecycle management practices help to mitigate virtualization sprawl by instituting business processes and policies to regulate the creation, handling and eventual removal of VMs from data center servers.
I thought VMs relied on things like CPU, memory and I/O, so why is storage capacity so important in a virtual data center?
VMs certainly are affected by a server's underlying computing resources. CPU cycles, memory space and I/O capacity will each influence the number of VMs that are hosted on that particular physical machine—and will indirectly affect the performance and stability of those VMs.
For example, the combined CPU, memory and I/O demand of all the VMs on a server should never exceed the total CPU, memory and I/O capacity of that server. If one or more computing resources fall short, VM performance or stability may suffer.
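That per-resource check can be expressed directly in code. The sketch below is illustrative only; the resource names, units and figures are assumptions, not measurements from any real environment.

```python
# Hypothetical sketch: verify that the combined CPU, memory and I/O demand
# of a server's VMs does not exceed the server's capacity in any dimension.

def oversubscribed_resources(server_capacity, vm_demands):
    """server_capacity: dict of resource -> capacity.
    vm_demands: list of dicts with the same keys.
    Returns the set of resources where total demand exceeds capacity."""
    shortfalls = set()
    for resource, capacity in server_capacity.items():
        total = sum(vm[resource] for vm in vm_demands)
        if total > capacity:
            shortfalls.add(resource)
    return shortfalls

server = {"cpu_ghz": 16.0, "mem_gb": 64.0, "io_mbps": 400.0}
vms = [
    {"cpu_ghz": 4.0, "mem_gb": 16.0, "io_mbps": 100.0},
    {"cpu_ghz": 6.0, "mem_gb": 32.0, "io_mbps": 150.0},
    {"cpu_ghz": 4.0, "mem_gb": 24.0, "io_mbps": 120.0},
]
# CPU (14 GHz) and I/O (370 Mbps) fit, but memory demand is 72 GB vs 64 GB.
print(oversubscribed_resources(server, vms))
```

An empty set means the server can host the workload; any named resource flags where performance or stability would suffer first.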
VMs also require data protection. They are typically protected using regular snapshots that capture the precise state of a VM and save that machine's state to storage. VMs that are captured to storage can also be copied or restarted on other servers as the need arises—even replicated to off-site storage for disaster recovery protection. SANs are almost always used for best performance, but tactics like VM lifecycle management—to prevent virtualization sprawl—and provisioning the smallest practical VM for an application will also enhance performance.
This means the storage needs of a data center's VMs must be considered along with the needs for CPU, memory and I/O. At a minimum, there should be enough storage to retain snapshots of each VM—and double that if the snapshots are being replicated off-site. Storage needs will also increase over time as more or larger VMs are added to the business, so it's important to monitor and plan for storage growth—not just server growth.
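The sizing rule above amounts to a back-of-the-envelope calculation. The sketch below is a hypothetical illustration; the VM disk sizes are invented, and real snapshot storage depends on change rates and retention policies.

```python
# Hypothetical back-of-the-envelope estimate: retain snapshots for each VM,
# and double the requirement when snapshots are replicated off-site.

def snapshot_storage_gb(vm_disk_sizes_gb, snapshots_per_vm=1, replicated_offsite=False):
    """Return the minimum storage (GB) needed to hold the requested snapshots."""
    total = sum(vm_disk_sizes_gb) * snapshots_per_vm
    return total * 2 if replicated_offsite else total

# Three VMs with 40, 80 and 120 GB disks, one snapshot each, replicated
# off-site: (40 + 80 + 120) x 2 = 480 GB.
print(snapshot_storage_gb([40, 80, 120], replicated_offsite=True))
```

Rerunning the estimate as VMs are added makes storage growth visible alongside server growth.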
What's the best way to approach virtual capacity planning? What tools or techniques would you advise for a virtual environment?
The best approach to capacity planning in a virtual environment is to take a holistic view that combines a technical assessment of resource utilization over time with an understanding of business needs and objectives. Measuring resource use is a relatively straightforward matter. For example, Windows operating systems provide built-in data collection tools, such as Performance Monitor, that can track a wide array of computing resource and performance metrics.
Microsoft also provides dedicated capacity planning tools such as System Center Capacity Planner 2007. Similarly, there are a number of third-party tools available for capacity planning, including Capacity Planning and Management software from Uptime Software, TeamQuest Performance Software and CitraTest VU from Tevron.
But good capacity planning is more than just watching trends. Those trends must be interpreted within the context of established business goals or plans before they are acted upon.
For example, suppose that new applications will add 10 new VMs to a data center in the next 60 days. By measuring the resource demands of each new VM in advance and considering the relative importance of each new VM to the business, it's possible to assign the new VMs to servers that have adequate computing resources available. More critical VMs can be placed on virtual clusters or other high-availability servers, lesser-used VMs can be reassigned to other servers, and new servers can be purchased as needed to ensure adequate computing power.