This content is part of the Essential Guide: Develop a solid virtualization capacity planning strategy


Overcoming virtualization challenges: Capacity planning, provisioning

The abstraction of virtualization makes resource provisioning tricky. Without proper capacity planning, over-allocation can affect performance and waste resources.

Technologies like virtualization have eased some of the physical issues of managing data centers, allowing less hardware to handle more workloads. But this added utilization and flexibility can bring a few virtualization challenges. The abstraction of virtualization has complicated data center management, requiring more technical and procedural regulation to keep the environment from spiraling out of control.

What causes virtualization challenges

Virtualization challenges occur because virtualizing an infrastructure makes it harder to locate problems. In traditional non-virtualized environments, tracing a failed application to a faulty server or subsystem was a relatively simple matter. Virtualization places numerous workloads on the same server and can migrate virtual machines (VMs) between physical host servers almost on demand.

Consequently, tasks like tracking and identifying VMs on physical hosts or sorting out the source of I/O bottlenecks require far more investigative work on the part of administrators. Additional virtualization challenges can arise because some hardware devices, such as USB flash drives, may not be supported in a virtual environment.

Having the capability to migrate VMs between physical servers has emerged as an important troubleshooting tool. It allows administrators to balance workloads and move impaired VMs between servers while preserving each application's availability.

Another common source of virtualization challenges is application compatibility with the virtualization platform. Applications that are not fully compatible with a virtual environment may perform poorly or not at all. Thorough testing in a lab environment can reveal application issues and prevent them from affecting production workloads.

Capacity planning

Almost all IT environments experience the demands of growth over time, and capacity planning is required to handle that growth. Expansions often take the form of additional users -- either internal or external -- utilizing more applications, which, in turn, require greater computing resources such as more powerful servers and faster storage. When demands exceed capacity, you'll run into more virtualization challenges. Applications may become unstable or unavailable, and this can have profound consequences in terms of business revenue and reputation.

Capacity planning tracks resource utilization trends and couples that data with a knowledge of business plans to predict future resource requirements. The business can then plan, budget, purchase and deploy new resources to meet those future demands.
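As a minimal sketch of the trend-tracking idea, the fragment below fits a linear growth trend to historical utilization samples and projects when capacity will be exhausted. The functions, sample figures and capacity numbers are illustrative assumptions, not part of any real capacity-planning product:

```python
# Sketch: project when utilization will exceed capacity, assuming a
# linear growth trend fitted to historical monthly samples.
# All figures are illustrative.

def fit_trend(samples):
    """Least-squares slope/intercept for evenly spaced utilization samples."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def months_until_full(samples, capacity):
    """Months from the latest sample until projected utilization hits capacity."""
    slope, intercept = fit_trend(samples)
    if slope <= 0:
        return None  # flat or shrinking demand; capacity not at risk
    month = (capacity - intercept) / slope
    return max(0.0, month - (len(samples) - 1))

# Example: storage utilization in TB over the last six months
history = [40, 43, 47, 50, 54, 57]
print(months_until_full(history, capacity=80))
```

A real deployment would draw the samples from monitoring tools and fold in known business events (new users, new applications) rather than extrapolating blindly, but the budgeting logic is the same.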

Over-provisioning can complicate capacity planning because more logical resources are assigned than are physically available. This requires an administrator to add physical resources before the logical allocation fills.

For example, it's possible to create a 500 MB LUN even though there may only be 100 MB of physical storage assigned to it initially. The administrator then must add more physical capacity to the logical 500 MB LUN as that initial 100 MB fills.
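The LUN arithmetic above can be sketched in code. This is an illustrative model of thin-provisioned accounting, not a real storage API; the class name, threshold and sizes are assumptions:

```python
# Sketch of thin-provisioned LUN accounting: the logical size the host
# sees can exceed the physical capacity backing it, so physical usage
# must be watched and grown before it fills. Illustrative only.

class ThinLun:
    def __init__(self, logical_mb, physical_mb):
        self.logical_mb = logical_mb    # size the host sees (e.g. 500 MB)
        self.physical_mb = physical_mb  # storage actually backing it (e.g. 100 MB)
        self.used_mb = 0                # data actually written

    def write(self, mb):
        if self.used_mb + mb > self.logical_mb:
            raise ValueError("write exceeds logical capacity")
        self.used_mb += mb

    def needs_expansion(self, threshold=0.8):
        """True once writes consume most of the *physical* backing."""
        return self.used_mb >= threshold * self.physical_mb

    def expand(self, mb):
        """Admin adds physical capacity, capped at the logical size."""
        self.physical_mb = min(self.logical_mb, self.physical_mb + mb)

lun = ThinLun(logical_mb=500, physical_mb=100)
lun.write(85)
print(lun.needs_expansion())  # 85 MB >= 80% of 100 MB -> True
lun.expand(100)
print(lun.needs_expansion())  # 85 MB < 80% of 200 MB -> False
```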

Similar problems can also occur with the memory over-commit features of virtualization platforms.

Resource provisioning and over-commitment

More virtualization challenges can occur when it comes to resource provisioning. Provisioning is the process of allocating computing resources to a workload. Resource provisioning may include tasks like carving a LUN out of storage or assigning server CPUs and memory to VMs.

The basic goals of resource provisioning and configuring in a virtual system are essentially identical to a non-virtualized system. But virtualization carries a distinct risk of over-allocation -- assigning more resources than the system can actually provide -- because of the multiple workloads that a virtual system will handle.

The idea of over-committing computing resources is certainly not new. For example, thin provisioning is a standard practice in storage environments. But over-allocating resources on a virtual server carries two potential penalties.

First, assigning too many resources to an application that doesn't really need them will waste resources and reduce the total number of workloads that the system can handle. Second, over-committing resources may create a situation where the server's performance and stability are compromised, and this threatens all of the workloads on that particular physical host.
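The second penalty can be caught with a simple arithmetic check: compare the total memory promised to VMs against the host's physical RAM. The functions, tolerated ratio and figures below are illustrative assumptions, not a hypervisor API:

```python
# Sketch: flag hosts whose total VM memory allocation over-commits
# physical RAM beyond a tolerated ratio. All numbers are illustrative.

def overcommit_ratio(vm_allocations_gb, host_ram_gb):
    """Ratio of memory promised to VMs vs. memory the host actually has."""
    return sum(vm_allocations_gb) / host_ram_gb

def at_risk(vm_allocations_gb, host_ram_gb, max_ratio=1.5):
    """True when allocations exceed the tolerated over-commit ratio."""
    return overcommit_ratio(vm_allocations_gb, host_ram_gb) > max_ratio

host_ram = 128               # GB of physical RAM on the host
vms = [16, 32, 32, 48, 64]   # GB allocated to each VM

print(round(overcommit_ratio(vms, host_ram), 2))  # 192/128 = 1.5
print(at_risk(vms, host_ram))  # 1.5 is not above the 1.5 limit -> False
```

What ratio is safe depends on how actively the VMs use their allocations, which is why the awareness of each application's real needs, discussed below, matters.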

Provisioning a virtual system properly requires a keen awareness of each application and its computing needs. Tools like live migration can help ease resource provisioning and other virtualization challenges by allowing administrators to move workloads to balance computing resource demands.
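The balancing decision behind such a migration can be sketched as picking the host with the most free headroom that can still fit the workload. The host inventory format and names here are hypothetical, standing in for data a real hypervisor management tool would supply:

```python
# Sketch: choose a live-migration target for a VM by free memory
# headroom, given simple per-host inventories. Illustrative only.

def free_gb(host):
    """Physical RAM minus the memory already allocated to the host's VMs."""
    return host["ram_gb"] - sum(host["vms"].values())

def pick_target(hosts, vm_gb):
    """Host with the most headroom that can still fit the VM, else None."""
    candidates = [h for h in hosts if free_gb(h) >= vm_gb]
    return max(candidates, key=free_gb, default=None)

hosts = [
    {"name": "esx-01", "ram_gb": 128, "vms": {"web1": 48, "db1": 64}},  # 16 GB free
    {"name": "esx-02", "ram_gb": 128, "vms": {"web2": 32}},             # 96 GB free
]
target = pick_target(hosts, vm_gb=24)
print(target["name"])  # esx-02 has the most headroom
```

Production placement engines weigh CPU, storage and network demand as well, but the principle is the same: move workloads toward spare capacity.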

Stephen J. Bigelow, a senior technology editor in the Data Center and Virtualization Media Group at TechTarget Inc., has more than 15 years of technical writing experience in the PC/technology industry. He holds a bachelor of science in electrical engineering, along with CompTIA A+, Network+, Security+ and Server+ certifications, and has written hundreds of articles and more than 15 feature books on computer troubleshooting, including Bigelow's PC Hardware Desk Reference and Bigelow's PC Hardware Annoyances. Contact him at [email protected]
