Storage can impose significant limitations on scalability because VMs depend on storage for many essential functions. Consider the limits of storage scalability in your environment before scaling, and extend those limits by investing in faster disk technology or by moving to disk groups.
Every VM relies on storage to load the initial VM image, receive periodic snapshots of the VM in operation and hold data for the workload running in that VM. Consequently, latencies and performance issues in the local disk or larger storage subsystem can affect the operation of a VM when many VMs use the same storage resource. The principal problem with storage scalability is that traditional storage systems don't thread or multitask well.
Consider 10 VMs loading and running on a single disk with a single partition -- a logical unit number (LUN). The LUN might have ample capacity to hold the VM images, snapshots and constituent data files that the workloads in those 10 VMs need. But the underlying disk has only one spindle. No matter how fast the disk is, it can only read or write one point on its media at a time. When multiple VMs try to access the same disk simultaneously, the ensuing queue delays read/write requests from the other VMs, which can cause noticeable performance problems for the VMs waiting for storage access.
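The effect of that single-spindle queue can be sketched numerically. This toy Python model -- all numbers hypothetical -- assumes each request takes 5 ms of service time and the disk services requests strictly one at a time, first in, first out:

```python
SERVICE_MS = 5.0  # assumed per-request service time on one spindle

def fifo_wait_times(n_requests, service_ms):
    """Total latency (queue wait + service) for each request in a FIFO queue
    when all requests arrive at once and the disk serves them serially."""
    return [(i + 1) * service_ms for i in range(n_requests)]

# 10 VMs each issue one read at the same moment against the same disk.
latencies = fifo_wait_times(10, SERVICE_MS)
```

The first VM's request completes in 5 ms, but the tenth waits behind nine others and sees 50 ms -- ten times the latency, from nothing more than sharing the spindle.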
Workloads with significant storage access activity could monopolize the storage subsystem, which can create unacceptable latencies for other workloads. This is sometimes referred to as the noisy neighbor effect.
There are several tactics that you can use to improve shared storage scalability. One option is to employ faster disk subsystems, perhaps moving from a 10K RPM Serial-Attached SCSI (SAS) disk to a 15K RPM SAS disk, or even to a solid-state drive (SSD).
Become a scalability expert
Before considering a scalability decision, perform a cost-benefit analysis to see if scaling will work in your context. There isn't an objective way to measure how much is too much, but examining the needs of your workloads can help determine what's appropriate.
Vertical scalability is one of two major possibilities -- the other being horizontal scalability -- but different hypervisors often impose limits on vertical scale. Depending on the hypervisor, there might be absolute limits on how many resources you can allocate to a VM.
Other factors weigh in on the decision to scale up or scale out. Rather than using instinct, use objective monitoring data to inform scalability decisions.
Rather than address storage scalability by investing in new disk technologies, it might be easier -- and more cost-effective -- to move from a single disk to a disk group, such as a RAID 5 (single parity) or RAID 6 (double parity) group, and place the LUNs on the disk group. The purpose of a disk group is to add more spindles: each disk in the group holds a piece of the data, and concurrent access across the disks yields better apparent performance.
RAID groups also provide storage resilience. It's common for organizations to combine new disks with disk groups for even better storage scalability and performance.
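The capacity cost of that resilience is simple arithmetic. A minimal sketch, assuming equal-sized disks, of the usable space left after parity overhead:

```python
def usable_capacity(disks, disk_tb, parity_disks):
    """Usable capacity of a RAID group in TB.
    Parity is distributed across members, but it consumes the equivalent
    of one disk (RAID 5) or two disks (RAID 6)."""
    return (disks - parity_disks) * disk_tb

# Illustrative example: six 2 TB disks in one group.
raid5_tb = usable_capacity(6, 2.0, 1)  # RAID 5: single parity
raid6_tb = usable_capacity(6, 2.0, 2)  # RAID 6: double parity
```

With six 2 TB disks, RAID 5 leaves 10 TB usable and RAID 6 leaves 8 TB -- RAID 6 trades another disk's worth of capacity for surviving a second simultaneous disk failure.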
You can also implement quality of service (QoS) or minimum and maximum IOPS limits for VM storage. For example, you can use QoS settings to prioritize certain data types: Streaming data might receive a higher QoS than other data types, which ensures that VM workloads that depend on streaming data are less subject to storage delays.
Similarly, minimum and maximum IOPS settings can guarantee a baseline of storage throughput for a VM while capping the maximum it can consume. These VM configuration options improve storage performance for critical data types and mitigate noisy neighbor effects on storage scalability.
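The maximum-IOPS side of such a policy is commonly implemented as a token bucket. Here is a minimal Python sketch -- not any vendor's actual implementation -- that admits at most `max_iops` operations per second from a VM:

```python
class IopsLimiter:
    """Token-bucket rate limiter: at most max_iops operations per second.
    Tokens refill continuously with elapsed time, capped at one second's worth,
    so short bursts are allowed but the sustained rate is bounded."""

    def __init__(self, max_iops):
        self.max_iops = max_iops
        self.tokens = float(max_iops)  # start with a full bucket
        self.last = 0.0                # timestamp of the last check, in seconds

    def allow(self, now):
        """Return True if an I/O issued at time `now` is admitted."""
        elapsed = now - self.last
        self.tokens = min(self.max_iops, self.tokens + elapsed * self.max_iops)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # over the cap: the hypervisor would queue or drop this I/O
```

A limiter built with `IopsLimiter(2)` admits two operations at time zero, rejects a third, and admits again once enough time has passed for tokens to refill.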
Generally speaking, software-defined storage technologies can help with storage scalability by categorizing and pooling available storage into distinct performance tiers and placing workloads onto the most suitable tier. Additionally, virtualization-aware storage moves away from shared LUNs and seeks to create storage instances that are exclusive to each VM.
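The tier-placement logic such software-defined storage performs can be sketched as a simple policy lookup. The tier names and per-tier IOPS figures below are purely illustrative assumptions, not real product values:

```python
# Hypothetical tiers, ordered cheapest first: (name, rough max IOPS).
TIERS = [("sas-10k", 150), ("sas-15k", 210), ("ssd", 50000)]

def place(workload_iops):
    """Place a workload on the cheapest tier that can meet its IOPS demand.
    Returns the tier name, or None if no tier is sufficient."""
    for name, tier_iops in TIERS:
        if workload_iops <= tier_iops:
            return name
    return None
```

A light workload lands on the slower, cheaper spinning-disk tier, while an I/O-heavy workload is steered to flash -- the same categorize-and-match idea, stripped to a few lines.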
Related Q&A from Stephen J. Bigelow