A new virtual machine can be deployed in a matter of minutes with a high degree of automation, yet VMs often still share vastly oversized LUNs that are manually provisioned on a storage array. The result is wasted storage capacity, while increased IOPS and random disk activity across a limited number of spindles cripple storage performance. Even worse, storage is logically decoupled from the VMs, making optimization and troubleshooting extremely difficult.
But it's not all bad news. Storage is finally catching up to virtualization. Virtualization-aware storage promises faster and more efficient storage use, providing the technology to provision, migrate and manage storage in terms of individual VMs.
Today's storage challenges in virtual environments are rooted in traditional siloed environments, where server, networking and storage resources were each handled by a separate group within the organization. Experienced IT professionals probably still recall the amount of planning, coordination and problem-solving that went into deploying a new server or workload in such an environment.
Virtualization changed this paradigm, abstracting workloads from the underlying hardware and providing IT administrators with the tools to provision server and network resources together. Despite this potential, storage remained largely independent. Storage administrators typically provided LUNs of a prescribed capacity, and VMs were then assigned to the available LUNs. These allocations relied on legacy storage protocols like SCSI, NFS and SMB, and weren't tied to the virtual environment or its workloads. Consequently, storage remained cumbersome and difficult to manage, especially as VMs proliferated, competed for LUNs and put more pressure on storage performance and capacity.
Storage vendors addressed simple provisioning problems by employing plug-ins and command-line scripts that helped to automate and streamline common storage tasks. Later, hypervisor vendors added storage integration frameworks like XenServer's (now-deprecated) StorageLink and VMware's vStorage APIs for Array Integration, which exposed array-based functionality like replication, snapshots and quality of service (QoS) support. Although helpful from a management standpoint, these improvements didn't address the underlying disassociation between VMs and storage.
For example, even though a storage system might be excellent at handling backups and remote replication, those functions are still performed per LUN rather than per VM, so performance and efficiency suffer when many VMs share a LUN.
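A back-of-the-envelope sketch makes the granularity problem concrete. The VM names and change rates below are invented for illustration; the point is simply that when the LUN is the unit of replication, protecting one VM means shipping every co-resident VM's changed blocks:

```python
# Hypothetical illustration: replication scope per LUN vs. per VM.
# All VM names and change sizes are invented for this sketch.

# Changed data since the last replication cycle, per VM (in GB)
changed_gb = {"vm-web": 2, "vm-db": 40, "vm-test": 15, "vm-log": 8}

# Per-LUN replication: protecting just vm-db still ships every
# co-resident VM's changes, because the LUN is the unit of copy.
per_lun_traffic = sum(changed_gb.values())

# Per-VM (virtualization-aware) replication: only the protected
# VM's blocks travel.
per_vm_traffic = changed_gb["vm-db"]

print(f"per-LUN: {per_lun_traffic} GB, per-VM: {per_vm_traffic} GB")
```

Even in this toy case, per-LUN replication moves more than half again as much data as the VM actually needs protected.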
The goal of virtualization-aware storage is to fundamentally change this dependence on traditional LUNs. A virtualization-aware platform integrates with the virtual infrastructure and manages storage, compute and network resources at the virtual machine level (per VM rather than per LUN). Thin provisioning is an early example of storage virtualization, but it is not hypervisor- or virtualization-aware.
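For readers unfamiliar with thin provisioning, the idea reduces to allocating physical blocks only when a logical block is first written. The class below is a toy model, not any vendor's implementation; all names are invented:

```python
class ThinLUN:
    """Toy thin-provisioned volume: physical blocks are allocated
    only when a logical block is first written (illustrative only)."""

    def __init__(self, logical_blocks):
        self.logical_blocks = logical_blocks  # advertised (logical) size
        self.mapping = {}                     # logical -> physical block
        self.next_physical = 0

    def write(self, logical_block, data):
        if logical_block not in self.mapping:
            self.mapping[logical_block] = self.next_physical
            self.next_physical += 1           # allocate on first write only
        # (data would be stored at self.mapping[logical_block])

    def physical_used(self):
        return self.next_physical


lun = ThinLUN(logical_blocks=1_000_000)   # advertises a million blocks
for blk in (0, 1, 0, 42):                 # rewriting block 0 reuses its mapping
    lun.write(blk, b"...")
print(lun.physical_used())                # 3 blocks actually backed, not 1M
```

Note that the model knows nothing about hypervisors or VMs, which is exactly why thin provisioning alone doesn't qualify as virtualization-aware.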
A true VM-aware storage system maps storage to VMs, so management tasks like performance monitoring can gauge issues like storage latency down to the individual VM. The same applies to features like QoS -- intelligent decisions about moving an afflicted VM to other storage resources can be made based on workload importance (QoS settings) and performance levels elsewhere in the storage infrastructure.
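That per-VM decision logic can be sketched in a few lines. The VM names, latency figures and priority scheme below are all assumptions invented for the example, not any product's actual policy engine:

```python
# Hypothetical per-VM QoS decision: if a VM's observed storage latency
# exceeds its QoS target, move the most critical breaching VM to the
# datastore currently showing the best latency. All data is invented.

vms = [
    {"name": "erp-db",  "latency_ms": 28, "target_ms": 10, "priority": 1},
    {"name": "web-01",  "latency_ms": 22, "target_ms": 15, "priority": 3},
    {"name": "dev-box", "latency_ms": 5,  "target_ms": 30, "priority": 5},
]
datastores = {"ds-hybrid": 9.0, "ds-flash": 1.2}   # avg latency, ms

# VMs currently breaching their latency target
afflicted = [v for v in vms if v["latency_ms"] > v["target_ms"]]

# Relocate the most important breacher first (lower number = higher priority)...
to_move = min(afflicted, key=lambda v: v["priority"])
# ...to the datastore with the best observed latency.
target = min(datastores, key=datastores.get)
print(to_move["name"], "->", target)   # erp-db -> ds-flash
```

The key point is that the decision is keyed on individual VMs and their QoS settings; with per-LUN visibility, the system could see only aggregate LUN latency and couldn't tell which workload to move.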
"Most virtualization-aware storage comes with software integration to the hypervisor," said Aldo Cabrera, network engineer and release manager at W. P. Carey Inc., in New York City. "We use Nimble Storage with a vCenter plug-in which speaks directly to the hypervisor when it needs to quiesce to back up, create, destroy or modify new LUNs, or report on IOPS, capacity and issues."
Meeting the requirements
VM-aware storage places a hypervisor integration software layer atop a conventional storage array -- the array itself can still use magnetic disk, flash or a combination of storage media. IT organizations can choose whether to develop and deploy this amalgam in-house or purchase pre-engineered storage subsystems that already contain the appropriate hypervisor integration.
IT groups can certainly "roll their own" virtualization-aware storage. "If you have an empty array around, you can certainly virtualize this with OpenStack tools and no special hardware is required," said Tim Noble, IT director and advisory board member from ReachIPS. "We are using OpenStack on an all-flash array that is shared with our existing internal cloud environment." Noble noted the benefit of sub-millisecond access times in accelerating applications. In addition to open source tools, third-party software products such as the Nutanix Xtreme Computing Platform can be installed to virtualize existing storage assets and create software-defined storage environments.
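As a minimal sketch of what the roll-your-own route involves, OpenStack's block storage service, Cinder, is pointed at a backend through its `cinder.conf` file. The fragment below uses the LVM reference driver for illustration; a production deployment on an all-flash array like Noble's would substitute the array vendor's driver, and the volume group name here is simply the common default:

```ini
# cinder.conf fragment -- illustrative only, using the LVM
# reference driver; real deployments would use the array
# vendor's Cinder driver instead.
[DEFAULT]
enabled_backends = lvm

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
volume_backend_name = LVM
```

Once a backend is enabled, volumes can be created and attached to instances through the standard OpenStack client, with no virtualization-specific hardware required.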
Organizations can also deploy virtualization-aware storage through dedicated storage subsystems (sometimes classified as hyper-converged appliances) such as Tintri's VMstore and the Nutanix NX family of hardware platforms. These arrays are making greater use of mixed flash and disk (hybrid arrays), and even all-flash designs. Hybrid and all-flash arrays from vendors like Tegile and Pure Storage make extensive use of compression and deduplication, and increasingly distribute compute and storage as functions that can be clustered for better resiliency.
But there are also potential pitfalls to consider. Regardless of the actual approach, IT leaders need to pay attention to the level of integration and management provided by the underlying hypervisor. Noble said VMware hypervisors can migrate storage between arrays, but don't yet have the ability to move data between different storage tiers based on usage -- an annoying limitation that many traditional, nonvirtualized storage arrays now handle automatically.
Support for the virtualization-aware environment is also critical for deployment success. For example, OpenStack implementations require a keen understanding of Linux and hypervisors like KVM. Security in the OpenStack environment also cannot be overlooked, especially in appliance-based OpenStack products that might require vendor patches or updates to fix potential vulnerabilities. Running OpenStack securely requires the right mix of IT and development staff able to create and maintain the software, and DevOps skill sets can also be useful for in-house development. Consultants can help to support OpenStack implementations, but they should possess documented expertise and provide staff training as part of the engagement.
"Strong hypervisor tools and well-supported hardware will really reduce the headaches," said Pete Sclafani, COO and co-founder of 6connect, a network automation solutions provider in San Francisco. Sclafani also points to the need for networking expertise and support. "All of these storage systems are relying on network infrastructure to work, so it really helps to understand where performance bottlenecks can pop up and be able to fix problems intuitively," he said.