Choosing storage components for virtual infrastructures always poses tradeoffs. Ultimately, the goal is to strike a balance between factors such as cost, performance and scalability.
Depending on your installation and goals, direct-attached storage (DAS) may be the right choice. But implementing a DAS array requires careful planning. If it's done incorrectly, a DAS implementation can severely limit your infrastructure's options and features and undercut the goals of virtualization.
In this tip, we outline various scenarios for implementing DAS and how to gauge whether direct-attached storage works with your environment's priorities. The decision to use DAS should be based on your type of hypervisor, in addition to your infrastructure's high-availability, failover and scalability needs.
Direct-attached storage vs. SANs and NAS devices
A DAS array has advantages over network-attached storage (NAS) devices and expensive storage area networks (SANs). DAS devices are typically priced competitively and offer good performance options. The Smart Array P410 controller for the popular HP ProLiant DL380 G6 server, for example, supports up to 1 GB of cache on the controller and RAID levels 1, 1+0, 5, 5+0, 6 and 6+0. The controller also accepts Serial-Attached SCSI (SAS) or Serial ATA (SATA) drives, which provides cost and performance flexibility.
While this configuration is typical of HP's server line, other top brands have similar offerings for local storage controllers. Depending on the drive technology, a DAS array can provide more than 5 TB of storage. If more storage is necessary, a just-a-bunch-of-disks (JBOD) enclosure can attach to an external controller on the server without the entry-level expense of Fibre Channel SAN switches or additional NAS ports.
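How much of that raw capacity is actually usable depends on the RAID level you choose on the controller. The sketch below shows the standard capacity arithmetic for the common levels; the drive size and count are illustrative assumptions, not figures for any specific controller or server:

```python
# Rough usable-capacity estimates for a DAS array under common RAID
# levels. Eight 1 TB drives is an assumed example configuration, not
# a vendor specification.

def usable_tb(drives: int, size_tb: float, raid: str) -> float:
    """Return approximate usable capacity in TB for one RAID set."""
    if raid in ("1", "1+0"):
        return drives * size_tb / 2      # half the capacity lost to mirroring
    if raid == "5":
        return (drives - 1) * size_tb    # one drive's worth of parity
    if raid == "6":
        return (drives - 2) * size_tb    # two drives' worth of parity
    raise ValueError(f"unsupported RAID level: {raid}")

for level in ("1+0", "5", "6"):
    print(f"RAID {level}: {usable_tb(8, 1.0, level):.1f} TB usable")
```

With eight 1 TB drives, RAID 1+0 yields about 4 TB usable, RAID 5 about 7 TB and RAID 6 about 6 TB, which is the cost-versus-resiliency tradeoff behind the drive-technology choices above.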
Additionally, in most products, the individual disks are interchangeable between SAN and DAS systems. Hewlett-Packard Co. servers, for example, use the same drive enclosures for both servers and storage devices, so disks from a server's local array can be used seamlessly on a SAN.
Which hypervisors support direct-attached storage?
Most Type 2 hypervisors use DAS, including VMware Server, VMware Workstation, Oracle VirtualBox, Microsoft Virtual PC and Microsoft Virtual Server; the free version of VMware ESXi, although a Type 1 hypervisor, also commonly relies on DAS. Type 1 hypervisors -- which run as their own operating system and provide direct access to the underlying hardware -- generally use SAN and NAS installations. For more information on the differences between the two hypervisor types, check out IBM's Systems Software Information Center.
Type 1 hypervisors -- such as VMware ESX and ESXi as well as Citrix Systems XenServer and Microsoft Hyper-V -- can use a DAS array, but their management features may not support it. VMware VMotion, Fault Tolerance and High Availability, for example, require a supported shared-storage repository. In my experience, DAS works well as a data store for noncritical data, such as CD-ROM ISO images, virtual machine (VM) templates and decommissioned VMs.
Direct-attached storage and booting from flash drives
If you're interested in a DAS array, consider hypervisors that support booting from a flash drive. For cost reasons, most server administrators provision one storage array per server. Usually, this approach doesn't constrain storage, because multiple terabytes can be stored locally. And the ability of Hyper-V R2 and VMware ESXi to boot from flash drives provides additional provisioning options for small virtualization installations.
Also, booting from a flash drive is a low-cost way to separate the hypervisor from the storage array that contains the VMs. The major server vendors offer boot-from-flash options on their current server models (e.g., the integrated ESXi option from HP), and most servers now have an internal USB port that allows the same configuration through a less formal, do-it-yourself route.
ESXi doesn't require much storage space to run; HP's flash and USB storage options are 2 GB and 4 GB, for example. ESXi's small footprint also simplifies recovery: if a server fails, the local storage array can easily move to another server, because the array isn't a bootable instance of ESXi but rather a Virtual Machine File System volume with data. The Hyper-V boot-from-flash option, on the other hand, requires a minimum of 8 GB, and 16 GB is recommended. In both situations, the local hard-disk array is a collection of VMs and does not include a hypervisor.
Virtual SANs using direct-attached storage
Administrators also have the option to virtualize storage. As mentioned previously, local array controllers offer plenty of functionality, performance and storage capacity. Virtualization administrators have several software storage methods for converting a standard server into a SAN, including StarWind iSCSI SAN, Openfiler, FalconStor Network Storage Server and NexentaStor.
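The products above package this idea as full appliances, but the underlying mechanism -- exporting a local array over iSCSI so that hypervisor hosts can mount it as shared storage -- can be sketched with the open-source Linux tgt target as an illustrative substitute (not one of the products named). The IQN, device path and subnet below are hypothetical:

```
# /etc/tgt/targets.conf -- hypothetical example of exporting a DAS
# volume as an iSCSI target for hypervisor hosts

<target iqn.2010-04.com.example:vsan.array1>
    # Local DAS logical volume presented as the LUN
    backing-store /dev/sdb

    # Restrict access to the hypervisor hosts' subnet
    initiator-address 192.168.10.0/24
</target>
```

Once the hypervisor hosts log in to this target, the DAS volume behaves like any other shared iSCSI LUN, which is what enables the advanced features discussed next.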
In this situation, you can implement some advanced virtualization features, such as live migration and high availability. The storage infrastructure's reliability then depends on the virtual SAN and its commodity hardware, however. Even so, numerous small and medium-sized virtualization installations run these virtual SAN technologies without traditional storage products.
Virtual desktop infrastructures running on direct-attached storage
Recently, a number of blogs and discussions have surfaced about using DAS for a virtual desktop infrastructure (VDI). Here's a common question: Should I place a Windows 7 workstation on an enterprise SAN? After all, it feels natural to use SAN storage for VDI, because virtual desktops and server VMs share similar infrastructure components.
According to Brian Madden, however, some organizations have chosen DAS over a SAN for VDI installations. In VMware environments, for example, a DAS array not only removes the cost of SAN storage but also reduces virtualization licensing needs. If an ESX or ESXi host environment does not use shared storage, advanced cluster features -- such as VMotion, Distributed Resource Scheduler and High Availability -- are unavailable. Therefore, a VDI implementation that uses DAS does not require the highest vSphere licensing levels.
But there are tradeoffs: An administrator must decide where to place workloads and has no migration capability to accommodate workload spikes. These problems can be mitigated by lowering the VDI session-to-host ratio, but that approach requires more management from a VDI administrator.
On the other hand, not every administrator believes a DAS array is the right storage strategy for VDI. VDI expert Sean Clark expressed his knee-jerk reaction against it on Twitter. But after considering the cost savings of a reduced SAN footprint, a smaller storage switch infrastructure and a lower vCenter functionality level, he agreed it's plausible. Ultimately, emerging VDI cache technology is where things get interesting, because it makes good use of DAS configurations.
The limitations of direct-attached storage
DAS is an attractive storage method because it's less expensive and easy to set up, but consider the issues that may arise during an expansion. Traditional SANs can easily scale out with additional storage; DAS does not have that luxury. Administrators can add storage enclosures and controllers to a DAS array, but that strategy erodes the simplicity that makes DAS attractive in the first place. And if you want additional features, such as live migration, DAS can become a hard stop.
About the author:
Rick Vanover (firstname.lastname@example.org), VCP, MCITP, MCTS, MCSA, is an IT Infrastructure Manager for Alliance Data in Columbus, Ohio. He is an IT veteran specializing in virtualization, server hardware, operating system support and technology management. Follow Rick (@RickVanover) on Twitter.