How to choose the best hardware for virtualization
A comprehensive collection of articles, videos and more, hand-picked by our editors
Using local storage for virtualization is fashionable again. For several years, best practices dictated that administrators move virtualization storage away from local servers and onto storage area networks and network-attached storage. But new virtualization features and products have made local storage a cheap and useful alternative to expensive shared storage.
The shift away from local storage began as organizations moved large databases and file repositories to a storage area network (SAN) while still booting servers from local disk. Later, more organizations started booting servers from the SAN, forgoing local disks altogether.
Then, virtualization exploded -- bringing with it a huge appetite for SANs or network-attached storage (NAS). With each evolutionary step, local disks became less important. At the same time, local storage became faster and less expensive.
Is it time to take another look at local storage? A growing number of vendors think so. DataCore and Fusion-io, for example, are developing products that bring shared storage closer to the server. Fusion-io's ioDrive and Virtual Storage Layer technologies use local NAND flash on PCIe cards. The ioDrive cards provide local storage that rivals the already blazing speeds of solid-state disks and reduce the need for remote calls to a SAN or NAS for data retrieval.
DataCore’s SANsymphony-V aggregates local disks from several servers into a pool of shared storage, providing the redundancy and availability you’d expect from an enterprise shared storage solution. It will also offer NAS functionality without a centralized NAS device.
Nutanix combined Fusion-io technology with a file system similar to DataCore's to win the Best of VMworld 2011 award for desktop virtualization. The Nutanix server platform uses a combination of ioDrive storage and local disks to create a robust, highly available and highly scalable virtual desktop infrastructure that does not require a separate NAS or SAN back end.
VMware, EMC et al. take second look at local storage
Both EMC and VMware have recognized this local storage trend. VMware introduced the Virtual Storage Appliance (VSA) with vSphere 5. Currently, VSA is similar to DataCore's offerings, grouping local disks into a shared storage pool. The feature is limited to three servers, but VMware also has the CloudFS fling, which has no server limit.
EMC also has a number of innovations queued up in this category. Project Lightning, which will ship soon, brings storage to the server in a new way: a PCIe card that incorporates a host bus adapter and local flash storage. The flash memory will cache data retrieved from an EMC SAN, or possibly even prefetch data and hold it in memory for rapid retrieval.
To go one step further, EMC plans to integrate Lightning cards into its Fully Automated Storage Tiering (FAST) product, coordinating caching activities with the array to ensure the most efficient placement of data among the SAN disk, SAN cache and server-side cache.
In fact, I expect EMC to introduce additional PCIe cards with flash storage in 2012. Where Project Lightning is a read-only caching device, look for future offerings with read-write capabilities.
EMC has also expressed plans to move VMs directly to the storage arrays -- the opposite of moving storage to the server. With more storage solutions running on hardware that’s very similar to virtual hosts, the idea is to install a lightweight hypervisor on a storage device to allow virtual machines to run directly on the array.
As for other hypervisor vendors, Red Hat's KVM hypervisor can sidestep the need for shared storage entirely, allowing live migration of VMs between local disks on two servers. Microsoft is touting the same functionality for the next release of Hyper-V.
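To get a feel for how KVM's shared-storage-free migration works in practice, here is a minimal sketch using libvirt's virsh tool; the domain name "webvm" and host name "host2" are placeholders, and both hosts are assumed to run libvirt/QEMU with SSH access between them:

```shell
# Live-migrate the VM "webvm" to host2, copying its disk images
# over the network because the two hosts share no storage.
virsh migrate --live --copy-storage-all \
    webvm qemu+ssh://host2/system

# Confirm the domain is now running on the destination host.
virsh --connect qemu+ssh://host2/system list
```

The --copy-storage-all flag streams the full disk contents from the source host's local storage to the destination's, which is exactly what removes the SAN or NAS from the picture; the trade-off is that migration time grows with disk size rather than just memory size.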
I will not say that SAN and NAS devices are on their way out, because I have learned not to make definitive, far-reaching statements about technology, and because challengers to shared storage must prove themselves before they can make a serious dent in the storage market.
However, I hope these emerging solutions inspire the big storage vendors to address the cost and complexity of large storage arrays, which are fueling the return to local disks. And if these products prove reliable, look for the big storage vendors to buy these technologies and fold them into their existing storage offerings.