Solid-state drives have a big advantage over traditional spinning disks when it comes to IOPS, but they also come with a hefty price tag. Today, innovative companies are finding different ways of using SSD storage to accelerate VM performance. But, with all the SSD-based products (both software and hardware), it's difficult to know where a company can find real value. This month we're asking our Advisory Board members how SSDs should fit into today's virtual data centers. Is it time for companies to start investing in SSD storage, and which use cases offer the best return on investment?
Maish Saidel-Keesing, Cisco Video Technologies Israel (formerly NDS Group Ltd.)
Not long ago, using SSD as your tier-1 storage platform was extremely expensive. Today, it is still not cheap, but the prices are coming down and using SSDs is no longer out of the ordinary.
The number of flash-only arrays and offerings is growing all the time. All the major players, including EMC, NetApp, Cisco and Hewlett Packard, offer a product, and many smaller vendors -- such as Tintri, Nutanix and Kaminario -- are breaking out with new flash-based products.
The market is headed toward offering even faster performance by moving the caching layer closer to the workload and hypervisor. There are two approaches to this caching -- either by using RAM or SSD storage -- and SSD has emerged as the cheaper option.
Will this affect your decisions when specifying your hypervisor hosts? The answer is a definite "yes." You should seriously consider adding an SSD layer to your data center, even if it is only a caching layer.
Now that additional products -- like PernixData and VMware's VSAN -- are looking to make use of the available flash in the server as an additional storage option or as a caching layer, an SSD accessible to your hypervisor makes a whole lot of sense.
Christian Mohn, EVRY Consulting
SSDs and flash have hit the data center in full force. You could even say that enterprise flash storage has gone mainstream and should, in my opinion, now be one of the cornerstones when building a virtualized infrastructure. SSDs are commonly used in tandem and tiered with traditional hard disk drives, either in hybrid arrays or for host-based caching. Even lower-latency PCIe flash cards are common in data centers these days.
Products like PernixData's FVP can utilize both SSD drives and PCIe-based flash cards. This is a great way to accelerate traffic in and out of the SAN infrastructure, and can yield great performance boosts without having to rip and replace the existing storage hardware. This makes a lot of sense in virtualized environments where a SAN is already in place, but the ever-growing number of VMs and workloads tends to strain the existing storage.
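The host-side acceleration pattern described above boils down to a write-through read cache: reads are served from local flash when possible, while writes always land on the SAN so it stays authoritative. The sketch below is a minimal, illustrative model of that idea -- the class name, the dict standing in for the SAN, and the LRU eviction policy are all assumptions for demonstration, not a description of how FVP actually works.

```python
from collections import OrderedDict

class HostSideReadCache:
    """Minimal sketch of a host-side, write-through read cache.

    Reads are served from local flash when possible; writes always go
    to the SAN (write-through) and refresh the cached copy, so the
    cache never holds data the backing store does not.
    """

    def __init__(self, san, capacity_blocks):
        self.san = san                      # backing store: dict of block -> data
        self.capacity = capacity_blocks
        self.flash = OrderedDict()          # stands in for the local SSD/PCIe flash

    def read(self, block):
        if block in self.flash:             # cache hit: served at flash latency
            self.flash.move_to_end(block)   # mark as recently used
            return self.flash[block]
        data = self.san[block]              # cache miss: fetch over the SAN
        self._insert(block, data)
        return data

    def write(self, block, data):
        self.san[block] = data              # write-through: SAN stays authoritative
        self._insert(block, data)

    def _insert(self, block, data):
        self.flash[block] = data
        self.flash.move_to_end(block)
        if len(self.flash) > self.capacity:
            self.flash.popitem(last=False)  # evict the least recently used block
```

The key property for a SAN-backed environment is that losing the cache loses no data; the flash layer only absorbs read traffic that would otherwise hit the array.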
VMware VSAN is interesting as well, because it not only uses flash for acceleration, but it also makes it possible to use local server disks as a distributed SAN without having to use traditional SAN infrastructure. VSAN is still in public beta, and VMware has yet to announce pricing, but it is a very interesting product. I can't wait to see what the different hardware vendors come up with for VSAN-ready nodes, where hosts are specifically designed to run this new, potentially disruptive, class of enterprise storage.
Accelerating existing storage
Another method of using SSD in the data center is to introduce flash into existing storage arrays as a caching layer, where hot data is kept on the faster flash storage and colder, less frequently accessed data stays on traditional HDDs. Your existing storage must support this option if you want to take advantage of the performance boost.
All-flash arrays are still fairly expensive and require a complete overhaul of your storage infrastructure. As capacities rise and prices fall, all-flash arrays may become more common, but for most use cases they remain too expensive today.
Memory channel attached storage
Now that VMware has even certified memory channel attached storage for vSphere 5.1 and 5.5, the caching layer is set to speed up even more, since the memory bus is even closer to the compute layer than PCIe cards and SSD drives. One thing is certain: Storage is changing now faster than it has in years. It's exciting to see all the new products come to market, and the possibilities for data center architects and administrators are expanding.
If you haven't already, it's absolutely time to think strategically about how to use flash in your storage infrastructure, especially when looking at future requirements. It makes sense to put the faster flash storage layer as close to the compute layer as possible; this reduces latency to a minimum, because the data doesn't have to traverse the storage network. This yields immediate results, and might even prolong the life of your existing storage investments. Speeding up existing storage and prolonging its life cannot be a bad thing. Just make sure to select enterprise-grade SSD drives. Not all SSDs are created equal. You do not want your acceleration project to fail because you bought the cheapest SSDs available.
Jack Kaiser, Focus Technology Solutions
I went to Brad Maher, the Virtualization Practice Lead at Focus Technology Solutions, for this month's response.
"I think it's past due that companies invest in SSDs. Most of our customers are already using SSDs. We've seen customers use SSD in many forms. It started with using SD cards to run ESXi. Next, we saw flash used in SAN storage with technologies like EMC's Fully Automated Storage Tiering (FAST) and FAST Cache and NetApp's Flash Cache.
"Now, we're seeing SSD prices come down to the point where it's beginning to make a lot of sense to host non-persistent virtual desktop environments completely on SSDs. We're also seeing customers evaluate it for high-end database workloads like SQL and Oracle where IOPS are king, but capacity is not. As costs come down, SSD will become prevalent in the data center for many different types of workloads."