
Adding flash to your virtualized data center

Using flash storage in the data center can change your entire approach to virtualization.

When adding flash to your data center, a big challenge is making sure the rest of your infrastructure keeps pace. How the infrastructure needs to adapt depends largely on the type of flash implementation chosen. There are three basic choices: server-side flash with caching, shared flash arrays and hyper-converged flash.

Although flash can be installed in a server and used statically -- that is, as the primary storage -- most virtualized server environments use flash as a cache. For the maximum performance gain, this cache should be able to accelerate both reads and writes.

Caching writes, though, means potential data loss. To protect against this, the server cache should be redundantly protected external to the physical server. This can be done via a server-side network, where writes are mirrored either to a flash SSD in another host in the virtual cluster or to a flash area on a shared storage device.
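
The mirroring idea above can be sketched in a few lines. This is a minimal illustration, not any vendor's caching software: writes are acknowledged only after a copy of the dirty block exists on a hypothetical peer outside the physical server, so losing the server does not lose the cached write.

```python
class MirroredWriteCache:
    """Sketch of a write-back flash cache that mirrors dirty blocks
    to a peer before acknowledging, so a single server failure
    cannot lose cached data. Names are illustrative."""

    def __init__(self, peer):
        self.local = {}   # stands in for the local flash SSD
        self.peer = peer  # stands in for flash on another host or array

    def write(self, block_id, data):
        self.local[block_id] = data
        # Mirror synchronously: acknowledge only once the peer
        # also holds a copy of the dirty block.
        self.peer[block_id] = data
        return "ack"

    def read(self, block_id):
        # Serve from local flash on a hit; a miss would normally
        # fall back to backend storage (omitted here).
        return self.local.get(block_id)
```

After a write, the surviving copy on the peer is what makes write caching safe to enable.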

Both server-side caches and shared flash arrays require an advanced network. Some performance gain is possible when adding flash to a 1 Gigabit Ethernet (GbE) network, but the full potential of flash storage is realized only once the network is upgraded.

Fibre Channel environments implementing a shared flash system should strongly consider 16 gigabit (Gb) bandwidth, and IP storage environments should consider 10 Gb Ethernet or greater. This bandwidth allows the network to keep pace with the flash storage to which it connects.
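
Rough line-rate arithmetic shows why the upgrade matters. The figures below are order-of-magnitude illustrations, not vendor specifications: a 1 GbE link moves roughly 125 MB/s, a small fraction of what even one fast flash device can deliver.

```python
# Illustrative, rounded line rates in MB/s -- not vendor specs.
links = {"1 GbE": 125, "10 GbE": 1250, "16 Gb FC": 1600}
flash_mb_s = 1000  # order-of-magnitude figure for one fast flash device

for name, link_mb_s in links.items():
    fraction = link_mb_s / flash_mb_s
    print(f"{name}: {link_mb_s} MB/s, about {fraction:.2f}x one flash device")
```

On these numbers, a 1 GbE network can carry only about an eighth of a single device's throughput, while 10 GbE and 16 Gb Fibre Channel have headroom to keep pace.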

A key capability of next-generation networking is quality of service (QoS). Storage networking QoS allows bandwidth to be prioritized per workload, typically per connecting host. For virtualized environments, though, it is important that these networking protocols can apply QoS at per-VM granularity. Networks armed with this capability allow administrators to virtualize mission-critical applications with greater confidence, knowing that essential workloads are guaranteed a certain level of performance. A shared storage system that also provides QoS allows for improved VM density and the virtualization of mission-critical applications.
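
One common way per-VM QoS is enforced is a token bucket per workload; the sketch below is a generic illustration of that technique, not any specific product's implementation, and the VM names and rates are made up.

```python
import time

class VmTokenBucket:
    """Minimal per-VM bandwidth limiter: each VM gets its own token
    bucket, so a noisy neighbor cannot starve a critical workload.
    Rates and burst sizes are illustrative assumptions."""

    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, nbytes):
        # Refill tokens for the elapsed time, capped at the burst size.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True   # I/O may proceed
        return False      # I/O must wait or be queued

# One bucket per VM: the mission-critical VM gets the higher rate.
buckets = {"vm-critical": VmTokenBucket(500e6, 64e6),
           "vm-batch": VmTokenBucket(50e6, 8e6)}
```

A batch VM that exhausts its bucket is throttled while the critical VM's allocation is untouched, which is the guarantee the paragraph above describes.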

Even with server-side cache and hyper-converged architectures, the network remains important, and many of the requirements are the same as for a shared storage network. The network traffic created by an active cache environment -- especially a hyper-converged architecture -- can be significant. This network should be dedicated and should use the latest networking technology.

More VMs per Host

The decision to use flash also affects server selection and configuration. Because storage infrastructure can respond rapidly to the needs of the virtualized environment, administrators should consider dramatically increasing the number of VMs per physical host. Doing so significantly reduces the number of hosts required in the data center and could increase the return on investment of a virtualization project as a whole.

But increasing the number of VMs per physical host does have implications for how that host is configured. First, somewhat obviously, the investment in maximum CPU power can now be fully justified.

RAM introduces a significant impediment into this process. Servers ship with plenty of RAM capacity, but populating that capacity is costly. This is an ideal situation for using PCIe SSDs or memory bus flash to augment dynamic RAM (DRAM) and complement shared flash storage. Both of these technologies provide high performance and low latency, and, since they would be used as virtual memory, they don't need the same redundancy that a flash cache would.

If the average physical host server costs $20,000 when equipped to support virtual machines, a fivefold increase in VM density -- enough, for example, to consolidate six hosts' worth of VMs onto one -- could save an organization $100,000 or more in avoided servers. While some of that savings will be consumed by the additional memory and processing power described above, roughly $80,000 should remain, which can cover as much as one-third or more of the cost of a flash array.
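
The arithmetic behind that claim is simple enough to write down. The host cost and the $80,000 remainder come from the text; the number of hosts avoided, the upgrade budget and the flash array price are hypothetical examples chosen to match it.

```python
# Back-of-the-envelope consolidation math: $20,000 per host (from the
# text), other figures are hypothetical examples.
host_cost = 20_000
hosts_avoided = 5                         # e.g. six hosts consolidated onto one
gross_savings = hosts_avoided * host_cost # $100,000 in servers not bought
upgrade_cost = 20_000                     # extra CPU and flash-backed memory
net_savings = gross_savings - upgrade_cost
flash_array_cost = 240_000                # hypothetical array price

print(f"net savings: ${net_savings:,}")
print(f"covers {net_savings / flash_array_cost:.0%} of the flash array")
```

At these example numbers, the density increase funds about a third of the array, which is the trade-off the paragraph describes.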

One way an organization can implement flash, while at the same time managing flash's effect on the rest of the data center, is through a converged infrastructure product. These offerings are essentially pre-packaged configurations of hardware, resulting from partnerships between server, networking and storage vendors. The upfront costs of converged infrastructure can be substantial, but such products deliver powerful new capabilities to a data center.

Adding flash storage can bring tremendous value to a virtualized architecture. Flash media is so fast that it can improve performance even when implemented poorly. For an optimum return on the flash investment, however, IT professionals need to focus on the infrastructure that surrounds flash to make sure real gains are achieved.

Flash technology will evolve, and we may not even use flash 10 years from now. Memory technology continues to advance, and a storage medium that is more like DRAM but with the persistent nature of flash is likely to replace the current flash technologies. During its reign, however, flash will enable data centers to be denser than ever, allowing them to reduce their physical and environmental impacts.

Next Steps

Why now is the time to start using SSD storage

Will adding flash benefit your data center?
