Making the most of local flash storage for virtualization

Lower-cost PCIe and solid-state drives are making local flash storage a viable option for improving virtual machine performance.

Component and systems designers have long understood that the time it takes to read and write data introduces latency that slows workload performance. Caching techniques help overcome these latencies by storing frequently accessed data in high-speed memory closer to the application. Caching has been used in many critical areas of the server: the processor, magnetic disk drives and even network devices. Today, the proliferation of solid-state storage devices has enabled a new tier of high-performance, low-latency storage that is ideal for caching virtual machine (VM) content. Let's examine this new generation of flash cache and its use on virtualized servers.

How local flash storage benefits VMs

Flash caching is a storage technology that pools local flash storage devices (such as Serial Advanced Technology Attachment [SATA] solid-state drives [SSDs] or Peripheral Component Interconnect Express [PCIe] I/O accelerators) as cache resources that can be allocated to VMs on the server. VMware calls this technology vSphere Flash Read Cache (vFRC). Other vendors provide this feature in hardware-based modules, such as NetApp's Flash Cache and Flash Cache 2 products.
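
To make the pooling idea concrete, here is a minimal Python sketch of a host-level flash pool from which per-VM cache reservations are carved. The class and names are hypothetical illustrations, not the vSphere or NetApp API:

```python
# Hypothetical sketch of flash cache pooling -- not actual vSphere/vFRC code.
class FlashCachePool:
    def __init__(self, devices_gb):
        self.capacity_gb = sum(devices_gb)   # local SSD/PCIe devices pooled together
        self.reservations = {}               # VM name -> GB of cache reserved

    def allocate(self, vm_name, size_gb):
        used = sum(self.reservations.values())
        if used + size_gb > self.capacity_gb:
            raise ValueError("not enough flash left in the pool")
        self.reservations[vm_name] = size_gb

# Two local 200 GB SSDs become a 400 GB cache pool; the read-heavy
# database VM is given a larger share than the web front end.
pool = FlashCachePool(devices_gb=[200, 200])
pool.allocate("db-vm", 150)
pool.allocate("web-vm", 50)
```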

Simply stated, a layer of virtualized flash storage offers a read cache for VMs. By storing frequently accessed data in a local read cache, a VM can boost performance and reduce network traffic because the desired read content is already available locally (a cache hit). The lower latency speeds application performance, and the workload responds faster. If the necessary data is not in the cache (a cache miss), the VM simply accesses data from the storage area network (SAN) or network-attached storage (NAS) as it normally would.
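
The hit/miss behavior can be expressed in a few lines of Python. This is a simplified sketch of the concept, not vFRC's implementation; the block numbers and dictionary-backed "SAN" are stand-ins:

```python
# Simplified read-cache sketch -- illustrates hits and misses, not vFRC internals.
class ReadCache:
    def __init__(self, backing_store, capacity_blocks):
        self.backing = backing_store          # stands in for the shared SAN/NAS datastore
        self.capacity = capacity_blocks       # flash space reserved for this VM
        self.blocks = {}                      # block number -> data held in local flash
        self.hits = self.misses = 0

    def read(self, block_no):
        if block_no in self.blocks:           # cache hit: served from local flash
            self.hits += 1
            return self.blocks[block_no]
        self.misses += 1                      # cache miss: fetch from shared storage
        data = self.backing[block_no]
        if len(self.blocks) < self.capacity:  # keep a local copy for the next read
            self.blocks[block_no] = data
        return data

san = {n: f"block-{n}" for n in range(1000)}  # pretend SAN/NAS volume
cache = ReadCache(san, capacity_blocks=100)
for block in (1, 2, 1, 1, 3, 2):              # repeated reads become cache hits
    cache.read(block)
print(cache.hits, cache.misses)               # -> 3 3
```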

It's important to note that every application uses cache differently, so the benefits will vary with the particular workload. Read-intensive workloads that repeatedly access the same data generally benefit the most from read caching and can be configured with more read cache, while other workloads may receive less (if any). VMware's vFRC also uses write-through caching, so writes are always committed to storage before being acknowledged to the application. This prevents data loss in the event of a power failure or other system fault.
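
A write-through policy is straightforward to sketch. Again, this is a conceptual illustration rather than vFRC's actual code path: the write is committed to durable shared storage first, and the cached copy is refreshed only afterward.

```python
# Conceptual write-through sketch -- not vFRC internals.
class WriteThroughCache:
    def __init__(self, backing_store):
        self.backing = backing_store    # shared SAN/NAS storage (the durable copy)
        self.blocks = {}                # local flash copy, used only to serve reads

    def write(self, block_no, data):
        self.backing[block_no] = data   # 1. commit to shared storage first
        self.blocks[block_no] = data    # 2. then refresh the cached copy
        return "ack"                    # acknowledged only after step 1 succeeds

    def read(self, block_no):
        if block_no in self.blocks:
            return self.blocks[block_no]
        return self.backing[block_no]
```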

Frameworks like vFRC allow administrators to configure characteristics such as cache size and block size for individual VMs. For example, administrators should configure a cache with enough flash storage to hold the working set of read content for a workload, but not so much that storage capacity is wasted. Meanwhile, the cache block size affects the amount of memory needed to index the cache contents. Finding the optimum balance of block size and memory index size can require some testing and performance monitoring.
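
The block-size trade-off can be seen with a back-of-envelope calculation. The 64 bytes of index memory per cached block used below is an assumed figure for illustration, not a published vFRC number:

```python
# Illustrative sizing math -- the per-entry overhead is an assumption, not VMware's figure.
def index_overhead_mb(cache_size_gb, block_size_kb, bytes_per_entry=64):
    """Estimate host memory needed to index the cache contents."""
    blocks = cache_size_gb * 1024 * 1024 / block_size_kb
    return blocks * bytes_per_entry / (1024 * 1024)

for block_kb in (4, 64, 1024):
    print(f"{block_kb:>5} KB blocks -> ~{index_overhead_mb(100, block_kb):7.1f} MB of index memory")
#     4 KB blocks -> ~ 1600.0 MB of index memory
#    64 KB blocks -> ~  100.0 MB of index memory
#  1024 KB blocks -> ~    6.2 MB of index memory
```

Larger blocks shrink the index, but the right size still depends on the workload, which is why some testing is usually needed.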

In addition, read cache performance will be affected by the medium chosen for local flash storage. For example, PCIe I/O accelerator-type devices often perform better than SATA or SAS SSDs, and single-level cell (SLC) flash devices perform better than multi-level cell (MLC) devices. Organizations seeking to adopt flash cache should therefore balance the cost and performance of the flash storage device against the cache needs of the workload.

Flash cache system requirements

Generally speaking, flash cache can include local flash storage from almost any device using standardized interfaces. This includes disk-based SSD products using SATA and Serial-Attached SCSI (SAS) interfaces, along with expansion card-based devices using PCIe. But the flash storage must be local to the server: solid-state storage devices in SANs, NAS or other remote storage systems cannot be used for flash caching.

VMware's vFRC software currently supports a variety of SSDs from major vendors, including Dell, EMC, Fusion-io, Intel, Samsung and SanDisk. New flash storage hardware is always appearing and evolving, so it's important to check the hardware compatibility list for specific product listings and caveats before attempting to deploy vFRC or similar caching tools in the enterprise.

Flash cache deployments also require software support. For example, vFRC requires vSphere 5.5 and vCenter Server 5.5 or later. vFRC also works with migration, provisioning and high-availability tools such as vMotion, vSphere Distributed Resource Scheduler and vSphere High Availability.

What happens to VMs if the flash cache fails?

Note that read cache contents are not essential for proper VM operation. Although cache contents managed by frameworks like vFRC can be migrated or backed up along with the VM, they can also simply be discarded -- the read cache will be rebuilt after the migration or restoration, or during continued operation.

For example, when a VM configured with vFRC is migrated, its cache can be migrated to the destination host along with the VM. This preserves the performance of the cached VM because the cache "stays warm," but it also increases the time required for migration (especially when the read cache is large and network traffic levels are high).

Conversely, administrators can opt to migrate a VM without its cache, which speeds the migration but may cause a performance drop until the cache is rebuilt on the destination host. Rebuilding can take more than an hour, depending on the size of the cache and the volume of new read traffic. The choice depends on the importance of the application and its performance. Other VM management tasks will also delete vFRC cache contents. For example, suspending, resizing, modifying, deleting or restarting the VM, or restoring it from a snapshot, will discard the current read cache contents -- potentially impacting VM performance until the cache is rebuilt.
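
A rough comparison of the two options helps frame the decision. The cache size, network speed and read rate below are assumed figures for illustration only, not measured vFRC behavior:

```python
# Back-of-envelope comparison with assumed numbers -- not measured vFRC data.
def cache_copy_minutes(cache_gb, net_gbps=10, efficiency=0.7):
    """Extra migration time if the cache travels with the VM."""
    return cache_gb * 8 / (net_gbps * efficiency) / 60

def cache_rebuild_minutes(cache_gb, cacheable_read_mb_s=20, useful_fraction=0.8):
    """Warm-up time if the cache is dropped and rebuilt at the destination."""
    return cache_gb * 1024 * useful_fraction / cacheable_read_mb_s / 60

cache_gb = 100
print(f"migrate with cache:    ~{cache_copy_minutes(cache_gb):.0f} extra minutes of migration time")
print(f"migrate without cache: ~{cache_rebuild_minutes(cache_gb):.0f} minutes before the cache is warm again")
# migrate with cache:    ~2 extra minutes of migration time
# migrate without cache: ~68 minutes before the cache is warm again
```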

Although write cache faults can cause catastrophic data loss, read cache faults (e.g., SSD problems) are typically fail-safe when tools like vFRC are used. Cache storage resources are separate from the actual VM storage, so losing the cache should not cause a VM outage. However, all read I/O must then cross the network to shared SAN or NAS storage, so VM performance may be impaired until the cache storage is repaired and the read cache is rebuilt.
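
The fail-safe behavior boils down to falling back to the authoritative copy on shared storage when the flash device errors out. A minimal sketch of the idea, not VMware's implementation:

```python
# Illustrative fail-safe read path -- not vFRC's actual error handling.
class FailSafeReadCache:
    def __init__(self, flash_device, backing_store):
        self.flash = flash_device     # local flash cache (may fail)
        self.backing = backing_store  # authoritative copy on SAN/NAS

    def read(self, block_no):
        try:
            return self.flash.read(block_no)   # fast path: local flash
        except OSError:
            return self.backing[block_no]      # fail-safe: shared storage, just slower

class FailedFlash:
    def read(self, block_no):
        raise OSError("flash device offline")

san = {7: "payload"}
cache = FailSafeReadCache(FailedFlash(), san)
print(cache.read(7))   # -> "payload": the VM keeps running despite the cache loss
```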

Flash caching, such as vFRC, has the potential to boost VM performance, but it is still a new technology and must be evaluated and adopted on a per-workload basis. Flash storage hardware (SSDs and I/O accelerators) remains considerably more expensive per gigabyte than magnetic disk, and not all workloads benefit equally from read cache. IT administrators should first evaluate the performance benefit of read cache on applications in a test environment, and experiment with flash types and configuration options, before making deployment decisions in production.
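
One simple way to frame that per-workload evaluation is to estimate effective read latency from the measured cache hit rate. The latencies below are assumptions chosen only to show the shape of the curve; measure your own before making decisions:

```python
# Assumed latencies for illustration -- not vendor specifications.
FLASH_US = 100    # assumed local flash read latency (microseconds)
SAN_US = 2000     # assumed networked SAN/NAS read latency (microseconds)

def effective_read_latency_us(hit_rate):
    return hit_rate * FLASH_US + (1 - hit_rate) * SAN_US

for hit_rate in (0.1, 0.5, 0.9):
    print(f"hit rate {hit_rate:.0%}: ~{effective_read_latency_us(hit_rate):.0f} us per read")
# hit rate 10%: ~1810 us per read
# hit rate 50%: ~1050 us per read
# hit rate 90%: ~290 us per read
```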

This was first published in August 2014
