
Direct NVMe performance with virtualization and storage tiers

Non-volatile memory express technologies have a host of benefits, but admins must use virtualization and storage tiers to best target NVMe performance boosts.

Non-volatile memory express technology offers immense performance benefits, but storage costs compel virtualization administrators to use it sparingly and target NVMe performance at only select components with virtualization and storage tiers.

Data center storage is evolving, but storage platforms don't seem to change as much. Even though solid-state drives (SSDs) have become more cost-efficient, many admins still use spinning disks in their data centers. Admins are also witnessing the influx of non-volatile memory express (NVMe) technology, which can greatly increase storage access speeds.

Though an NVMe performance boost is welcome, admins face challenges when choosing how to best use these new devices. If cost weren't an issue, admins would likely put everything on the fastest tier of storage they could use and consider the problem solved. Unfortunately, that isn't realistic for most admins, so they have to be selective about technology placement with virtualization and storage tiers.

The NVMe performance placement problem leads to key questions about data. Not all data is the same. Some data is active and admins always need it, whereas other data is colder and admins only need to retain it without always accessing it. Storage admins have worked through this challenge for years, but virtualization admins have to resolve it all in the same VM if they want to make full use of a new, expensive NVMe storage platform.

One of the best features of virtualization is the ability to separate a VM into different storage tiers based on its internal configuration. A VM is simply a collection of files. Breaking down a guest VM shows the OS, OS paging files, application installation and application storage.

In the physical world, admins often place these components in different RAID groups to get an ideal combination of redundancy and performance. Admins in the virtual world have forsworn that storage tier methodology in favor of letting the storage frames do it all. SSDs and NVMe require admins to bring those skills back.

Use virtualization and storage tiers to target NVMe performance

Spinning tier data stores are ideal places for OS partitions that need data protection over high-end performance. The paging files can also find a place on spinning disks.

In Windows alone, a page file can consume up to one and a half times the installed memory, so these files add up quickly. Combine that with a VMware swap file equal to the VM's configured memory, and admins have a lot of space that -- as long as they don't have memory contention -- is rarely in use but needs to exist somewhere.
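The paging footprint described above can be estimated with a short sketch. The function name and the 1.5x factor default are illustrative assumptions based on the rough figures in the text, not a vendor sizing formula.

```python
def paging_space_gb(configured_mem_gb, windows_pagefile_factor=1.5):
    """Hypothetical per-VM paging footprint estimate.

    Windows may size its page file at up to 1.5x installed memory, and
    VMware creates a swap file equal to the VM's configured memory
    (assuming no memory reservation).
    """
    windows_pagefile = configured_mem_gb * windows_pagefile_factor
    vmware_swap = configured_mem_gb
    return windows_pagefile + vmware_swap

# A 32 GB VM: 48 GB page file + 32 GB VMware swap = 80 GB of rarely
# used space -- a good candidate for a spinning-disk data store.
print(paging_space_gb(32))  # 80.0
```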

Once admins tuck the OS and paging files away on spinning disks, they can evaluate SSDs and NVMe. This is when admins must understand their data.

Applications tend to be loaded into memory, so while higher disk performance benefits an application, the application installation might not need the NVMe performance level. This means admins can save NVMe for the application data, which can boost performance four to five times over traditional SSDs and 15 times or more over spinning disk, depending on the RAID setup.
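Those rough multipliers can be turned into a back-of-the-envelope runtime estimate. The SSD factor of 3 is an assumption derived from the article's figures (NVMe at roughly 15x spinning disk and 4-5x SSD); real numbers vary with workload and RAID setup.

```python
# Assumed relative throughput versus spinning disk, from the article's
# rough figures: NVMe ~15x spinning disk and ~5x SSD (so SSD ~3x).
TIER_SPEEDUP = {"spinning": 1.0, "ssd": 3.0, "nvme": 15.0}

def estimated_runtime_minutes(spinning_baseline_minutes, tier):
    """Scale an I/O-bound runtime measured on spinning disk by the
    chosen tier's assumed speedup."""
    return spinning_baseline_minutes / TIER_SPEEDUP[tier]

# A 60-minute I/O-bound job on spinning disk:
print(estimated_runtime_minutes(60, "ssd"))   # 20.0
print(estimated_runtime_minutes(60, "nvme"))  # 4.0
```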

This breakdown of the application between spinning disk, SSDs and NVMe ensures that admins use virtualization and storage tiers to balance performance capabilities. However, this storage tier process is only part of the story.
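The placement just described could be sketched as a simple mapping of one guest VM's virtual disks to tiered data stores. The disk and data store names are hypothetical, chosen only to illustrate the breakdown.

```python
# Hypothetical layout of a single guest VM across three storage tiers.
vm_disk_layout = {
    "os_disk.vmdk":       "spinning-datastore",  # OS partition: protection over speed
    "pagefile_disk.vmdk": "spinning-datastore",  # rarely used without memory contention
    "app_install.vmdk":   "ssd-datastore",       # loaded into memory; SSD is enough
    "app_data.vmdk":      "nvme-datastore",      # hot data that earns the NVMe cost
}

for disk, datastore in vm_disk_layout.items():
    print(f"{disk} -> {datastore}")
```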

Navigate migration and complexity challenges


Spinning disk and SSDs are often shared storage, so vMotion and other migration technologies work well with them. NVMe is local storage, and while technologies such as VMware vSAN can bridge that storage across multiple hosts, it's still local storage.

This doesn't mean admins can't use NVMe, but they must understand the limitations NVMe imposes on quickly moving workloads. Storage vMotion works well, but it's not as fast as traditional vMotion. VMware vSAN imposes an additional cost, but it helps remove some of the delays of moving workloads with traditional Storage vMotion.

Another factor admins should consider when separating their servers into virtualization and storage tiers is the added complexity. If the VM is no longer in one place, there are consequences for backups and stability: spreading a VM across three different data stores means an issue with any one of them can affect the VM. Admins can find a middle ground where they keep more of the VM together and still gain the benefits of storage separation and NVMe performance.

Admins can shrink the OS footprint with options such as Windows Server Core, which makes it less costly to keep the OS on higher-tier disks. Admins can also reduce page files manually in Windows and shrink VMware swap files with memory reservations. This won't solve every issue, but it does limit overall waste.
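The effect of a memory reservation on the VMware swap file can be shown with a minimal sketch, assuming the usual sizing rule that the swap file equals configured memory minus the reservation. The function name is illustrative.

```python
def vswp_size_gb(configured_mem_gb, reserved_mem_gb=0):
    """Estimate a VM's VMware swap (.vswp) file size: configured memory
    minus the memory reservation. A full reservation eliminates the file."""
    return max(configured_mem_gb - reserved_mem_gb, 0)

# A 32 GB VM with no reservation needs a 32 GB swap file;
# reserving all 32 GB removes the swap file entirely.
print(vswp_size_gb(32))      # 32
print(vswp_size_gb(32, 32))  # 0
```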

NVMe is a useful addition to many data centers. NVMe performance can be a lifesaver for the right application. Making the best use of virtualization and storage tiers is the key, and doing so in a virtualized infrastructure enables the flexibility admins need to take advantage of everything NVMe offers -- while still paying attention to costs.

This was last published in October 2018
