Disk drives were the industry's answer to expensive RAM and the need to scale out storage by huge factors at an affordable price. The reality is that the ideal computer would have persistent memory, so that moving data on and off the system would be unnecessary.
With nonvolatile dual in-line memory modules (NVDIMMs), we move one big step closer to that ideal. By adding a few terabytes of flash to the server motherboard, we bring persistent memory close to the CPU, while the use of the dynamic RAM (DRAM) interface bus allows much faster access.
Advantages of NVDIMMs
The speedup is around four times over PCIe-connected NVMe solid-state drives (SSDs), so it's a noticeable improvement in performance. Data is still transferred in 4 KB blocks, much like an SSD, which greatly limits performance compared with DRAM, where a single CPU instruction can transfer as little as one byte.
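The difference in access granularity can be sketched in a few lines of Python. This is an illustration only: an ordinary scratch file stands in for the storage device, and the offsets and block size are arbitrary. The point is the programming model, not the hardware: block-mode storage forces a read-modify-write of a whole 4 KB block to change one byte, while memory-mapped (byte-mode) storage lets a single store do the job.

```python
import mmap
import os
import tempfile

BLOCK = 4096  # typical SSD transfer unit

# A scratch file stands in for a storage device (illustration only).
fd, path = tempfile.mkstemp()
os.ftruncate(fd, BLOCK * 4)

# Block-mode access (SSD/NVMe-style): changing one byte still means
# reading and rewriting an entire 4 KB block.
buf = bytearray(os.pread(fd, BLOCK, 0))  # read the whole block
buf[10] = 0x42                           # modify a single byte
os.pwrite(fd, bytes(buf), 0)             # write the whole block back

# Byte-mode access (DRAM/NVDIMM-style): map the region and store a
# single byte directly -- no block round trip.
with mmap.mmap(fd, BLOCK * 4) as mem:
    mem[11] = 0x43
    mem.flush()

byte_written = os.pread(fd, 1, 11)  # confirm the single-byte store
os.close(fd)
os.remove(path)
```

On real NVDIMM hardware the mapped store also avoids the kernel I/O path entirely, which is where most of the latency advantage comes from.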
To get to DRAM speed, another type of NVDIMM mimics DRAM completely, but copies all written data to flash whenever power is lost. This type of NVDIMM operates at a much higher speed than the all-flash version, but is limited to a few gigabytes of capacity.
Using NVDIMMs brings substantial performance improvements to virtual servers. In general-purpose instances, used as a local instance store, NVDIMM's speed advantage over SSDs shortens job times. At the same time, faster loading of memory makes paging more efficient, which gives us the choice of either slightly oversubscribing DRAM -- increasing instance count -- or using smaller instances for a given workload. Likewise, NVDIMMs make good image stores, reducing load times.
Database instances clearly benefit from recognizing NVDIMM block storage as mounted drives, since write operations are very fast. Data integrity requires mirroring the data for redundancy, however, which creates the dilemma of where to mirror, since NVDIMMs aren't removable like drives. To meet modern norms of appliance-level data integrity and redundancy, mirroring to another server's NVDIMMs over a network using remote direct memory access (RDMA) is the optimal solution.
Types of NVDIMMs
Applications aimed at big data should look at NVDIMM space as a DRAM expander, where the effective DRAM capacity is multiplied by a large factor. True, performance is uneven between the DRAM and flash tiers, but flash on the memory bus is still the fastest persistent storage and, moreover, is much cheaper than DRAM.
In all cases, instances should start in much less time. This becomes more important as we look at the other type of NVDIMM, which uses DRAM with a flash backup. The DRAM capacity of these NVDIMMs is comparable to that of standard DIMMs, though it usually lags a generation behind due to the extra time it takes to integrate the flash.
It can be accessed as standard DRAM, too, but persistence brings complications. The whole software ecosystem has to recognize that part, or all, of the available DRAM can survive a power or system failure. This implies changes to compilers, link editors, OSes and the apps themselves. This isn't a small challenge, and we are a year or so from having all the pieces.
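What "DRAM that survives a failure" means for software can be shown with a minimal Python sketch. An ordinary file stands in for a byte-addressable NVDIMM region here; a real deployment would map a DAX-enabled persistent-memory device instead, and the path and counter value below are purely illustrative. The pattern is the key point: after a restart, the application simply remaps the region and its data is already in place, with no reload or deserialization step.

```python
import mmap
import os
import struct
import tempfile

REGION = 4096

# A plain file stands in for a byte-addressable NVDIMM region
# (assumption for illustration; real code would map a DAX device).
path = os.path.join(tempfile.mkdtemp(), "pmem_region")
fd = os.open(path, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, REGION)

# Before the "failure": store a counter directly into mapped memory
# and flush it -- the persistent-memory equivalent of a commit point.
with mmap.mmap(fd, REGION) as mem:
    mem[0:8] = struct.pack("<Q", 12345)
    mem.flush()  # make the store durable
os.close(fd)

# After the "restart": remap the same region and the data is simply
# there -- no file reload, no rebuild of in-memory state.
fd = os.open(path, os.O_RDWR)
with mmap.mmap(fd, REGION) as mem:
    counter = struct.unpack("<Q", mem[0:8])[0]
os.close(fd)
```

The hard part the article alludes to is everything around this sketch: compilers, allocators and apps must agree on which data lives in the persistent region and ensure it is internally consistent at every flush point.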
Today, many applications use memcached databases as scratchpads and may want these to persist. NVDIMM byte-mode writes to them would be exceptionally fast, though the same proviso applies: mirror them where assured integrity and availability are required.
The future of NVDIMMs
Looking to the near future, Intel and Micron Technology are launching 3D XPoint as NVDIMMs at the start of 2017. These are somewhat faster than flash units, but are expected to be expensive. There are hints that Intel is looking at byte-addressable access modes for these, and it should be noted that Intel has extensive compiler development as well as market clout with OS vendors.
It's still early days, but NVDIMM technology will appear at the performance end of most clouds within the year, adding options for very fast instances for applications that push the limits of computing.