Formulus Black Blog

The Relationship Between an Apple, Latency, and Storage-Class Memory

Choosing high-performance storage technology for your data center means weighing many different factors. This blog will lay out the key differences between NAND, DRAM, and a relative newcomer: storage-class memory, or SCM.

DRAM, NAND, and SCM all use semiconductors to store data, but they each have unique characteristics. DRAM connects to the CPU via a memory channel, which makes it very fast: its access time is around 51 nanoseconds. Because it is expensive and volatile (its data will not survive a reboot), DRAM has traditionally been used only as a computer's primary memory, not as a data storage solution.

NAND flash memory is a nonvolatile storage technology: it does not require power to retain data, so unlike DRAM, its contents survive system restarts and power losses. NAND is slower than DRAM, with a read latency of around 47,000 nanoseconds and a write latency of around 15,000 nanoseconds, placing it roughly three orders of magnitude behind DRAM. NAND is, however, much less expensive than DRAM, and NAND-based devices offer far larger capacities.

SCM is an emerging technology that’s beginning to see use in the data center. SCM distinguishes itself from DRAM and NAND in specific ways:

  • Persistent data storage (unlike DRAM, its data survives reboots and power loss)
  • Larger storage capacity than DRAM
  • Less expensive than DRAM
  • Significantly faster than NAND storage
  • Much higher endurance than NAND, making it far more resistant to wear from repeated rewrites

Many persistent memory (PMEM) devices use SCM, but SCM isn't exclusive to PMEM. Like DRAM, PMEM devices plug into a computer's memory channel rather than its I/O channel (the path NAND devices use); this is one of the factors that gives PMEM its extremely low latency.

Intel Optane persistent memory (Optane PMem) is the most popular PMEM device currently available; Intel has measured its latency at around 350 nanoseconds. That is roughly 10 times slower than DRAM but more than 100 times faster than NAND, at a much lower cost than DRAM, and with storage capacities comparable to NAND devices.
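To make the gap concrete, here is a small Python sketch that computes the relative latencies from the figures quoted above (the numbers are the post's; the script is just arithmetic):

```python
# Relative read latencies of the technologies discussed above, using
# the figures quoted in this post (in nanoseconds).
latencies_ns = {
    "DRAM": 51,
    "SCM (Optane PMem)": 350,
    "NAND (read)": 47_000,
}

baseline = latencies_ns["DRAM"]
for tech, ns in latencies_ns.items():
    print(f"{tech}: {ns:,} ns ({ns / baseline:.0f}x DRAM)")
```

Run it and SCM lands at about 7x DRAM while NAND sits around 900x, which is where the "roughly 10 times" and "three orders of magnitude" figures come from.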

NAND and DRAM have both been around long enough that IT professionals generally understand them well. SCM, as the next advancement beyond NAND, is new enough that it takes a little explaining to appreciate how transformative it can be for the data center.

NVMe and SATA SSDs that use SCM have lower latency than those that use NAND. One limiting factor for both types of drive, however, is that they plug into a computer's I/O channel (PCIe, SATA, and so on) rather than into the memory bus via the server's DIMM slots, as PMEM devices do. The memory channel is considerably faster than the I/O channel.

An interesting characteristic of SCM devices is that they can be addressed at the byte level, meaning that a single byte can be erased and rewritten, whereas NAND can only be addressed at the block level. NAND must erase and rewrite an entire block/page of data even if only one byte in that block changes, which is why NAND controllers need background processes like garbage collection. Byte-level access avoids that overhead: it improves performance and also extends the life span of the device.
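The byte-versus-block difference can be sketched in a few lines of Python. The 4 KiB page size below is an illustrative assumption (real NAND page and block sizes vary by device); the point is how many bytes must be physically rewritten to change just one:

```python
# Bytes physically rewritten when a single byte changes, assuming an
# illustrative 4 KiB NAND page (actual page/block sizes vary by device).
NAND_PAGE_BYTES = 4096

def bytes_written(media: str, changed_bytes: int = 1) -> int:
    if media == "SCM":
        # Byte-addressable: only the changed bytes are rewritten.
        return changed_bytes
    if media == "NAND":
        # Block-addressable: the whole page must be erased and
        # rewritten, which is why controllers need garbage collection.
        return NAND_PAGE_BYTES
    raise ValueError(f"unknown media type: {media}")

print(bytes_written("SCM"))   # prints 1
print(bytes_written("NAND"))  # prints 4096
```

That 4,096-to-1 write amplification is also why byte-level access extends device life span: far fewer physical writes happen per logical change.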

Let’s use an analogy of getting an apple to put the speed difference between DRAM, SCM, and NAND technologies into perspective. If I felt like getting an apple right now, there are three ways I could do so:

  • I could leave my desk and grab an apple from my refrigerator in about two minutes (DRAM)
  • I could leave my home and walk to the corner store and buy an apple in about 20 minutes (SCM)
  • I could leave my neighborhood and travel to a farm to get an apple in about a day and a half (NAND)

In other words, it would take me about 10 times longer to go to the store, and roughly 1,000 times longer to go to the farm, than to grab an apple from the fridge. Those are approximately the relative latencies of a computer acquiring data from DRAM, SCM, and NAND technologies, respectively. But even though the refrigerator is by far the fastest option, keeping an apple always at the ready has an associated cost.
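As a sanity check on the analogy, we can scale the latencies quoted earlier so that DRAM's 51 ns corresponds to the two-minute refrigerator trip (a back-of-the-envelope sketch, not a benchmark):

```python
# Scale the quoted latencies so DRAM's 51 ns maps onto a 2-minute
# trip to the refrigerator, then see what SCM and NAND become.
DRAM_NS, SCM_NS, NAND_NS = 51, 350, 47_000
minutes_per_ns = 2 / DRAM_NS

scm_minutes = SCM_NS * minutes_per_ns             # the corner store
nand_days = NAND_NS * minutes_per_ns / (60 * 24)  # the farm
print(f"SCM: about {scm_minutes:.0f} minutes")
print(f"NAND: about {nand_days:.1f} days")
```

SCM's corner-store trip lands right around the analogy's 20 minutes, and the NAND farm trip stretches past a full day.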

I must plan ahead so that every few days a fresh apple is available, because I ate the last one or it spoiled. That can be a fairly expensive and complex option to manage. The lowest-cost option is to walk to the farm and buy the apple directly from the farmer, but that is very time consuming and probably not the best use of my time. The corner store (SCM) offers the best balance of price, complexity, and performance among the three options.

Hopefully, this analogy has helped clarify the choices you face when deciding to upgrade or add high-performance storage to your environment. SCM has the potential to bring the best of both worlds to next-generation applications by providing a way to store large amounts of data next to the CPU, opening the door to more innovative designs. With this information in hand, you can make a more informed decision, one that meets both your needs and your budget. And ultimately, moving to SCM may be the best fit going forward.


Brett Miller

Field CTO

Brett Miller is the Field CTO at Formulus Black, where he works with customers and partners to solve the business challenges of today and the next five years, as the need to securely manage I/O in the memory channel becomes more prevalent.