Persistent memory technology has the potential to transform and expand the $100 billion DRAM market thanks to a combination of impressive performance characteristics and persistence. Industry experts forecast that Intel’s brand of persistent memory alone will become a $1B+ business within a few years, and other storage and memory vendors are expected to release persistent memory variants of their own. Byte addressability with load/store memory access is at once persistent memory’s key advantage and, for now, an obstacle to its broad adoption.
In the long term this is a clear advantage: application developers can exploit the fine-grained access the new memory media affords. In the short term, however, companies that want to leverage persistent memory may find that their current OS, hypervisor, database, or ERP vendor does not yet support it.
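The load/store model described above can be sketched with a memory-mapped file. The snippet below is a minimal illustration in Python, with an ordinary temporary file standing in for a persistent-memory region (real persistent memory would typically be mapped the same way from a file on a DAX-mounted filesystem; the path and sizes here are illustrative assumptions).

```python
import mmap
import os
import struct
import tempfile

# An ordinary file stands in for a persistent-memory region; with real
# PMem you would mmap a file on a DAX-mounted filesystem instead.
path = os.path.join(tempfile.mkdtemp(), "pmem_region")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)  # size the region (one page)

# Map the region and update a single 8-byte counter in place --
# a byte-granular store, not a block-sized read-modify-write.
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as region:
        count = struct.unpack_from("<Q", region, 0)[0]
        struct.pack_into("<Q", region, 0, count + 1)
        region.flush()  # msync: make the store durable

# After remapping (or a reboot, on real PMem), the value persists.
with open(path, "rb") as f:
    value = struct.unpack("<Q", f.read(8))[0]
print(value)  # 1
```

The point of the sketch is the granularity: the application updates 8 bytes in place, rather than reading, modifying, and rewriting an entire 4 KB block as a block-device interface would require.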
Although a block device cannot exploit byte addressability and load/store semantics, the value of an industry-standard block device for persistent memory is that it allows ANY application to run without changes. Most businesses hesitate to modify existing applications and may lack the resources to upgrade their commercial software to an “enterprise” version that supports persistent memory. Our big bet at Formulus Black, one we believe pays off in both the short and long term, is that many enterprises will NOT rewrite their applications to run natively on persistent memory; instead, they will move existing applications onto persistent memory to get vastly improved performance, especially when doing so requires zero development time and risk. If Intel and the major OEMs want to drive mass persistent memory adoption, we believe the best approach is to provide a high-performance, NUMA-aware, easy-to-manage and provision, POSIX-compliant block device interface for persistent memory – such as the Formulus Black LEM.
In today’s information-driven economy, quickly processing ever-increasing volumes of data into actionable information to win customers and edge out the competition has become table stakes. Unfortunately, most estimates of how much data actually gets analyzed are dismal: between 1% and 2% of all data. Processing data fast is hard, and one of the problems with traditional storage is that it is simply too slow (i.e., high latency and subpar bandwidth) to keep up with the capabilities of modern multi-core CPUs.
Imagine if you could store datasets so close to the CPU that whenever data needs to be processed and analyzed, it could be accessed almost immediately. In early 2019, Intel released Optane™ DC Persistent Memory, enabling just that. NVMe SSDs are still considered “fast” storage technology, but relative to persistent memory their performance falls quite a bit short. To put the gap in terms most data analysts would understand: it is like waiting 1 minute instead of 1 second for the exact same database query to complete.
While Optane™ DC Persistent Memory Modules are not offered in the larger capacities of NVMe SSDs, they do offer higher capacity than DRAM and keep data intact across system reboots and crashes. A standard two-socket server can be configured with up to 6 terabytes of persistent memory. With rare exceptions such as Facebook, Apple, and the largest enterprises, 6 terabytes is sufficient to run the transactional databases of most mid-sized enterprises. There will always be a market for secondary and large-capacity storage, but persistent memory’s low latency, high bandwidth, and magnified IOPS capabilities put it solidly in first place.
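The 6-terabyte figure follows from the platform’s memory topology. The arithmetic below assumes the commonly cited configuration for that server generation (six persistent-memory module slots per socket, 512 GB maximum module size); these numbers are assumptions for illustration, not figures stated above.

```python
# Commonly cited two-socket Optane DC PMM configuration
# (assumed for illustration, not stated in the article).
sockets = 2
pmm_slots_per_socket = 6   # one module per memory channel
max_module_gb = 512        # largest module capacity

total_tb = sockets * pmm_slots_per_socket * max_module_gb / 1024
print(total_tb)  # 6.0
```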