With the market adopting Intel Optane persistent memory (PMem), with larger-capacity internal drives, and with DRAM capacity increasing while costs decrease, an individual server is no longer just a bundle of compute power. So…is it now possible to eliminate the SAN?
According to IDC, global data is projected to grow from 33 ZB in 2018 to 175 ZB by 2025, and 30% of that data will be fast data that must be available in real time to support data analytics, machine learning, artificial intelligence, and IoT. Storing data in data lakes inherently creates bottlenecks when analyzing, processing, and querying large data sets in real or near-real time. A data lake's sheer size is of diminishing value if that data cannot be processed into actionable information quickly enough to be useful.
Infrastructure managers, IT directors, and senior executives are all looking to simplify their environments, reduce costs, and improve performance. Traditionally, organizations run two networks: one dedicated to LAN/WAN traffic and a second dedicated to storage activity. One way to reduce costs is to eliminate that storage network. The original intent of a SAN was to shield network activity from any impact of storage activity, but more recently the focus has shifted to improving speed and throughput. With the bandwidth and capacities recently made available within servers, FORSA™ with an Optane PMem Tier 0 can now provide the performance of memory with the security and persistence of storage!
To put the speed difference between DRAM, PMem, and NAND technologies into perspective, let's use the analogy of getting an apple. In olden times, getting an apple meant traveling to a farm, which could take a really long time, let's say 14 days. Then modernization and refrigerated transportation came along, and apples could be brought to a store near the consumer, which is much faster. Better still is having the apple right in the kitchen, available almost immediately.
Now swap data for apples: the kitchen is DRAM, the store is PMem, and the farm is NAND. It would take me 10 times longer to go to the store, and 10,000 times longer to go to the farm, than to grab an apple from the fridge. These are the relative latencies of a computer acquiring data from DRAM, PMem, and NAND technologies, respectively, and they show why keeping all of the data in the server can be much faster than a traditional SAN environment.
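The ratios in the analogy can be sketched numerically. The absolute figures below are illustrative assumptions (a DRAM access on the order of ~100 ns, with PMem and NAND scaled by the 10x and 10,000x ratios above), not measured or vendor-published latencies:

```python
# Illustrative sketch of the relative latency tiers from the apple analogy.
# DRAM_NS is an assumed baseline (~100 ns); the other tiers apply the
# 10x and 10,000x multipliers described in the text.

DRAM_NS = 100  # assumed DRAM access latency in nanoseconds

latencies_ns = {
    "DRAM (the kitchen)": DRAM_NS,
    "PMem (the store)": DRAM_NS * 10,
    "NAND over a SAN (the farm)": DRAM_NS * 10_000,
}

for tier, ns in latencies_ns.items():
    # Print each tier's latency and its multiple of the DRAM baseline
    print(f"{tier}: {ns:,} ns ({ns // DRAM_NS:,}x DRAM)")
```

Running this prints the three tiers side by side, making it easy to see that the jump from PMem to SAN-attached NAND dwarfs the jump from DRAM to PMem.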
With a 2-socket server that can provide 30TB of storage inside the chassis, a majority of database workloads can be hosted without a SAN. As organizations refresh their data centers and continue to drive down costs, the SAN footprint can shrink because the data never has to leave the server. That delivers both performance benefits and cost savings, a true win/win.
Brett Miller is the Field CTO at Formulus Black, where he works with customers and partners to solve the business challenges of today and the next five years, as the need to securely manage I/O in the memory channel becomes more prevalent.