Understanding Persistent Memory in the Storage Pyramid
Key Points
- Bradley Knapp introduces persistent memory (PMEM) as a new, ultra‑fast storage tier that debuted in spring 2019 and sits between SSD/PCIe drives and DRAM in the storage hierarchy.
- He describes the storage pyramid, noting that as you move up (from tape to HDD to SSD to PCIe SSD to PMEM to RAM) both cost and performance increase while latency decreases and bandwidth rises.
- Unlike SSDs that access data via the PCIe bus, PMEM communicates directly over the memory bus, giving it far lower latency and higher bandwidth than traditional non‑volatile storage.
- PMEM can be configured in the BIOS to operate in “memory mode,” where each processor channel hosts both a DRAM DIMM and a PMEM DIMM, allowing the system to use PMEM as an extension of volatile memory.
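The capacity arithmetic behind memory mode (worked through later in the transcript) can be sketched as follows. The channel count, socket count, and DIMM size are the figures used in the video; actual values depend on the platform.

```python
# Illustrative memory-mode capacity calculation, using the figures from
# the video (six channels per processor, dual socket, 512 GB PMEM DIMMs).
# In memory mode the DRAM DIMM on each channel acts as a cache and the
# PMEM DIMM is what the OS sees as main memory.
CHANNELS_PER_SOCKET = 6   # channels per processor in the video's example
SOCKETS = 2               # dual-socket server
PMEM_DIMM_GB = 512        # largest PMEM DIMM size mentioned

visible_memory_gb = CHANNELS_PER_SOCKET * SOCKETS * PMEM_DIMM_GB
print(visible_memory_gb)  # 6144 GB, i.e. 6 TB presented as "RAM"
```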
Source: https://www.youtube.com/watch?v=7gkr-_t7wAk
Duration: 00:05:52
Sections
- [00:00:00](https://www.youtube.com/watch?v=7gkr-_t7wAk&t=0s) **Introducing Persistent Memory Hierarchy** - Bradley Knapp explains IBM Cloud's new persistent memory technology and its placement in the storage pyramid between SSDs and RAM, highlighting its ultra‑fast performance and cost trade‑offs.
Full Transcript
Hey guys, welcome to the channel my name is Bradley Knapp from IBM Cloud and I
wanted to talk with you a little bit about persistent memory. Persistent
memory is a new technology, it just came onto the market this last spring, so the
spring of 2019, and it's ultra, ultra fast memory, right. So if we think about our
storage pyramid, we've got a pyramid over here, and I like to draw the storage
pyramid out this way because we've kind of got two arrows, right: as you go up
the storage pyramid the cost goes up, and as you go down the storage pyramid
the performance goes down. So keeping this kind of storage pyramid in mind,
down here at the bottom this is tape, right. Tape is still around; tape isn't
going anywhere anytime soon. The next level up from tape, slightly more
expensive but more performant as well, is when you get into our good old
fashioned hard disk drives, right, the spinning disks. The next level up from
that is where you're gonna get into your SSDs, right, your different SSD form
factors: U.2, M.2, NVMe, all of the different letters. The next level
up in performance, but again adding cost, is gonna be a PCIe drive, and then
the next level up, this is the one that we're talking about today, this is
PMEM. And then up at the very top of our pyramid, this one right here, that's
RAM. So as you go up, the cost goes up, but the performance goes up too. Why?
Well it's because the access times go down, the seek time goes down, and the
bandwidth goes up right. So tape takes a long time to get the data to and from
the processor, hard disks less time, SSD less time, right.
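The pyramid ordering can be summarized in a short sketch. The latency and cost labels below are rough, order-of-magnitude assumptions for illustration, not figures from the video.

```python
# The storage pyramid from bottom to top: moving up, latency drops and
# cost per GB rises. Latency and cost labels are illustrative
# assumptions, not measured values.
tiers = [
    ("tape",       "seconds to minutes",   "cheapest per GB"),
    ("HDD",        "~milliseconds",        "cheap"),
    ("SATA SSD",   "~100 microseconds",    "moderate"),
    ("PCIe/NVMe",  "~10s of microseconds", "higher"),
    ("PMEM",       "~100s of nanoseconds", "higher still"),
    ("DRAM",       "~10s of nanoseconds",  "most expensive per GB"),
]

for name, latency, cost in tiers:
    print(f"{name:10s} latency {latency:22s} cost {cost}")
```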
These are limited by a number of different factors; hard disks and SSDs,
they've gotta talk back and forth through a RAID card going through the PCIe
bus. The next level up, a PCIe drive, right, this goes right into the PCIe bus.
So this could be an NVMe M.2 drive or one that goes in an actual PCIe slot
itself. So again, faster than SSDs, faster than hard drives, same general
technology as an SSD, it's still using NAND chips, but it's getting to that processor faster.
PMEM, if we look over here, PMEM talks back and forth to the processor
directly, right; you don't have to go through the PCIe bus, you're going
through the memory bus, which again means lower latency, higher bandwidth. So it's much, much faster, and
then at the very top of the pyramid that's RAM right, that's your traditional
DRAM that is the fastest storage medium. And so if we come over here I want to
talk a little bit about the two modes that we run in right. The first mode is
memory mode, so PMEM can be switched at the BIOS level into either of these
modes, and so if we consider our processor, right, I'm just going to mark the
processor with a P. Out of each processor you get six channels; we didn't draw
all of them out here, but in each channel you're gonna get a DIMM, right, a RAM
DIMM, and you're going to get a PMEM DIMM. And then as you go down, right, so
that's slot zero, and then in slot one you get a RAM DIMM again and you get a
PMEM DIMM. Same in slots 0, 1, 2, 3, 4, and 5 for each processor, right. So in a
dual-socket server you're gonna end up with 12 sticks of RAM and 12 sticks of
PMEM. What makes PMEM valuable, right? Well, it's lower cost than RAM, slightly
lower performance than RAM, but it's much larger. So if you think about typical
RAM DIMM sizes, right, you've got a 16, you've got a 32, you've got a 64,
you've got a 128, and now you've got 256s, but the cost goes up dramatically as
you go up in these sizes. On the PMEM side you start with the 128, and then
you've also got a 256, and you've got a 512. And so if you're putting 512s into
this server, right, you have six 512s, which is 3 terabytes of storage per
processor. So on a dual-socket, two-processor server you're gonna actually have
6 terabytes of memory, because when you're running in memory mode the RAM acts
as cache and the PMEM acts as your RAM. So you've got two sockets, six terabytes of
RAM. In App Direct Mode, same kind of idea, right: you've got your processor,
you've got your RAM, and then you've got those PMEM DIMMs, but what makes this
different, right? So in App Direct Mode, rather than the PMEM operating as RAM,
it operates as storage; it's persistent storage, right. And so your RAM, that's
what adds up, that's your RAM, and then you can lay a namespace on top of this
PMEM, you can put a filesystem on top of it, but because it's talking back and
forth through the memory bus it's ultra, ultra high performance. Where is
this App Direct important? This is your in-memory databases, this is your big
data workloads, this is where you're really looking to take advantage of
having an insanely fast connection between your storage and your processor
so that you can write back and forth very easily. So that's kind of an
overview right, so you've got your in-memory database, like SAP HANA and
your big data workloads like Hadoop. And if you want to learn more about this go
ahead and hit the links in the comments and we'll take you through kind of an
individual use case level description. If you have any questions at all please
drop us a line. If you want to see more videos like this in the future please do
hit that like and subscribe button, and don't forget you can always get started
on the cloud at no cost by signing up for a free IBM Cloud account at
cloud.ibm.com.
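The App Direct workflow described above, lay a namespace on the PMEM, put a filesystem on top, then talk to it over the memory bus, can be sketched from the application side as a memory-mapped file. This is a minimal illustration under assumptions: the `/mnt/pmem` mount point in the docstring is a hypothetical example of a DAX-mounted PMEM filesystem, not something the video specifies, and the same code also runs on an ordinary filesystem, just without the persistent-memory fast path.

```python
import mmap


def write_record(path: str, payload: bytes) -> bytes:
    """Memory-map a file and store bytes through the mapping.

    On a DAX-capable filesystem backed by PMEM in App Direct mode
    (e.g. a file under a hypothetical /mnt/pmem mount), these stores go
    over the memory bus rather than a block I/O path. On a regular
    filesystem the code behaves the same, which makes it easy to test.
    """
    # Size the file first; mmap cannot map a zero-length file.
    with open(path, "wb") as f:
        f.truncate(len(payload))
    with open(path, "r+b") as f:
        with mmap.mmap(f.fileno(), len(payload)) as m:
            m[:] = payload   # store the bytes through the mapping
            m.flush()        # push the dirty range toward the medium
            return bytes(m)  # read back through the same mapping
```

On real PMEM, libraries such as PMDK add the cache-flush and fencing discipline needed for crash consistency; a plain `mmap` like this is only a sketch of the access model.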