170: FMS25 wrap-up with Jim Handy, Objective Analysis

Jim Handy, General Director at Objective Analysis, and I were at FMS25 in Santa Clara last week, and there was a lot of news going around. Jim’s been on our show just about every year to discuss FMS news, and with the show’s recent focus beyond flash, it’s even harder for one person to keep up.

Much of the discussion at FMS was on HBM4, new QLC capacity points, UALink/UCIe for chiplets, 100M IOPS SSDs, and more. Listen to the podcast to learn more.

There was not as much on CXL as in past shows, and ditto on increasing layer counts to drive more NAND capacity. A couple of years ago layer counts were all anyone talked about, and CXL was the major change about to hit the data center. Jim’s view (and Jason’s) was that CXL was a way for hyperscalers to make use of DDR4 DRAM, but that need has passed now.

As for layer counts, they are still going up, but not as fast. And the economics of 3D scaling now have to compete with 2D (lateral) scaling and “virtual scaling”.

But UALink and UCIe were active topics, both of which are used to tie together chiplets in CPUs to build SoCs. SSD vendors are starting to use chiplet architectures to build their massive-capacity SSDs, and UALink/UCIe would be a way to architect these.

SLC NAND is back, either to support very high performance SSDs or as a replacement for SCM (storage class memory, i.e., Optane). One vendor talked about reaching 100M (random 512B read) IOPS for a single SSD. Current SLC flash can do ~10M IOPS, the next generation is spec’d to do ~30M, and the one after that would hit 100M. One challenge is that current SSDs do 4KB IO, it still takes a msec or so to erase a block, and reading a page isn’t all that fast either. But that performance is for read-only activity.
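
For a back-of-the-envelope sense of what those IOPS targets imply in bandwidth terms, here’s a quick calculation (my arithmetic, not any vendor’s spec):

```python
# Rough bandwidth implied by small-block random-read IOPS targets.
# The IOPS figures are the generational targets mentioned above; the
# conversion to GB/s is just arithmetic, not a vendor specification.
IO_SIZE = 512  # bytes per random read

for label, iops in [("current SLC", 10e6), ("next gen", 30e6), ("gen after", 100e6)]:
    gb_per_sec = iops * IO_SIZE / 1e9
    print(f"{label}: {iops/1e6:.0f}M IOPS x 512B = {gb_per_sec:.1f} GB/s")

# current SLC: 10M IOPS x 512B = 5.1 GB/s
# gen after:   100M IOPS x 512B = 51.2 GB/s, which still fits within a PCIe Gen5 x16 link (~64GB/s)
```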

HBM4 was one topic at the show, but the newest wrinkle was HB (high bandwidth) Flash, or putting SSDs behind HBM to support GPU caching (SSD to HBM to GPU). This would allow more data to be quickly accessed by a GPU.

Jim also mentioned that there’s some interest in narrowing the HBM access width, currently 1Kb (1024 bits) and increasing to 2Kb with HBM4. This width, and all the pins it requires, limits how many HBM chips one can place around a GPU. If HBM had a narrower interface, more HBM chips could surround a GPU, increasing memory size and perhaps memory bandwidth. HBM4 seems to be going the wrong way here; with narrower-width HBM, they could easily double the number of HBM chips surrounding a GPU.
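
To see why interface width matters for stacking more HBM around a GPU, here’s a toy calculation assuming a fixed budget of HBM data pins per GPU; the 16,384-pin budget is purely hypothetical, chosen only to illustrate the trade-off:

```python
# Toy illustration: with a fixed budget of HBM data pins around a GPU,
# halving the per-stack interface width doubles the number of stacks that fit.
# The pin budget below is hypothetical, picked only for illustration.
PIN_BUDGET = 16_384  # hypothetical total HBM data pins per GPU

for width in (2048, 1024, 512):  # bits per HBM stack interface
    stacks = PIN_BUDGET // width
    print(f"{width}-bit interface per stack -> {stacks} stacks in the same pin budget")

# 2048-bit (HBM4-style) -> 8 stacks
# 1024-bit              -> 16 stacks
# 512-bit               -> 32 stacks
```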

They were also showing off a 40-SSD 2U chassis using E.2 form factor SSDs. Pretty impressive, and given the capacities on offer, that’s a lot of storage per RU.

Speaking of capacity, one vendor announced a 246TB QLC SSD, roughly a quarter petabyte in a single SSD. With 24 of these per 2U shelf, one could have over 1/10 of an exabyte (>100PB) in a 40U rack. It looks like there’s no end in sight for SSD capacities. And we aren’t even talking about PLC yet.
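
Here’s the rack math behind that claim, assuming every 2U slot in a 40U rack is filled with one of these 24-SSD shelves:

```python
# Rack-capacity arithmetic behind the >100PB per rack claim.
# Assumes all 40U are filled with 2U shelves holding 24 SSDs each.
SSD_TB = 246
SSDS_PER_2U_SHELF = 24
SHELVES_PER_40U_RACK = 40 // 2  # 20 shelves

rack_tb = SSD_TB * SSDS_PER_2U_SHELF * SHELVES_PER_40U_RACK
print(f"{rack_tb:,} TB per rack = ~{rack_tb/1_000:.0f} PB = ~{rack_tb/1_000_000:.3f} EB")
# 118,080 TB per rack = ~118 PB = ~0.118 EB, comfortably over 1/10 of an exabyte
```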

At the other end of the SSD capacity spectrum, it appears that M.2 SSDs were getting hotter on one side (the controller side) than the other, throttling performance. So one vendor decided to run heat pipes (a form of liquid cooling) between the two sides to equalize the thermal load.

Jim Pappas (lately of Intel) won the lifetime achievement award from FMS. Jim’s accomplishments span a wide swath of storage technology, but at the award ceremony he waxed on about his work on the USB connector. He said his will stipulates that once he is interred in the ground, they are to take the casket out, spin it around 180 degrees, and put it back down again. 🙂

There were quite a number of side topics on the podcast not directly related to FMS25, which were interesting in their own right, but I think I’ll leave it here.

Jim Handy, General Director, Objective Analysis

Jim Handy of Objective Analysis has over 35 years in the electronics industry, including 20 years as a leading semiconductor and SSD industry analyst. Early in his career he held marketing and design positions at leading semiconductor suppliers including Intel, National Semiconductor, and Infineon.

A frequent presenter at trade shows, Mr. Handy is known for his technical depth, accurate forecasts, widespread industry presence and volume of publication.

He has written hundreds of market reports, articles for trade journals, and white papers, and is frequently interviewed and quoted in the electronics trade press and other media. 

He posts blogs at www.TheMemoryGuy.com and www.TheSSDguy.com.

164: GreyBeards talk FMS24 Wrap-up with Jim Handy, General Dir., Objective Analysis

Jim Handy, General Director, Objective Analysis, is our long-time go-to guy on SSD and memory technologies, and we were both at the FMS (Future of Memory and Storage – new name, broader focus) 2024 conference last week in Santa Clara, CA. Lots of new SSD technology both on and off the show floor, as well as new memory offerings and more.

Jim helps Jason and me understand what’s happening with NAND and other storage/memory technologies that matter to today’s IT infrastructure. Listen to the podcast to learn more.

First off, I heard at the show that the race for more (3D NAND) layers is over. According to Jim, companies are finding it’s more expensive to add layers than it is just to do a lateral (2D, planar) shrink (adding more capacity per layer).

One vendor mentioned that CapEx efficiency was degrading as they add more layers. Nonetheless, I saw more than one slide at the show with a “3xx”-layer column.

Kioxia and WDC introduced a 218-layer BiCS8 NAND technology with 1Tb TLC and up to 2Tb QLC NAND per chip. Micron announced a 233-layer Gen 9 NAND chip.

Some vendor showed a 128TB (QLC) SSD. The challenge is that PCIe Gen 5 is limited to ~4GB/s per lane, so 16 lanes provide ~64GB/s of bandwidth, and Gen 4 is half that. Jim likened using Gen 4/Gen 5 interfaces for a 128TB SSD to using a soda straw to get at the data.

The latest Kioxia 2Tb QLC chip is capable of 3.6Gbps (source: Kioxia America), and with (4 x 128 =) 512 of these 2Tb chips needed to create a 128TB drive, that’s ~230GB/s of bandwidth coming off the chips being funneled down to a 16-lane PCIe Gen 5 link’s 64GB/s, wasting ~3/4 of the chip bandwidth.
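
The arithmetic behind that ~3/4 number (ignoring over-provisioning):

```python
# Aggregate NAND bandwidth vs. the PCIe Gen5 x16 host interface for a 128TB QLC drive.
CHIP_CAPACITY_Tb = 2   # Kioxia QLC die capacity, in terabits
CHIP_IO_Gbps = 3.6     # per-chip interface speed
DRIVE_TB = 128

chips = DRIVE_TB * 8 // CHIP_CAPACITY_Tb      # 512 chips (no over-provisioning)
chip_bw_GBps = chips * CHIP_IO_Gbps / 8       # ~230 GB/s coming off the NAND
pcie_gen5_x16_GBps = 64

print(f"{chips} chips x {CHIP_IO_Gbps} Gbps = {chip_bw_GBps:.0f} GB/s of NAND bandwidth")
print(f"Fraction stranded behind a {pcie_gen5_x16_GBps} GB/s host link: "
      f"{1 - pcie_gen5_x16_GBps / chip_bw_GBps:.0%}")
# 512 chips x 3.6 Gbps = 230 GB/s; ~72% of the NAND bandwidth never reaches the host
```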

Of course they need (~1.3x?) more than 512 chips to make a durable/functioning 128TB drive, which would only make this problem worse. And I saw one slide that showed a 240TB SSD!

Enough on bandwidth, let’s talk data growth. Jason’s been doing some research and had current numbers on data growth. According to his research, the world’s data (perhaps data transmitted over the internet) in 2010 was 2ZB (ZB, zettabyte = 10^21 bytes), in 2023 it was 120ZB, and by 2025 it should be 180ZB. For 2023, that’s over 328 million TB/day, or 328EB/day (EB, exabyte = 10^18 bytes).
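
A quick sanity check on that per-day figure, just dividing the 2023 annual number by 365 (decimal units):

```python
# Converting the 2023 annual data figure into a per-day rate (decimal units).
ZB, EB, TB = 10**21, 10**18, 10**12

annual_bytes_2023 = 120 * ZB
per_day = annual_bytes_2023 / 365

print(f"{per_day / EB:.0f} EB/day, i.e. ~{per_day / TB / 1e6:.0f} million TB/day")
# ~329 EB/day, in line with the ~328EB/day figure above
```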

Jason said ~54% of this is video. He attributes the major data growth spurt since 2010 mainly to social media videos.

Jason also mentioned that the USA currently (as of 2023?) has 5,388 data centers, Germany 522, the UK 517, and China 448. That last number seems way low to all of us, but perhaps those are just very, very big data centers.

No mention of average data center size (square meters, # servers, # GPUs, storage capacity, etc.). But we know, because of AI, they are getting bigger and more power hungry.

There were more FMS 2024 topics discussed, like the continuing interest in TLC SSDs, new memory offerings, computational storage/memory, etc.

Jim Handy, General Director, Objective Analysis

Jim Handy of Objective Analysis has over 35 years in the electronics industry, including 20 years as a leading semiconductor and SSD industry analyst. Early in his career he held marketing and design positions at leading semiconductor suppliers including Intel, National Semiconductor, and Infineon.

A frequent presenter at trade shows, Mr. Handy is known for his technical depth, accurate forecasts, widespread industry presence and volume of publication.

He has written hundreds of market reports, articles for trade journals, and white papers, and is frequently interviewed and quoted in the electronics trade press and other media. 

He posts blogs at www.TheMemoryGuy.com and www.TheSSDguy.com.

153: GreyBeards annual FMS2023 wrapup with Jim Handy, General Director, Objective Analysis

Jim Handy, General Director, Objective Analysis, and I were at the FMS 2023 conference in Santa Clara last week, and there were a number of interesting discussions at the show. I was particularly struck by the progress being made on the CXL front. I was just a participant, but Jim moderated and was on many panels during the show. He also comes with a much deeper understanding of the technologies. Listen to the podcast to learn more.

We asked for some of Jim’s top takeaways from the show.

Jim thought that the early Tuesday morning market sessions on the state of the flash, memory and storage markets were particularly well attended. As these were the first day’s earliest sessions, in the past they hadn’t been as well attended.

The flash and memory markets both seem to be in a downturn. As the great infrastructure buying spree of COVID ends, demand seems to have collapsed. As always, these and other markets go through cycles: from downturn, where demand collapses and prices fall; to price stability, as demand starts to pick up; to supply constrained, where demand can’t be satisfied. The general consensus seems to be that we may see a turn in the market by the middle of next year.

CXL is finally catching on. At the show there were a couple of vendors showing memory extension/expansion products using CXL 1.1, as well as CXL switches (extenders) based on CXL 2.0. The challenge with memory today, in this 100+ core CPU world, is trying to keep per-core memory bandwidth flat while keeping up with application memory demand. CXL was built to deal with both of these concerns.

CXL adds latency, but it’s very similar to the latency of dual-socket CPUs accessing each other’s memory. Jim mentioned that Microsoft Azure actually checked whether their workloads could handle CXL latencies by testing on dual-socket systems.

There was a lot of continuing discussion on new and emerging memory technologies, and Jim Handy mentioned that his team has just published a new report on this. He also mentioned that CXL could be the killer app for all these new memory technologies, as it can easily handle multiple different technologies with different latencies.

The next big topic was chiplets and the rise of UCIe (Universal Chiplet Interconnect Express) links. AMD led the way with their chiplet-based, multi-core CPU chips, but Intel is there now as well.

Chiplets are becoming the standard way to create significant functionality on silicon. But the problem up to now has been that every vendor had their own proprietary chiplet interconnect.

UCIe is meant to end proprietary interconnects. With UCIe, companies can focus on developing the best chiplet functionality and major manufacturers can pick and choose whichever chiplet offers the best bang for their buck and be assured that it will all talk well over UCIe. Or at least that’s the idea.

Computational storage is starting to become mainstream. Although everyone thought these devices would become general purpose compute engines, they seem to be having more success doing specialized (data) compute services like compression, transcoding, ransomware detection, etc. They are being adopted by companies that need to do that type of work.

Computational memory is becoming a thing. Yes, memristor, PCM, MRAM, etc. have always offered computational capabilities of their own, but now organizations are starting to add compute logic to DIMMs to carry out computations close to the memory. We wonder if this will find niche applications, just like computational storage did.

AI continues to drive storage and compute. But we are starting to see some IoT applications of AI as well, and Jim thinks it won’t take long for AI to be ubiquitous throughout IT, industry and everyday devices, each with special purpose AI models trained to perform very specific functions better and faster than general purpose algorithms could.

One thing that’s starting to happen is that SSD intelligence is moving out of the SSD (controller) and onto the host. We can see this with the use of Zoned Namespaces (ZNS), but OCP is also pushing flexible data placement (FDP), so hosts can provide hints as to where newly written data should be placed.

There was more to the show as well. It was interesting to see the continued investment in 3D NAND (1,000 layers by 2030), SSD capacity (256TB SSDs coming in a couple of years), and some emerging tech like memristor development boards and a 3D memory idea, though it’s a bit early to tell about that one.

Jim Handy, General Director, Objective Analysis

Jim Handy of Objective Analysis has over 35 years in the electronics industry including 20 years as a leading semiconductor and SSD industry analyst. Early in his career he held marketing and design positions at leading semiconductor suppliers including Intel, National Semiconductor, and Infineon.

A frequent presenter at trade shows, Mr. Handy is known for his technical depth, accurate forecasts, widespread industry presence and volume of publication.

He has written hundreds of market reports, articles for trade journals, and white papers, and is frequently interviewed and quoted in the electronics trade press and other media. 

He posts blogs at www.TheMemoryGuy.com and www.TheSSDguy.com.

146: GreyBeards talk K8s cloud storage with Brian Carmody, Field CTO, Volumez

We’ve known Brian Carmody (@initzero), Field CTO, Volumez, for over a decade now, and he’s always been very technically astute. He moved to Volumez earlier this year, once again joining a storage startup. Volumez is a cloud K8s storage provider with a new twist: K8s persistent volumes hosted on ephemeral storage.

Volumez currently works in public clouds (AWS and Azure (soft launch), with GCP coming soon) and is all about supplying high performing, enterprise class data services to K8s container apps. But it does this using transient storage (Azure ephemeral and AWS instance storage) and standard Linux. Hyperscalers offer transient storage almost as an afterthought with customer compute instances. Listen to the podcast to learn more.

It turns out that over the last decade or so, a lot of time and effort has been devoted to maturing Linux’s storage stack, and nowadays, with appropriate configuration, Linux can offer enterprise class data services and performance using direct attached NVMe SSDs. These services include thin provisioning, encryption, RAID/erasure coding, snapshots, etc., which, on top of NVMe SSDs, provide IOPS, bandwidth and latency performance that boggles the mind.

However, configuring Linux’s sophisticated, high performing data services is a hard problem to solve.

Enter Volumez. They have a SaaS control plane, client software and CSI drivers that will configure Linux with ephemeral storage to support any performance level and data service that can be obtained from NVMe SSDs.

Once installed on your K8s cluster, Volumez software profiles all ephemeral storage and supplies that information to their SaaS control plane. Once that’s done, your platform engineers can define specific storage class policies or profiles usable by DevOps to consume ephemeral storage.

These policies identify volume [IOPS, bandwidth, latency] x [read, write] performance specifications as well as data protection, resiliency and other data service requirements. DevOps engineers consume this storage using PVCs that call for these storage classes at some capacity. When it sees the PVC, the Volumez SaaS control plane carves out slices of ephemeral storage that can support the performance and other storage requirements defined in the storage class.
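
To illustrate the DevOps side of this flow, here’s a minimal sketch using the official Kubernetes Python client to request a PVC against a Volumez-backed storage class. The storage class name (“volumez-high-iops”), namespace and capacity are hypothetical; the real class names and parameters would come from whatever policies your platform engineers defined.

```python
# Minimal sketch: requesting a PVC against a (hypothetical) Volumez-backed
# storage class, using the official Kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()   # or config.load_incluster_config() when running inside a pod
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="db-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="volumez-high-iops",  # hypothetical class set up by platform engineering
        resources=client.V1ResourceRequirements(requests={"storage": "500Gi"}),
    ),
)

core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
# Per the flow above, the Volumez control plane would then carve out ephemeral-storage
# slices that satisfy the class's performance and resiliency policy.
```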

Once that’s done, their control plane next creates a network path from the compute instances with ephemeral storage to the worker nodes running container apps. After that it steps out of the picture and the container apps have a direct (network) data path to the storage they requested. Note, Volumez’s SaaS control plane is not in the container app storage data path at all.

Volumez supports multi-AZ data resiliency for PVCs. In this case, another mirror K8s cluster would reside in another AZ, with Volumez software active and similar if not equivalent ephemeral storage. Volumez will configure the container volume to mirror data between AZs. Similarly, if the policy requests erasure coding, Volumez SaaS software configures the ephemeral storage to provide erasure coding for that container volume.

Brian said they’ve done some amazing work to increase the speed of Linux snapshotting and restoring.

As noted above, the Volumez control plane SaaS software is outside the data path, so even if the K8s cluster running Volumez enabled storage loses access to the control plane, container apps continue to run and perform IO to their storage. This can continue until there’s a new PVC request that requires access to their control plane.

Ephemeral storage is accessed through special compute instances. These are not K8s worker nodes; they essentially act as a passthrough or network attachment between the worker nodes running apps with PVCs and the Volumez-configured Linux logical volumes hosted on slices of ephemeral storage.

Volumez is gaining customer traction with data platform clients, DBaaS companies, and some HPC environments. But just about anyone needing high performing data services for cloud K8s container apps should give Volumez a try.

I looked at AWS to see how they price instance store capacity and found that it’s not priced separately; rather, instance storage is bundled into the cost of EC2 compute instances.

Volumez is priced based on the number of media devices (instance/ephemeral stores) and the performance (IOPS) available. They also have different tiers depending on support level requirements (e.g., community, business hours, 24/7), which also offer different levels of enterprise security functionality.

Brian said they have a free tier that customers can easily signup for and try out by going to their web site (see link above), or if you would like a guided demo, just contact him directly.

Brian Carmody, Field CTO, Volumez

Brian Carmody is Field CTO at Volumez. Prior to joining Volumez, he served as Chief Technology Officer of data storage company Infinidat where he drove the company’s technology vision and strategy as it ramped from pre-revenue to market leadership.

Before joining Infinidat, Brian worked in the Systems and Technology Group at IBM where he held senior roles in product management and solutions engineering focusing on distributed storage system technologies.

Prior to IBM, Brian served as a technology executive at MTV Networks Viacom, and at Novus Consulting Group as a Principal in the Media & Entertainment and Banking practices.

142: GreyBeards talk scale-out, software defined storage with Bjorn Kolbeck, Co-Founder & CEO, Quobyte

Software defined storage is a pretty full segment of the market these days, so it’s surprising when a new entrant comes along. We saw a story on Quobyte in Blocks and Files and thought it would be great to talk with Bjorn Kolbeck (LinkedIn), Co-Founder & CEO, Quobyte. Bjorn got his PhD in scale out storage and then went to work at Google on anything but storage. While there, he was amazed that Google’s vast infrastructure was managed by only a few people and thought this could and should be commercialized, and so Quobyte was born. Listen to the podcast to learn more.

Quobyte is a scale out file and object storage system with mirrored metadata, and data that is either 3-way mirrored or erasure coded (EC). The minimum cluster is 4 nodes (fault tolerant against a single node failure). Quobyte has current customers with ~250 nodes and ~20K clients accessing a storage cluster.

Although they support NFSv3 and NFSv4 for file (and object) access, their solution is typically deployed using host client and storage services software, accessing files via POSIX or objects via S3. Objects can also be accessed as files within the file system directories.

Host client software runs on Linux, Mac or Windows machines. Storage server software runs in user space on Linux systems, bare metal or under VMs. Quobyte also supports containerized storage server software for K8s, but their bare metal/VM storage server option doesn’t require containers.

Quobyte is also available in the GCP marketplace and can run in AWS, Azure and Oracle Cloud.

Their metadata service is a mirrored key-value store distributed across any number of (customer configured, I believe) storage nodes. Metadata resides on flash and distribution is designed to eliminate the metadata service as a performance bottleneck.

Their data services support (any number of) storage tiers. Storage policies determine how tiering is used for files, directories, objects, etc. For example, with 3 tiers (NVMe flash, SSD and disk), file data could first land on NVMe flash, but as it grows it gets moved off to SSD, and as it grows even more, it’s moved to disk. Tiering could also be triggered by time since last access.
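
As a rough illustration of the kind of size- and age-based tiering decision described here (the tier names and thresholds are hypothetical, not Quobyte’s actual policy syntax):

```python
# Illustrative tiering decision based on file size and time since last access.
# Tier names and thresholds are hypothetical, not Quobyte policy syntax.
import time
from typing import Optional

def pick_tier(size_bytes: int, last_access_epoch: float,
              now: Optional[float] = None) -> str:
    now = time.time() if now is None else now
    idle_days = (now - last_access_epoch) / 86_400
    if size_bytes < (1 << 30) and idle_days < 7:         # small and recently accessed
        return "nvme-flash"
    if size_bytes < 100 * (1 << 30) and idle_days < 30:  # mid-sized, still warm
        return "ssd"
    return "disk"                                        # large or cold data

print(pick_tier(200 << 20, time.time()))  # a small, hot file -> nvme-flash
```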

Bjorn said anything in file system metadata could be used to trigger data movement across tiers. Each tier could be defined with different data protection policies, like mirroring or EC 8+3.
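
For a sense of why one might pick EC 8+3 over 3-way mirroring on the colder tiers, here’s the raw-to-usable capacity arithmetic:

```python
# Usable fraction of raw capacity: 3-way mirroring vs. erasure coding 8+3.
def usable_fraction(data_units: int, total_units: int) -> float:
    return data_units / total_units

mirror_3way = usable_fraction(1, 3)   # one usable copy out of three
ec_8_3 = usable_fraction(8, 11)       # 8 data units + 3 parity units

print(f"3-way mirror: {mirror_3way:.0%} usable; EC 8+3: {ec_8_3:.0%} usable")
# 3-way mirror: 33% usable (tolerates 2 failures); EC 8+3: 73% usable (tolerates 3),
# so EC more than doubles usable capacity per raw TB at comparable resiliency.
```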

Backend storage is split up into Volumes. They also support thinly provisioned volumes for file creation.

It’s unclear how tiering and thin provisioning apply to objects, with their much richer metadata options, but as objects can be mapped to files, we suppose that anything in an object’s file metadata could conceivably be used to trigger tiering, at a bare minimum.

As for security, 

  1. Quobyte supports end-to-end data encryption. This is done once and the customer owns the keys. They do support external key servers. I believe this is another option enabled by file based policy management. It seems different files can have different encryption keys.
  2. Quobyte supports TLS. Depending on customer requirements, data may go across open networks, and this is where TLS could very well be used. Quobyte also supports X.509 certificates for user, device and system authentication.
  3. Quobyte supports file access controls. They support a subset of Windows capabilities but have full support for Linux and Mac access controls.

Quobyte also supports two forms of cluster-to-cluster replication: one is event driven, where an event (e.g., file close) triggers data replication, and the other is time driven (e.g., every 5 minutes). Both are asynchronous.

Quobyte was designed from the start to be completely API driven. But they do support a CLI and a GUI for those customers that want them.

They have a Free (forever) edition, a downloadable version of the software without 24/7 support and minus some enterprise capabilities (think encryption). This is gated at 150TB of disk/30TB of flash, with a limited number of clients and volumes.

The Infrastructure edition is their full featured solution with 24/7 enterprise support. It comes with a yearly service fee, priced by capacity with volume discounts.

Bjorn Kolbeck, Co-Founder & CEO, Quobyte

Bjorn Kolbeck, Co-Founder and CEO of Quobyte attended the Technical University of Berlin and Humboldt University of Berlin.

His PhD thesis dealt with fault-tolerant replication, but he gained several years’ experience in distributed and storage systems while developing the distributed research file system XtreemFS at the Zuse Institute Berlin.

He then spent time at Google working as a Software Engineer before he and fellow Co-Founder Felix Hupfeld decided to combine the innovative research from XtreemFS and the operations experience from Google to build a highly reliable and scalable enterprise-grade storage system, now known as Quobyte.