136: Flash Memory Summit 2022 wrap-up with Tom Coughlin, President, Coughlin Assoc.

We have known Tom Coughlin (@thomascoughlin), President, Coughlin Associates for a very long time now. He’s been an industry heavyweight almost as long as Ray (maybe even longer). Tom has always been very active in storage media, storage drives, storage systems and memory, as well as in the semiconductor space. All this made him a natural to serve as Program Chair at Flash Memory Summit (FMS) 2022, so it’s great to have him on the show to talk about the conference.

Just prior to the show, Micron announced that they had achieved 232 layer 3D NAND (in sampling, methinks), which would be a major step on the roadmap to higher density NAND. Micron was not at the show, but held an event at Levi’s Stadium, not far from the conference center.

During a keynote, SK Hynix announced they had achieved 238 layer NAND, just exceeding Micron’s layer count. Other vendors at the show promised more layers as well, but also discussed ways other than layer count to scale capacity, such as shrinking holes, moving logic, logical (more bits/cell) scaling, etc. PLC (5 bits/cell) was discussed and at least one vendor mentioned 6LC (not sure there’s a name yet, but HxLC maybe?). Just about any 3D NAND is capable of logical scaling in bits/cell. So 200+ layers will mean higher capacity SSDs over time.
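To put rough numbers on layer and bits/cell scaling, here’s a back-of-the-envelope sketch in Python. The 176-layer TLC baseline and the simple scaling assumption are mine for illustration; real die capacities also depend on die area, hole pitch and array efficiency, which this ignores.

```python
# Back-of-the-envelope 3D NAND die capacity scaling (illustrative numbers only).
# Assumes capacity scales roughly with layer count and bits/cell; real dies
# also differ in die area, hole pitch and array efficiency.

def relative_density(layers, bits_per_cell, base_layers=176, base_bits=3):
    """Density relative to an assumed 176-layer TLC baseline."""
    return (layers / base_layers) * (bits_per_cell / base_bits)

for layers, bits, label in [(176, 3, "176L TLC (baseline)"),
                            (232, 3, "232L TLC"),
                            (238, 4, "238L QLC"),
                            (238, 5, "238L PLC")]:
    print(f"{label:20s} ~{relative_density(layers, bits):.2f}x baseline density")
```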

The FMS conference seems to be expanding beyond Flash into more storage technologies as well as memory systems. In fact they had a session on DNA storage at the show.

In addition, there was a lot of talk at FMS 2022 about CXL, the new shared memory standard which supports shared memory over PCIe. PCIe is becoming a near universal connection protocol and is being used for 2D scaling of chips as a chip-to-chip interconnect, as well as for distributed storage and shared memory interconnects.

The CXL vision is that servers will still have DDR DRAM memory, but they can also share external memory systems. With shared memory systems in place, memory could be pooled and aggregated into one large repository, which could then be carved up and parceled out to servers to support the workload du jour. And once those workloads are done, it could be re-carved for the next workload to come. Almost like network attached storage, only in this world it’s network attached memory.
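To make the pooling idea concrete, here’s a toy sketch of carving memory out of a shared pool and handing it back. The class and method names are made up for illustration; real CXL memory pooling is handled by a fabric manager in hardware/firmware, not application code like this.

```python
# Toy sketch of the CXL memory-pooling idea: one big pool, carved up per
# workload and reclaimed when the workload finishes. Names are hypothetical.

class MemoryPool:
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.allocations = {}          # server -> GB carved out of the pool

    def carve_out(self, server, gb):
        free = self.capacity_gb - sum(self.allocations.values())
        if gb > free:
            raise MemoryError(f"only {free} GB free in pool")
        self.allocations[server] = self.allocations.get(server, 0) + gb

    def release(self, server):
        self.allocations.pop(server, None)   # workload done, ready to re-carve

pool = MemoryPool(capacity_gb=4096)          # 4 TB shared pool
pool.carve_out("server-a", 1024)             # analytics workload du jour
pool.carve_out("server-b", 512)
pool.release("server-a")                     # freed for the next workload
```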

Tom mentioned that CXL is starting to adopt other memory standards, such as the Open Memory Interface (OMI), which has also been going on for a while now.

Moreover, CXL can support a memory hierarchy, which includes different speed memories such as DRAM, SCM, and SSDs. If the memory system has enough smarts to keep highly active data in the highest speed devices, an auto-tiering, shared memory pool could provide substantial capacities (10s-100s of TB) of memory at a much reduced cost. This sounds a lot like what was promised by Optane.
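Here’s a minimal sketch of the auto-tiering idea: recently touched pages stay in the fast (DRAM) tier, and the coldest pages get demoted to a slower SCM/SSD tier. Purely illustrative; real tiering engines track access heat and move data asynchronously in hardware or firmware.

```python
# Minimal auto-tiering sketch: hot pages live in DRAM, cold pages get demoted
# to a slower SCM/SSD tier. Illustrative only; not any vendor's implementation.

from collections import OrderedDict

class TieredMemory:
    def __init__(self, dram_pages):
        self.dram = OrderedDict()      # page -> data, ordered by recency
        self.slow_tier = {}            # SCM/SSD backing tier
        self.dram_pages = dram_pages

    def access(self, page):
        if page in self.dram:
            self.dram.move_to_end(page)            # still hot, keep in DRAM
            return self.dram[page]
        data = self.slow_tier.pop(page, b"")       # promote from the slow tier
        self.dram[page] = data
        if len(self.dram) > self.dram_pages:       # demote the coldest page
            cold, cold_data = self.dram.popitem(last=False)
            self.slow_tier[cold] = cold_data
        return data

mem = TieredMemory(dram_pages=2)
for p in ["a", "b", "c", "a"]:
    mem.access(p)
# after this, "c" and "a" are in DRAM and "b" sits in the slow tier
```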

Another topic at the show was Software Enabled/Defined Flash. There are a few enterprise storage vendors (e.g., IBM, Pure Storage and Hitachi) that design their own proprietary flash devices, but with SSD vendors coming out with software enabled flash, this should allow anyone to do something similar. Much more to come on this. Presumably, the hyper-scalers are driving this but having software enabled flash should benefit the entire IT industry.

The elephant in the room at FMS was Intel’s winding down of Optane. There were a couple of NAND/SSD vendors talking about their “almost” storage class memory, using SLC and other NAND tricks to provide Optane-like performance/endurance with NAND storage.

Keith mentioned a YouTube clip he saw where somebody talked about a Radeon Pro SSG (an AMD GPU that had M.2 SSDs attached to it) and tried to show how it improved performance for some workloads (mostly 8K video using native SSG APIs). He replaced the old M.2 SSDs with newer, higher capacity ones, which increased the memory, but it still had many inefficiencies and was much slower than HBM2 memory or VRAM. Keith thought this had some potential, seeing as how in-memory databases seriously increase performance, but as far as I could see the SSG and its modded brethren died before reaching that potential.

As part of the NAND scaling discussion, Tom said one vendor (I believe Samsung) mentioned that by 2030, with die stacking and other tricks, they will be selling an SSD with 1PB of storage behind it. Can’t wait to see that.

By the way, if you are an IEEE member and are based in the USA, Tom is running for IEEE USA president this year, so please vote for him. It would be nice having a storage person in charge at IEEE.

Thomas Coughlin, President Coughlin Associates

Tom Coughlin, President, Coughlin Associates is a digital storage analyst and business and technology consultant. He has over 40 years in the data storage industry with engineering and senior management positions at several companies. Coughlin Associates consults, publishes books and market and technology reports (including The Media and Entertainment Storage Report and an Emerging Memory Report), and puts on digital storage-oriented events.

He is a regular storage and memory contributor for forbes.com and M&E organization websites. He is an IEEE Fellow, Past-President of IEEE-USA, Past Director of IEEE Region 6 and Past Chair of the Santa Clara Valley IEEE Section, Chair of the Consultants Network of Silicon Valley and is also active with SNIA and SMPTE.

For more information on Tom Coughlin and his publications and activities go to

135: Greybeard(s) talk file and object challenges with Theresa Miller & David Jayanathan, Cohesity


I’ve known Theresa Miller, Director of Technology Advocacy Group at Cohesity, for many years now, and just met David Jayanathan (DJ), Cohesity Solutions Architect, during the podcast. Theresa could easily qualify as an old timer, if she wished, and DJ was very knowledgeable about traditional file and object storage.

We had a wide ranging discussion covering many of the challenges present in today’s file and object storage solutions. Listen to the podcast to learn more.

IT is becoming more distributed. Partly this is due to moving to the cloud, but now it’s moving to multiple clouds, and on prem has never really gone away. Further, the need for IT to support a remote workforce is forcing data, and the systems that use it, to move as well.

Customers need storage that can reside anywhere. Their data must be able to migrate from on prem to cloud(s) and back again. Traditional storage may be able to migrate from one location to a select few others, or replicate to another location (with the same storage systems present), but migration to and from the cloud is just not easy enough.

Moreover, traditional storage management has not kept up with this widely dispersed data world we live in. With traditional storage, customers may require different products to manage their storage depending on where data resides.

Yes, having storage that performs, provides data access, resilience and integrity is important, but that alone is just not enough anymore.

And to top that all off, the issues surrounding data security today have become just too complex for traditional storage to solve alone anymore. One needs storage, data protection and ransomware scanning/detection/protection that operate together, as one solution, to deal with IT security in today’s world.

Ransomware has rapidly become the critical piece of this storage puzzle needing to be addressed. It’s a significant burden on every IT organization today. Some organizations are getting hit every day, while others are hit even more frequently. Traditional storage has very limited capabilities, outside of snapshots and replication, to deal with this ever increasing threat.

To defeat ransomware, data needs to be vaulted, to an immutable, air gapped repository, whether that be in the cloud or elsewhere. Such vaulting needs to be policy driven and integrated with data protection cycles to be recoverable.

Furthermore, any ransomware recovery needs to be quick, easy, AND securely controlled. RBAC (role-based access control) can help but may not suffice for some organizations. For these environments, multiple admins may need to approve a ransomware recovery, which will wipe out all current data by restoring a good, vaulted copy of the organization’s data.
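As a rough illustration of that multi-admin control, here’s a sketch of a quorum gate in front of a destructive vault restore. This is a generic pattern, not how Cohesity (or any particular vendor) implements it; all names are hypothetical.

```python
# Sketch of a multi-admin approval gate: a destructive recovery only proceeds
# once a quorum of distinct admins have signed off. Generic illustration only.

def approve_restore(approvals, required=2):
    """approvals: collection of admin IDs that have approved this recovery."""
    return len(set(approvals)) >= required

def start_recovery(vault_copy, approvals, required=2):
    if not approve_restore(approvals, required):
        raise PermissionError("recovery needs more admin approvals")
    print(f"restoring immutable vault copy {vault_copy!r}, overwriting current data")

start_recovery("2022-08-01T02:00Z", approvals={"admin-1", "admin-2"})
```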

Edge and IoT systems also need data storage. How much may depend on where the data is being processed/pre-processed in the IoT system. But, as these systems mature, they will have their own storage requirements which is yet another data location to be managed, protected, and secured.

Theresa and DJ mentioned Cohesity SmartFiles during our talk, which I hadn’t heard about. Turns out that SmartFiles is Cohesity’s file and object storage solution that uses the Cohesity storage cluster. Cohesity data protection and other data management solutions also use the cluster to store their data. Adding SmartFiles to the mix brings a more complete storage solution to support customer data needs.

We also discussed Helios, Cohesity’s next generation data platform that provides a control and management plane for all Cohesity products and services.

Theresa Miller, Director, Technology Advocacy Group, Cohesity

Theresa Miller is the Director, Technology Advocacy Group at Cohesity. She is an IT professional who has worked as a technical expert in IT for over 25 years and has her MBA.

She is uniquely industry recognized as a Microsoft MVP, Citrix CTP, and VMware vExpert.  Her areas of expertise include Cloud, Hybrid-cloud, Microsoft 365, VMware, and Citrix.

David Jayanathan, Solutions Architect, Cohesity

David Jayanathan is a Solutions Architect at Cohesity, currently working on SmartFiles. 

DJ is an IT professional who has specialized in all things related to enterprise storage and data protection for over 15 years.

134: GreyBeards talk (storage) standards with Dr. J Metz, SNIA Chair & Technical Director AMD

We have known Dr. J Metz (@drjmetz, blog), Chair of SNIA (Storage Networking Industry Association) BoD, for over a decade now and he has always been an intelligent industry evangelist. DrJ was elected Chair of SNIA BoD in 2020.

SNIA has been instrumental in the evolution of storage over the years, working to help define storage networking, storage form factors, storage protocols, etc. Over the years it’s been crucial to the high adoption of storage systems in the enterprise and still is. Listen to the podcast to learn more.

SNIA started out helping to define and foster storage networking before people even knew what it was. They were early proponents of plugfests to verify/validate compatibility of all the hardware, software and systems in a storage network solution.

One principle that SNIA has upheld, since the very beginning, is strict vendor and technology neutrality. SNIA goes out of its way to ensure that all their publications, media and technical working group (TWG) committees maintain strict vendor and technology neutrality.

The challenge with any evolving technology arena is that new capabilities come and go with a regular cadence and one cannot promote one without impacting another. Ditto for vendors, although vendors seem to stick around a bit longer.

One SNIA artifact that has stood the test of time well is the SNIA dictionary. It’s free to download, and free copies are available at every conference that SNIA attends. The dictionary covers just about every relevant acronym, buzzword and technology present in the storage networking industry today, as well as across its long history.

SNIA also presents and pushes the storage networking point of view at every technical alliance in the IT industry.

In addition, SNIA holds storage conferences around the world, as well as plugfests and  hackathons focused on the needs of the storage industry. Their Storage Developer Conference (SDC), coming up in September in the USA, is a highly technical conference specifically targeted at storage system developers. 

SDC presenters include many technology inventors driving the leading edge of storage (and memory, see below) industries. So, if you are developing storage systems, SDC is a must attend conference.

As for plugfests, SNIA has held FC storage networking plugfests over the years which have been instrumental in helping storage networking adoption.

We also talked about SNIA hackathons. Apparently a decade or so back, SNIA held a hackathon on SMB (the file protocol formerly known as CIFS) where most of the industry experts and partners working on Samba (the open source SMB implementation) and proprietary SMB software were present.

At the time, Jason was working for another company, developing an SMB implementation. While attending the hackathon, Jason found that he was able to develop 1-1 relationships with many of the lead SMB/Samba developers and was able to solve problems in days that would have taken months before.

SNIA also has technology alliances with just about every other standards body involved in IT infrastructure, software and hardware today. As an indicator of where they are headed, SNIA recently joined with CNCF (Cloud Native Computing Foundation) to push for better storage under K8s.

SNIA has TWGs focused on technological areas that impact storage access. One TWG that has been going on for a long time now is Swordfish, an extension to DMTF’s Redfish that focuses on managing storage.

Swordfish has struggled over the years to achieve industry adoption. We spent time discussing some of the issues with Swordfish, but honestly, IMHO, it may be too late to change course.

Given the recent SNIA alliance with CNCF, we started discussing the state of storage under K8s and containers. DrJ and Jason mentioned that storage access under K8s goes through so many layers of abstraction that IO performance is almost smothered in overhead. The thinking at SNIA is that we need to come up with a better API that bypasses all this software overhead to directly access the hardware.

SNIA’s been working on SDXI (Smart Data Accelerator Interface), a new hardware memory-to-memory, direct path protocol. Apparently, this is a new byte level (storage?) protocol for moving data between memories. I believe SDXI assumes that at least one memory device is shared. The other could be in a storage server, smartNIC, GPU, server, etc. If SDXI were running in your shared memory and server, one could use the API to strip away all of the software abstraction layers that have built up over the years and access shared memory at near hardware speeds.

DrJ mentioned NVMe as another protocol that strips away software abstractions to allow direct access to (storage) hardware. The performance of Optane and SSDs (and, it turns out, disks) was being smothered by the SCSI device protocols/abstractions that were the only way to talk to storage devices in the past. But NVM and NVMe came along, stripped all the non-essential abstractions and protocol overhead away, and all of a sudden sub-100 microsecond IOs were possible.

Dr. J Metz,  SNIA Chair & Technical Director AMD

J is the Chair of SNIA’s (Storage Networking Industry Association) Board of Directors and Technical Director for Systems Design for AMD, where he works to coordinate and lead strategy on various industry initiatives related to systems architecture. Recognized as a leading storage networking expert, J is an evangelist for all storage-related technology and has a unique ability to dissect and explain complex concepts and strategies. He is passionate about the inner workings and application of emerging technologies.

J has previously held roles in both startups and Fortune 100 companies as a Field CTO,  R&D Engineer, Solutions Architect, and Systems Engineer. He has been a leader in several key industry standards groups, sitting on the Board of Directors for the SNIA, Fibre Channel Industry Association (FCIA), and Non-Volatile Memory Express (NVMe). A popular blogger and active on Twitter, his areas of expertise include NVMe, SANs, Fibre Channel, and computational storage.

J is an entertaining presenter and prolific writer. He has won multiple awards as a speaker and author, writing over 300 articles and giving presentations and webinars attended by over 10,000 people. He earned his PhD from the University of Georgia.

133: GreyBeards talk trillion row databases/data lakes with Ocient CEO & Co-founder, Chris Gladwin

We saw a recent article in Blocks and Files (Storage facing trillion-row db apocalypse), about a couple of companies which were trying to deal with trillion row database queries without taking weeks to respond. One of those companies was Ocient (@Ocient), a Chicago startup, whose CEO and Co-Founder, Chris Gladwin, was an old friend from CleverSafe (now IBM Cloud Object Storage).

Chris and team have been busy creating a new way to perform data analytics on massive data lakes. It has a lot to do with extreme parallelism, high core counts, NVMe SSDs, and sophisticated network and compute flow control. Listen to the podcast to learn more.

The key to Ocient’s approach involves NVMe SSDs, which have become ubiquitous over the last couple of years and can be deployed to deal with large data problems. Another key to Ocient is multi-core CPUs, which again seem to be everywhere and, if anything, are almost doubling core counts with every new generation of CPU chip.

We let Chris wax a little too long on the SSD revolution in IOPS, especially as it pertains to random 4K reads. Put 20 or so NVMe SSDs in a server with dual 50 core CPU chips and you have one fast random IO machine.
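Some quick, hedged arithmetic shows why. The ~1M random 4K read IOPS per drive below is an assumed ballpark figure for recent NVMe SSDs, not a quoted spec:

```python
# Rough aggregate-IOPS arithmetic for the kind of box Chris described.
# Per-drive IOPS is an assumed ballpark, not a measured or quoted number.

drives = 20
iops_per_drive = 1_000_000            # assumed random 4K read IOPS per NVMe SSD
block = 4 * 1024                      # 4 KiB reads

total_iops = drives * iops_per_drive
bandwidth_gb_s = total_iops * block / 1e9
print(f"~{total_iops/1e6:.0f}M random 4K read IOPS, ~{bandwidth_gb_s:.0f} GB/s")
# -> roughly 20M IOPS and ~80 GB/s, which is why bus/network flow control matters
```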

Another key to Ocient is very sophisticated network and bus data flow management. With all this data, running any query involves consuming lots of data, all of which has to be brought into the CPUs. PCIe bandwidth helps, as do NVMe SSDs, but you still need to ensure that nothing gets bottlenecked moving all that data around a system/server.

Yet another key to Ocient is parallelism. With one 20-NVMe SSD server and two 50 core CPUs you’ve got a lot of capability, but when you are talking about trillion row databases you need more. So, in order to respond to queries in anything like a second or so, they throw a lot of these NVMe servers at the problem.

I asked how they split the data across all these servers and Chris mentioned that at the moment that’s part of their secret sauce and involves professional services.

Ocient supports full ANSI SQL queries against trillion row databases and replies to those queries in a matter of seconds. And we aren’t just talking about SQL selects; Ocient can do splits, joins and updates to this trillion row database at the same time as the SQL selects are going on. Chris mentioned that Ocient can be loading 100K JSON files each second, while still performing SQL queries in near real time against the trillion row database.

Ocient supports Reed-Solomon error correction on database data as well as data compression and encryption.

In addition to SQL queries, Chris mentioned that Ocient supports data load and transform activities. He said that most of this data is being generated from IoT applications and often needs to be cleaned up before it can be processed. Doing this in real time, while handling queries to the database is part of their secret sauce.

Chris said there are probably not that many organizations that have a need for trillion row databases. But ad auctions, telecom routers, and financial services already use trillion row databases, and they all want to be able to process queries on this data faster. Ocient is betting that there will be plenty more like this over time.

Ocient is available on AWS and GCP as a cloud service, can also be used in their own Ocient Cloud, or can be deployed on premises. Ocient services are billed on a per core pack (500 cores, I think) subscription model.

Chris Gladwin, CEO and Co-founder, Ocient

Chris is the CEO and Co-Founder of Ocient whose mission is to provide the leading platform the world uses to transform, store, and analyze its largest datasets.

In 2004, Chris founded Cleversafe which became the largest and most strategic object storage vendor in the world (according to IDC.)  He raised $100M and then led the company to over a $1.3B exit in 2015 when IBM acquired the company.  The technology Cleversafe created is used by most people in the U.S. every day and generated over 1,000 patents granted or filed, creating one of the ten most powerful patent portfolios in the world. 

Prior to Cleversafe, Chris was the Founding CEO of startups MusicNow and Cruise Technologies and led product strategy for Zenith Data Systems.  He started his career at Lockheed Martin as a database programmer and holds an engineering degree from MIT. 

132: GreyBeards talk fast embedded k-v stores with Speedb’s Co-Founder&CEO Adi Gelvan

We’ve been talking a lot about K8s of late, so we thought it was time to get back down to earth and spend some time with Adi Gelvan (@speedb_io), Co-founder and CEO of Speedb, an embedded key-value store and drop-in replacement for RocksDB that significantly improves on its IO performance for large metadata databases.

At Adi’s last job they were searching for a key-value store or database to manage the substantial metadata they needed. After looking at RocksDB, they found it had a number of performance problems, especially as you got up to lots of metadata. Speedb was specifically designed to address the problems they found. Listen to the podcast to learn more.

RocksDB is a key-value store engine that manages the metadata for just about every open source project in existence that uses metadata. RocksDB is a Facebook open source fork of Google’s LevelDB database.

The main issue with RocksDB is that when you have a lot of metadata (key-value pairs), RocksDB performance suffers from highly variable latency and write stalls.

Most RocksDB users are aware of these problems and turn to sharding the database to address them (by essentially shrinking the amount of metadata under management within a single node/instance).
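For reference, the sharding workaround usually looks something like this hash-partitioning sketch, where each shard would be its own RocksDB instance managing a slice of the key space. Illustrative only; not anyone’s production code.

```python
# Hash-sharding sketch: spread the metadata key space across several database
# instances so each one manages a smaller slice. Illustrative only.

import hashlib

def shard_for(key: bytes, num_shards: int) -> int:
    digest = hashlib.sha1(key).digest()
    return int.from_bytes(digest[:8], "big") % num_shards

shards = [dict() for _ in range(8)]            # stand-ins for 8 DB instances

def put(key: bytes, value: bytes):
    shards[shard_for(key, len(shards))][key] = value

put(b"volume:1234", b"metadata blob")
```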

Historically, key-value stores used B+-trees to store data. B+-trees are great for reading, but bad for writing. Namely, the B+-tree usually had to be rebalanced when entries were added and potentially when they were updated. This could cause a cascade of read-write IO throughout the tree, delaying the original IO.

Log Structured Merge trees (LSM-trees) were created to reduce the write problems while at the same time providing B+-tree speed for reading. Essentially, an LSM-tree is an in-memory sequence of (sometimes sorted) key-value pairs that can be written (destaged) to multiple sorted string table (SST) files on some backing store. A hierarchical index is maintained in memory to identify which SSTs hold which key-value data.

RocksDB uses LSM-tree in-memory data structures to buffer writes. When memory becomes full, the LSM tree can be destaged to backing store as one or more SST files. However, SSTs, when first written, aren’t necessarily in sorted key order, and they may contain duplicates of key-value entries that are already in other SSTs.

So earlier versions of SSTs will need to be read back in, compacted (duplicate key-value entries deleted), sorted and written back out. The earliest version of the SSTs is considered Level 0 (L0), the next (first level compacted and sorted) is considered L1, and this process can go on, generating L2 to Ln SSTs. We would call this garbage collection; the metadata world calls it compaction.

But each time an SST is written out, that’s another read of all the key-value pairs AND another write to storage. In SSDs we would call these repeated writes write amplification. It turns out that RocksDB can have up to a 30X write amplification for a key-value entry. This means that instead of being written just once or twice, it’s written (and reread) up to 30 times. This IO takes away bandwidth and processing power from normal metadata read and write activity, which impacts IO performance.
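To see where those extra writes come from, here’s a toy LSM model that buffers writes in a memtable, flushes to L0 SSTs, and compacts them into a single L1 SST, while counting bytes written to “storage” versus bytes the application wrote. It is nowhere near RocksDB’s real leveled compaction, but the write amplification ratio it prints illustrates the effect.

```python
# Toy LSM model: buffer writes in a memtable, flush to sorted L0 SSTs, then
# compact L0 into a single sorted, de-duplicated L1 SST. Comparing bytes
# written to "storage" against bytes the user wrote shows write amplification.
# Real RocksDB compaction is far more elaborate than this sketch.

memtable, l0_ssts, l1_sst = {}, [], []
user_bytes = device_bytes = 0
MEMTABLE_LIMIT = 4

def put(key, value):
    global user_bytes
    user_bytes += len(key) + len(value)
    memtable[key] = value
    if len(memtable) >= MEMTABLE_LIMIT:
        flush()

def flush():
    global device_bytes
    sst = sorted(memtable.items())                 # L0 SST, sorted on write-out
    device_bytes += sum(len(k) + len(v) for k, v in sst)
    l0_ssts.append(sst)
    memtable.clear()
    if len(l0_ssts) >= 2:
        compact()

def compact():
    global device_bytes, l1_sst
    merged = dict(l1_sst)                          # re-read everything so far
    for sst in l0_ssts:
        merged.update(sst)                         # drop duplicate keys
    l1_sst = sorted(merged.items())
    device_bytes += sum(len(k) + len(v) for k, v in l1_sst)   # rewrite as L1
    l0_ssts.clear()

for i in range(64):
    put(f"key{i % 16}".encode(), b"x" * 32)        # overwrites create duplicates
print(f"write amplification ~{device_bytes / user_bytes:.1f}x")
```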

As GreyBeards know, storage (and flash) garbage collection can lead to unpredictable latencies and system busy times. Intense garbage collection (for SSDs) can seemingly hold off or stall all other IO for some amount of time during this activity. This is the main reason why RocksDB has highly variable latencies and write stalls.

Garbage collection is not an issue when you have limited amounts of metadata entries (key-value pairs), but as you get more entries, ongoing garbage collection can become a serious impediment to performing IO. When we say “large metadata stores” we are talking 30GBs of metadata, with probably billions of key-value pair entries.

There appear to be two dimensions to (RocksDB) LSM-tree/SST file performance. One is the number of levels allowed and the other is the size of the SST files.

Speedb determined that two dimensions weren’t sufficient to solve RocksDB performance problems. And sharding the database seemed to be putting the burden on the customer to fix the issue. So Speedb restructured their LSM-trees and SSTs to create 3 or more dimensions to tune for database performance.

With Speedb’s restructured LSM-trees and SST files, they reduce write amplification for large metadata databases from 30X to 5X. That alone could easily increase system performance by a factor of 6.

Adi mentioned that for one cloud based customer, they were able to double performance with 1/4 the (cloud instance) server hardware, essentially providing an ~8X improvement in performance over RocksDB.

Adi also mentioned that they are targeting system developers with large metadata stores. Luckily, Speedb is a fully RocksDB-compatible replacement. This means developers should only take ~30 minutes to convert a system to use Speedb.

We also asked about pricing. Adi said there are two current pricing models: 1) OEMs pay a revenue share to use Speedb and 2) non-OEMs can license the product on a per node, per month basis. Given Speedb’s node efficiency over RocksDB, fewer nodes should be required to support the same performance for any given metadata store.

Adi also mentioned they are in the process of releasing an open source version of Speedb that incorporates some of the enterprise product. This way developers can try Speedb to see how it works, for free. It won’t be the complete product, but it’s better than native RocksDB.

Adi Gelvan, Co-Founder and CEO Speedb

Adi Gelvan is co-founder and CEO of Speedb, a data management startup that provides a drop-in replacement for the RocksDB embedded storage engine.

Adi is a former IT infrastructure manager with over two decades in management, commercialization and executive sales positions. Adi specializes in leading global software technology companies like Infinidat and SQream to outstanding growth.

Adi holds a double academic degree in mathematics & computer science.