155: GreyBeards SDC23 wrap-up podcast with Dr. J Metz, Technical Dir. of Systems Design at AMD and Chair of SNIA BoD

Dr. J Metz (@drjmetz, blog), Technical Director of Systems Design at AMD and Chair of SNIA BoD, has been on our show before discussing SNIA research directions. We decided this year to add an annual podcast to discuss highlights from their Storage Developers Conference 2023 (SDC23).

Dr. J is working at AMD to help raise their view from a pure components perspective to a systems perspective. At SNIA, on the other hand, we can see them moving beyond storage interface technology alone into memory (of all things) and truly long-term storage archive technologies.

SDC is SNIA’s main annual conference, which brings storage developers together with storage users to discuss all the technologies underpinning storing the data we all care so much about. Listen to the podcast to learn more.

SNIA is trying to get its arms around the trends impacting the IT industry today. These days, storage, compute and networking are all starting to morph into one another, and the boundary lines, always tenuous at best, seem to be disappearing.

Aside from the industry standards work SNIA has always been known for, they are also deeply involved in education. One of their more popular artifacts is the SNIA Dictionary (recently moved online only), which provides definitions for probably over 1,000 storage terms. But SDC also has a lot of tutorials and other educational sessions worthy of time and effort. And all SDC sessions will be available online, at some point. (Update 10/25/23: they are all available now at the Sessions | SDC 2023 website.)

SNIA also presented at SFD26 while SDC23 was going on. At SFD26, SNIA discussed DNA data storage (the DNA Data Storage Alliance is a recent SNIA technology affiliate) and the new Smart Data Accelerator Interface (SDXI), a software-defined interface for performing memory-to-memory DMA.

First up was DNA storage. The DNA team said that they can pretty much store and access GBs of data in DNA today without breaking a sweat, and are starting to consider how to scale that up to TBs of DNA storage. We’ve discussed DNA data storage before on GBoS podcasts (see: 108: GreyBeards talk DNA storage... )

The talk at SFD26 was pretty detailed. It turns out the DNA data storage team has had to re-invent a lot of standard storage technologies (catalogs/indexes, metadata, ECC, etc.) in order to support a DNA data soup of unstructured data.

For example, ECC for DNA segments (snippets) is needed to correctly store and retrieve DNA data segments. And these segments could potentially be replicated 1000s of times in a DNA storage cell. And all DNA data segments would be tagged with file-oriented metadata indicating (segment) address within file, file name or identifier, date created, etc.
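To make the segment mechanics concrete, here’s a minimal Python sketch, assuming a simple 2-bits-per-nucleotide encoding. The Segment fields and the byte-sum checksum (standing in for real ECC) are invented for illustration, not the DNA Data Storage Alliance’s actual scheme.

```
from dataclasses import dataclass

BASES = "ACGT"  # 2 bits per nucleotide

def bytes_to_dna(data: bytes) -> str:
    """Encode each byte as 4 nucleotides, 2 bits apiece."""
    return "".join(BASES[(b >> shift) & 0b11]
                   for b in data for shift in (6, 4, 2, 0))

def dna_to_bytes(seq: str) -> bytes:
    """Reverse the encoding: 4 nucleotides back into 1 byte."""
    vals = [BASES.index(c) for c in seq]
    return bytes((vals[i] << 6) | (vals[i + 1] << 4) |
                 (vals[i + 2] << 2) | vals[i + 3]
                 for i in range(0, len(vals), 4))

@dataclass
class Segment:
    file_id: str   # file name or identifier
    index: int     # (segment) address within the file
    created: str   # date created
    payload: str   # DNA-encoded data
    check: int     # toy checksum standing in for real ECC

def write_segment(file_id, index, created, data: bytes, copies=1000):
    """Synthesis yields thousands of identical molecules per segment."""
    seg = Segment(file_id, index, created, bytes_to_dna(data), sum(data) % 256)
    return [seg] * copies
```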

As far as what an application for DNA storage would look like, Dr. J mentioned write once, read VERY infrequently. It turns out that while making 1000s of copies of DNA data segments is straightforward, inexpensive and trivial, reading them is another matter entirely. And as was discussed at SFD26, reading DNA storage, as presently conceived, is destructive. (So maybe having lots of copies is a good and necessary idea.)

But the DNA gurus really have to come up with methods for indexing, searching, and writing/reading data quickly. Today’s disks have file systems that are self-defining. If you hand someone an HDD, it’s fairly straightforward to read information off of it and determine the file system used to create it. These days, with LTFS, the same could be said for LTO tape.

DNA is intended to be used to store data for 1000s of years. Researchers have retrieved intact DNA from a number of organisms that are over 50K years old. Retaining applications that can access, format and process data after 1,000 years is yet another serious problem someone will need to solve.

Next up was SDXI, a software-defined DMA solution that any application can use to move data from one memory to another without having to go through 20 abstraction layers to do it. SDXI is just about moving data between memory banks.
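For flavor, here’s a toy Python model of the descriptor-style interface SDXI standardizes: an application queues copy descriptors and a data mover drains them, with no driver or filesystem hops in between. The class and field names here are invented; the real descriptor format lives in the SDXI spec.

```
from dataclasses import dataclass

@dataclass
class CopyDescriptor:   # hypothetical fields, loosely modeled on SDXI
    src: int            # source offset in the (shared) memory space
    dst: int            # destination offset
    length: int
    done: bool = False

class ToyMover:
    """Stand-in for an SDXI-style data mover engine."""
    def __init__(self, memory: bytearray):
        self.memory = memory
        self.ring = []              # descriptor ring shared with "hardware"

    def submit(self, d: CopyDescriptor):
        self.ring.append(d)         # producer side: no syscalls, no driver hops

    def drain(self):
        """What the DMA engine would do asynchronously in hardware."""
        while self.ring:
            d = self.ring.pop(0)
            self.memory[d.dst:d.dst + d.length] = \
                self.memory[d.src:d.src + d.length]
            d.done = True
```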

Today, this is all within one system/server, but as CXL matures and more and more hardware starts supporting CXL 2.0 and 3.0, shared memory between servers will become more pervasive, all over a CXL memory interface.

Keith tried bringing it home to moving data between containers or VMs. All of that is possible today within the same memory, and sometime in the future it should work between shared memory and local memory as well.

Memory-to-memory transfers have to be done securely. It’s not as if accessing memory from some other process hasn’t been fraught with security exposures in the past. And Dr. J assured me that SDXI was built from the ground up with security considerations front and center.

To bring it all back home, SNIA has always been and always will be concerned with data, whether that data resides in storage, in memory or, god forbid, in transit somewhere over a network. Keith went as far as to say that the network was storage; I felt that was a step too far.

Dr. J Metz, Technical Director of Systems Design at AMD, Chair of SNIA BoD

J is the Chair of SNIA’s (Storage Networking Industry Association) Board of Directors and Technical Director for Systems Design for AMD, where he works to coordinate and lead strategy on various industry initiatives related to systems architecture. Recognized as a leading storage networking expert, J is an evangelist for all storage-related technology and has a unique ability to dissect and explain complex concepts and strategies. He is passionate about the inner workings and application of emerging technologies.

J has previously held roles in both startups and Fortune 100 companies as a Field CTO,  R&D Engineer, Solutions Architect, and Systems Engineer. He has been a leader in several key industry standards groups, sitting on the Board of Directors for the SNIA, Fibre Channel Industry Association (FCIA), and Non-Volatile Memory Express (NVMe). A popular blogger and active on Twitter, his areas of expertise include NVMe, SANs, Fibre Channel, and computational storage.

J is an entertaining presenter and prolific writer. He has won multiple awards as a speaker and author, writing over 300 articles and giving presentations and webinars attended by over 10,000 people. He earned his PhD from the University of Georgia.

141: GreyBeards annual 2022 wrap-up podcast

Well, it has been another year and time for our annual year-end wrap-up. Since Covid hit, every year has certainly been interesting. This year we saw the return of in-person conferences, which was a welcome change from the Covid lockdowns. We are very glad to start seeing everybody again.

From the tech standpoint, the big news this year was CXL. As everyone should recall, CXL is a new-ish hardware protocol, running over PCIe, that supports larger memory sitting out on the PCIe bus and, in the future, shared memory between servers. All this is to enable a new wave of memory-based computing. We spent probably half our time discussing CXL and its impact on IT.

The other major topic was the Cloud Native ecosystem. In the past all we talked about was K8s, but nowadays the ecosystem that surrounds it is almost as important as K8s itself. The final topic was a bit of a shock earlier this year, and yes, it was Broadcom’s acquisition of VMware. Jason and I spent our Explore podcast talking about it (see our 137: VMware Explore wrap-up). Keith has high hopes that the EU will shut it down, but the jury’s still out on that one. Listen to the podcast to learn more.

As for CXL, it turns out that AMD has just released full support for CXL hardware and protocols with their latest round of CPU chips. But the new AMD CPUs only support DDR5 memory (something about there’s only so much logic one can fit on a chip…), which means all those DDR4 DIMMs out in the wild need somewhere to land. CXL could supply a new lease on life for DDR4 DIMMs.

And it’s not just about shared memory or increased memory sizes. CXL can also provide a tiered memory hierarchy, with gobs of flash behind memory DIMMs (see: 136: FMS2022 wrap up…). So, now it’s no longer a TB or ten of server memory but potentially 100s of TBs. What this means for SAP HANA, AWS Aurora and other memory-heavy solutions has yet to play out.

Cloud Native won. We see this in the increasing adoption of containers and K8s in the enterprise, cloud and just about anywhere IT happens these days. But the ecosystem surrounding K8s is chaos.

Over time, many of these ecosystem solutions will die off, be purchased, or consolidated, but in the meantime, it’s entirely too confusing. Red Hat’s OpenShift is one answer and VMware’s Tanzu is another. And of course all the clouds have their own packaged K8s solutions. But just to cover their bets, everyone also supports native K8s and just about every software package that works with it. So, the K8s ecosystem is in a state of flux and may take time to become a stable set of tools usable by enterprise IT.

Finally, Broadcom’s acquisition of VMware has everyone up in arms. Customers are concerned the R&D juggernaut that VMware has been, since its very beginning, will be jettisoned in favor of profits. And HCI vendors that always felt Dell EMC had an unfair advantage will all look at Broadcom in a similar light.

Keith says there’s a major difference between how USA regulators view an acquisition and how EU regulators view one. According to Keith, the EU judges acquisitions on how they help or hurt the customer; USA regulators judge acquisitions on how they help or hurt the competition. We’ll have to wait and see how this all plays out for Broadcom-VMware.

On the other hand, speaking of competition, Nutanix seems to be feeling the heat as well. Rumors are it’s up for sale. Who will want it, and how the regulators view both of these acquisitions, may be an interesting story for 2023.

2023 looks to be another year of transition for enterprise IT. The cloud players all seem to be coming around to the view that they can’t be all things to all (IT) people. And the enterprise vendors are finally seeing some modicum of staying power in the face of a relentless push to the cloud. How this plays out over the next few years will be of major interest to everybody.

Happy New Year from the GreyBeards!

Keith Townsend, The CTO Advisor

Keith Townsend (@CTOAdvisor) is an IT thought leader who has written articles for many industry publications, interviewed many industry heavyweights, worked with Silicon Valley startups, and engineered cloud infrastructure for large government organizations. Keith is the co-founder of The CTO Advisor, blogs at Virtualized Geek, and can be found on LinkedIn.

Jason Collier, Principal Member of Technical Staff, AMD

Jason Collier (@bocanuts) is a long time friend, technical guru and innovator who has over 25 years of experience as a serial entrepreneur in technology. He was founder and CTO of Scale Computing and has been an innovator in the field of hyperconvergence and an expert in virtualization, data storage, networking, cloud computing, data centers, and edge computing for years. He’s on LinkedIn.

136: Flash Memory Summit 2022 wrap-up with Tom Coughlin, President, Coughlin Assoc.

We have known Tom Coughlin (@thomascoughlin), President, Coughlin Associates, for a very long time now. He’s been an industry heavyweight almost as long as Ray (maybe even longer). Tom has always been very active in storage media, storage drives, storage systems and memory, as well as in the semiconductor space. All this made him a natural to serve as Program Chair at Flash Memory Summit (FMS) 2022, so it’s great to have him on the show to talk about the conference.

Just prior to the show, Micron announced that they had achieved 232-layer 3D NAND (in sampling, methinks), which would be a major step on the roadmap to higher density NAND. Micron was not at the show, but held an event at Levi’s Stadium, not far from the conference center.

During a keynote, SK Hynix announced they had achieved 238-layer NAND, just exceeding Micron’s layer count. Other vendors at the show promised more layers as well, but also discussed ways other than layer count to scale capacity, such as shrinking holes, moving logic, and logical (more bits/cell) scaling. PLC (5 bits/cell) was discussed, and at least one vendor mentioned 6LC (not sure there’s a name yet, but HxLC maybe?). Just about any 3D NAND is capable of logical scaling in bits/cell. So 200+ layers will mean higher capacity SSDs over time.
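As a back-of-envelope illustration (my numbers, not any vendor’s), relative die capacity scales roughly with layers times bits per cell, ignoring lateral shrinks and die-area changes:

```
# Rough relative-capacity model: capacity ~ layers x bits/cell.
# The 176-layer TLC baseline is an arbitrary reference point.
def rel_capacity(layers: int, bits_per_cell: int,
                 base_layers: int = 176, base_bits: int = 3) -> float:
    return (layers * bits_per_cell) / (base_layers * base_bits)

print(f"{rel_capacity(232, 3):.2f}x")  # 232L TLC -> ~1.32x the baseline
print(f"{rel_capacity(238, 4):.2f}x")  # 238L QLC -> ~1.80x
print(f"{rel_capacity(238, 5):.2f}x")  # 238L PLC -> ~2.25x
```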

The FMS conference seems to be expanding beyond Flash into more storage technologies as well as memory systems. In fact they had a session on DNA storage at the show.

In addition, there was a lot of talk at FMS2022 about CXL, the new standard which supports shared memory over PCIe. PCIe is becoming a near universal connection protocol, being used as a chip-to-chip interconnect for 2D scaling of chips as well as a distributed storage and shared memory interconnect.

The CXL vision is that servers will still have DDR DRAM memory, but they can also share external memory systems. With shared memory systems in place, memory could be pooled and aggregated into one large repository, which could then be carved up and parceled out to servers to support the workload du jour. And once those workloads are done, it could be re-carved for the next workload to come. Almost like network attached storage, only in this world it’s network attached memory.
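A toy Python sketch of that carve-up/re-carve cycle (the names and GB granularity are invented; real pooling is orchestrated by CXL fabric managers, not application code):

```
class ToyMemoryPool:
    """One large repository, carved up and parceled out to servers."""
    def __init__(self, total_gb: int):
        self.free_gb = total_gb
        self.grants = {}                  # server -> GB granted

    def carve(self, server: str, gb: int):
        if gb > self.free_gb:
            raise MemoryError("pool exhausted")
        self.free_gb -= gb
        self.grants[server] = self.grants.get(server, 0) + gb

    def release(self, server: str):
        self.free_gb += self.grants.pop(server, 0)   # back to the pool

pool = ToyMemoryPool(total_gb=1024)
pool.carve("db-server-1", 256)    # workload du jour gets its slice
pool.release("db-server-1")       # done: re-carve for the next workload
```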

Tom mentioned that CXL is starting to adopt other memory standards, such as the Open Memory Interface (OMI), which has also been going on for a while now.

Moreover, CXL can support a memory hierarchy, which includes different speed memories such as DRAM, SCM, and SSDs. If the memory system has enough smarts to keep highly active data in the highest speed devices, an auto-tiering, shared memory pool could provide substantial capacities (10s-100s of TB) of memory at a much reduced cost. This sounds a lot like what was promised by Optane.
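Those “smarts” boil down to a promotion/demotion policy. A minimal LRU-style sketch in Python, with invented names, just to show the shape of it:

```
from collections import OrderedDict

class ToyTieredMemory:
    """Hot pages live in the fast tier (DRAM); cold pages spill to the
    slow tier (SCM/flash). Purely illustrative, not a real CXL API."""
    def __init__(self, fast_pages: int):
        self.fast = OrderedDict()     # page -> data, kept in LRU order
        self.slow = {}
        self.fast_pages = fast_pages

    def access(self, page, default=b""):
        if page in self.fast:
            self.fast.move_to_end(page)                     # refresh recency
        else:
            self.fast[page] = self.slow.pop(page, default)  # promote on touch
            if len(self.fast) > self.fast_pages:
                cold, data = self.fast.popitem(last=False)
                self.slow[cold] = data                      # demote the coldest
        return self.fast[page]
```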

Another topic at the show was Software Enabled/Defined Flash. There are a few enterprise storage vendors (e.g., IBM, Pure Storage and Hitachi) that design their own proprietary flash devices, but with SSD vendors coming out with software enabled flash, this should allow anyone to do something similar. Much more to come on this. Presumably, the hyper-scalers are driving this but having software enabled flash should benefit the entire IT industry.

The elephant in the room at FMS was Intel’s winding down of Optane. There were a couple of NAND/SSD vendors talking about their “almost” storage class memory, using SLC and other NAND tricks to provide Optane-like performance/endurance with NAND storage.

Keith mentioned a YouTube clip he saw where somebody talked about a Radeon Pro SSG (an AMD GPU that had M.2 SSDs attached to it) and tried to show how it improved performance for some workloads (mostly 8K video, using native SSG APIs). He replaced the old M.2 SSDs with newer, higher capacity ones, which increased the memory, but it still had many inefficiencies and was much slower than HBM2 memory or VRAM. Keith thought this had some potential, seeing as how in-memory databases seriously increase performance, but as far as I could see, the SSG and its modded brethren died before reaching that potential.

As part of the NAND scaling discussion, Tom said one vendor (I believe Samsung) mentioned that by 2030, with die stacking and other tricks, they will be selling an SSD with 1PB of storage behind it. Can’t wait to see that.

By the way, if you are an IEEE member and are based in the USA, Tom is running for IEEE USA president this year, so please vote for him. It would be nice having a storage person in charge at IEEE.

Thomas Coughlin, President Coughlin Associates

Tom Coughlin, President, Coughlin Associates is a digital storage analyst and business and technology consultant. He has over 40 years in the data storage industry with engineering and senior management positions at several companies. Coughlin Associates consults, publishes books and market and technology reports (including The Media and Entertainment Storage Report and an Emerging Memory Report), and puts on digital storage-oriented events.

He is a regular storage and memory contributor to forbes.com and M&E organization websites. He is an IEEE Fellow, Past-President of IEEE-USA, Past Director of IEEE Region 6, Past Chair of the Santa Clara Valley IEEE Section, Chair of the Consultants Network of Silicon Valley, and is also active with SNIA and SMPTE.

For more information on Tom Coughlin and his publications and activities go to

134: GreyBeards talk (storage) standards with Dr. J Metz, SNIA Chair & Technical Director AMD

We have known Dr. J Metz (@drjmetz, blog), Chair of SNIA (Storage Networking Industry Association) BoD, for over a decade now and he has always been an intelligent industry evangelist. DrJ was elected Chair of SNIA BoD in 2020.

SNIA has been instrumental in the evolution of storage over the years, working to help define storage networking, storage form factors, storage protocols, etc. It has been crucial to the high adoption of storage systems in the enterprise and still is. Listen to the podcast to learn more.

SNIA started out helping to define and foster storage networking before people even knew what it was. They were early proponents of plugfests to verify/validate compatibility of all the hardware, software and systems in a storage network solution.

One principle that SNIA has upheld since the very beginning is strict vendor and technology neutrality. SNIA goes out of its way to ensure that all their publications, media and technical working group (TWG) committees maintain strict vendor and technology neutrality.

The challenge with any evolving technology arena is that new capabilities come and go with a regular cadence and one cannot promote one without impacting another. Ditto for vendors, although vendors seem to stick around a bit longer.

One SNIA artifact that has stood the test of time well is the SNIA dictionary. It is free to download, and free copies are available at every conference SNIA attends. The dictionary covers just about every relevant acronym, buzzword and technology present in the storage networking industry today, as well as across its long history.

SNIA also presents and pushes the storage networking point of view at every technical alliance in the IT industry.

In addition, SNIA holds storage conferences around the world, as well as plugfests and  hackathons focused on the needs of the storage industry. Their Storage Developer Conference (SDC), coming up in September in the USA, is a highly technical conference specifically targeted at storage system developers. 

SDC presenters include many technology inventors driving the leading edge of storage (and memory, see below) industries. So, if you are developing storage systems, SDC is a must attend conference.

As for plugfests, SNIA has held FC storage networking plugfests over the years which have been instrumental in helping storage networking adoption.

We also talked about SNIA hackathons. Apparently, a decade or so back, SNIA held a hackathon on SMB (the file protocol formerly known as CIFS) where most of the industry experts and partners working on Samba (the open source SMB implementation) and proprietary SMB software were present.

At the time, Jason was working for another company, developing an SMB protocol implementation. While attending the hackathon, Jason found he was able to develop one-on-one relationships with many of the lead SMB/Samba developers and was able to solve problems in days that would have taken months before.

SNIA also has technology alliances with just about every other standards body involved in IT infrastructure, software and hardware today. As an indicator of where they are headed, SNIA recently joined with CNCF (Cloud Native Computing Foundation) to push for better storage under K8s.

SNIA has TWGs focused on technology areas that impact storage access. One TWG effort that has been going on for a long time now is Swordfish, an extension to DMTF’s Redfish that focuses on managing storage.
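Since Swordfish extends Redfish’s RESTful model, a management client is just HTTP plus JSON. Here’s a minimal Python sketch; the host and credentials are hypothetical, while the /redfish/v1 paths follow the published Redfish/Swordfish schemas:

```
import requests

BASE = "https://array.example.com"   # hypothetical Swordfish-capable array
s = requests.Session()
s.auth = ("admin", "password")       # made-up credentials
s.verify = False                     # lab only; use real certs in production

# Walk the systems collection and report each storage subsystem's health.
systems = s.get(f"{BASE}/redfish/v1/Systems").json()
for member in systems["Members"]:
    storage = s.get(f"{BASE}{member['@odata.id']}/Storage").json()
    for entry in storage["Members"]:
        res = s.get(f"{BASE}{entry['@odata.id']}").json()
        print(res["Id"], res.get("Status", {}).get("Health"))
```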

Swordfish has struggled over the years to achieve industry adoption. We spent time discussing some of the issues with Swordfish, but honestly, IMHO, it may be too late to change course.

Given the recent SNIA alliance with CNCF, we started discussing the state of storage under K8s and containers. DrJ and Jason mentioned that storage access under K8s goes through so many layers of abstraction that IO performance is almost smothered in overhead. The thinking at SNIA is that we need a better API that bypasses all this software overhead to directly access hardware.

SNIA’s been working on SDXI (Smart Data Accelerator Interface), a new hardware memory-to-memory, direct path protocol. Apparently, this is a new byte-level (storage?) protocol for moving data between memories. I believe SDXI assumes that at least one memory device is shared. The other could be in a storage server, smartNIC, GPU, server, etc. If SDXI were running in your shared memory and server, one could use the API to strip away all of the software abstraction layers that have built up over the years and access shared memory at near hardware speeds.

DrJ mentioned NVMe as another protocol that strips away software abstractions to allow direct access to (storage) hardware. The performance of Optane and SSDs (and, it turns out, disks) was being smothered by the SCSI device protocols/abstractions that were the only way to talk to storage devices in the past. But NVMe came along, stripped all the non-essential abstractions and protocol overhead away, and all of a sudden sub-100 microsecond IOs were possible.

Dr. J Metz, SNIA Chair & Technical Director, AMD

J is the Chair of SNIA’s (Storage Networking Industry Association) Board of Directors and Technical Director for Systems Design for AMD, where he works to coordinate and lead strategy on various industry initiatives related to systems architecture. Recognized as a leading storage networking expert, J is an evangelist for all storage-related technology and has a unique ability to dissect and explain complex concepts and strategies. He is passionate about the inner workings and application of emerging technologies.

J has previously held roles in both startups and Fortune 100 companies as a Field CTO,  R&D Engineer, Solutions Architect, and Systems Engineer. He has been a leader in several key industry standards groups, sitting on the Board of Directors for the SNIA, Fibre Channel Industry Association (FCIA), and Non-Volatile Memory Express (NVMe). A popular blogger and active on Twitter, his areas of expertise include NVMe, SANs, Fibre Channel, and computational storage.

J is an entertaining presenter and prolific writer. He has won multiple awards as a speaker and author, writing over 300 articles and giving presentations and webinars attended by over 10,000 people. He earned his PhD from the University of Georgia.

130: GreyBeards talk high-speed database access using Apache Arrow Flight, with James Duong and David Li

We had heard for a while now about Apache Arrow and Arrow Flight and their reputation for high performance with access speeds to match, and finally got a chance to hear what it was all about with James Duong, Co-Founder of Bit Quill Technologies/Senior Staff Developer at Dremio, and David Li (@lidavidm), Apache Arrow PMC member and software developer at Voltron Data.

First, Apache Arrow is an open source, in-memory, columnar data format (GitHub repo) that enables lightning fast access and processing of data. Apache Arrow Flight is a set of interfaces, protocols, and services that parallelizes access to load and unload Arrow data over the network, from storage to memory and back, very fast. Listen to the podcast to learn more.

Columnar databases are all the rage these days and have more or less taken over from row-oriented databases. With a row-based database, data is stored (and accessed) row by row. In a columnar database, data is stored in columns, i.e., all the data for one column is stored in sequence, and then the next column is stored in sequence. Columnar databases can be queried/processed faster than row databases (depending on whether you are accessing multiple columns per row or not). And columnar data should compress better, as all the data in a single column is of the same type.

Also, the fact that columns are contiguous in memory means that if you process one column at a time, CPU data caches work better, because they can grab a whole vector (a column’s worth of data) with one request.
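A toy Python contrast of the two layouts (the IO-log columns are made up):

```
# Row-oriented: each record's fields are stored together.
rows = [(1, "read", 4096), (2, "write", 8192), (3, "read", 512)]

# Column-oriented: each column is one contiguous array.
cols = {
    "io_id": [1, 2, 3],
    "op":    ["read", "write", "read"],
    "bytes": [4096, 8192, 512],
}

# A one-column scan touches only that column's contiguous values,
# so cache lines and vector loads carry nothing but useful data.
avg_cols = sum(cols["bytes"]) / len(cols["bytes"])

# The row layout drags every field through the cache to reach one.
avg_rows = sum(r[2] for r in rows) / len(rows)
```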

Arrow data is processed and accessed in record batches. These are 2D segments which represent all the columns for a sequence/set of rows. Record batches are also the unit of parallelism in Arrow and Arrow Flight: one Arrow client operating on a CPU thread/core/chip or server can be processing one record batch while another thread/core/CPU or server processes a different record batch.
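With the real pyarrow library, a record batch looks like this (the column names are made up):

```
import pyarrow as pa

# A record batch: all the columns for one run of rows.
batch = pa.record_batch(
    [pa.array([1, 2, 3]), pa.array(["read", "write", "read"])],
    names=["io_id", "op"],
)
print(batch.num_rows, batch.schema)

# A table is just a sequence of batches; each batch can be handed to a
# different thread, core, or server for processing.
table = pa.Table.from_batches([batch, batch])
print(table.num_rows)   # 6
```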

Arrow Flight (GitHub repo, RPC format doc) is an RPC framework that includes APIs, protocols, standards (for on-storage, on-wire and in-memory formats) and libraries used to transfer Arrow data and metadata (record batches) across the network. A typical system includes both Flight clients and Flight services.

Arrow Flight currently uses Google’s gRPC for data transfers. gRPC is an open source remote procedure call (RPC) framework that supports services within a data center, across data centers and out to the edge. Although Arrow Flight is currently implemented on top of gRPC, other network protocols will be supported in the future.

What makes Arrow Flight so fast is its ability to support parallel transfers. That is, customers can configure Arrow Flight clients across clusters of servers, with Arrow Flight services residing on one or more other servers. Any client can request metadata and record batches from any endpoint (Flight service) in the data center. And yes, Arrow data can be supplied from multiple endpoints by being mirrored/replicated. All data transfers can operate in parallel across all Flight clients and services, with no known bottleneck other than the network.
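A minimal sketch with pyarrow’s Flight client, assuming a hypothetical service at flight.example.com and a dataset named “sales”; a real client would fan the endpoints out across threads or servers rather than loop over them:

```
import pyarrow.flight as fl

client = fl.FlightClient("grpc://flight.example.com:8815")
info = client.get_flight_info(fl.FlightDescriptor.for_path("sales"))

# get_flight_info returns one endpoint per stream; each endpoint can be
# pulled in parallel, potentially from a different server (location).
for endpoint in info.endpoints:
    for location in endpoint.locations:
        reader = fl.FlightClient(location).do_get(endpoint.ticket)
        table = reader.read_all()    # record batches stream back over gRPC
```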

A single stream of Arrow Flight data was able to deliver 20GB/sec. The fact that you can have any (?) number of Arrow Flight data streams in operation at the same time makes that a very interesting number.

Also, Arrow data can be stored on or sourced from typical data lakes such as Azure Data Lake, AWS S3, Google Cloud Storage, etc.

Another advantage of Arrow Flight is its ability to use the same format on the wire and in storage. Normally, JDBC (and ODBC) have distinct on-storage and on-wire formats, requiring a format conversion (serialization) to move data from storage/memory onto the wire and another conversion (deserialization) to move data from the on-wire format back into the in-storage/memory format. Arrow Flight does away with serialization and deserialization altogether and uses the same format on the wire and in storage.
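You can see this with pyarrow’s IPC stream format, which is also what Flight puts on the wire:

```
import pyarrow as pa

batch = pa.record_batch([pa.array([1, 2, 3])], names=["x"])

# "Serialize": the IPC stream is just the Arrow memory layout plus framing.
sink = pa.BufferOutputStream()
with pa.ipc.new_stream(sink, batch.schema) as writer:
    writer.write_batch(batch)
buf = sink.getvalue()

# "Deserialize": reading is effectively zero-copy, since the wire bytes
# already match the in-memory format; nothing gets transcoded.
with pa.ipc.open_stream(buf) as reader:
    table = reader.read_all()
```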

Arrow Flight SQL allows Arrow processing of SQL database data. My understanding is that customers using non-Arrow databases such as Oracle, SQL Server, Postgres, etc. can use Arrow Flight SQL to get Arrow in-memory database processing/query execution for their data.

Arrow and Arrow Flight are primarily used for data analytics workloads, but Arrow also has a new execution engine, the Arrow Gandiva project, which enables vectorized processing of Arrow data. This is a special execution engine for Arrow that supports x86 cores with AVX instructions, (NVIDIA) GPUs, and FPGAs.

There’s also an open source package, Fletcher, used to create Arrow and Arrow Flight processing HDL so that customers can add Arrow data processing and Arrow Flight data transfer functionality to custom-built FPGAs.

One challenge with open source software is support for the problems/bugs that crop up. An active developer community helps, but enterprise customers require professional, on-call 7×24 (5×12?) support for all their critical (and most non-critical) software. Voltron Data (David’s company) provides paid support for Arrow Flight and Arrow data services.

The other major problem with open source software has been complexity of use. At the moment, the Arrow Flight team is very responsive in clarifying documentation and is trying to make things easier to use. But for now, Arrow Flight is mostly a set of APIs, libraries and connectors that end users can use to stand up Arrow Flight clients and servers and transfer Arrow data between them.

James Duong, Co-Founder Bit Quill Technologies & Sr. Staff Developer at Dremio

An Apache Arrow contributor, co-founder at Bit Quill Technologies, and contributor to Dremio Corporation projects, James Duong has worked with databases for over 15 years, from backend query engines to drivers and protocols. He’s worked with a variety of relational, big data, and cloud databases including Dremio, SQL Server, Redshift, and Hive.

Previously at Simba Technologies, James architected and built connectors for data sources, as well as designing the Simba Engine SDK for developing connectivity solutions for any data source.

Bit Quill Technologies, the company James helped co-found, builds back end software in the data and cloud space. Bit Quill has built a name for itself as a producer of high-quality software, a collaborative approach to design and development, and a love for good tech and happy people.

Balancing his passion for the data ecosystem with a young family, James occasionally steps away from it all to go hiking.

David Li, Apache Arrow PMC and software engineer at Voltron Data

David is a PMC member for Apache Arrow and a software engineer at Voltron Data (formerly known as Ursa Computing). Prior to that, he worked on data services and Apache Arrow at Two Sigma.

David holds an M.Eng. in Computer Science from Cornell University.