136: Flash Memory Summit 2022 wrap-up with Tom Coughlin, President, Coughlin Assoc.

We have known Tom Coughlin (@thomascoughlin), President, Coughlin Associates for a very long time now. He’s been an industry heavyweight almost as long as Ray (maybe even longer). Tom has always been very active in storage media, storage drives, storage systems and memory, as well as the semiconductor space. All this made him a natural to serve as Program Chair at Flash Memory Summit (FMS) 2022, so it’s great to have him on the show to talk about the conference.

Just prior to the show, Micron announced that it had achieved 232-layer 3D NAND (in sampling, methinks), which would be a major step on the roadmap to higher density NAND. Micron was not at the show, but held an event at Levi’s Stadium, not far from the conference center.

During a keynote, SK Hynix announced they had achieved 238-layer NAND, just exceeding Micron’s layer count. Other vendors at the show promised more layers as well, but also discussed ways to scale capacity other than layer count, such as shrinking holes, moving logic, and logical scaling (more bits/cell). PLC (5 bits/cell) was discussed, and at least one vendor mentioned 6 bits/cell (not sure there’s a name for it yet, but HxLC maybe?). Just about any 3D NAND is capable of logical scaling in bits/cell; for example, going from TLC (3 bits/cell) to PLC (5 bits/cell) yields roughly 1.67X the capacity at the same layer count. So 200+ layers will mean higher capacity SSDs over time.

The FMS conference seems to be expanding beyond Flash into more storage technologies as well as memory systems. In fact they had a session on DNA storage at the show.

In addition, there was a lot of talk at FMS 2022 about CXL, the new standard that supports shared memory over PCIe. PCIe is becoming a near universal connection protocol; it is being used as a chip-to-chip interconnect for 2D scaling of chips as well as an interconnect for distributed storage and shared memory.

The CXL vision is that servers will still have local DDR DRAM but can also share external memory systems. With shared memory systems in place, memory could be pooled and aggregated into one large repository, which could then be carved up and parceled out to servers to support the workload du jour. And once those workloads are done, it could be re-carved for the next workloads to come. Almost like network attached storage, only in this world it’s network attached memory.
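
To make the pooling idea a bit more concrete, here’s a minimal sketch (a toy model only, not CXL itself; the class and server names are made up) of a pool manager that carves a shared memory repository into chunks, leases them to servers for a workload, and reclaims them when the workload is done.

```python
# Toy model of CXL-style memory pooling -- illustrative only, not the CXL spec.
class MemoryPool:
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.free_gb = capacity_gb
        self.leases = {}          # server -> GB currently leased

    def carve_out(self, server, gb):
        """Lease a chunk of pooled memory to a server for a workload."""
        if gb > self.free_gb:
            raise MemoryError(f"only {self.free_gb} GB left in the pool")
        self.free_gb -= gb
        self.leases[server] = self.leases.get(server, 0) + gb

    def release(self, server):
        """Workload done: return the server's memory to the pool."""
        self.free_gb += self.leases.pop(server, 0)

pool = MemoryPool(capacity_gb=4096)      # one big shared repository
pool.carve_out("server-a", 1024)         # today's analytics workload
pool.carve_out("server-b", 512)          # today's inference workload
pool.release("server-a")                 # done; re-carve for tomorrow's workload
```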

Tom mentioned that CXL is starting to adopt other memory standards, such as the Open Memory Interface (OMI), which has also been going on for a while now.

Moreover, CXL can support a memory hierarchy, which includes different-speed memories such as DRAM, SCM, and SSDs. If the memory system has enough smarts to keep highly active data in the highest speed devices, an auto-tiering, shared memory pool could provide substantial capacities (10s-100s of TB) of memory at a much reduced cost. This sounds a lot like what was promised by Optane.
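
As a rough illustration of the “enough smarts” part, here’s a minimal sketch of an auto-tiering policy, assuming a simple per-page access counter and just two tiers; it promotes the hottest pages to the fast tier and demotes pages that have cooled off. Real tiering engines are far more sophisticated.

```python
# Toy auto-tiering policy -- illustrative only.
from collections import Counter

FAST_TIER_PAGES = 4                              # e.g., fast-tier slots available
access_counts = Counter()                        # page -> recent access count
fast_tier, slow_tier = set(), set(range(100))    # all pages start in the slow tier

def record_access(page):
    access_counts[page] += 1

def rebalance():
    """Keep the most frequently accessed pages in the fast tier."""
    hottest = {p for p, _ in access_counts.most_common(FAST_TIER_PAGES)}
    for page in list(fast_tier - hottest):       # demote pages that cooled off
        fast_tier.discard(page)
        slow_tier.add(page)
    for page in hottest - fast_tier:             # promote pages that heated up
        slow_tier.discard(page)
        fast_tier.add(page)

for page in [3, 3, 3, 7, 7, 42, 3, 7]:
    record_access(page)
rebalance()
print(sorted(fast_tier))                         # the hottest pages now live in the fast tier
```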

Another topic at the show was Software Enabled/Defined Flash. There are a few enterprise storage vendors (e.g., IBM, Pure Storage and Hitachi) that design their own proprietary flash devices, but with SSD vendors coming out with software enabled flash, this should allow anyone to do something similar. Much more to come on this. Presumably, the hyper-scalers are driving this but having software enabled flash should benefit the entire IT industry.

The elephant in the room at FMS was Intel’s winding down of Optane. There were a couple of NAND/SSD vendors talking about their “almost” storage class memory, using SLC and other NAND tricks to provide Optane-like performance and endurance.

Keith mentioned a YouTube clip he saw where somebody talked about a Radeon Pro SSG (an AMD GPU that had M.2 SSDs attached to it) and tried to show how it improved performance for some workloads (mostly 8K video using native SSG APIs). He replaced the old M.2 SSDs with newer, higher capacity ones, which increased the memory, but the card still had many inefficiencies and was much slower than HBM2 memory or VRAM. Keith thought this had some potential, seeing as how in-memory databases seriously increase performance, but as far as I could see the SSG and its modded brethren died before reaching that potential.

As part of the NAND scaling discussion, Tom said one vendor (I believe Samsung) mentioned that by 2030, with die stacking and other tricks, they will be selling an SSD with 1PB of storage behind it. Can’t wait to see that.

By the way, if you are an IEEE member and are based in the USA, Tom is running for IEEE USA president this year, so please vote for him. It would be nice having a storage person in charge at IEEE.

Thomas Coughlin, President, Coughlin Associates

Tom Coughlin, President, Coughlin Associates is a digital storage analyst and business and technology consultant. He has over 40 years in the data storage industry with engineering and senior management positions at several companies. Coughlin Associates consults, publishes books and market and technology reports (including The Media and Entertainment Storage Report and an Emerging Memory Report), and puts on digital storage-oriented events.

He is a regular storage and memory contributor for forbes.com and M&E organization websites. He is an IEEE Fellow, Past-President of IEEE-USA, Past Director of IEEE Region 6 and Past Chair of the Santa Clara Valley IEEE Section, Chair of the Consultants Network of Silicon Valley and is also active with SNIA and SMPTE.

For more information on Tom Coughlin and his publications and activities go to

134: GreyBeards talk (storage) standards with Dr. J Metz, SNIA Chair & Technical Director AMD

We have known Dr. J Metz (@drjmetz, blog), Chair of SNIA (Storage Networking Industry Association) BoD, for over a decade now and he has always been an intelligent industry evangelist. DrJ was elected Chair of SNIA BoD in 2020.

SNIA has been instrumental in the evolution of storage, working to help define storage networking, storage form factors, storage protocols, etc. Over the years it’s been crucial to the broad adoption of storage systems in the enterprise and still is. Listen to the podcast to learn more.

SNIA started out helping to define and foster storage networking before people even knew what it was. They were early proponents of plugfests to verify/validate compatibility of all the hardware, software and systems in a storage network solution.

One principle that SNIA has upheld since the very beginning is strict vendor and technology neutrality. SNIA goes out of its way to ensure that all their publications, media and technical working groups (TWGs) maintain strict vendor and technology neutrality.

The challenge with any evolving technology arena is that new capabilities come and go with a regular cadence and one cannot promote one without impacting another. Ditto for vendors, although vendors seem to stick around a bit longer.

One SNIA artifact that has stood the test of time well is the SNIA dictionary. It’s free to download, and free copies are available at every conference that SNIA attends. The dictionary covers just about every relevant acronym, buzzword and technology present in the storage networking industry today, as well as across its long history.

SNIA also presents and pushes the storage networking point of view at every technical alliance in the IT industry.

In addition, SNIA holds storage conferences around the world, as well as plugfests and  hackathons focused on the needs of the storage industry. Their Storage Developer Conference (SDC), coming up in September in the USA, is a highly technical conference specifically targeted at storage system developers. 

SDC presenters include many technology inventors driving the leading edge of storage (and memory, see below) industries. So, if you are developing storage systems, SDC is a must attend conference.

As for plugfests, SNIA has held FC storage networking plugfests over the years which have been instrumental in helping storage networking adoption.

We also talked about SNIA hackathons. Apparently, a decade or so back, SNIA held a hackathon on SMB (the file protocol formerly known as CIFS) where most of the industry experts and partners working on Samba (the open source SMB implementation) and proprietary SMB software were present.

At the time, Jason was working for another company, developing an SMB protocol implementation. While attending the hackathon, Jason found that he was able to develop one-on-one relationships with many of the lead SMB/Samba developers and was able to solve problems in days that would have taken months before.

SNIA also has technology alliances with just about every other standards body involved in IT infrastructure, software and hardware today. As an indicator of where they are headed, SNIA recently joined with CNCF (Cloud Native Computing Foundation) to push for better storage under K8s.

SNIA has TWGs focused on technological areas that impact storage access. One TWG that has been going on for a long time now is Swordfish, an extension to DMTF Redfish that focuses on managing storage.
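
For a flavor of what Swordfish looks like in practice, here’s a minimal sketch using Python’s requests module. The management host and credentials are hypothetical, and exactly where the storage collection is linked varies between Redfish versions and Swordfish implementations; the sketch just assumes it hangs off the service root.

```python
# Minimal sketch of walking a Redfish/Swordfish service -- host, credentials and
# link layout are assumptions, not any particular vendor's implementation.
import requests

BASE = "https://mgmt.example.com"     # hypothetical management endpoint
AUTH = ("admin", "password")          # hypothetical credentials

# The Redfish service root always lives at /redfish/v1/
root = requests.get(f"{BASE}/redfish/v1/", auth=AUTH, verify=False).json()

# Swordfish extends Redfish's storage model; assume the storage collection
# is linked from the service root (its location can differ by implementation).
storage_path = root.get("Storage", {}).get("@odata.id", "/redfish/v1/Storage")
storage = requests.get(f"{BASE}{storage_path}", auth=AUTH, verify=False).json()

for member in storage.get("Members", []):
    print("Storage subsystem:", member["@odata.id"])  # drill into volumes, pools, etc.
```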

Swordfish has struggled over the years to achieve industry adoption. We spent time discussing some of the issues with Swordfish, but honestly, IMHO, it may be too late to change course.

Given the recent SNIA alliance with CNCF, we started discussing the state of storage under K8s and containers. DrJ and Jason mentioned that storage access under K8s goes through so many layers of abstraction that IO performance is almost smothered in overhead. The thinking at SNIA is that we need to come up with a better API that bypasses all this software overhead to directly access hardware.

SNIA’s been working on SDXI (Smart Data Accelerator Interface), a new hardware memory-to-memory, direct-path protocol. Apparently, this is a new byte-level (storage?) protocol for moving data between memories. I believe SDXI assumes that at least one memory device is shared. The other could be in a storage server, smartNIC, GPU, server, etc. If SDXI were running in your shared memory and server, one could use the API to strip away all of the software abstraction layers that have built up over the years and access shared memory at near hardware speeds.
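
To illustrate the descriptor-based, memory-to-memory idea, here’s a minimal toy sketch; to be clear, the class names, fields and behavior below are invented for illustration and are not the actual SDXI descriptor format or API. Software builds copy descriptors and hands them to a data mover that shuttles bytes between two memory regions with no storage stack in between.

```python
# Toy model of a descriptor-based memory-to-memory data mover.
# NOT the SDXI spec -- names, fields and behavior are illustrative only.
from dataclasses import dataclass

@dataclass
class CopyDescriptor:
    src: bytearray      # source memory region (could be shared/fabric-attached)
    src_off: int
    dst: bytearray      # destination memory region (e.g., local DRAM)
    dst_off: int
    length: int

def data_mover(queue):
    """Consume descriptors and move bytes directly -- no filesystem or block layer."""
    for d in queue:
        d.dst[d.dst_off:d.dst_off + d.length] = d.src[d.src_off:d.src_off + d.length]

shared_mem = bytearray(b"hello from the shared memory pool")
local_mem = bytearray(64)

queue = [CopyDescriptor(src=shared_mem, src_off=0,
                        dst=local_mem, dst_off=0, length=len(shared_mem))]
data_mover(queue)
print(local_mem[:len(shared_mem)].decode())   # "hello from the shared memory pool"
```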

DrJ mentioned NVMe as another protocol that strips away software abstractions to allow direct access to (storage) hardware. The performance of Optane and SSDs (and, it turns out, disks) was being smothered by the SCSI device protocols/abstractions that were the only way to talk to storage devices in the past. But NVM and NVMe came along, stripped all the non-essential abstractions and protocol overhead away, and all of a sudden sub-100 microsecond IOs were possible.

Dr. J Metz, SNIA Chair & Technical Director, AMD

J is the Chair of SNIA’s (Storage Networking Industry Association) Board of Directors and Technical Director for Systems Design at AMD, where he works to coordinate and lead strategy on various industry initiatives related to systems architecture. Recognized as a leading storage networking expert, J is an evangelist for all storage-related technology and has a unique ability to dissect and explain complex concepts and strategies. He is passionate about the inner workings and application of emerging technologies.

J has previously held roles in both startups and Fortune 100 companies as a Field CTO,  R&D Engineer, Solutions Architect, and Systems Engineer. He has been a leader in several key industry standards groups, sitting on the Board of Directors for the SNIA, Fibre Channel Industry Association (FCIA), and Non-Volatile Memory Express (NVMe). A popular blogger and active on Twitter, his areas of expertise include NVMe, SANs, Fibre Channel, and computational storage.

J is an entertaining presenter and prolific writer. He has won multiple awards as a speaker and author, writing over 300 articles and giving presentations and webinars attended by over 10,000 people. He earned his PhD from the University of Georgia.

129: GreyBeards talk composable infrastructure with GigaIO’s Matt Demas, Field CTO

We haven’t talked composable infrastructure in a while now but it’s been heating up lately. GigaIO has some interesting tech and I’ve been meaning to have them on the show but scheduling never seemed to work out. Finally, we managed to sync schedules and have Matt Demas, field CTO at GigaIO (@giga_io) on our show.

Also, please welcome Jason Collier (@bocanuts), a long-time friend, technical guru and innovator, to our show as another co-host. We used to have these crazy discussions in front of financial analysts where we disagreed completely on the direction of IT. We don’t do these anymore, probably because the complexities in this industry can be hard to grasp for some. From now on, Jason will be added to our gaggle of GreyBeard co-hosts.

GigaIO has taken a different route to composability than some other vendors we have talked with. For one, they seem inordinately focused on speed of access and reducing latencies. For another, they’re the only ones out there, to our knowledge, demonstrating how today’s technology can compose and share memory across servers, storage, GPUs and just about anything with DRAM hanging off a PCIe bus. Listen to the podcast to learn more.

GigaIO started out with pooling/composing memory across PCIe devices. Their current solution is built around a ToR (currently Gen4) PCIe switch with logic and a family of pooling appliances (JBoG [GPUs], JBoF [flash], JBoM [memory], …). They use their FabreX fabric to supply rack-scale composable infrastructure that can move (attach) PCIe componentry (GPUs, FPGAs, SSDs, etc.) to any server on the fabric to service workloads.

We spent an awfully long time talking about composing memory. I didn’t think this was currently available, at least not until the next version of CXL, but Matt said GigaIO, together with their partner MemVerge, is doing it today over FabreX.

We’ve talked with MemVerge before (see: 102: GreyBeards talk big memory … episode). But when last we met, MemVerge had a memory appliance that virtualized DRAM and Optane into an auto-tiering, dual-tier memory. Apparently, with GigaIO’s help they can now attach a third tier of memory to any server that needs it. I asked Matt what the extended DRAM response time to memory requests was, and he said ~300ns. And then he said that the next gen PCIe technology will take this down considerably.

Matt and Jason started talking about High Bandwidth Memory (HBM), which stacks synchronous DRAM (SDRAM) into a 3D package and is internal to GPUs, AI boards, HPC servers and some select CPUs. 2nd gen HBM (HBM2) silicon is capable of 256 GB/sec per package. Given this level of access and performance, Matt indicated that GigaIO is capable of sharing this memory across the fabric as well.

We then started talking about software and how users can control FabreX and their technology to compose infrastructure. Matt said GigaIO has no GUI but rather uses Redfish management, a fully RESTful interface and API. Redfish has been around for ~6 years now and has become the de facto standard for management of server infrastructure. GigaIO composable infrastructure support has been natively integrated into several standard cluster managers, for example, CIQ Singularity & Fuzzball, Bright Computing cluster managers and SLURM cluster scheduling. Matt also mentioned they are well plugged into OCP.
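
To give a feel for what Redfish-style composition looks like, here’s a minimal sketch that attaches a pooled GPU to a server by POSTing a compose request. The host, resource paths and field names below are hypothetical, for illustration only; they are not GigaIO’s actual FabreX API or schema.

```python
# Hypothetical Redfish-style composition call -- endpoint, paths and fields are
# illustrative assumptions, not GigaIO's actual API.
import requests

BASE = "https://fabrex.example.com"    # hypothetical fabric manager address
AUTH = ("admin", "password")           # hypothetical credentials

compose_request = {
    "TargetSystem": "/redfish/v1/Systems/server-12",    # server that needs a GPU
    "Resources": ["/redfish/v1/Chassis/jbog-1/GPUs/3"],  # pooled GPU to attach
}

resp = requests.post(f"{BASE}/redfish/v1/CompositionService/Actions/Compose",
                     json=compose_request, auth=AUTH, verify=False)
resp.raise_for_status()
print("compose request accepted:", resp.status_code)
```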

Composable infrastructure seems to have generated new interest with HPC customers that are deploying bucketfuls of expensive GPUs alongside their congregations of compute cores. Using GigaIO, HPC environments like these can go, overnight, from maybe 30% average GPU utilization to 70%. Roughly doubling utilization means the same work gets done with about half the GPUs, which can substantially reduce acquisition and operational costs for GPU infrastructure. One would think the cloud guys might be interested as well.

Matt Demas, Field CTO, GigaIO

Matt’s career spans two decades of experience in architecting innovative IT solutions, starting with the US Air Force. He has built federal, healthcare, and education-based vertical solutions at companies like Dell, where he was a Senior Solutions Architect. Immediately prior to joining GigaIO, he served as Field CTO at Liqid. 

Matt holds a Bachelor’s degree in Information Technology from American InterContinental University, and an MBA from Concordia University Austin.

122: GreyBeards talk big data archive with Floyd Christofferson, CEO StrongBox Data Solutions

The GreyBeards had a great discussion with Floyd Christofferson, CEO, StrongBox Data Solutions on their big data/HPC file and archive solution. Floyd is very knowledgeable about the problems of extremely large data repositories and has been around the HPC and other data intensive industries for decades.

StrongBox’s StrongLink solution offers a global namespace file system that virtualizes NFS, SMB, S3 and POSIX file environments and maps this to a software-only, multi-tier, multi-site data repository that can span onsite flash, disk, S3-compatible or Azure object, and LTFS tape library storage, as well as offsite versions of all the above tiers.

Typical StrongLink customers range from 10s to 100s of PB, ingesting or processing PBs a day. 200TB is a minimum StrongLink configuration, but Floyd said any shop with over 500TB has problems with data silos and other issues, even if they may not realize it yet. StrongLink manages data placement and movement throughout this hierarchy to better support data access and economical storage. In the process, StrongLink eliminates data silos caused by the limitations of NAS systems while providing the most economical placement of data to meet user performance requirements.


Floyd said that StrongLink first installs in the customer’s environment and then operates in the background to discover and ingest metadata from the customer’s primary file storage environment. At some point later, the customer reconfigures their end users’ share and mount points to the StrongLink servers, and it’s up and running.

The minimal StrongLink HA environment consists of 3 nodes. They use a NoSQL metadata database which is replicated and sharded across the nodes. It’s sharded for performance load balancing and fully replicated (2-way or 3-way) across all the StrongLink server nodes for HA.

The StrongLink nodes create a cluster, called a star in StrongBox vernacular. Multiple clusters onsite can be grouped together to form a StrongLink constellation, and multiple data center sites can be grouped together to form a StrongLink galaxy. Presumably, if you have a constellation or a galaxy, the same metadata is available to all the star clusters across all the sites.

They support any tape library and any NFS, SMB, S3 or Azure compatible object or file storage. StrongLink can move or copy data from one tier/cluster to another based on policies, AND the end users never see any difference in their workflow or mount/share points.

One challenge with typical tape archives is that they can make use of proprietary tape data formats which are not accessible outside those systems. StrongLink has gone with the completely open LTFS file format on tape, which is well documented and available to anyone.

Floyd also made a point of saying they don’t use any stubs or soft links to provide their data placement magic. They only use standard file metadata.

File data moves across the hierarchy based on policies or by request. One of the secrets to StrongLink’s success is all the work they have done to ensure that any data movement can occur at line-rate speeds. They heavily parallelize any data movement that’s required to support data placement, across as many servers as the customer wants to throw at it. StrongBox services will help right-size the customer deployment to support whatever data movement performance is required.
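
As a rough illustration of the parallel-movement idea (this is not StrongLink’s actual engine; the paths and worker counts are made up), here’s a minimal sketch that fans a batch of file copies out across a worker pool so aggregate throughput approaches what the links and devices can sustain.

```python
# Toy parallel data mover -- illustrative only, not StrongLink's implementation.
import shutil
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def move_one(src: Path, dst_dir: Path) -> Path:
    """Copy a single file to the destination tier and return the new path."""
    dst_dir.mkdir(parents=True, exist_ok=True)
    return Path(shutil.copy2(src, dst_dir / src.name))

def move_batch(files, dst_dir, workers=8):
    """Fan copies out across a worker pool to keep the pipes full."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda f: move_one(f, dst_dir), files))

# Hypothetical usage: stage a batch of files onto a faster tier.
staged = move_batch(Path("/archive/tier2").glob("*.dat"), Path("/fast/tier1"), workers=16)
```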

StrongLink supports up to 3-way replication of a customer’s data archives, that is, a primary archive plus additional replicas of the data.

Floyd mentioned a couple of big customers:

  • One autonomous automobile supplier was downloading 2PB of data from cars in the field, processing this data, and then moving it off their servers to get ready for the next day’s data load.
  • Another customer, a weather science research organization, had 150PB of data in an old tape archive. They brought in StrongLink to migrate all this data onto the LTFS tape format as well as to support their research activities, which entail staging a significant chunk of file data on research servers to do a climate run/simulation.

NASA, another StrongLink customer, operates slightly differently than the above, in that they have integrated StrongLink functionality directly into their applications by making use of StrongBox’s API.

StrongLink can work in three ways.

  • Using normal file access services, where StrongLink virtualizes your NFS, SMB, S3 or POSIX file environment. For this service, StrongLink is in the data path and you can use policy-based management to have data moved or staged as the need arises.
  • Using the StrongLink CLI to move or copy data from one tier to another. Many HPC customers use this approach through SLURM scripts or other orchestration solutions.
  • Using the StrongLink API to move or copy data from one tier to another. This requires application changes to take advantage of data placement (see the sketch below).
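
For a sense of what the API mode might look like from an application’s point of view, here’s a minimal sketch against a hypothetical REST endpoint; the URL, authentication, paths and field names are invented for illustration and are not StrongBox’s actual API.

```python
# Hypothetical data-movement request -- endpoint, token and fields are invented,
# not the actual StrongLink API.
import requests

STRONGLINK = "https://stronglink.example.com/api/v1"   # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <token>"}           # hypothetical auth token

resp = requests.post(f"{STRONGLINK}/jobs/copy", headers=HEADERS, json={
    "source": "/projects/climate/run-0421/",   # files to stage
    "target_tier": "fast-nvme",                 # destination tier/cluster
    "policy": "stage-for-compute",              # optional placement policy
})
resp.raise_for_status()
print("submitted data-movement job:", resp.json().get("id"))
```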

StrongBox customers can, of course, use all three modes of operation at the same time for their StrongLink data galaxy. StrongLink is billed by CPU/vCPU level and not for the amount of data customers throw into the archive. This gives customers a flat expense once StrongLink is deployed, at least until they decide to modify their server configuration.

Floyd Christofferson, CEO StrongBox Data Solutions

As a professional involved in content management and storage workflows for over 25 years, Floyd has focused on methods and technologies needed to manage massive volumes of data across many different storage types and use cases.

Prior to joining SBDS, Floyd worked with software and hardware companies in this space, including over 10 years at SGI, where he managed storage and data management products. In that role, he was part of the team that provided solutions used in some of the largest data environments around the world.

Floyd’s background includes work at CBS Television Distribution, where he helped implement file-based content management and syndicated content distribution strategies, and Pathfire (now ExtremeReach), where he led the team that developed and implemented a satellite-based IP-multicast content distribution platform that manages delivery of syndicated content to nearly 1,000 TV stations throughout the US.

Earlier in his career, he ran Potomac Television, a news syndication and production service in Washington DC, and Manhattan Center Studios, an audio, video, graphics, and performance facility in New York.

120: GreyBeards talk CEPH storage with Phil Straw, Co-Founder & CEO, SoftIron

GreyBeards talk universal CEPH storage solutions with Phil Straw (@SoftIronCEO), CEO of SoftIron. Phil’s been around IT and electronics technology for a long time and has gone from scuba diving electronics, to DARPA/DOD researcher, to networking, and is now doing storage. He’s a co-founder of the company and its former CTO. SoftIron makes hardware storage appliances for CEPH, an open source, software defined storage system.

CEPH storage includes file (CEPHFS, POSIX), object (S3) and block (RBD, the RADOS block device, via the kernel driver or librbd) services and has been out since 2006. CEPH storage also offers redundancy, mirroring, encryption, thin provisioning, snapshots, and a host of other storage options. CEPH is available as an open source solution, downloadable at ceph.io, but it’s also offered as a licensed option from Red Hat, SUSE and others. For SoftIron, it’s bundled into their HyperDrive storage appliances. Listen to the podcast to learn more.
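
For a taste of how applications can talk to CEPH directly, here’s a minimal sketch using the librados Python bindings to write and read an object. It assumes a running cluster, a readable ceph.conf, and an existing pool; the pool name "mypool" is just an example.

```python
# Minimal librados sketch -- assumes a running CEPH cluster, a readable
# /etc/ceph/ceph.conf, and an existing pool named "mypool" (example name).
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()

ioctx = cluster.open_ioctx("mypool")          # open an IO context on the pool
ioctx.write_full("greeting", b"hello ceph")   # store an object
print(ioctx.read("greeting"))                 # b'hello ceph'

ioctx.close()
cluster.shutdown()
```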

SoftIron uses the open source version of CEPH and incorporates it into their own HyperDrive storage appliances, purpose-built to support CEPH storage.

There are two challenges to using open source solutions:

  • Support is generally non-existent. Yes, the open source community behind the (CEPH) project supplies bug fixes and can possibly answer some questions, but this is not considered enterprise support, where customers require 7x24x365 support for a product.
  • Usability is typically abysmal. Yes, open source systems can do anything that anyone could possibly want (if not, code it yourself), but trying to figure out how to use any of that often requires a PhD or two.

SoftIron has taken both of these on to offer a commercial CEPH product.

Take support: SoftIron offers enterprise-level support that customers can contract for on their own, even if they don’t use SoftIron hardware. Phil said they often get kudos for their expert support of CEPH and have often been asked to offer this as a standalone CEPH service. Needless to say, their support of SoftIron appliances is also excellent.

As for ease of operations, SoftIron makes the HyperDrive Storage Manager appliance, which offers a standalone GUI that takes the PhD out of managing CEPH. Anything one can do with the CEPH CLI can be done with SoftIron’s Storage Manager. It’s also a very popular offering with SoftIron customers. Similar to SoftIron’s CEPH support above, customers are requesting that the Storage Manager be offered as a standalone solution for CEPH users as well.

HyperDrive hardware appliances are storage media boxes that offer extremely low-power storage for CEPH. Their appliances range from high density (120TB/1U) to high performance NVMe SSDs (26TB/1U) and just about everything in between. On their website, I count 8 different storage appliance offerings with various spinning disk, hybrid (disk+SSD), and SSD-only (SATA and NVMe) configurations.

SoftIron designs, develops and manufactures all their own appliance hardware. Manufacturing is entirely in the US, and design and development take place in the US and Europe only. This provides a secure provenance for HyperDrive appliances that other storage companies can only dream about. Defense, intelligence and other security conscious organizations/industries are increasingly concerned about where electronic systems come from and want assurances that there are no security compromises inside them. SoftIron puts this concern to rest.

Yes, they use CPUs, DRAM and other standardized chips, as well as storage media manufactured by others, but SoftIron has gone out of their way to source all of these other parts and media from secure, trusted suppliers.

Other major storage companies use storage servers, shelves and media that can come from anywhere, usually sourced from manufacturers all over the world.

Moreover, such off-the-shelf hardware usually comes with added components that increase cost and complexity, such as graphics memory/interfaces, cables, over-configured power supplies, etc., that aren’t required for storage. Phil mentioned that each HyperDrive appliance has been reduced to just what’s required to support CEPH storage.

Each appliance has a 6Tbps network that connects all the components, which means no cabling in the box. Also, each storage appliance has CPUs matched to its performance requirements: ARM cores for low performance appliances, AMD EPYC CPUs for high performance appliances. All HyperDrive appliances support wire-speed IO, i.e., if a box is configured to support 1GbE or 100GbE, it transfers data at that speed across all ports connected to it.

Because of their minimalist hardware design approach, HyperDrive appliances run much cooler and use less power than other storage appliances. They consume only about 100W, or 200W for high performance storage, per appliance, where most other storage systems come in at around 1500W or more.

In fact, SoftIron HyperDrive boxes run so cool that they don’t need fans for the CPUs; they just redirect airflow from the storage media over the CPUs. And running cooler improves the reliability of disk and SSD drives. Phil said they are seeing field reliability that is 2X better than these drives normally see.

They also offer a HyperDrive Storage Router that provides an NFS/SMB/iSCSI gateway to CEPH. With their Storage Router, customers using VMware, Hyper-V and other systems that depend on NFS/SMB/iSCSI for storage can just plug and play with SoftIron CEPH storage. With the Storage Router, the only storage interface HyperDrive appliances can’t support is FC.

Although we didn’t discuss this on the podcast, in addition to HyperDrive CEPH storage appliances, SoftIron also provides HyperCast, transcoding hardware designed for real-time transcoding of one or more video streams, and HyperSwitch networking hardware, which supplies a secure-provenance SONiC (Software for Open Networking in [the Azure] Cloud) SDN switch for 1GbE up to 100GbE networks.

Standing up PB of (CEPH) storage should always be this easy.

Phil Straw, Co-founder & CEO SoftIron

The technical visionary co-founder behind SoftIron, Phil Straw initially served as the company’s CTO before stepping into the role of CEO.

Previously Phil served as CEO of Heliox Technologies, co-founder and CTO of dotFX, VP of Engineering at Securify and worked in both technical and product roles at both Cisco and 3Com.

Phil holds a degree in Computer Science from UMIST.