151: GreyBeards talk AI (ML) performance benchmarks with David Kanter, Exec. Dir. MLCommons

Ray’s known David Kanter (@TheKanter), Executive Director, MLCommons, for quite a while now and has been reporting on MLCommons MLPerf AI benchmark results for even longer. MLCommons releases new benchmark results each quarter, and this last week they released new Data Center Training (v3.0) and new Tiny Inferencing (v1.1) results. So, the GreyBeards thought it was time to get a view of what’s new in AI benchmarking and what’s coming later this year.

David’s been around the Bay Area startup community for a while now. He started at MLPerf early on as a technical guru working on submissions and other efforts, and worked his way up to being the Executive Director/CEO. The big news this week from MLCommons is that they have introduced a new training benchmark and updated an older one. The new one simulates training GPT-3, and they also updated their Recommendation Engine benchmark. Listen to the podcast to learn more.

MLCommons is an industry association focused on supplying reproducible, verifiable benchmarks for machine learning (ML) and AI, which they call MLPerf benchmarks. Their benchmark suite includes a number of different categories such as data center training, HPC training, data center inferencing, edge inferencing, mobile inferencing and finally tiny (IoT device) inferencing. David likes to say MLPerf benchmarks range from systems consuming megawatts (HPC, literally a supercomputer) down to microwatts (Tiny) solutions.

The challenge holding AI benchmarking back early on was that a few industry players had done their own thing, but there was no way to compare one to another. MLCommons was born out of that chaos and sought to create a benchmarking regimen that any industry player could use to submit AI work activity, and that would allow customers to compare their solution to any other submission on a representative sample of ML model training and inferencing activity.

MLCommons has both an Open and a Closed class of submissions. Closed class submissions must meet very strict criteria. These include using known open source AI models and data, hitting accuracy metrics for training and inferencing, and reporting a standard set of metrics for the benchmark. All of this is required to create a repeatable and verifiable submission.

Their Open class is a way for any industry participant to submit whatever model they would like, to whatever accuracy level they want, and it’s typically used to benchmark new hardware, software or AI models.

As mentioned above, MLCommons training benchmarks use accuracy specifications that must be achieved to have a valid submission. Benchmarks also have to be run 3 times. All submissions list hardware (CPUs and accelerators) and software (AI framework). And these configurations range from 0 accelerators (i.e., CPU only with no GPUs) to 1000’s of GPUs.

The new GPT-3 benchmark exercises a very complex AI model that, until recently, seemed unlikely ever to be benchmarked. But apparently the developers at MLCommons (and their industry partners) have been working on this for some time now. In this round of results there were 3 cloud submissions and 4 on-prem submissions for GPT-3 training.

GPT-3, -3.5 & -4 are all OpenAI solutions which power their ChatGPT text transformer Large Language Model (LLM). GPT-3 has 175B parameters and was trained on TBs of data covering web crawls, book crawls, official documentation, code, etc. OpenAI said, at GPT-3’s announcement, that it took over $10M and months to train.

The MLCommons GPT-3 benchmark is not a full training run of GPT-3 but uses a training checkpoint, trained on a subset of the data used for the original GPT-3 training. Checkpoints are used for long running jobs (training sessions, weather simulations, fusion energy simulations, etc.) and copy all internal state of a job/system while it’s running (ok, quiesced) at some interval (say every 8hrs, 24hrs, 48hrs, etc.), so that in case of a failure, one could just restart the activity from the last checkpoint rather than from the beginning.
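As an aside, here’s what checkpoint/restart looks like in a training loop, as a minimal sketch assuming PyTorch-style APIs (the toy model, file name, and intervals are all made up for illustration):

```python
import os
import torch

CKPT = "model_checkpoint.pt"                  # hypothetical checkpoint path
TOTAL_STEPS, CHECKPOINT_EVERY = 5000, 1000    # made-up numbers

model = torch.nn.Linear(8, 1)                 # toy model standing in for GPT-3
opt = torch.optim.SGD(model.parameters(), lr=0.01)

# On restart, resume from the last checkpoint instead of step 0.
start = 0
if os.path.exists(CKPT):
    state = torch.load(CKPT)
    model.load_state_dict(state["model"])
    opt.load_state_dict(state["opt"])
    start = state["step"] + 1

for step in range(start, TOTAL_STEPS):
    x = torch.randn(32, 8)                    # toy training batch
    loss = (model(x) - 1.0).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % CHECKPOINT_EVERY == 0:          # periodically copy all internal state
        torch.save({"model": model.state_dict(),
                    "opt": opt.state_dict(),
                    "step": step}, CKPT)
```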

The MLCommons GPT-3 checkpoint has been trained on a 10B token data set. The benchmark starts by loading this checkpoint and then trains on an even smaller subset of the GPT-3 data until it achieves the accuracy baseline.

Accuracy for text transformers is not as simple as other models (correct image classification, object identification, etc.) and uses “perplexity”. Hugging Face defines perplexity as “the exponentiated average negative log-likelihood of a sequence.”
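To make that definition concrete, here’s a tiny worked example (with made-up token probabilities) computing perplexity exactly as defined, the exponential of the mean negative log-likelihood per token:

```python
import math

# Hypothetical probabilities a model assigned to each token in a sequence:
token_probs = [0.25, 0.10, 0.50, 0.05]

# Perplexity = exp(average negative log-likelihood per token); lower is better.
nll = [-math.log(p) for p in token_probs]
perplexity = math.exp(sum(nll) / len(nll))
print(f"perplexity = {perplexity:.2f}")   # ~6.33 for these numbers
```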

The 4 on-premises submissions for GPT-3 ranged from 45 minutes (768 NVIDIA H100 GPUs) to 442 minutes (64 Habana Gaudi2 accelerators). The 3 cloud submissions all used NVIDIA H100 GPUs and ranged from 768 GPUs (@47 minutes to train) to 3584 GPUs (@11 min. to train).

Aside from data center training, MLCommons also released a new round of Tiny (IoT) inferencing benchmarks. These generally use smaller ARM processors and no GPUs, with much smaller AI models for tasks such as keyword spotting (“Hey Siri”), visual wake words (door opening), image classification, etc.

We ended our discussion with me asking David why there was no storage-oriented MLCommons benchmark. David said creating a storage benchmark for AI is much different than creating inferencing or training benchmarks. But MLCommons has taken this on and now has an MLCommons storage benchmark series that uses emulated accelerators.

At the moment, anyone with a storage system can submit MLCommons storage benchmark results. After some time, MLCommons will only allow submissions from member companies, but early on it’s open to all.

For their storage benchmarks, rather than using accuracy as the benchmark criteria, they measure whether storage can keep the (emulated) accelerators X% busy. This way the storage component of MLOps activity can be isolated from the training and inferencing itself.
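To illustrate the idea (this is a toy sketch, not the actual MLPerf Storage tool): imagine an emulated accelerator that “computes” on each batch for a fixed time while a reader pulls batches from the storage under test; utilization is simply the fraction of wall-clock time the accelerator stays busy. The /dev/urandom read is a Linux-only stand-in for the storage being measured:

```python
import time

BATCH_COMPUTE_SECS = 0.05      # hypothetical emulated accelerator step time
NUM_BATCHES = 200

def read_batch_from_storage() -> bytes:
    # Stand-in for reading training samples from the storage under test.
    with open("/dev/urandom", "rb") as f:
        return f.read(1 << 20)  # 1 MiB per batch

busy = 0.0
start = time.monotonic()
for _ in range(NUM_BATCHES):
    batch = read_batch_from_storage()   # storage time: the accelerator sits idle
    t0 = time.monotonic()
    time.sleep(BATCH_COMPUTE_SECS)      # emulated accelerator "compute"
    busy += time.monotonic() - t0
elapsed = time.monotonic() - start

# The benchmark criterion: did storage keep the accelerator X% busy?
print(f"accelerator utilization: {100 * busy / elapsed:.1f}%")
```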

The GreyBeards eagerly anticipate the first round of MLcommons storage benchmark results. Hopefully coming out later this year.

143: GreyBeards talk Chia crypto with Jonmichael Hands, VP Storage at Chia Project

Today we interview Jonmichael Hands (@LebanonJon, LinkedIn), VP Storage at Chia Project, who has been in and around the storage business forever, mostly with Intel and their SSD team, before it was sold. He did technical marketing for NVMe. He also ran the security and crypto track at FMS2022. He recently worked on sustainability, helping to create a circular economy for disk and SSD storage. Moreover, he assisted IEEE with their new (media) sanitization standard to make reusing/recycling storage easier.

Chia was born to provide a way to take advantage of storage media for blockchains in a government-compliant way, so that it could be spun off as a public company someday. Chia is a cryptocurrency that depends on proof of space (storage space exists) and proof of time (storage space is reserved for a period of time). There have been many crypto coins based on proof of work (running hard cryptographic algorithms to come up with some specific bit pattern). And ETH was forked last year to support proof of stake (where one stakes some amount of ETH for a defined period). But few, if any, have been based on proof of space and time.

Disk and SSD commands already exist to provide “Secure Erase” (multiple passes of different bit patterns overwriting the same blocks) and cryptographic erasure (for encrypted drives, the encryption key is changed). Both approaches ensure that customer/organization data is no longer retained on media leaving an organization’s control. And yet, many companies use secure erase/cryptographic erasure and still shred disk drives and SSDs, just to be sure that no data is retained. This is a vast waste of energy and resources.
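Cryptographic erasure can sound too simple to trust, so here’s a conceptual sketch using the third-party Python cryptography package (real self-encrypting drives do this in hardware): the media only ever holds ciphertext, so destroying the key effectively destroys the data:

```python
from cryptography.fernet import Fernet

# The media only ever holds ciphertext (self-encrypting drives do this
# in hardware, transparently, for every block written).
key = Fernet.generate_key()
media = Fernet(key).encrypt(b"customer data written to the drive")

# "Erase": destroy the only copy of the media encryption key.
key = None

# The ciphertext still physically exists on the media, but without the
# key it is computationally indistinguishable from random bytes.
```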

Jonmichael said that both disk and SSD drives typically have another 5 years beyond their guaranteed (5 year) production life where they can function perfectly well as storage devices (ok, performance may not match current drives). And after being used for another 5 years, they are much easier to recycle if left un-shredded and returned to manufacturers, who can dismantle them to reuse expensive components and rare earth materials.

We didn’t spend much time on the technical underpinnings of Chia so if you are interested in that we suggest you check out Jonmichael’s FMS2022 presentation video.

But if you’re interested in a high level understanding of Chia and what one can do with it, we did cover that. For example, Chia has farmers (not miners). Farmers create (~100GB) Chia plot files and store these on media.

Plot files take some amount of CPU power and memory to create, but once created can stay on storage forever. What makes Chia work is that the network periodically issues challenges; if one of your plot files holds a qualifying proof for the challenge, you get rewarded. Jonmichael said that with a typical Chia crypto setup, one could make $0.50/TB/month farming Chia.
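Here’s a toy illustration of the proof-of-space idea (nothing like Chia’s actual plot format or challenge protocol): the expensive hashing happens once at plotting time, so answering a later challenge is just a cheap lookup, which is why farming costs storage rather than compute:

```python
import hashlib, os

# A "plot" here is just a table of precomputed hashes (a toy; Chia's
# real plots are far larger and structured very differently).
def make_plot(plot_id: bytes, entries: int) -> set:
    return {hashlib.sha256(plot_id + i.to_bytes(8, "big")).digest()[:4]
            for i in range(entries)}

plot = make_plot(os.urandom(32), 1_000_000)    # CPU/memory spent once, stored forever

challenge = hashlib.sha256(b"block 12345").digest()[:4]
if challenge in plot:                          # in practice, a quick disk lookup
    print("plot qualifies -- eligible for the reward")
else:
    print("no proof for this challenge; wait for the next one")
```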

The Chia project currently has about 24EB of plots online and at their peak had over 300EB. They also have 130K farmers in their current network. Bitcoin, at its peak, had about 60K miners. Jonmichael thinks Chia crypto coin may be the most distributed crypto coin in existence today.

A couple of years back, Chia accounted for a significant amount of new disk drive purchases, but that has died down considerably since then. As discussed earlier, Jonmichael is working to create a circular economy for storage that could lead to media reuse for Chia farming.

Jonmichael mentioned that Chia has matured significantly since its peak use. It used to be that creating Chia plot files required high-end CPUs and lots of technical skill, but today Jonmichael said you can be a farmer with an RPi. He did say that they have moved to making better use of available memory in the plotting process and have reduced the write load on the storage media.

Another aspect of Chia’s maturation is that they now support Chia smart coins, or smart contracts. They created ChiaLisp, a Turing-complete language, to implement Chia smart coins. It turns out that Lisp and other functional languages provide a natural way to implement secure code. Jonmichael mentioned that other crypto coins are starting to move towards using ChiaLisp.

Some recent innovations in Chia smart coins include:

  • Chia Offer Management – anything you wish to trade can be digitally tracked and traded using Chia Offer Management smart coins.
  • Chia NFT (non-fungible token) Management – NFTs have been used by other blockchains to sell digital rights to assets, and Chia’s support for NFTs opens Chia up to this as well. The reference implementation for Chia’s NFT management is Chia Friends, where all proceeds are being donated to the Marmot Recovery Foundation.
  • Chia Data Layer Management, a federated database – here the Chia blockchain is being used to support a K-V store, where the blockchain stores the key and a hash of the value. Users can use this Chia Data Layer to store any key-hash(value) database they wish. It’s important to realize that the actual data (the value) is stored external to the Chia blockchain, as in the sketch below.
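A minimal sketch of that key/hash-of-value split (illustrative only, not Chia’s actual Data Layer API): the chain carries just enough for anyone to verify an off-chain value:

```python
import hashlib

on_chain = {}    # stand-in for blockchain state: key -> hash(value)
off_chain = {}   # stand-in for external storage:  key -> value

def put(key: str, value: bytes) -> None:
    off_chain[key] = value
    on_chain[key] = hashlib.sha256(value).hexdigest()

def get_verified(key: str) -> bytes:
    value = off_chain[key]
    # Anyone can check the off-chain value against the on-chain hash.
    assert hashlib.sha256(value).hexdigest() == on_chain[key], "value was tampered with"
    return value

put("carbon-credit-42", b'{"tons_co2": 10}')
print(get_verified("carbon-credit-42"))
```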

The Data Layer solution is currently being used to develop a way to track carbon credits by the World Bank (see: the Climate Action Data Trust).

Chia has come a long way. In its heyday it was a significant consumer of new disk media, but what Jonmichael and others have planned is to take advantage of the longer-term life of storage media and use it for the benefit of all humanity.

Jonmichael Hands, VP Storage at Chia Project

Jonmichael Hands partners with storage vendors on Chia-optimized product development, market modeling, and Chia blockchain integration.

Jonmichael spent the last ten years at Intel in the Non-Volatile Memory Solutions group working on product line management, strategic planning, and technical marketing for the Intel data center SSDs.

In addition, he served as chair for NVM Express (NVMe), the SNIA (Storage Networking Industry Association) SSD special interest group, and the Open Compute Project for open storage hardware innovation.

Jonmichael started his storage career at Sun Microsystems designing storage arrays (JBODs) and holds an electrical engineering degree from the Colorado School of Mines.

139: GreyBeards talk HPC file systems with Marc-André Vef and Alberto Miranda of GekkoFS

In honor of the SC22 conference this month in Dallas, we thought it time to check in with our HPC brethren to find out what’s new in storage for their world. We happened to see that IO500 had some recent (ISC22) results using a relative newcomer, GekkoFS (@GekkoFS). So we reached out to the team to find out how they managed to crack into the top 10. We contacted Marc-André Vef (@MarcVef), a Ph.D. student at Johannes Gutenberg University Mainz, and Alberto Miranda (@amiranda_hpc), Ph.D., of the Barcelona Supercomputing Center, two of the authors of the GekkoFS paper.

GekkoFS is a new burst file system that is tailor-made to create, process and tear down scratch data sets for HPC workloads. It turns out that HPC does lots of work using scratch files as working data sets. Burst file systems typically use another parallel file system to (stage) read (permanent) data into the scratch files and write (permanent) result data out. But during processing, the burst file system handles all scratch data access. Listen to the podcast to learn more.

We had never heard of a burst file system before but it’s been around for a while now in HPC. For example, BeeGFS provides one (check out our GreyBeards podcast on BeeGFS). BeeGFS supports both a PFS and a burst file system. GekkoFS only offers a burst file system.

GekkoFS is a distributed burst file system which operates across nodes to stitch together a single global file system. GekkoFS is strictly open source at the moment and can be downloaded (see: GekkoFS GitLab) and used by anyone.

They are considering supplying professional support in the future, but at the moment, if you have an issue, Marc and Alberto suggest you use the GekkoFS GitLab issue tracker to tell them about it.

Turns out Lustre, IBM Spectrum Scale, DAOS and other HPC file systems take gobs of overhead to create scratch files. And even though it takes a lot of IO to load scratch file data and write out results, there’s a whole lot more IO that gets done to scratch files during HPC jobs.

This sort of IO also occurs for AI/ML/DL work, where training data is staged into a sort of scratch area (typically in memory, depending on size) and then repeatedly (re-)processed there. GekkoFS can offer significant advantages to AI/ML/DL work when training data is very large. Normally, without a burst file system, one would need to shard this data across nodes and then deal with the partial training that results. But with GekkoFS, all you need do is stage it into the burst file system and read it from there.

GekkoFS is partially POSIX compliant. They install a client-side interposer library that intercepts those POSIX requests destined for GekkoFS files.
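As a toy illustration of interposition (GekkoFS actually does this with a preloaded native library intercepting POSIX calls, not in Python), here’s the shape of the idea: wrap the standard call and redirect only paths under a hypothetical burst file system prefix:

```python
import builtins

_real_open = builtins.open

def interposed_open(path, *args, **kwargs):
    # Paths under the (hypothetical) /gekkofs prefix would be handed to
    # the burst file system client; everything else passes through.
    if str(path).startswith("/gekkofs/"):
        print(f"[interposer] would route {path} to the burst FS")
    return _real_open(path, *args, **kwargs)

builtins.open = interposed_open   # every open() in this process is now intercepted
```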

GekkoFS has no central metadata server, which means that all nodes in the GekkoFS cluster supply metadata services. Filenames are hashed to tell GekkoFS which node holds a file’s (metadata &) data.
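A minimal sketch of that hashing scheme (illustrative only; GekkoFS’s real distribution logic differs in its details): because every client hashes a path the same way, any node can compute a file’s home node without consulting a central server:

```python
import hashlib

NODES = ["node0", "node1", "node2", "node3"]   # hypothetical cluster members

def node_for(path: str) -> str:
    digest = hashlib.sha256(path.encode()).digest()
    return NODES[int.from_bytes(digest[:8], "big") % len(NODES)]

# Every client computes the same answer, so no metadata server is needed.
print(node_for("/scratch/job42/checkpoint.dat"))
```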

GekkoFS stores its data and metadata on local disks, SSDs or in-memory (tmpfs) storage. All local node storage in the cluster is stitched together into a single global file system.

GekkoFS supports strict consistency for IO and file creation/deletion within a node. They use an internal transaction database to enforce this strict consistency.

Across nodes, they support eventual consistency, which means files created on one node may not be immediately viewable/accessible by other nodes in the cluster for a short period of time while (meta)data updates are propagated across the cluster.

As part of their consistency paradigm, GekkoFS doesn’t support directory locking. Jason mentioned that HPC “ls” (directory listing) commands can sometimes take forever due to directory locking. Having no directory locking makes “ls” commands happen faster, but they may show inconsistent results (due to eventual consistency).

We had some discussion on this lack of directory locking and eventual consistency in file systems, but we agreed to disagree. They did say that for HPC (and probably AI/ML/DL) workloads, their approach seems appropriate, as these are way more read intensive than write intensive.

In any case, they must be doing something right as they have a screaming scratch file system for HPC work.

Marc will be attending SC22 in Dallas this month, so if you’re attending, please look him up and say hello from us.

Marc-André Vef, Ph.D. student

Marc-André Vef is a Ph.D. candidate at the Johannes Gutenberg University Mainz. He started his Ph.D. in 2016 after receiving his B.Sc. and M.Sc. degrees in computer science from the Johannes Gutenberg University Mainz. His master’s thesis, in cooperation with IBM Research, analyzed file create performance in the IBM Spectrum Scale parallel file system (formerly GPFS).

During his Ph.D., he has worked on several projects focusing on file system tracing (in collaboration with IBM Research) and distributed file systems, among others. Most notably, he designed two ad-hoc distributed file systems: DelveFS (in collaboration with OpenIO), which won the Best Paper in its category, and GekkoFS (in collaboration with the Barcelona Supercomputing Center). GekkoFS placed fourth in its first entry in the 10-node challenge of the IO500 benchmark. The file system is actively developed in the scope of the EuroHPC ADMIRE project.

His research interests focus on file systems and system analytics.

Alberto Miranda, Ph.D., Senior Researcher, Barcelona Supercomputing Center

Dr. Eng. Alberto Miranda is a Senior Researcher in advanced storage systems in the Computer Science Department of the Barcelona Supercomputing Center (BSC) and co-leader of the Storage Systems Research Group since 2019. Dr. Eng. Miranda received a diploma in Computer Engineering (2004), an M.Sc. degree in Computer Science (2006) and an M.Sc. degree in Computer Architectures, Networks and Systems (2008) from the Technical University of Catalonia (UPC-BarcelonaTech). He later received a Ph.D. degree Cum Laude in Computer Science from the Technical University of Catalonia in 2014 with his thesis “Scalability in Extensible and Heterogeneous Storage Systems”.

His current research interests include efficient file and storage systems, operating systems, distributed system architectures, as well as information retrieval systems. Since he started his work at BSC in 2007, he has published 14 papers in international conferences and journals, as well as 5 white papers and technical reports and 1 book chapter. Dr. Eng. Miranda is currently involved in several European and national research projects and has participated in competitively funded EU projects XtreemOS, IOLanes, Prace2IP, IOStack, Mont-Blanc 2, EUDAT2020, Mont-Blanc 3, and NEXTGenIO.

135: Greybeard(s) talk file and object challenges with Theresa Miller & David Jayanathan, Cohesity

Sponsored By:

I’ve known Theresa Miller, Director of Technology Advocacy Group at Cohesity, for many years now and just met David Jayanathan (DJ), Cohesity Solutions Architect, during the podcast. Theresa could easily qualify as an old timer, if she wished, and DJ was very knowledgeable about traditional file and object storage.

We had a wide ranging discussion covering many of the challenges present in today’s file and object storage solutions. Listen to the podcast to learn more.

IT is becoming more distributed. This is partly due to moving to the cloud, but now it’s moving to multiple clouds, and on prem has never really gone away. Further, the need for IT to support a remote work force is forcing data, and the systems that use it, to move as well.

Customers need storage that can reside anywhere. Their data must be migratable from on prem to cloud(s) and back again. Traditional storage may be able to migrate from one location to a select few others, or replicate to another location (with the same storage system present), but migration to and from the cloud is just not easy enough.

Moreover, traditional storage management has not kept up with the widely dispersed data world we live in. With traditional storage, customers may require different products to manage their storage depending on where data resides.

Yes, having storage that performs, provides data access, resilience and integrity is important, but that alone is just not enough anymore.

And to top it all off, the issues surrounding data security today have become just too complex for traditional storage to solve alone anymore. One needs storage, data protection and ransomware scanning/detection/protection that operate together, as one solution, to deal with IT security in today’s world.

Ransomware has rapidly become the critical piece of this storage puzzle needing to be addressed. It’s a significant burden on every IT organization today. Some organizations are getting hit every day, some even more frequently. Traditional storage has very limited capabilities, outside of snapshots and replication, to deal with this ever-increasing threat.

To defeat ransomware, data needs to be vaulted to an immutable, air-gapped repository, whether that be in the cloud or elsewhere. Such vaulting needs to be policy driven and integrated with data protection cycles to be recoverable.

Furthermore, any ransomware recovery needs to be quick, easy, AND securely controlled. RBAC (role-based access control) can help but may not suffice for some organizations. For these environments, multiple admins may need to approve a ransomware recovery, which will wipe out all current data by restoring a good, vaulted copy of the organization’s data.

Edge and IoT systems also need data storage. How much may depend on where the data is being processed/pre-processed in the IoT system. But as these systems mature, they will have their own storage requirements, which are yet another data location to be managed, protected, and secured.

Theresa and DJ mentioned Cohesity SmartFiles during our talk, which I hadn’t heard about. Turns out that SmartFiles is Cohesity’s file and object storage solution that uses the Cohesity storage cluster. Cohesity data protection and other data management solutions also use the cluster to store their data. Adding SmartFiles to the mix brings a more complete storage solution to support customer data needs.

We also discussed Helios, Cohesity’s next generation data platform that provides a control and management plane for all Cohesity products and services.

Theresa Miller, Director, Technology Advocacy Group, Cohesity

Theresa Miller is the Director, Technology Advocacy Group at Cohesity. She is an IT professional who has worked as a technical expert in IT for over 25 years and has her MBA.

She is uniquely industry recognized as a Microsoft MVP, Citrix CTP, and VMware vExpert.  Her areas of expertise include Cloud, Hybrid-cloud, Microsoft 365, VMware, and Citrix.

David Jayanathan, Solutions Architect, Cohesity

David Jayanathan is a Solutions Architect at Cohesity, currently working on SmartFiles. 

DJ is an IT professional who has specialized in all things related to enterprise storage and data protection for over 15 years.

133: GreyBeards talk trillion row databases/data lakes with Ocient CEO & Co-founder, Chris Gladwin

We saw a recent article in Blocks and Files (Storage facing trillion-row db apocalypse), about a couple of companies which were trying to deal with trillion row database queries without taking weeks to respond. One of those companies was Ocient (@Ocient), a Chicago startup, whose CEO and Co-Founder, Chris Gladwin, was an old friend from CleverSafe (now IBM Cloud Object Storage).

Chris and team have been busy creating a new way to perform data analytics on massive data lakes. It has a lot to do with extreme parallelism, high core counts, NVMe SSDs, and sophisticated network and compute flow control. Listen to the podcast to learn more.

The key to Ocient’s approach involves NVMe SSDs, which have become ubiquitous over the last couple of years and can be deployed to deal with large data problems. Another key to Ocient is multi-core CPUs, which again seem to be everywhere and, if anything, are almost doubling in core count with every new generation of CPU chip.

We let Chris wax a little too long on the SSD revolution in IOPS, especially as it pertains to random 4K reads. Put 20 or so NVMe SSDs in a server with dual 50-core CPU chips and you have one fast random IO machine.

Another key to Ocient is very sophisticated network and bus data flow management. With all this data, running any query involves moving lots of data into the CPUs. PCIe bandwidth helps, as do NVMe SSDs, but you still need to ensure that nothing gets bottlenecked while moving all that data around a system/server.

Yet another key to Ocient is parallelism. With one server holding 20 NVMe SSDs and two 50-core CPUs you’ve got a lot of capability, but when you are talking about trillion row databases you need more. So, in order to respond to queries in anything like a second or so, they throw a lot of NVMe servers at the problem.
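As a toy sketch of that scatter-gather parallelism (illustrative only; Ocient’s engine is far more sophisticated): each “server” scans its shard of rows and the coordinator merges the partial results:

```python
from concurrent.futures import ThreadPoolExecutor

# 8 hypothetical servers, each owning every 8th row of a table
# (scaled down to a million rows so the sketch runs quickly).
SHARDS = [range(i, 1_000_000, 8) for i in range(8)]

def scan_shard(shard) -> int:
    # e.g. SELECT COUNT(*) ... WHERE value % 97 = 0, evaluated on one shard
    return sum(1 for row in shard if row % 97 == 0)

with ThreadPoolExecutor(max_workers=8) as pool:
    partials = pool.map(scan_shard, SHARDS)

print("count =", sum(partials))   # the coordinator merges partial aggregates
```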

I asked how they split the data across all these servers and Chris mentioned that at the moment that’s part of their secret sauce and involves professional services.

Ocient supports full ANSI SQL queries against trillion row databases and replies to those queries in a matter of seconds. And we aren’t just talking about SQL SELECTs; Ocient can do splits, joins and updates to a trillion row database at the same time as the SQL SELECTs are going on. Chris mentioned that Ocient can be loading 100K JSON files each second, while still performing SQL queries in near real time against the trillion row database.

Ocient supports Reed-Solomon error correction on database data as well as data compression and encryption.
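For a feel of what Reed-Solomon protection does, here’s a small sketch using the third-party reedsolo Python package (Ocient’s implementation is internal and surely differs; the tuple unpacking assumes reedsolo’s current decode API):

```python
from reedsolo import RSCodec   # pip install reedsolo

rsc = RSCodec(10)              # 10 parity bytes: corrects up to 5 corrupted bytes

encoded = rsc.encode(b"a block of database data")
corrupted = bytearray(encoded)
corrupted[3] ^= 0xFF           # flip one byte to simulate a media error

# decode() returns (repaired payload, payload+ecc, error positions).
decoded, _, _ = rsc.decode(bytes(corrupted))
assert decoded == b"a block of database data"
```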

In addition to SQL queries, Chris mentioned that Ocient supports data load and transform activities. He said that most of this data is being generated by IoT applications and often needs to be cleaned up before it can be processed. Doing this in real time, while handling queries to the database, is part of their secret sauce.

Chris said there probably aren’t that many organizations that need trillion row databases. But ad auctions, telecom routers, and financial services already use trillion row databases, and they all want to be able to process queries on this data faster. Ocient is betting that there will be plenty more like them over time.

Ocient is available on AWS and GCP as a cloud service, can also be used in their own Ocient Cloud, or can be deployed on premises. Ocient services are billed on a per core-pack (500 cores, I think) subscription model.

Chris Gladwin, CEO and Co-founder, Ocient

Chris is the CEO and Co-Founder of Ocient whose mission is to provide the leading platform the world uses to transform, store, and analyze its largest datasets.

In 2004, Chris founded Cleversafe which became the largest and most strategic object storage vendor in the world (according to IDC.)  He raised $100M and then led the company to over a $1.3B exit in 2015 when IBM acquired the company.  The technology Cleversafe created is used by most people in the U.S. every day and generated over 1,000 patents granted or filed, creating one of the ten most powerful patent portfolios in the world. 

Prior to Cleversafe, Chris was the Founding CEO of startups MusicNow and Cruise Technologies and led product strategy for Zenith Data Systems.  He started his career at Lockheed Martin as a database programmer and holds an engineering degree from MIT.