154: GreyBeards annual VMware Explore wrap-up podcast

Thanks, once again, to Keith Townsend, The CTO Advisor (@CTOadvisor), for letting us record the podcast in his studio. VMware Explore this year was better than last year: the show seemed larger, the show floor busier, the Hub better, and the Hands-On Lab area much larger than I ever remember. The show seems to be growing, still not back to pre-pandemic levels, but the trend is good.

The engineers have been busy at VMware this past year. Announcements at the show included Private AI Foundation, a way for enterprises to train open source LLMs on corporate data that's kept private; a significant redirect for VMware Edge environments, moving from a pull model to a push model for code updates; plus vSAN Max, NSX+, Tanzu App Engine, and more. And we heard that Broadcom is clearing more hurdles to the acquisition. Listen to the podcast to learn more.

Private AI plays to VMware's strengths and its control over on-prem processing. Customers need a safe space and secured data to train corporate chatbots curated on the corporation's knowledge base. VMware rolled this out two ways:

  • A reference architecture approach based on Ray cluster management, KubeFlow, PyTorch, VectorDB, GPU scaling (NVLink/NVSwitch), vSAN fast path (RDMA, GPUDirect), and deep learning VMs (see the sketch after this list). There was no discussion of tie-ins to the Data Persistence (object) storage.
  • A proprietary NVIDIA approach based on NVIDIA AI Workbench, TensorRT, NeMo, and the NVIDIA GPU & Network Operators
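For a flavor of the open source side of that stack, here's a minimal, hypothetical sketch (not VMware's reference architecture code) that uses Ray Train to distribute a PyTorch/Hugging Face fine-tuning loop. The model name ("gpt2"), the tiny in-line "corpus", and the worker count are placeholders for a real LLM, a real private data set, and a real GPU cluster.

```python
# Hypothetical sketch, not VMware reference architecture code: fine-tune an
# open source LLM on "private" text with Ray Train distributing a PyTorch loop.
# Model name, corpus, and worker count below are placeholders.
import torch
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer, prepare_data_loader, prepare_model
from transformers import AutoModelForCausalLM, AutoTokenizer

def train_loop_per_worker(config):
    tok = AutoTokenizer.from_pretrained(config["model"])
    tok.pad_token = tok.eos_token
    model = prepare_model(AutoModelForCausalLM.from_pretrained(config["model"]))
    optim = torch.optim.AdamW(model.parameters(), lr=config["lr"])

    # Stand-in for a private corporate corpus that never leaves the data center
    docs = ["internal runbook: how we fail over the ERP cluster ...",
            "support ticket 4711: vSAN datastore latency spike ..."]
    enc = tok(docs, return_tensors="pt", padding=True, truncation=True)
    loader = prepare_data_loader(torch.utils.data.DataLoader(
        list(zip(enc["input_ids"], enc["attention_mask"])), batch_size=1))

    for input_ids, attn_mask in loader:
        loss = model(input_ids=input_ids, attention_mask=attn_mask,
                     labels=input_ids).loss
        loss.backward()
        optim.step()
        optim.zero_grad()

trainer = TorchTrainer(
    train_loop_per_worker,
    train_loop_config={"model": "gpt2", "lr": 1e-5},  # tiny stand-in for a real LLM
    scaling_config=ScalingConfig(num_workers=2, use_gpu=True),  # one worker per GPU
)
trainer.fit()
```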

By having both approaches, VMware provides alternatives for those wanting a non-proprietary solution. And with AI/MLOps moving so fast, the open source stack may be better able to keep up.

The tie-in with NVIDIA is a natural extension of what VMware has been doing with GPUs, DPUs, etc.

Also, VMware announced a technology partnership with Hugging Face. We were somewhat concerned with all the focus on LLMs and GenAI, but the agreement with Hugging Face goes beyond just LLMs.

VMware Edge solutions are pivoting. Apparently, VMware is moving from the vSphere pull model of code updates in the field, which handles 64-server, multi-cluster environments without problem, to more of a YAML/GitHub push model of IoT device updates that seems better able to manage fleets of 1K to 100K devices in the field.

With the new model, one creates a GitHub repo and a YAML file describing the code update to be done, and all your IoT devices just get updated to the new level.
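A toy sketch of that flow (not VMware's actual edge agent) might look like the following; the repo URL, file name, and YAML keys are all made up for illustration.

```python
# Toy sketch of a desired-state update agent: an edge device clones/pulls a Git
# repo, reads a YAML file declaring the desired software version, and updates
# itself if it's behind. Repo URL, paths, and YAML keys are hypothetical.
import pathlib
import subprocess

import yaml  # PyYAML

REPO = "https://github.com/example-org/edge-fleet-config.git"  # hypothetical
CHECKOUT = pathlib.Path("/var/lib/edge-agent/fleet-config")

def fetch_desired_state() -> dict:
    if CHECKOUT.exists():
        subprocess.run(["git", "-C", str(CHECKOUT), "pull", "--ff-only"], check=True)
    else:
        subprocess.run(["git", "clone", "--depth", "1", REPO, str(CHECKOUT)], check=True)
    return yaml.safe_load((CHECKOUT / "site.yaml").read_text())

def reconcile(current_version: str) -> None:
    desired = fetch_desired_state()["image"]["version"]
    if desired != current_version:
        # placeholder for whatever update mechanism the device really uses
        print(f"updating device from {current_version} to {desired}")
    else:
        print("device already at desired version")

reconcile(current_version="1.0.3")
```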

Once again the Broadcom acquisition is on everyone's mind. As I got to the show, one analyst asked if this was going to be the last VMware Explore. I highly doubt that, but Broadcom will make lots of changes once the transaction closes. One thing mentioned at the show was that Broadcom will make an immediate, additional $1B investment in R&D. The deal had provisionally passed the UK regulatory body and was on track to close near the end of October.

Other news from the show:

  • The Tanzu brand is broadening. Tanzu Application Platform (TAP) still exists, but they have added a new Tanzu App Engine to take the VMware management approach to K8s clusters, other cloud infrastructure and the rest of the IT world. Tanzu Intelligent Services also now supports policy guardrails, cost control, management insight and migration services for other environments.
  • vSAN Max, which supports disaggregation (separation) of storage and compute, is now available. vSAN Max becomes a full-fledged, standalone storage system that just happens to run on top of vSphere. Disaggregated (vSAN Max) storage and (regular vSAN) HCI can co-exist as different mounted datastores, and vSAN Max supports PBs of storage.
  • Workspace One has been updated to provide enhanced digital experience monitoring, adding coverage of what Workspace One users are actually experiencing.
  • NSX+ continues to roll out. VMware mentioned that the number one continuing problem with hybrid cloud/multi-cloud setups is getting the networking right. NSX+ will reduce this complexity by becoming a management/configuration overlay over any and all cloud/on-prem networking for your environment(s).
  • VMware chatbots for Tanzu, Workspace One and NSX+ are now in tech preview and will supply intelligent assistants for these solutions. Based on LLMs/GenAI and trained on VMware's extensive corporate knowledge base, the chatbots will help admins focus on the signal over the noise and will provide recommendations on how to resolve issues.

Jason Collier, Principal Member of Technical Staff, AMD

Jason Collier (@bocanuts) is a long time friend, technical guru and innovator who has over 25 years of experience as a serial entrepreneur in technology. Jason currently works at AMD focused on emerging technology for IT, IoT and anywhere else in the world and across the universe that needs compute, storage or networking resources.

He was Chief Evangelist, CTO & Co-Founder of Scale Computing and has been an innovator in the field of hyper-convergence and an expert in virtualization, data storage, networking, cloud computing, data centers, and edge computing for years.

He has also been another co-founder, director of research, VP of technical operations and director of operations at other companies over his long career prior to AMD and Scale.

He’s on LinkedIn.

151: GreyBeards talk AI (ML) performance benchmarks with David Kanter, Exec. Dir. MLCommons

Ray’s known David Kanter (@TheKanter), Executive Director of MLCommons, for quite a while now and has been reporting on MLCommons MLPerf AI benchmark results for even longer. MLCommons releases new benchmark results each quarter, and this last week they released new Data Center Training (v3.0) and new Tiny Inferencing (v1.1) results. So, the GreyBeards thought it was time to get a view of what’s new in AI benchmarking and what’s coming later this year.

David’s been around the startup community in the Bay Area for a while now. He started at MLPerf early on as a technical guru working on submissions and other stuff and worked his way up to being the Executive Director/CEO. The big news this week from MLCommons is that they have introduced a new training benchmark and updated an older one. The new one simulates training GPT-3, and they also updated their Recommendation Engine benchmark. Listen to the podcast to learn more.

MLCommons is an industry association focused on supplying reproducible, verifiable benchmarks for machine learning (ML) and AI, which they call MLPerf benchmarks. Their benchmark suite includes a number of different categories such as data center training, HPC training, data center inferencing, edge inferencing, mobile inferencing and finally tiny (IoT device) inferencing. David likes to say MLPerf benchmarks range from systems consuming megawatts (HPC, literally a supercomputer) down to microwatts (Tiny) solutions.

The challenge holding AI benchmarking back early on was that a few industry players had done their own thing, but there was no way to compare one to another. MLCommons was born out of that chaos and sought to create a benchmarking regimen that any industry player could use to submit AI work activity, and that would allow customers to compare their solution to any other submission on a representative sample of ML model training and inferencing activity.

MLCommons has both an Open and a Closed class of submissions. The Closed class has very strict criteria: known open source AI models and data, accuracy targets that training and inferencing need to hit, and a standard set of metrics that must be reported for the benchmark. All of this is needed to create a repeatable and verifiable submission.

Their Open class is a way for any industry participant to submit whatever model they would like, to whatever accuracy level they want, and it’s typically used to benchmark new hardware, software or AI models.

As mentioned above, MLCommons training benchmarks use an accuracy specification that must be achieved to have a valid submission. Benchmarks also have to be run 3 times. All submissions list hardware (CPUs and accelerators) and software (AI framework), and these can range from 0 accelerators (i.e., CPU only, no GPUs) to 1000s of GPUs.

GPT-3 is a very complex AI model that, until recently, seemed unlikely to ever be benchmarked. But apparently the developers at MLCommons (and their industry partners) have been working on this for some time. In this round of results there were 3 cloud submissions and 4 on-prem submissions for GPT-3 training.

GPT-3, -3.5 & -4 are all OpenAI models which power their ChatGPT text transformer Large Language Model (LLM) service. GPT-3 has 175B parameters and was trained on TBs of data covering web crawls, book crawls, official documentation, code, etc. OpenAI said, at GPT-3’s announcement, that it took over $10M and months to train.

The MLCommons GPT-3 benchmark is not a full training run of GPT-3 but starts from a training checkpoint, trained on a subset of the data used for the original GPT-3 training. Checkpoints are used for long running jobs (training sessions, weather simulations, fusion energy simulations, etc.) and copy all the internal state of a job/system while it's running (ok, quiesced) at some interval (say every 8hrs, 24hrs, 48hrs, etc.), so that in case of a failure, one can restart the activity from the last checkpoint rather than from the beginning.
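As a simple illustration of the checkpoint/restart idea (not the benchmark's actual checkpoint format), here's a minimal PyTorch sketch:

```python
# Minimal checkpoint/restart sketch in PyTorch, illustrating the idea the
# benchmark relies on (the real GPT-3 benchmark checkpoint format differs).
import torch

def save_checkpoint(model, optimizer, step, path="ckpt.pt"):
    torch.save({"step": step,
                "model": model.state_dict(),
                "optim": optimizer.state_dict()}, path)

def load_checkpoint(model, optimizer, path="ckpt.pt"):
    state = torch.load(path)
    model.load_state_dict(state["model"])
    optimizer.load_state_dict(state["optim"])
    return state["step"]   # resume the loop from here instead of from step 0
```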

The MLCommons GPT-3 checkpoint has been trained on a 10B-token data set. The benchmark starts by loading this checkpoint and then trains on an even smaller subset of the GPT-3 data until it achieves the accuracy baseline.

Accuracy for text transformers is not as simple as for other models (correct image classification, object identification, etc.), so the benchmark uses “perplexity”. Hugging Face defines perplexity as “the exponentiated average negative log-likelihood of a sequence.”
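In code, that definition boils down to a few lines; here's a minimal sketch using made-up token probabilities:

```python
# Perplexity per the Hugging Face definition quoted above: exponentiate the
# average negative log-likelihood the model assigns to each token.
import math

def perplexity(token_log_probs: list[float]) -> float:
    avg_nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(avg_nll)

# A model that assigns every token probability 0.25 has perplexity 4.0
print(perplexity([math.log(0.25)] * 10))
```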

The 4 on-premises submissions for GPT-3 took from 45 minutes (768 NVIDIA H100 GPUs) to 442 minutes (64 Habana Gaudi2 accelerators) to train. The 3 cloud submissions all used NVIDIA H100 GPUs and ranged from 768 GPUs (~47 minutes to train) to 3584 GPUs (~11 minutes to train).

Aside from data center training, MLCommons also released a new round of Tiny (IoT) inferencing benchmarks. These generally use smaller ARM processors and no GPUs, with much smaller AI models doing things such as keyword spotting (“Hey Siri”), visual wake words (door opening), image classification, etc.

We ended our discussion with me asking David why there was no storage-oriented MLCommons benchmark. David said creating a storage benchmark for AI is much different than creating inferencing or training benchmarks. But MLCommons has taken this on and now has a series of MLCommons storage benchmarks that use emulated accelerators.

At the moment, anyone with a storage system can submit MLCommons storage benchmark results. After some time, MLCommons will only allow submissions from member companies, but early on it’s open to all.

For their storage benchmarks, rather than using accuracy as the benchmark criterion, they require keeping the (emulated) accelerators X% busy. This way the storage component of MLops activity can be isolated from the training and inferencing itself.
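A toy model of that criterion, with made-up per-batch times, shows how "keep the accelerator X% busy" isolates storage performance:

```python
# Toy model of the idea (not the MLCommons benchmark code): an emulated
# accelerator spends a fixed compute time per batch, and the question is
# whether storage can deliver the next batch fast enough to keep it busy.
# The per-batch times below are made-up examples.
def accelerator_utilization(compute_s: float, data_load_s: float) -> float:
    stall = max(0.0, data_load_s - compute_s)   # assumes loading overlaps compute
    return compute_s / (compute_s + stall)

print(accelerator_utilization(compute_s=0.050, data_load_s=0.040))  # 1.00, storage keeps up
print(accelerator_utilization(compute_s=0.050, data_load_s=0.080))  # ~0.63, storage bound
```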

The GreyBeards eagerly anticipate the first round of MLCommons storage benchmark results, hopefully coming out later this year.

144: GreyBeards talk AI IO with Subramanian Kartik & Howard Marks of VAST Data

Sponsored by

Today we talked with VAST Data’s Subramanian Kartik (@phyzzycyst), Global Systems Engineering Lead, and Howard Marks (@DeepStorage@mastodon.social, @deepstoragenet), former GreyBeards co-host and now Technologist Extraordinary & Plenipotentiary at VAST. Howard needs no introduction to our listeners, but Kartik does. Kartik has supported a number of customers implementing AI apps at VAST and prior companies, so he is well versed in the reality of AI/ML/DL. Moreover, VAST recently funded Silverton Consulting to write a paper discussing Deep Learning IO.

Although AI/ML/DL applications are very popular in IT these days, there’s been a continuing challenge in understanding their IO requirements. Listen to the podcast to learn more.

AI/ML/DL neural network (NN) models train on data, and lots of it, while inferencing is also very data dependent. Kartik said AI model IO consists of small-block, random reads with very few writes.

Some models contain huge NNs which consume mountains of data to train while others are relatively small and consume much less. GPT-3(.5), the model behind the original ChatGPT, has ~175B parameters in its ~800GB NN.

As many of us know, the key to AI processing is GPU hardware, which performs most, if not all, of the computations to train models and supply inferences. Moreover, to maximize training throughput, many organizations deploy model parallelism, using 10s to 1000s of GPUs.

For instance, in the paper mentioned earlier, we showed a model-training IO chart based on all six storage vendors’ published NVIDIA DGX A100 Reference Architecture reports for ResNet-50. On this single chart, all 6 storage systems supplied roughly the same images-processed/sec (or ~IO bandwidth) performance to train the model on each of the 8-, 16- & 32-GPU configurations. This is very unusual from our perspective, but it shows that ResNet-50 training is not IO bound.
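A quick back-of-envelope sketch, with purely illustrative numbers (not figures from those reports), shows why: even at a high image rate, ResNet-50 training needs only a few GB/s of reads, which most modern shared storage can deliver.

```python
# Back-of-envelope only; both inputs are assumed example values, not measured
# numbers from the DGX A100 reference architecture reports.
def training_read_bw_gb_per_s(images_per_sec: float, avg_image_kb: float) -> float:
    return images_per_sec * avg_image_kb / 1e6   # KB/s -> GB/s

# e.g. 20,000 images/sec of ~110KB JPEGs (assumed values)
print(training_read_bw_gb_per_s(20_000, 110))   # ~2.2 GB/s
```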

However, another approach to speeding up NN training is to take advantage of newer, more advanced IO protocols. NVIDIA GPUDirect Storage transfers data directly from storage to GPU memory, bypassing CPU memory altogether, which can significantly speed up GPU data consumption. It turns out that one bottleneck for AI training is CPU memory bandwidth.
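For a concrete feel of what a GPUDirect Storage style read looks like from application code, here's a hedged sketch using the RAPIDS KvikIO library (a Python wrapper around NVIDIA cuFile); the file path and buffer size are arbitrary.

```python
# Hedged sketch of a GPUDirect Storage style read via RAPIDS KvikIO: the read
# lands directly in GPU memory without bouncing through host (CPU) memory.
import cupy
import kvikio

buf = cupy.empty(256 * 1024 * 1024 // 4, dtype=cupy.float32)   # 256 MiB GPU buffer
with kvikio.CuFile("/mnt/dataset/shard-0000.bin", "r") as f:   # hypothetical file
    nbytes = f.read(buf)   # DMA from storage straight into device memory
print(f"read {nbytes} bytes into GPU memory, bypassing the CPU bounce buffer")
```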

In addition, most AI model training reads data from a single file system mount point. Historically, an NFS mount point was limited to a single TCP connection and a maximum of ~2.5GB/sec of IO bandwidth. Recently, however, NConnect for NFS has been introduced, which increases the number of TCP connections to up to 16 per mount point.

Despite that, VAST Data found that by adding some code to Linux’s NFS TCP stack, they were able to increase NConnect to 64 TCP connections per compute node. Howard mentioned that with these changes and a 16 (compute) node VAST Data storage cluster, they sustained 175GB/sec of GPUDirect Storage bandwidth using DGX-A100 systems.

Subramanian Kartik, Global Systems Engineering Lead, VAST Data

Subramanian Kartik has been the Vice President of Systems Engineering at VAST Data since January of 2020, running the global presales organization. He has been part of the incredible success of VAST Data, which has increased almost 10-fold in valuation and revenue in that period.

An accomplished technologist and executive in the industry, he has a wide array of experience in Cloud Architectures, AI/Machine Learning/Deep Learning, as well as in the Life Sciences, covering high-performance computing and storage. He has had a lifelong deep passion for studying complex problems in all spheres spanning both workloads and infrastructure at the vanguard of current day technology.

Prior to his work at VAST Data, he was with EMC (later Dell) for two decades, as both a Distinguished Engineer and a global executive running the Converged and Hyperconverged Division go-to-market. He has a Ph.D. in Particle Physics with over 75 publications and 3 patents to his credit over the years. He enjoys mathematics, jazz, cooking and travelling with his family in his non-existent spare time.

Howard Marks, (former GreyBeards Co-Host) Technologist Extraordinary and Plenipotentiary, VAST Data

Howard Marks brings over forty years of experience as a technology architect for hire and industry observer to his role as VAST Data’s Technologist Extraordinary and Plenipotentiary. In this role, Howard demystifies VAST’s technologies for customers and customer requirements for VAST’s engineers.

Before joining VAST, Howard ran DeepStorage, an industry test lab and analyst firm. An award-winning speaker, he has appeared at events on three continents including Comdex, Interop and VMworld.

Howard is the author of several books (all gratefully out of print) and hundreds of articles since Bill Machrone taught him journalism at PC Magazine in the 1980s.

Listeners may also remember that Howard was a founding co-Host of the Greybeards-on-Storage Podcast.

127: Annual year end wrap up podcast with Keith, Matt & Ray

[Ray’s sorry about his audio; it will be better next time, he promises. – The Eds] This was supposed to be the year we killed off COVID for good. Alas, it was not to be, and it’s going to be with us for some time to come. However, this didn’t stop that technical juggernaut we call the GreyBeards on Storage podcast.

Once again we got Keith, Matt and Ray together to discuss the past year’s top 3 technology trends that would most likely impact the year(s) ahead. Given our recent podcasts, Kubernetes (K8s) storage was top of the list. To this we added AI/MLops in the enterprise and continued our discussion from last year on how COVID & WFH are remaking offices, data centers and downtowns around the world. Listen to the podcast to learn more.

K8s rulz

For some reason, we spent many of this year’s podcasts discussing K8s storage. K8s was never meant to provide (storage) state and, as a result, any K8s data storage has had to be shoehorned in.

Moreover, why would any IT group even consider containerizing enterprise applications, let alone deploy them onto K8s? The most common answers seem to be automatic scalability, cloud-like automation and run-anywhere portability.

Keith chimed in that enterprise applications aren’t going anywhere, and we were off. Just like the mainframe, client-server and OpenStack applications before them, enterprise apps will likely outlive most developers, continuing to run on their current platforms forever.

But any new apps will likely be born, live a long life and eventually fade away on the latest runtime environment, which today is K8s.

Matt mentioned hybrid and multi-cloud as becoming the raison d’être for enterprise apps to migrate to containers and K8s. Further, enterprises have a pressing need to move their apps to the hybrid- & multi-cloud model. AWS’s recent hiccups notwithstanding, multi-cloud’s time has come.

Ray and Keith then discussed which is bigger, K8s container apps or enterprise “normal” (meaning virtualized/bare metal) apps. It all comes down to how you define bigger. Sheer number of unique applications – enterprise wins. Compute power devoted to running those apps – that’s a much more difficult race to call. But even Keith had to agree that, based on compute power, containerized apps are inching ahead.

AI-MLops coming on strong

AI/MLops in the enterprise was up next. For me, the most significant indicator of heightened interest in AI/ML was VMware announcing native support for NVIDIA’s AI/MLops management and orchestration technologies.

Just like K8s before it and VMware’s move to Tanzu and its predecessors, their move to natively support NVIDIA AI tools signals that the enterprise is starting to seriously consider adding AI to its apps.

We think VMware’s crystal ball is based on

  • Clouds rolling out more and more AI and MLops technologies for enterprises to use on their infrastructure
  • GPUs becoming more and more pervasive in enterprise AND in cloud infrastructure
  • Data to drive training and inferencing coming out of the woodwork like never before

We had some discussion as to where AMD and Intel will end up in this AI trend. Consensus is that there’s still space for CPU inferencing and “some” specialized training, which is unlikely to go away. And of course AMD has their own GPUs and Intel is coming out with their own shortly.

COVID & WFH impacts the world (again)

And then there was COVID and WFH. COVID will be here for some time to come. As a result, WFH is not going away, at least not totally, any time soon, and is just becoming another way to do business.

WFH works well for some things (like IT office work) and not so well for others (K-12 education). If the GreyBeards were into (non-crypto) investing, we’d be shorting office real estate. What could move into those millions of square feet (meters) of downtown office space is anyone’s guess. But just like the factories of old, cities, and downtowns in particular, can take anything and make it useable for other purposes.

That’s about it, 2021 was another “interesting” year for infrastructure technology. It just goes to show you, “May you live in interesting times” is actually an old (Chinese) curse.

Keith Townsend, (@TheCTOadvisor)

Keith is an IT thought leader who has written articles for many industry publications, interviewed many industry heavyweights, worked with Silicon Valley startups, and engineered cloud infrastructure for large government organizations. Keith is the co-founder of The CTO Advisor, blogs at Virtualized Geek, and can be found on LinkedIn.

Matt Leib, (@MBLeib)

Matt Leib has been blogging in the storage space for over 10 years, with work experience in both engineering and presales/product marketing. His blog is at Virtually Tied to My Desktop and he’s on LinkedIn.

Ray Lucchesi, (@RayLucchesi)

Ray is the host and co-founder of GreyBeardsOnStorage and is President/Founder of Silverton Consulting, and a prominent (AI/storage/systems technology) blogger at RayOnStorage.com. Sign up for SCI’s free, monthly industry e-newsletter here, published continuously since 2007. Ray can also be found on LinkedIn.

113: GreyBeards talk storage for next gen. workloads with Liran Zvibel, Co-Founder & CEO WekaIO

Sponsored By:

I’ve known Liran Zvibel, Co-founder and CEO of WekaIO, for many years now and this is the second time he’s been on our show (see Episode 56: GreyBeards talk high performance file storage...). In those days, WekaIO was just coming out and hitting the world with this extremely high-performing, scale-out unstructured data solution. Well, since then, they’ve just gotten better.

Keith and I had a great time talking with Liran again. Liran has deep knowledge about unstructured data and how enterprises use it these days. WekaIO’s story over the last two years has gone beyond great performance to real-world, hybrid cloud offerings, as well as going after cloud native apps’ (read Kubernetes [K8s]) persistent storage. Listen to the podcast to learn more.

We started with a history lesson on WekaIO. Back in those days (and this persists today, I might add) there were many IO workloads that required companies to purchase different solutions for different work. For example, they needed DAS or SAN for performance, NAS for ease of access and object for scale. WekaIO came out with an answer to all these problems in a single, scalable storage system. That is, it performed IO as fast as DAS or SAN block, had all the ease of access of NAS, and could scale as much as object.

However, the real culprit holding the world back was NFS. At the outset, NFS was designed for the networking speeds available at the time (10-100Mbps) and performed just fine at those speeds. But when 10-100GbE came out in the 2000s, NFS’s metadata overhead was too chatty to support wire speed. Thus, any storage that depended on the NFS protocol couldn’t supply (small) files fast enough for modern applications.

This is why WekaIO has moved to support not only NFS and SMB but also POSIX and NVIDIA® GPUDirect® Storage interfaces. By offering POSIX, WekaIO is able to plug into standard Linux and Windows server systems and provide excellent small-file performance. Of course, the applications that demand small-file performance today are mostly data analytics and AI/ML/DL workloads.

Consequently, NVIDIA came out with their GPUDirect Storage protocol to address getting small files (data) into GPUs faster. With GPUDirect, storage systems can RDMA data directly from storage to GPU memory and vice versa, with no OS intervention (other than to set up the transfer). If you happen to have a small-file, high-performing storage system attached to your fabric that supports GPUDirect, like WekaIO, you can significantly speed up your AI/ML/DL workloads.

Next we started talking K8s storage. WekaIO uses their POSIX interface in their CSI plugin to support K8s container persistent storage. Again, supplying high performance for small files seems tailor-made for the K8s container applications that exist today and will for the foreseeable future.
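For a flavor of what consuming a CSI plugin looks like from the K8s side, here's a hedged sketch that requests a persistent volume via the official kubernetes Python client; the "weka-csi" storage class name is a placeholder, not necessarily what WekaIO's plugin registers.

```python
# Hedged sketch: requesting a persistent volume from a CSI driver with the
# official kubernetes Python client. The storage class name is hypothetical.
from kubernetes import client, config

config.load_kube_config()
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="training-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],            # shared POSIX access across pods
        storage_class_name="weka-csi",             # placeholder CSI storage class
        resources=client.V1ResourceRequirements(requests={"storage": "1Ti"}),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc)
```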

Enter the cloud. Among other things, WekaIO is an AWS primary storage vendor. It also offers snap-to-cloud. With both of these in tandem, it’s become a lot easier to move and access your unstructured data in the cloud. Liran mentioned that WekaIO primary storage in AWS operates across AZs, which means it can be configured to support better availability than EBS.

Large biopharma companies are using WekaIO in AWS to store and process field and research data, so that this work can be done around the world. Some companies have run out of compute in a single AZ (unbelievable, I know, but it’s COVID). By offering multi-AZ unstructured data access with WekaIO, these companies can spread their compute across AZs and regions and still access their data. And when their products are ready for government certification, having all this data in the cloud provides an easy way to give the government access to that same data.

Liran Zvibel, Co-founder and CEO WekaIO

As Co-Founder and CEO, Mr. Liran Zvibel guides long term vision and strategy at WekaIO. Prior to creating the opportunity at WekaIO, he ran engineering at social startups and Fortune 100 organizations, including Fusic, where he managed product definition, design, and development for a portfolio of rich social media applications.

Liran also held principal architectural responsibilities for the hardware platform, clustering infrastructure and overall systems integration for the XIV Storage System, acquired by IBM in 2007.

Mr. Zvibel holds a BSc in Mathematics and Computer Science from Tel Aviv University.