154: GreyBeards annual VMware Explore wrap-up podcast

Thanks, once again, to Keith Townsend, The CTO Advisor (@CTOadvisor), for letting us record the podcast in his studio. VMware Explore this year was better than last year. The show seemed larger, the show floor busier, the Hub better, and the Hands-On Labs much larger than I ever remember before. The show seems to be growing; it's still not back to pre-pandemic levels, but the trend is good.

The engineers at VMware have been busy this past year. Announcements at the show included Private AI Foundation, a way for enterprises to train open source LLMs on corporate data kept private; a significant redirect for VMware Edge environments, moving from a pull model to a push model for code updates; and vSAN Max, NSX+, Tanzu App Engine, and more. And we heard that Broadcom is clearing more hurdles on the way to the acquisition. Listen to the podcast to learn more.

Private AI plays to VMware's strengths and its control over on-prem processing. Customers need a safe space and secured data to train corporate chatbots on their corporation's knowledge base. VMware rolled this out in two ways:

  • A reference architecture approach based on Ray cluster management, Kubeflow, PyTorch, a vector DB, GPU scaling (NVLink/NVSwitch), vSAN fast path (RDMA, GPUDirect), and deep learning VMs. There was no discussion of tie-ins to the Data Persistence (object) storage.
  • A proprietary NVIDIA approach based on NVIDIA AI Workbench, TensorRT, NeMo, and the NVIDIA GPU & Network Operators.

By having both approaches, VMware provides an alternative for those wanting a non-proprietary solution. And with AI/MLOps moving so fast, the open source stack may be better able to keep up.

The tie-in with NVIDIA is a natural extension of what VMware has been doing with GPUs, DPUs, etc.

Also, VMware announced a technology partnership with Hugging Face. We were somewhat concerned with all the focus on LLMs and GenAI, but the agreement with Hugging Face goes beyond just LLMs.

VMware Edge solutions are pivoting. Apparently, VMware is moving from the vSphere pull model of code updates in the field, which handles 64-server, multi-cluster environments without a problem, to more of a YAML/GitHub push model of IoT device updates that seems better able to manage fleets of 1K to 100K devices in the field.

With the new model, one creates a GitHub repo and a YAML file describing the code update to be done, and all your IoT devices just get updated to the new level.
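
To make the idea concrete, here's a minimal, hypothetical sketch of how a Git-hosted YAML manifest might drive a fleet update. This is not VMware's actual manifest format or tooling; the field names, the plan_rollout helper, and the device records are all illustrative assumptions.

```python
import yaml  # assumes PyYAML is installed

# Hypothetical update manifest, as it might be committed to a GitHub repo.
MANIFEST = """
update:
  name: edge-update-2023-09
  target_version: 8.0.2
  device_selector:
    site: retail-stores      # which fleet segment to update
  rollout:
    batch_size: 500          # devices updated per wave
"""

def plan_rollout(manifest_text: str, fleet: list[dict]) -> list[dict]:
    """Return the devices the push model would update, per the manifest."""
    spec = yaml.safe_load(manifest_text)["update"]
    selector = spec["device_selector"]
    # Select devices that match the selector and aren't already at the target version.
    return [
        d for d in fleet
        if all(d.get(k) == v for k, v in selector.items())
        and d.get("version") != spec["target_version"]
    ]

if __name__ == "__main__":
    fleet = [
        {"id": "dev-001", "site": "retail-stores", "version": "8.0.1"},
        {"id": "dev-002", "site": "warehouse", "version": "8.0.1"},
    ]
    print(plan_rollout(MANIFEST, fleet))  # -> only dev-001 is selected
```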

Once again, the Broadcom acquisition is on everyone's mind. As I got to the show, one analyst asked if this was going to be the last VMware Explore. I highly doubt that, but Broadcom will make lots of changes once the transaction closes. One thing mentioned at the show was that Broadcom will make an immediate, additional $1B investment in R&D. The deal had provisionally passed the UK regulatory body and was on track to close near the end of October.

Other news from the show:

  • The Tanzu brand is broadening. Tanzu Application Platform (TAP) still exists, but VMware has added a new Tanzu App Engine to take the VMware management approach to K8s clusters, other cloud infrastructure, and the rest of the IT world. Tanzu Intelligent Services now also supports policy guardrails, cost control, management insight, and migration services for other environments.
  • vSAN Max, which supports disaggregation (separation) of storage and compute, is now available. vSAN Max becomes a full-fledged, standalone storage system that just happens to run on top of vSphere. Disaggregated (vSAN Max) storage and (regular vSAN) HCI can co-exist as different mounted datastores, and vSAN Max supports PBs of storage.
  • Workspace ONE has been updated with enhanced digital experience monitoring, adding coverage of what Workspace ONE users are actually experiencing.
  • NSX+ continues to roll out. VMware mentioned that the number one continuing problem with hybrid cloud/multi-cloud setups is getting the networking right. NSX+ will reduce this complexity by becoming a management/configuration overlay over any and all cloud/on-prem networking for your environment(s).
  • VMware chatbots for Tanzu, Workspace ONE, and NSX+ are now in tech preview and will supply intelligent assistants for these solutions. Based on LLM/GenAI and trained on VMware's extensive corporate knowledge base, the chatbots will help admins focus on the signal over the noise and will provide recommendations on how to resolve issues.

Jason Collier, Principal Member of Technical Staff, AMD

Jason Collier (@bocanuts) is a long-time friend, technical guru, and innovator with over 25 years of experience as a serial entrepreneur in technology. Jason currently works at AMD, focused on emerging technology for IT, IoT, and anywhere else in the world (and across the universe) that needs compute, storage, or networking resources.

He was Chief Evangelist, CTO & Co-Founder of Scale Computing and has been an innovator in the field of hyper-convergence and an expert in virtualization, data storage, networking, cloud computing, data centers, and edge computing for years.

He has also been a co-founder, director of research, VP of technical operations, and director of operations at other companies over his long career prior to AMD and Scale.

He's on LinkedIn.

153: GreyBeards annual FMS2023 wrapup with Jim Handy, General Director, Objective Analysis

Jim Handy, General Director of Objective Analysis, and I were at the FMS 2023 conference in Santa Clara last week, and there were a number of interesting discussions at the show. I was particularly struck by the progress being made on the CXL front. I was just a participant, but Jim moderated and sat on many panels during the show. He also brings a much deeper understanding of the technologies. Listen to the podcast to learn more.

We asked for some of Jim’s top takeaways from the show.

Jim thought that the early Tuesday morning market sessions on the state of the flash, memory, and storage markets were particularly well attended. In the past, as the first day's earliest sessions, they weren't as well attended.

The flash and memory markets both seem to be in a downturn. As the great infrastructure buying spree of COVID ends, demand seems to have collapsed. As always, these and other markets go through cycles: a downturn, where demand collapses and prices fall; price stability, as demand starts to pick up; and a supply-constrained phase, where demand can't be satisfied. The general consensus seems to be that we may see a turn in the market by the middle of next year.

CXL is finally catching on. At the show there were a couple of vendors showing memory extension/expansion products using CXL 1.1, as well as CXL switches (extenders) based on CXL 2.0. The challenge with memory today, in this 100+ core CPU world, is trying to keep the core-to-memory bandwidth flat and keep up with application memory demand. CXL was built to deal with both of these concerns.

CXL has additional latency, but it's very similar to that of dual CPUs accessing shared memory. Jim mentioned that Microsoft Azure actually checked to see if they could handle CXL latencies by testing with dual-socket systems.

There was a lot of continuing discussion on new and emerging memory technologies, and Jim Handy mentioned that their team has just published a new report on this. He also mentioned that CXL could be the killer app for all these new memory technologies, as it can easily handle multiple different technologies with different latencies.

The next big topic was chiplets and the rise of UCIe (Universal Chiplet Interconnect Express) links. AMD led the way with their chiplet-based, multi-core CPU chips, but Intel is there now as well.

Chiplets are becoming the standard way to create significant functionality on silicon. But the problem up to now has been that every vendor had their own proprietary chiplet interconnect.

UCIe is meant to end proprietary interconnects. With UCIe, companies can focus on developing the best chiplet functionality and major manufacturers can pick and choose whichever chiplet offers the best bang for their buck and be assured that it will all talk well over UCIe. Or at least that’s the idea.

Computational storage is starting to become mainstream. Although everyone thought these devices would become general-purpose compute engines, they seem to be having more success performing specialized (data) compute services like compression, transcoding, ransomware detection, etc. They are being adopted by companies that need to do that type of work.

Computational memory is becoming a thing. Yes, memristor, PCM, MRAM, etc. have always offered computational capabilities on their technologies, but now organizations are starting to add compute logic to DIMMs to carry out computations close to the memory. We wonder if this will find niche applications, just like computational storage did.

AI continues to drive storage and compute. But we are starting to see some IoT applications of AI as well, and Jim thinks it won't take long for AI to become ubiquitous throughout IT, industry, and everyday devices, each with special-purpose AI models trained to perform very specific functions better and faster than general-purpose algorithms could.

One thing that's starting to happen is that SSD intelligence is moving out of the SSD (controllers) and to the host. We can see this with the use of Zoned Namespaces, but OCP is also pushing flexible data placement so hosts can provide hints as to where to place newly written data.

There was more to the show as well. It was interesting to see the continued investment in 3D NAND (1000 layers by 2030), SSD capacity (256TB SSDs coming in a couple of years), and some emerging tech like memristor development boards and a 3D memory idea, though it's a bit early to tell about that one.

Jim Handy, General Director, Objective Analysis

Jim Handy of Objective Analysis has over 35 years in the electronics industry including 20 years as a leading semiconductor and SSD industry analyst. Early in his career he held marketing and design positions at leading semiconductor suppliers including Intel, National Semiconductor, and Infineon.

A frequent presenter at trade shows, Mr. Handy is known for his technical depth, accurate forecasts, widespread industry presence and volume of publication.

He has written hundreds of market reports, articles for trade journals, and white papers, and is frequently interviewed and quoted in the electronics trade press and other media. 

He posts blogs at www.TheMemoryGuy.com and www.TheSSDguy.com.

152: GreyBeards talk agent-less data security with Jonathan Halstuch, Co-Founder & CTO, RackTop Systems

Sponsored By:

Once again we return to our ongoing series with RackTop Systems and their Co-Founder & CTO, Jonathan Halstuch (@JAHGT). This time we discuss how agent-less, storage-based security works and how it can help secure organizations with (IoT) endpoints they may not control or can't deploy agents on. But agent-less security can also help organizations that do have security agents deployed across their endpoints. Listen to the podcast to learn more.

The challenge for enterprises with agent-based security is that not all endpoints support agents. Jonathan mentioned one healthcare customer with an older electron microscope that couldn't be modified. These older, outdated systems are often targeted by cyber criminals because they are seldom updated.

But even the newest IoT devices often can't be modified by the organizations that use them. Agent-less, storage-based security can be a final line of defense for any environment with IoT devices deployed.

But security exposures go beyond IoT devices. Agents can take manual effort to deploy and update, and as such, they are sometimes left undeployed or improperly configured.

The advantage of a storage-based, agent-less security approach is that it's always on and always present, because it sits in the middle of the data path and is updated by the storage vendor, where possible. Yes, not every organization may allow this, and for those organizations, storage security updates will also require manual effort.

Jonathan mentioned the term Data Firewall. I (a networking novice, at best) have always felt firewalls were a configuration nightmare.

But as we've discussed previously in our series, RackTop has a "learning" and an "active" mode. During learning, the system automatically configures application/user IO assessors to characterize normal IO activity. Once learning has completed, the RackTop systems in the environment understand what sorts of IO to expect from users/applications and can then flag anything outside normal IO patterns.

But even during "learning" mode, the system is actively monitoring for known malware signatures and other previously characterized bad actor IO. These assessors are always active.
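
To illustrate the learn-then-enforce idea, here is a conceptual sketch only, not RackTop's implementation; the per-client IO rates and the 3-sigma threshold are purely illustrative assumptions.

```python
from collections import defaultdict
from statistics import mean, stdev

class IOProfiler:
    """Toy illustration of a learn-then-enforce IO assessor (not RackTop's code)."""

    def __init__(self):
        self.history = defaultdict(list)   # per-client IO rates observed in learning mode
        self.enforcing = False

    def learn(self, client: str, io_rate: float) -> None:
        """Learning mode: record normal per-client IO rates."""
        self.history[client].append(io_rate)

    def enable_enforcement(self) -> None:
        self.enforcing = True

    def assess(self, client: str, io_rate: float) -> str:
        """Enforcement mode: flag IO that deviates far from the learned baseline."""
        baseline = self.history.get(client, [])
        if not self.enforcing or len(baseline) < 2:
            return "allow"
        mu, sigma = mean(baseline), stdev(baseline)
        # Flag anything more than 3 standard deviations above normal (illustrative threshold).
        return "flag" if io_rate > mu + 3 * sigma else "allow"

profiler = IOProfiler()
for rate in (100, 110, 95, 105):             # observed during learning mode
    profiler.learn("app-server-1", rate)
profiler.enable_enforcement()
print(profiler.assess("app-server-1", 104))  # -> allow
print(profiler.assess("app-server-1", 900))  # -> flag (e.g., a mass-encryption burst)
```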

Keith mentioned that most organizations run special jobs on occasion (quarterly, yearly) which might not have been characterized during learning. Jonathan said these will be flagged and may be halted (depending on RackTop's configuration). But authorized parties can easily approve that application's IO activity, using a web link provided in the storage security alert.

Once alerted, authorized personnel can allow that IO activity for a specific time period (say Dec-Jan), or just for a one-time event. When the time period expires, that sort of IO will be flagged again.

Some sophisticated customers have change control and may know, ahead of time, that end-of-quarter or end-of-year processing is coming up. If so, they can easily configure RackTop Systems, ahead of time, to authorize the application's IO activity. In this case there wouldn't be any interruption to the application.

With RackTop Systems, the security assessors are centrally located in the data path and are always operating. This has no dependency on your backend storage, such as SAN, cloud, hybrid storage, etc., or on any endpoint. If anything in your environment accesses data, those RackTop assessors will be active, checking IO activity and securing your data.

Jonathan Halstuch, Co-Founder and CTO, RackTop Systems

Jonathan Halstuch is the Chief Technology Officer and co-founder of RackTop Systems. He holds a bachelor's degree in computer engineering from Georgia Tech as well as a master's degree in engineering and technology management from George Washington University.

With over 20 years of experience as an engineer, technologist, and manager for the federal government, he provides organizations with the most efficient and secure data management solutions to accelerate operations while reducing the burden on admins, users, and executives.

151: GreyBeards talk AI (ML) performance benchmarks with David Kanter, Exec. Dir. MLCommons

Ray has known David Kanter (@TheKanter), Executive Director of MLCommons, for quite a while now and has been reporting on MLCommons MLPerf AI benchmark results for even longer. MLCommons releases new benchmark results each quarter, and this last week they released new Data Center Training (v3.0) and new Tiny Inferencing (v1.1) results. So, the GreyBeards thought it was time to get a view of what's new in AI benchmarking and what's coming later this year.

David has been around the startup community in the Bay Area for a while now. He started at MLPerf early on as a technical guru working on submissions and other tasks, and worked his way up to being the Executive Director/CEO. The big news this week from MLCommons is that they have introduced a new training benchmark and updated an older one. The new one simulates training GPT-3, and they also updated their Recommendation Engine benchmark. Listen to the podcast to learn more.

MLCommons is an industry association focused on supplying reproducible, verifiable benchmarks for machine learning (ML) and AI, which they call MLPerf benchmarks. Their benchmark suite includes a number of different categories such as data center training, HPC training, data center inferencing, edge inferencing, mobile inferencing, and finally tiny (IoT device) inferencing. David likes to say MLPerf benchmarks range from systems consuming megawatts (HPC, literally a supercomputer) to microwatts (Tiny) solutions.

The challenge holding AI benchmarking back early on was that a few industry players had done their own thing, but there was no way to compare one to another. MLCommons was born out of that chaos and sought to create a benchmarking regimen that any industry player could use to submit AI work activity, and that would allow customers to compare one solution to any other submission on a representative sample of ML model training and inferencing activity.

MLCommons has both an Open and a Closed class of submissions. For the Closed class, submissions must meet very strict criteria. These include using known open source AI models and data, hitting accuracy metrics for training and inferencing, and reporting a standard set of metrics for the benchmark, all of which is needed to create a repeatable and verifiable submission.

Their Open class is a way for any industry participant to submit whatever model they would like, to whatever accuracy level they want, and it’s typically used to benchmark new hardware, software or AI models.

As mentioned above, MLCommons training benchmarks use an accuracy specification that must be achieved to have a valid submission. Benchmarks also have to be run 3 times. All submissions list hardware (CPUs and accelerators) and software (AI framework), and these could range from 0 accelerators (i.e., CPU only, with no GPUs) to 1000s of GPUs.

The new GPT-3 benchmark is based on a very complex AI model that, until recently, seemed unlikely to ever be benchmarked. But apparently the developers at MLCommons (and their industry partners) have been working on this for some time now. In this round of results there were 3 cloud submissions and 4 on-prem submissions for GPT-3 training.

GPT-3, -3.5, and -4 are all OpenAI text transformer Large Language Models (LLMs), which power ChatGPT. GPT-3 has 175B parameters and was trained on TBs of data covering web crawls, book corpora, official documentation, code, etc. OpenAI said, at GPT-3's announcement, that it took over $10M and months to train.

The MLCommons GPT-3 benchmark is not a full training run of GPT-3 but starts from a training checkpoint trained on a subset of the data used for the original GPT-3 training. Checkpoints are used for long-running jobs (training sessions, weather simulations, fusion energy simulations, etc.) and copy all internal state of a job/system while it's running (ok, quiesced) at some interval (say every 8 hrs, 24 hrs, 48 hrs, etc.), so that in case of a failure one can restart the activity from the last checkpoint rather than from the beginning.
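
As a rough illustration of checkpointing in a long-running training loop, here is a generic sketch, not the MLPerf benchmark harness; the file name, interval, and step count are arbitrary assumptions.

```python
import os
import pickle

CKPT_PATH = "model.ckpt"     # hypothetical checkpoint file
CKPT_INTERVAL = 1000         # checkpoint every N steps (illustrative)

def save_checkpoint(step, model_state):
    """Persist the job's internal state so it can be resumed after a failure."""
    with open(CKPT_PATH, "wb") as f:
        pickle.dump({"step": step, "model_state": model_state}, f)

def load_checkpoint():
    """Resume from the last checkpoint if one exists, otherwise start from scratch."""
    if os.path.exists(CKPT_PATH):
        with open(CKPT_PATH, "rb") as f:
            return pickle.load(f)
    return {"step": 0, "model_state": {}}

ckpt = load_checkpoint()
model_state, start_step = ckpt["model_state"], ckpt["step"]
for step in range(start_step, 10_000):
    # ... one training step updating model_state would go here ...
    if step % CKPT_INTERVAL == 0:
        save_checkpoint(step, model_state)   # periodically persist internal state
```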

The MLCommons GPT-3 checkpoint has been trained on a 10B-token data set. The benchmark starts by loading this checkpoint and then trains on an even smaller subset of the GPT-3 data until it achieves the accuracy baseline.

Accuracy for text transformers is not as simple as for other models (correct image classification, object identification, etc.) and uses "perplexity". Hugging Face defines perplexity as "the exponentiated average negative log-likelihood of a sequence."
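
A tiny sketch of that definition follows; it is our own illustration of the formula, using made-up per-token probabilities rather than real model output.

```python
import math

# Probabilities the model assigned to each correct next token (made-up values).
token_probs = [0.20, 0.05, 0.50, 0.10]

# Perplexity = exp(average negative log-likelihood of the sequence).
avg_neg_log_likelihood = -sum(math.log(p) for p in token_probs) / len(token_probs)
perplexity = math.exp(avg_neg_log_likelihood)
print(round(perplexity, 2))  # lower perplexity means the model predicts the text better
```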

The 4 on-premises submissions for GPT-3 took from 45 minutes (768 NVIDIA H100 GPUs) to 442 minutes (64 Habana Gaudi2 accelerators) to train. The 3 cloud submissions all used NVIDIA H100 GPUs and ranged from 768 GPUs (@47 minutes to train) to 3584 GPUs (@11 min. to train).
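
As a back-of-the-envelope check on those cloud numbers (our own arithmetic, not an MLCommons metric), the total GPU-minutes consumed stay roughly flat as the cluster grows, which implies near-linear scaling:

```python
# Cloud GPT-3 training submissions cited above: (GPU count, minutes to train).
small_run = (768, 47)
large_run = (3584, 11)

gpu_minutes_small = small_run[0] * small_run[1]   # 36,096 GPU-minutes
gpu_minutes_large = large_run[0] * large_run[1]   # 39,424 GPU-minutes

# Scaling efficiency of the larger run relative to the smaller one (illustrative only).
efficiency = gpu_minutes_small / gpu_minutes_large
print(f"{efficiency:.0%}")   # roughly 92%, i.e., close to linear scaling
```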

Aside from data center training, MLCommons also released a new round of Tiny (IoT) inferencing benchmark results. These generally use smaller Arm processors and no GPUs, with much smaller AI models such as keyword spotting ("Hey Siri"), visual wake words (door opening), image classification, etc.

We ended our discussion with me asking David why there was no storage-oriented MLCommons benchmark. David said creating a storage benchmark for AI is much different from creating inferencing or training benchmarks. But MLCommons has taken this on and now has an MLCommons storage benchmark series that uses emulated accelerators.

At the moment, anyone with a storage system can submit an MLCommons storage benchmark result. After some time, MLCommons will only allow submissions from member companies, but early on it's open to all.

For their storage benchmarks, rather than using accuracy as the benchmark criterion, they use keeping the (emulated) accelerators X% busy. This way, storage's support of MLOps activities can be isolated from the training and inferencing.

The GreyBeards eagerly anticipate the first round of MLCommons storage benchmark results, hopefully coming out later this year.

150: GreyBeard talks Zero Trust with Jonathan Halstuch, Co-founder & CTO, RackTop Systems

Sponsored By:

This is another in our series of sponsored podcasts with Jonathan Halstuch (@JAHGT), Co-Founder and CTO of RackTop Systems. You can hear more in Episode #147 on ransomware protection and Episode #145 on proactive NAS security.

Zero Trust Architecture (ZTA) has been touted as the next level of security for a while now. As such, it spans all of IT infrastructure. But from a storage perspective, it’s all about the latest NFS and SMB protocols together with an extreme level of security awareness that infuses storage systems.

RackTop has, from the get-go, always focused on secure storage. ZTA with RackTop adds, on top of protocol logins, an understanding of what normal IO looks like for apps, users, and admins, and makes sure IO doesn't deviate from what it should be. We discussed some of this in Episode #145, but this podcast provides even more detail. Listen to the podcast to learn more.

ZTA starts by requiring all participants in an IT infrastructure transaction to mutually authenticate one another. In modern storage protocols this is done via protocol logins. Besides logins, ZTA can establish different timeouts to tell servers and clients when to re-authenticate.

Furthermore, ZTA doesn't just authenticate user/app/admin identity; it can also require that clients access storage only from authorized locations. That is, a client's location on the network and in servers is also authenticated and, when changed, triggers a system response.

Also, with ZTA, PBAC/ABAC (policy/attribute-based access controls) can be used to associate different files with different security policies. Above, we talked about authentication timeouts and location requirements, but PBAC/ABAC can also specify different authentication methods that need to be used.
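
As a rough sketch of the PBAC/ABAC idea, consider the following; it illustrates the concept only, not RackTop's policy engine, and the attributes and policy fields are invented for the example.

```python
# Hypothetical ABAC policy: attributes a request must satisfy to access a file class.
POLICIES = {
    "top-secret": {"clearance": "top-secret", "network_zone": "secure-enclave", "auth": "mfa"},
    "unclassified": {"auth": "password"},
}

def access_allowed(file_class: str, request_attrs: dict) -> bool:
    """Grant access only if every attribute required by the policy matches the request."""
    required = POLICIES.get(file_class, {})
    return all(request_attrs.get(k) == v for k, v in required.items())

request = {"clearance": "secret", "network_zone": "secure-enclave", "auth": "mfa"}
print(access_allowed("top-secret", request))                  # False: clearance doesn't match
print(access_allowed("unclassified", {"auth": "password"}))   # True
```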

RackTop Systems does all of that and more. But where RackTop really differs from most other storage is that it supports two modes of operation: an observation mode and an enforcement mode. During observation mode, the system observes all the IO a client performs to characterize its IO history.

Even during observation mode, RackTop has been factory pre-trained with what bad actor IO has looked like in the past. This includes all known ransomware IO, unusual user IO, unusual admin IO, etc. During observation mode, if it detects any of this bad actor IO, it will flag and report it. For example, admins performing high read/write IO to multiple files will be detected as abnormal, flagged, and reported.

But after some time in observation mode, admins can change RackTop into enforcement mode. At this point, the system understands what normal client IO looks like and if anything abnormal occurs, the system detects, flags and reports it.

RackTop customers have many options as to what the system will do when abnormal IO is detected. This can range from completely shutting down client IO to just reporting and logging it.

Jonathan mentioned that RackTop is widely installed in multi-level security environments. For example, in many government agencies, it's not unusual to have top secret, secret, and unclassified information, each with their own PBAC/ABAC enforcement criteria.

RackTop has a long history of supporting storage for these extreme security environments. As such, customers should be well assured that their data can be as secure as any data in national government agencies.

Jonathan Halstuch, Co-Founder & CTO RackTop Systems

Jonathan Halstuch is the Chief Technology Officer and co-founder of RackTop Systems. He holds a bachelor's degree in computer engineering from Georgia Tech as well as a master's degree in engineering and technology management from George Washington University.

With over 20 years of experience as an engineer, technologist, and manager for the federal government, he provides organizations with the most efficient and secure data management solutions to accelerate operations while reducing the burden on admins, users, and executives.