153: GreyBeards annual FMS2023 wrapup with Jim Handy, General Director, Objective Analysis

Jim Handy, General Director, Objective Analysis, and I were at the FMS 2023 conference in Santa Clara last week, and there were a number of interesting discussions at the show. I was particularly struck by the progress being made on the CXL front. I was just a participant, but Jim moderated and sat on many panels during the show, and he brings a much deeper understanding of the technologies. Listen to the podcast to learn more.

We asked for some of Jim’s top takeaways from the show.

Jim thought that the early Tuesday morning market sessions on the state of the flash, memory, and storage markets were particularly well attended. In past years, these first sessions of the day drew much smaller crowds.

The flash and memory markets both seem to be in a downturn. As the great COVID infrastructure buying spree ends, demand seems to have collapsed. As always, these and other markets go through cycles: from a downturn, where demand collapses and prices fall, to price stability, as demand starts to pick up, to a supply-constrained phase, where demand can't be satisfied. The general consensus seems to be that we may see a turn in the market by the middle of next year.

CXL is finally catching on. At the show there were a couple of vendors showing memory extension/expansion products using CXL 1.1, as well as CXL switches (extenders) based on CXL 2.0. The challenge with memory today, in this 100+ core CPU world, is trying to keep core-to-memory bandwidth flat while keeping up with application memory demand. CXL was built to deal with both of these concerns.

CXL adds latency, but it's very similar to the latency of dual CPUs accessing shared memory. Jim mentioned that Microsoft Azure actually checked whether they could handle CXL latencies by testing with dual-socket systems.

There was a lot of continuing discussion on new and emerging memory technologies, and Jim Handy mentioned that their team has just published a new report on this. He also mentioned that CXL could be the killer app for all these new memory technologies, as it can easily handle multiple different technologies with different latencies.

The next big topic was chiplets and the rise of UCIe (Universal Chiplet Interconnect Express) links. AMD led the way with their chiplet-based, multi-core CPU chips, but Intel is there now as well.

Chiplets are becoming the standard way to create significant functionality on silicon. But the problem up to now has been that every vendor had their own proprietary chiplet interconnect.

UCIe is meant to end proprietary interconnects. With UCIe, companies can focus on developing the best chiplet functionality and major manufacturers can pick and choose whichever chiplet offers the best bang for their buck and be assured that it will all talk well over UCIe. Or at least that’s the idea.

Computational storage is starting to become mainstream. Although everyone thought these devices would become general-purpose compute engines, they seem to be having more success doing specialized (data) compute services like compression, transcoding, ransomware detection, etc. They are being adopted by companies that need to do that type of work.

Computational memory is becoming a thing. Yes, memristor, PCM, MRAM, etc. have always offered computational capabilities in their technologies, but now organizations are starting to add compute logic to DIMMs to carry out computations close to the memory. We wonder if this will find niche applications just like computational storage did.

AI continues to drive storage and compute. But we are starting to see some IoT applications of AI as well, and Jim thinks it won't take long to see AI become ubiquitous throughout IT, industry, and everyday devices, each with special-purpose AI models trained to perform very specific functions better and faster than general-purpose algorithms could.

One thing that's starting to happen is that SSD intelligence is moving out of the SSD (controllers) and to the host. We can see this with the use of Zoned Namespaces (ZNS), but OCP is also pushing Flexible Data Placement (FDP), so hosts can provide hints as to where to place newly written data.
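
To make the hinting idea concrete, here's a minimal, purely conceptual Python sketch. It is not the actual NVMe ZNS or FDP interface; the `Lifetime` hint classes and `HintedDevice` are hypothetical names we made up to illustrate a host tagging writes so the device can group data with similar lifetimes.

```python
# Conceptual sketch only -- not the real NVMe ZNS or FDP APIs.
# Illustrates the host hinting where data should land, so the device can
# co-locate data with similar lifetimes and reduce garbage collection.

from dataclasses import dataclass, field
from enum import Enum

class Lifetime(Enum):          # hypothetical placement-hint classes
    HOT = 1                    # frequently rewritten (e.g., metadata)
    WARM = 2                   # periodically rewritten (e.g., logs)
    COLD = 3                   # written once, rarely touched (e.g., backups)

@dataclass
class PlacementGroup:
    blocks: list = field(default_factory=list)

class HintedDevice:
    """Groups writes by host-supplied lifetime hint, one group per hint."""
    def __init__(self):
        self.groups = {lt: PlacementGroup() for lt in Lifetime}

    def write(self, data: bytes, hint: Lifetime) -> None:
        # Honor the hint by appending to that hint's group, so blocks with
        # similar lifetimes end up physically co-located.
        self.groups[hint].blocks.append(data)

dev = HintedDevice()
dev.write(b"superblock-update", Lifetime.HOT)
dev.write(b"nightly-backup-chunk", Lifetime.COLD)
```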

There was more to the show as well. It was interesting to see the continued investment in 3D NAND (1000 layers by 2030), SSD capacity (256TB SSD coming in a couple of years), and some emerging tech like Memristor development boards and a 3D memory idea, but it’s a bit early to tell about that one.

Jim Handy, General Director, Objective Analysis

Jim Handy of Objective Analysis has over 35 years in the electronics industry including 20 years as a leading semiconductor and SSD industry analyst. Early in his career he held marketing and design positions at leading semiconductor suppliers including Intel, National Semiconductor, and Infineon.

A frequent presenter at trade shows, Mr. Handy is known for his technical depth, accurate forecasts, widespread industry presence and volume of publication.

He has written hundreds of market reports, articles for trade journals, and white papers, and is frequently interviewed and quoted in the electronics trade press and other media. 

He posts blogs at www.TheMemoryGuy.com and www.TheSSDguy.com.

151: GreyBeards talk AI (ML) performance benchmarks with David Kanter, Exec. Dir. MLCommons

Ray's known David Kanter (@TheKanter), Executive Director, MLCommons, for quite a while now and has been reporting on MLCommons MLPerf AI benchmark results for even longer. MLCommons releases new benchmark results each quarter, and this last week they released new Data Center Training (v3.0) and new Tiny Inferencing (v1.1) results. So, the GreyBeards thought it was time to get a view of what's new in AI benchmarking and what's coming later this year.

David's been around the Bay Area startup community for a while now. He started at MLPerf early on as a technical guru, working on submissions and other tasks, and worked his way up to being the Executive Director/CEO. The big news this week from MLCommons is that they have introduced a new training benchmark, which simulates training GPT-3, and updated their Recommendation Engine benchmark. Listen to the podcast to learn more.

MLCommons is an industry association focused on supplying reproducible, verifiable benchmarks for machine learning (ML) and AI, which they call MLPerf benchmarks. Their benchmark suite includes a number of different categories, such as data center training, HPC training, data center inferencing, edge inferencing, mobile inferencing, and finally tiny (IoT device) inferencing. David likes to say MLPerf benchmarks range from systems consuming megawatts (HPC, literally a supercomputer) to microwatts (Tiny) solutions.

The challenge holding AI benchmarking back early on was that a few industry players had done their own thing, but there was no way to compare one to another. MLCommons was born out of that chaos and sought to create a benchmarking regimen that any industry player could use to submit AI work activity, and that would allow customers to compare their solution to any other submission on a representative sample of ML model training and inferencing activity.

MLCommons has both an Open and a Closed class of submissions. The Closed class has very strict criteria for submission. These include known open source AI models and data, accuracy metrics that training and inferencing need to hit, and reporting a standard set of metrics for the benchmark. All of this needs to be done in order to create a repeatable and verifiable submission.

Their Open class is a way for any industry participant to submit whatever model they would like, to whatever accuracy level they want, and it’s typically used to benchmark new hardware, software or AI models.

As mentioned above, MLCommons training benchmarks use an accuracy specification that must be achieved to have a valid submission. Benchmarks also have to be run 3 times. All submissions list hardware (CPUs and accelerators) and software (AI framework), and these can range from 0 accelerators (e.g., CPU only with no GPUs) to 1000s of GPUs.

The new GPT-3 benchmark covers a very complex AI model that, until recently, seemed unlikely to ever be benchmarked. But apparently the developers at MLCommons (and their industry partners) have been working on this for some time now. In this round of results there were 3 cloud submissions and 4 on-prem submissions for GPT-3 training.

GPT-3, -3.5, and -4 are all OpenAI models, which power their ChatGPT text transformer Large Language Model (LLM) service. GPT-3 has 175B parameters and was trained on TBs of data covering web crawls, book corpora, official documentation, code, etc. OpenAI said, at GPT-3's announcement, that it took over $10M and months to train.

The MLCommons GPT-3 benchmark is not a full training run of GPT-3 but uses a training checkpoint trained on a subset of the data used for the original GPT-3 training. Checkpoints are used for long-running jobs (training sessions, weather simulations, fusion energy simulations, etc.) and copy all internal state of a job/system while it's running (ok, quiesced) at some interval (say every 8 hrs, 24 hrs, 48 hrs, etc.), so that in case of a failure, one can just restart the activity from the last checkpoint rather than from the beginning.
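
To illustrate the checkpoint/restart pattern described above, here's a generic Python sketch of our own, not the MLCommons harness or any real training framework: save the job's state at an interval, and resume from the last checkpoint after a failure.

```python
# Generic checkpoint/restart sketch -- not the MLCommons benchmark harness.
import os
import pickle

CKPT_PATH = "job.ckpt"      # hypothetical checkpoint file
CKPT_EVERY = 1000           # checkpoint interval, in steps

def save_checkpoint(state: dict) -> None:
    # Write to a temp file first, then rename, so a crash mid-write
    # never corrupts the last good checkpoint.
    tmp = CKPT_PATH + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, CKPT_PATH)

def load_checkpoint() -> dict:
    if os.path.exists(CKPT_PATH):
        with open(CKPT_PATH, "rb") as f:
            return pickle.load(f)
    return {"step": 0, "model_state": None}   # fresh start

state = load_checkpoint()                      # resume from last checkpoint, if any
while state["step"] < 100_000:
    state["model_state"] = f"weights@{state['step']}"   # stand-in for real training work
    state["step"] += 1
    if state["step"] % CKPT_EVERY == 0:
        save_checkpoint(state)                 # restartable from here if the job fails
```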

The MLCommons GPT-3 checkpoint has been trained on a 10B token data set. The benchmark starts by loading this checkpoint and then trains on an even smaller subset of the GPT-3 data until it achieves the accuracy baseline.

Accuracy for text transformers is not as simple as it is for other models (correct image classification, object identification, etc.), so the benchmark uses “perplexity”. Hugging Face defines perplexity as “the exponentiated average negative log-likelihood of a sequence.”
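
Following that definition, here's a quick sketch of the arithmetic. The token probabilities are made up for illustration; this is not the benchmark's actual evaluation code.

```python
# Perplexity = exp(average negative log-likelihood of the sequence).
# The probabilities below are invented for illustration only.
import math

token_probs = [0.25, 0.10, 0.60, 0.05]   # model's probability for each actual next token

nll = [-math.log(p) for p in token_probs]          # negative log-likelihood per token
perplexity = math.exp(sum(nll) / len(nll))         # exponentiated average NLL

print(f"perplexity = {perplexity:.2f}")   # lower is better; 1.0 would be a perfect model
```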

The 4 on-premises submissions for GPT-3 took from 45 minutes (768 NVIDIA H100 GPUs) to 442 minutes (64 Habana Gaudi2 accelerators) to train. The 3 cloud submissions all used NVIDIA H100 GPUs and ranged from 768 GPUs (47 minutes to train) to 3584 GPUs (11 minutes to train).

Aside from data center training, MLCommons also released a new round of Tiny (IoT) inferencing benchmarks. These generally use smaller ARM processors and no GPUs, with much smaller AI models for tasks such as keyword spotting (“Hey Siri”), visual wake words (door opening), image classification, etc.

We ended our discussion with me asking David why there was no storage-oriented MLCommons benchmark. David said creating a storage benchmark for AI is much different from creating inferencing or training benchmarks. But MLCommons has taken this on and now has an MLCommons storage benchmark series that uses emulated accelerators.

At the moment, anyone with a storage system can submit MLCommons storage benchmark results. After some time, MLCommons will only allow submissions from member companies, but early on it's open to all.

For their storage benchmarks, rather than using accuracy as the benchmark criterion, they require keeping the (emulated) accelerators X% busy. This way, the storage support of MLOps activity can be isolated from the training and inferencing itself.
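
A rough sketch of that pass/fail idea follows. This is our own simplification, not the MLPerf Storage code, and it assumes fetch and compute are not overlapped: the emulated accelerator needs a fixed compute time per batch, and the storage system passes only if it keeps the accelerator busy at least X% of the time.

```python
# Simplified illustration of "keep the emulated accelerator X% busy" --
# not the actual MLPerf Storage benchmark logic.

def run_epoch(storage_fetch_times: list[float],
              accel_compute_time: float,
              required_util: float = 0.90) -> bool:
    """storage_fetch_times: seconds the storage took to deliver each batch.
    accel_compute_time: seconds the emulated accelerator needs per batch."""
    busy = idle = 0.0
    for fetch in storage_fetch_times:
        # If the next batch arrives late, the accelerator sits idle for the difference.
        idle += max(0.0, fetch - accel_compute_time)
        busy += accel_compute_time
    utilization = busy / (busy + idle)
    return utilization >= required_util   # pass only if storage kept the accelerator busy enough

print(run_epoch([0.09, 0.11, 0.10, 0.25], accel_compute_time=0.10))  # False: one slow fetch hurts
```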

The GreyBeards eagerly anticipate the first round of MLCommons storage benchmark results, hopefully coming out later this year.

150: GreyBeards talk Zero Trust with Jonathan Halstuch, Co-founder & CTO, RackTop Systems

Sponsored By:

This is another in our series of sponsored podcasts with Jonathan Halstuch (@JAHGT), Co-Founder and CTO of RackTop Systems. You can hear more in Episode #147 on ransomware protection and Episode #145 on proactive NAS security.

Zero Trust Architecture (ZTA) has been touted as the next level of security for a while now. As such, it spans all of IT infrastructure. But from a storage perspective, it’s all about the latest NFS and SMB protocols together with an extreme level of security awareness that infuses storage systems.

RackTop has, from the get-go, always focused on secure storage. ZTA with RackTop adds, on top of protocol logins, an understanding of what normal IO looks like for apps, users, and admins, and makes sure IO doesn't deviate from what it should be. We discussed some of this in Episode #145, but this podcast provides even more detail. Listen to the podcast to learn more.

ZTA starts by requiring all participants in an IT infrastructure transaction to mutually authenticate one another. In modern storage protocols this is done via protocol logins. Besides logins, ZTA can establish different timeouts to tell servers and clients when to re-authenticate.

Furthermore, ZTA doesn't just authenticate user/app/admin identity; it can also require that clients access storage only from authorized locations. That is, a client's location on the network and in servers is also authenticated and, when it changes, triggers a system response.

Also, with ZTA, PBAC/ABAC (policy/attribute based access controls) can be used to associate different files with different security policies. Above we talked about authentication timeouts and location requirements, but PBAC/ABAC can also specify different authentication methods that must be used.
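
As a toy illustration of attribute-based checks like these (our own example, not RackTop's policy engine), a policy might combine file classification, network location, and how recently the client authenticated. All labels, subnets, and limits below are hypothetical.

```python
# Toy ABAC-style check -- illustrative only, not RackTop's actual implementation.
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    source_subnet: str
    auth_age_sec: float      # time since the client last authenticated
    file_label: str          # e.g., "secret", "unclassified"

# Hypothetical per-label policy: allowed subnets and max re-authentication age.
POLICY = {
    "secret":       {"subnets": {"10.1.0.0/24"}, "max_auth_age": 15 * 60},
    "unclassified": {"subnets": {"10.1.0.0/24", "10.2.0.0/24"}, "max_auth_age": 8 * 3600},
}

def allowed(req: Request) -> bool:
    rule = POLICY.get(req.file_label)
    if rule is None:
        return False                                   # unknown label: deny by default
    if req.source_subnet not in rule["subnets"]:
        return False                                   # wrong network location
    if req.auth_age_sec > rule["max_auth_age"]:
        return False                                   # stale login: force re-authentication
    return True

print(allowed(Request("alice", "10.2.0.0/24", 60, "secret")))   # False: wrong subnet for secret data
```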

RackTop Systems does all of that and more. But where RackTop really differs from most other storage is that it supports two modes of operation: an observation mode and an enforcement mode. During observation mode, the system observes all the IO a client performs to characterize its IO history.

Even during observation mode, RackTop has been factory pre-trained with what bad actor IO has looked like in the past. This includes all known ransomware IO, unusual user IO, unusual admin IO, etc. During observation mode, if it detects any of this bad actor IO, it will flag and report it. For example, admins performing high read/write IO to multiple files will be detected as abnormal, flagged, and reported.

But after some time in observation mode, admins can change RackTop into enforcement mode. At this point, the system understands what normal client IO looks like and if anything abnormal occurs, the system detects, flags and reports it.

RackTop customers have many options as to what the system will do when abnormal IO is detected. This can range from completely shutting down client IO to just reporting and logging it.
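
A bare-bones sketch of that two-mode idea, as we understand it (our own simplification, not RackTop's assessors): build a per-client baseline while observing, then flag IO that deviates from it once enforcement is switched on. The 3-sigma test and ops/sec metric are stand-ins for whatever a real product uses.

```python
# Simplified observation/enforcement sketch -- not RackTop's detection logic.
from collections import defaultdict
from statistics import mean, pstdev

class IOMonitor:
    def __init__(self):
        self.history = defaultdict(list)   # client -> observed IO rates (ops/sec)
        self.enforcing = False

    def observe(self, client: str, io_rate: float) -> str:
        if not self.enforcing:
            self.history[client].append(io_rate)        # observation mode: just learn
            return "learning"
        baseline = self.history[client]
        mu, sigma = mean(baseline), pstdev(baseline) or 1.0
        if abs(io_rate - mu) > 3 * sigma:                # crude "abnormal IO" test
            return "flagged"                             # a real system could also halt IO here
        return "normal"

mon = IOMonitor()
for r in [100, 110, 95, 105, 98]:
    mon.observe("app1", r)                # build app1's baseline
mon.enforcing = True                      # switch to enforcement mode
print(mon.observe("app1", 5000))          # flagged: looks like mass read/encrypt activity
```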

Jonathan mentioned that RackTop is widely installed in multi-level security environments. For example, in many government agencies, it's not unusual to have top secret, secret, and unclassified information, each with its own PBAC/ABAC enforcement criteria.

RackTop has a long history of supporting storage for these extreme security environments. As such, customers should be well assured that their data can be as secure as any data in national government agencies.

Jonathan Halstuch, Co-Founder & CTO RackTop Systems

Jonathan Halstuch is the Chief Technology Officer and co-founder of RackTop Systems. He holds a bachelor's degree in computer engineering from Georgia Tech as well as a master's degree in engineering and technology management from George Washington University.

With over 20 years of experience as an engineer, technologist, and manager for the federal government, he provides organizations the most efficient and secure data management solutions to accelerate operations while reducing the burden on admins, users, and executives.

145: GreyBeards talk proactive NAS security with Jonathan Halstuch, CTO & Co-Founder, RackTop Systems

Sponsored By:

We've known about RackTop Systems since episode 84 and have been watching them ever since. On this episode we, once again, talk with Jonathan Halstuch (@JAHGT), CTO and Co-Founder, RackTop Systems.

RackTop was always very security oriented but lately they have taken this to the next level. As Jonathan says on the podcast, historically security has been mostly a network problem but since ransomware has emerged, security is now often a data concern too. The intent of proactive NAS security is to identify and thwart bad actors before they impact data, rather than after the fact. Listen to the podcast to learn more.

Proactive security for NAS storage includes monitoring user IO and administrator activity and looking for anomalies. RackTop has the ability (via config options) to halt IO activity when things look wrong, that is, when user/application IO looks different from what has been seen in the past. They also examine admin activity, a popular vector for ransomware attacks. RackTop's IO/admin activity scanning is done in real time, as IO is processed and admin commands are received.

The customer gets to decide how far to take this. The challenge with automatically halting access is false positives, when, say, a new application starts taking off. Security admins must have an easy way to see and understand what was anomalous and what was not, and to quickly let that user/application return to normal activity or shut it down.

In addition to stopping access, the system can also just report it to admins/security staff. Moreover, the system can automatically take snapshots of data when anomalous behavior is detected, to give admins and security staff a point-in-time view into the data before the bad behavior occurred.
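
To make the configurable-response idea concrete (again just a sketch of our own, not RackTop's code), a handler might snapshot first and then either block or merely log, depending on the admin's setting. The `Response` options and snapshot naming are hypothetical.

```python
# Sketch of configurable responses to anomalous IO -- illustrative only.
from enum import Enum
from datetime import datetime, timezone

class Response(Enum):
    LOG_ONLY = "log"        # report to admins/security, keep serving IO
    BLOCK = "block"         # halt the offending client's IO

def handle_anomaly(client: str, policy: Response) -> None:
    # Always capture a point-in-time view of the data first, so security staff
    # can inspect the filesystem as it was before any further (possibly bad) IO.
    snap = f"snap-{client}-{datetime.now(timezone.utc):%Y%m%dT%H%M%SZ}"
    print(f"created snapshot {snap}")
    if policy is Response.BLOCK:
        print(f"blocking further IO from {client}")
    else:
        print(f"logging anomaly from {client} and alerting admins")

handle_anomaly("app1", Response.LOG_ONLY)
```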

RackTop Systems has a number of assessors that look for specific anomalous activity, used to detect and act to thwart malware. For example, an admin assessor looks at all admin operations to determine whether they are considered normal or not.

RackTop also supports special time-period access permissions. These provide temporary, time-dependent, unusual access rights to data for admins, users, or applications that would normally be considered a breach, such as an admin copying lots of data or moving and deleting data. They are for situations that crop up where mass data deletion, movement, or copying would be valid. When the time-period access permission elapses, the system goes back to monitoring for anomalous behavior.

We talked about the overhead of doing all this scanning and detection in real time and how that may impact system IO performance. For other storage vendors, these sorts of activities are often done with standalone appliances, which of course add additional IO to a storage system to do offline scans.

Jonathan said, with recent Intel Xeon multi-core processors, they can readily afford the CPU cycles/cores required to do their scanning during IO processing, without sacrificing IO performance.

RackTop also supports a number of reports to show system configured data/user/application access rights as well as what accesses have occurred over time. Such reports offer admin/security teams visibility into data access rights and usage.

RackTop can be deployed in hybrid disk-flash solutions, as storage software in public clouds, in an HCI solution, or in edge environments that replicate back to core data centers. It can also be used as a backup/archive data target for backup systems. RackTop Systems NAS supports CIFS 1.0 through SMB 3.1.1 and NFSv3 through v4.2.

RackTop Systems has customers in national government agencies, security-sensitive commercial sectors, state governments, healthcare, and just about any organization subject to ransomware attacks on a regular basis. Which, nowadays, is pretty much every IT organization on the planet.

Jonathan Halstuch, CTO & Co-Founder, RackTop Systems

Jonathan Halstuch is the Chief Technology Officer and co-founder of RackTop Systems. He holds a bachelor’s degree in computer engineering from Georgia Tech as well as a master’s degree in engineering and technology management from George Washington University.

With over 20 years of experience as an engineer, technologist, and manager for the federal government, he provides organizations the most efficient and secure data management solutions to accelerate operations while reducing the burden on admins, users, and executives.

141: GreyBeards annual 2022 wrap-up podcast

Well, it has been another year and time for our annual year-end wrap-up. Since COVID hit, every year has certainly been interesting. This year we saw the return of in-person conferences, which was a welcome change from the COVID lockdowns. We are very glad to start seeing everybody again.

From the tech standpoint, the big news this year was CXL. As everyone should recall, CXL is a new-ish PCIe hardware and protocol standard that supports larger memory sitting out on a PCIe bus and, in the future, shared memory between servers. All this is to enable a new wave of memory-based computing. We spent probably half our time discussing CXL and its impact on IT.

The other major topic was the Cloud Native ecosystem. In the past all we talked about was K8s, but nowadays the ecosystem that surrounds it is almost as important as K8s itself. The final topic was a bit of a shock earlier this year, and yes, it was Broadcom's acquisition of VMware. Jason and I spent our Explore podcast talking about it (see our 137: VMware Explore wrap-up). Keith has high hopes that the EU will shut it down, but the jury's still out on that one. Listen to the podcast to learn more.

As for CXL, it turns out that AMD has just released full support for CXL hardware and protocols with their latest round of CPU chips. But the new AMD CPUs only support DDR5 memory (something about there's only so much logic one can fit on a chip…), which means all those DDR4 DIMMs out in the wild need somewhere to land. CXL could supply a new lease on life for DDR4 DIMMs.

And it's not just about shared memory or increased memory sizes; CXL can also provide a tiered memory hierarchy, with gobs of flash behind memory DIMMs (see: 136: FMS2022 wrap up …). So now it's no longer a TB or ten of server memory but potentially 100s of TBs. What this means for SAP HANA, AWS Aurora, and other heavy-memory solutions has yet to play out.

Cloud Native won. We see this in the increasing adoption of containers and K8s in the enterprise, cloud and just about anywhere IT happens these days. But the ecosystem surrounding K8s is chaos.

Over time, many of these ecosystem solutions will die off, be purchased, or be consolidated, but in the meantime, it's entirely too confusing. Red Hat's OpenShift is one answer and VMware's Tanzu is another. And of course all the clouds have their own packaged K8s solutions. But just to cover their bets, everyone also supports native K8s and just about every software package that works with it. So, the K8s ecosystem is in a state of flux and may take time to become a stable set of tools usable by enterprise IT.

Finally, Broadcom’s acquisition of VMware has everyone up in arms. Customers are concerned the R&D juggernaut that VMware has been, since its very beginning, will be jettisoned in favor of profits. And HCI vendors that always felt Dell EMC had an unfair advantage will all look at Broadcom in a similar light.

Keith says there's a major difference in how US regulators view an acquisition and how EU regulators view one. According to Keith, the EU views acquisitions in terms of how they help or hurt the customer, while US regulators view acquisitions based on how they help or hurt the competition. We will have to wait and see how this all plays out for Broadcom-VMware.

On the other hand, speaking of competition, Nutanix seems to be feeling the heat as well. Rumors are it's up for sale. Who will want it, and how the regulators view both of these acquisitions, may be an interesting story for 2023.

2023 looks to be another year of transition for enterprise IT. The cloud players all seem to be coming around to the view that they can’t be all things to all (IT) people. And the enterprise vendors are finally seeing some modicum of staying power in the face of a relentless push to the cloud. How this plays out over the next few years will be of major interest to everybody.

Happy New Year from the GreyBeards!

Keith Townsend, The CTO Advisor

Keith Townsend (@CTOAdvisor) is an IT thought leader who has written articles for many industry publications, interviewed many industry heavyweights, worked with Silicon Valley startups, and engineered cloud infrastructure for large government organizations. Keith is the co-founder of The CTO Advisor, blogs at Virtualized Geek, and can be found on LinkedIn.

Jason Collier, Principal Member of Technical Staff, AMD

Jason Collier (@bocanuts) is a long-time friend, technical guru, and innovator who has over 25 years of experience as a serial entrepreneur in technology. He was founder and CTO of Scale Computing and has been an innovator in the field of hyperconvergence and an expert in virtualization, data storage, networking, cloud computing, data centers, and edge computing for years. He's on LinkedIn.