137: GreyBeards talk VMware Explore 2022 Wrap-up

Jason Collier, Principal Member of Technical Staff, AMD (@bocanuts), a current GreyBeardsOnStorage co-host, and I both attended VMware Explore 2022 this past week, and we recorded a podcast discussing VMware’s announcements on the show floor. It turns out that Keith Townsend, TheCTOAdvisor (@thectoadvisor), had brought his Airstream studio and was exhibiting on the show floor. Keith kindly offered the use of his studio to record the podcast.

This one is a video. Let us know what you think. I clearly need a cowboy hat and Jason said (off camera) that I’m showing more grey in my beard than before. I take that as a compliment here.

Here’s the news as we saw it:

  • vSphere 8 – has a number of new features, but the one we thought most important was the GA of Project Monterey. This supports new DPUs that now run ESXi outboard from the CPU. They are able to offload lots of the CPU’s networking cycles to the DPU, freeing up those cycles for other (more important) work. vSphere 8 supports two DPUs now: the NVIDIA (Mellanox) BlueField(-2?) DPU and the AMD (Pensando) DPU. AMD recently purchased Pensando, and Jason seemed to know an awful lot about this tech. VMware also announced support for concurrent ESXi upgrades, which now allows upgrading ESXi running in DPUs while hosts and clusters continue to operate. Finally, the other item of interest was that vSphere is now more API driven. I guess it’s only a matter of time before all VMware functionality is API driven, to make it even more cloud-like.
  • vSAN 8 – also has a number of new features. The first we discussed was a faster data path. This means more IOPS, more bandwidth, and lower latency for IOs. Next, vSAN 8 now supports single-tier storage pools, which no longer require a caching layer. This should also speed up IO operations (as long as the single tier is at least as fast as the old caching layer). They also announced faster snapshots. Apparently this has been a problem in the past, and they’ve done the work to speed it up considerably. Jason mentioned an AMD open source VM migration tool (from somebody else’s x86 CPUs to AMD’s) that depends a lot on vSAN snapshots.
  • Cloud Flex Storage – mentioned at the show but not well explained. Jason and I speculated that this is an internal storage service available for Cloud Foundation users on AWS, where customers could subscribe to storage as-a-service in much smaller increments (maybe even GB/month) rather than standing up more vSAN hosts to increase storage.
  • NetApp FSx (ONTAP) storage – along the same lines, VMware announced support for NetApp’s FSx as yet another storage option for Cloud Foundation users on AWS, supplying yet another storage-as-a-service option for this environment.
  • Cloud Flex Compute – also mentioned at the show was their new Compute-as-a-Service for Cloud Foundation users on AWS. This way users can subscribe to more or less compute on an as-needed basis rather than having to spin up new ESXi hosts. I later found out this allows users to run a single VM and pay for it on a subscription basis.
  • Tanzu Application Platform (TAP) – is a new VMware-supplied (and supported) “development experience” for K8s on vSphere. Note, it doesn’t include any advanced Tanzu services such as Tanzu K8s Grid (TKG), so it’s a true bare-bones DevOps environment.
  • Tanzu K8s Operations (TKO) – another new Tanzu-based service, which offers operations complete control over the Tanzu services running on vSphere. Note, Tanzu Mission Control (TMC) is not part of TKO.
  • Aria management – VMware rebranded vRealize and CloudHealth, which now come in 3 bundles: Aria Cost (CloudHealth+), Aria Operations, and Aria Automation. These are all built on top of Aria Graph, which graphs all the nodes in your VMware clusters with all their connections, so that Aria management can traverse this graph to find out what’s where. On top of Aria Graph are Aria Hub, Aria Insights, and Aria Guardrails (sort of like providing boundaries within which services can be deployed).

They also announced Ransomware Recovery [changed 7Sep22, the Eds] as a Service, which builds on VMware’s DR-aaS announced last year, and that Tanzu now works with Red Hat OpenShift.

We also discussed the show. I heard somewhere there were 10K people there; Jason heard somewhere between 6K and 9K. In any case, much smaller than VMworlds prior to Covid (~25K). And of course the rebranding of the show seemed counter-intuitive at best.

The show floor was much smaller than usual (notwithstanding Keith’s Airstream RV exhibit), and a number of storage vendors weren’t at the show. There was also less hardware on the show floor. This could be a Covid thing, but there were just as many mini-whiteboards/classrooms per large exhibitor, so we don’t think it was because of Covid.

But the elephant in the room was Broadcom’s acquisition of VMware. At one of the analyst briefings I asked an exec about attrition. He made a couple of comments but in the end said VMware has been bought and sold before and has always come out of it in better shape. This will be no different.

That’s about all from the show.

And thanks again to Keith and his crew for lending us his studio to record the show. It’s been a while since I’ve seen an RV on a show floor. Keith seemed to have a ball with it.

Tell us how you like our video. If everyone is for it, we could do something like this with a Zoom (in this case Zencastr) recording, or just try this again at the next joint conference.

Jason Collier, Principal Member of Technical Staff at AMD

Jason Collier (@bocanuts) is a long-time friend, technical guru, and innovator who has over 25 years of experience as a serial entrepreneur in technology.

He was founder and CTO of Scale Computing and has been an innovator in the field of hyperconvergence and an expert in virtualization, data storage, networking, cloud computing, data centers, and edge computing for years.

He’s on LinkedIn. He’s currently working with AMD on new technology, and he has been a GreyBeards on Storage co-host since the beginning of 2022.

135: Greybeard(s) talk file and object challenges with Theresa Miller & David Jayanathan, Cohesity

Sponsored By:

I’ve known Theresa Miller, Director of Technology Advocacy Group at Cohesity, for many years now, and I just met David Jayanathan (DJ), Cohesity Solutions Architect, during the podcast. Theresa could easily qualify as an old timer if she wished, and DJ was very knowledgeable about traditional file and object storage.

We had a wide ranging discussion covering many of the challenges present in today’s file and object storage solutions. Listen to the podcast to learn more.

IT is becoming more distributed, partly due to moving to the cloud. But now it’s moving to multiple clouds, and on prem has never really gone away. Further, the need for IT to support a remote workforce is forcing data, and the systems that use it, to move as well.

Customers need storage that can reside anywhere. Their data must be migratable from on prem to cloud(s) and back again. Traditional storage may be able to migrate from one location to a select few others, or replicate to another location (with the same storage systems present), but migration to and from the cloud is just not easy enough.

Moreover, traditional storage management has not kept up with the widely dispersed data world we live in. With traditional storage, customers may require different products to manage their storage depending on where the data resides.

Yes, having storage that performs and provides data access, resilience, and integrity is important, but that alone is just not enough anymore.

And to top it all off, the issues surrounding data security today have become just too complex for traditional storage to solve alone anymore. One needs storage, data protection, and ransomware scanning/detection/protection that operate together, as one solution, to deal with IT security in today’s world.

Ransomware has rapidly become the critical piece of this storage puzzle needing to be addressed. It’s a significant burden on every IT organization today. Some groups are getting hit every day, others even more frequently. Traditional storage has very limited capabilities, outside of snapshots and replication, to deal with this ever-increasing threat.

To defeat ransomware, data needs to be vaulted to an immutable, air-gapped repository, whether that be in the cloud or elsewhere. Such vaulting needs to be policy driven and integrated with data protection cycles to be recoverable.

Furthermore, any ransomware recovery needs to be quick, easy, AND securely controlled. RBAC (role-based access control) can help but may not suffice for some organizations. For these environments, multiple admins may need to approve a ransomware recovery, which will wipe out all current data by restoring a good, vaulted copy of the organization’s data.
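
To make that multi-admin gate concrete, here’s a minimal Python sketch of an M-of-N approval check before a destructive restore is allowed to run. The names and the approval threshold are hypothetical, purely illustrative of the idea, not how Cohesity implements it:

```python
# Minimal sketch of an M-of-N admin approval gate for a destructive
# ransomware recovery. Hypothetical logic, not Cohesity's implementation.

REQUIRED_APPROVALS = 2  # assume policy demands two distinct admins

def recovery_approved(approvals: set[str]) -> bool:
    """Allow the restore only once enough *distinct* admins have signed off."""
    return len(approvals) >= REQUIRED_APPROVALS

approvals: set[str] = set()
approvals.add("admin-alice")
assert not recovery_approved(approvals)   # one approval is not enough
approvals.add("admin-alice")              # duplicate approvals don't count
assert not recovery_approved(approvals)
approvals.add("admin-bob")
assert recovery_approved(approvals)       # second distinct admin unlocks restore
```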

Edge and IoT systems also need data storage. How much may depend on where the data is being processed/pre-processed in the IoT system. But, as these systems mature, they will have their own storage requirements, adding yet another data location to be managed, protected, and secured.

Theresa and DJ mentioned Cohesity SmartFiles during our talk, which I hadn’t heard about. Turns out that SmartFiles is Cohesity’s file and object storage solution that uses the Cohesity storage cluster. Cohesity data protection and other data management solutions also use the cluster to store their data. Adding SmartFiles to the mix brings a more complete storage solution to support customer data needs.

We also discussed Helios, Cohesity’s next generation data platform, which provides a control and management plane for all Cohesity products and services.

Theresa Miller, Director, Technology Advocacy Group, Cohesity

Theresa Miller is the Director, Technology Advocacy Group at Cohesity. She is an IT professional who has worked as a technical expert in IT for over 25 years and has her MBA.

She is uniquely industry recognized as a Microsoft MVP, Citrix CTP, and VMware vExpert.  Her areas of expertise include Cloud, Hybrid-cloud, Microsoft 365, VMware, and Citrix.

David Jayanathan, Solutions Architect, Cohesity

David Jayanathan is a Solutions Architect at Cohesity, currently working on SmartFiles. 

DJ is an IT professional who has specialized in all things related to enterprise storage and data protection for over 15 years.

133: GreyBeards talk trillion row databases/data lakes with Ocient CEO & Co-founder, Chris Gladwin

We saw a recent article in Blocks and Files (Storage facing trillion-row db apocalypse) about a couple of companies that were trying to deal with trillion-row database queries without taking weeks to respond. One of those companies was Ocient (@Ocient), a Chicago startup, whose CEO and Co-Founder, Chris Gladwin, was an old friend from CleverSafe (now IBM Cloud Object Storage).

Chris and team have been busy creating a new way to perform data analytics on massive data lakes. It has a lot to do with extreme parallelism, high core counts, NVMe SSDs, and sophisticated network and compute flow control. Listen to the podcast to learn more.

The key to Ocient’s approach involves NVMe SSDs, which have become ubiquitous over the last couple of years and can be deployed to deal with large data problems. Another key to Ocient is multi-core CPUs, which again seem to be everywhere and, if anything, are almost doubling in core count with every new generation of CPU chip.

We let Chris wax a little too long on the SSD revolution in IOPS, especially as it pertains to random 4K reads. Put 20 or so NVMe SSDs in a server with dual 50-core CPU chips and you have one fast random IO machine.
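
To put rough numbers on that claim, here’s a back-of-envelope sketch in Python. The per-drive IOPS figure is my assumption (typical of fast NVMe SSDs), not a number from the episode:

```python
# Back-of-envelope aggregate random-read capability of the server described.
# Per-drive IOPS is an assumption (fast NVMe class), not an Ocient figure.
nvme_drives = 20
iops_per_drive = 1_000_000          # ~1M 4K random-read IOPS per fast NVMe SSD
read_size_bytes = 4 * 1024

total_iops = nvme_drives * iops_per_drive
total_gbps = total_iops * read_size_bytes / 1e9

print(f"~{total_iops / 1e6:.0f}M random 4K read IOPS, ~{total_gbps:.0f} GB/s")
# ~20M IOPS and ~82 GB/s aggregate, which is why the bus/network
# flow control discussed next matters so much.
```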

Another key to Ocient is very sophisticated network and bus data flow management. With all this data, running any query involves consuming lots of data that all has to be brought into the CPU. PCIe bandwidth helps, as do NVMe SSDs, but you still need to ensure that nothing gets bottlenecked moving all that data around a system/server.

Yet another key to Ocient is parallelism. With one server holding 20 NVMe SSDs and two 50-core CPUs you’ve got a lot of capability, but when you are talking about trillion-row databases you need more. So, in order to respond to queries in anything like a second or so, they throw a lot of NVMe servers at the problem.

I asked how they split the data across all these servers and Chris mentioned that at the moment that’s part of their secret sauce and involves professional services.

Ocient supports full ANSI SQL queries against trillion-row databases and replies to those queries in a matter of seconds. And we aren’t just talking about SQL selects; Ocient can do splits, joins, and updates to a trillion-row database at the same time as the SQL selects are going on. Chris mentioned that Ocient can be loading 100K JSON files each second while still performing SQL queries in near real time against the trillion-row database.

Ocient supports Reed-Solomon error correction on database data as well as data compression and encryption.
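
For readers unfamiliar with Reed-Solomon, here’s a toy Python sketch using the third-party reedsolo library. It only illustrates the encode/corrupt/recover idea; the episode didn’t cover how Ocient parameterizes its codes:

```python
# Toy Reed-Solomon demo with the third-party "reedsolo" package
# (pip install reedsolo). Illustrative only; not Ocient's implementation.
from reedsolo import RSCodec

rsc = RSCodec(4)                    # 4 parity bytes: corrects up to 2 bad bytes
encoded = rsc.encode(b"trillion-row block")

corrupted = bytearray(encoded)
corrupted[0] ^= 0xFF                # simulate a corrupted byte on media
corrupted[5] ^= 0xFF                # ...and a second one

# Recent reedsolo versions return (message, message+ecc, errata positions)
recovered = rsc.decode(bytes(corrupted))[0]
assert recovered == b"trillion-row block"
print("recovered:", bytes(recovered))
```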

In addition to SQL queries, Chris mentioned that Ocient supports data load and transform activities. He said that most of this data is being generated by IoT applications and often needs to be cleaned up before it can be processed. Doing this in real time, while handling queries to the database, is part of their secret sauce.

Chris said there probably aren’t that many organizations that need trillion-row databases. But ad auctions, telecom routers, and financial services already use trillion-row databases, and they all want to be able to process queries on this data faster. Ocient is betting that there will be plenty more like them over time.

Ocient is available on AWS and GCP as a cloud service, can also be used in their own Ocient Cloud, or can be deployed on premises. Ocient services are billed on a per core-pack (500 cores, I think) subscription model.

Chris Gladwin, CEO and Co-founder, Ocient

Chris is the CEO and Co-Founder of Ocient whose mission is to provide the leading platform the world uses to transform, store, and analyze its largest datasets.

In 2004, Chris founded Cleversafe, which became the largest and most strategic object storage vendor in the world (according to IDC). He raised $100M and then led the company to over a $1.3B exit in 2015 when IBM acquired the company. The technology Cleversafe created is used by most people in the U.S. every day and generated over 1,000 patents granted or filed, creating one of the ten most powerful patent portfolios in the world.

Prior to Cleversafe, Chris was the Founding CEO of startups MusicNow and Cruise Technologies and led product strategy for Zenith Data Systems.  He started his career at Lockheed Martin as a database programmer and holds an engineering degree from MIT. 

131: GreyBeards talk native K8s data protection using Veritas NetBackup with Reneé Carlisle

The GreyBeards have been discussing K8s storage services a lot over the last year or so, and it was time to understand how container apps and data could be protected. Recently, we saw an article about a Veritas-funded survey discussing the need for data protection in K8s. As such, it seemed a good time to have a talk with Reneé Carlisle (@VeritasTechLLC), Staff Product Manager for NetBackup (K8s), Veritas.

It turns out that Veritas NetBackup (NBU) has just released its 2nd version of K8s data protection. It’s gone completely (K8s) native. That is, Veritas has completely re-implemented all 3 tiers of NBU as K8s microservices. Moreover, the new release still supports all other NBU infrastructure implementations, such as bare metal or VM NBU primary server/media server services. It’s almost like you have all the data protection offered by NBU for the enterprise over the years, now also available for K8s container apps. Listen to the podcast to learn more.

To make use of NBU K8s, backup admins establish named gold, silver, or bronze backup policies, selecting frequency of backups, retention periods, backup storage, etc. Then DevOps would tag a namespace, pod, container, or PV with one of those data protection policy names. Once this is done, NBU K8s will start protecting that namespace, pod, container, or PV.
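
To make that concrete, here’s a minimal sketch using the standard Python kubernetes client of what tagging a namespace with a policy name could look like. The label key (“nbu-policy”) and namespace name are assumptions for illustration; the episode didn’t specify the actual label convention NBU K8s uses:

```python
# Hypothetical sketch: label a namespace so a backup policy picks it up.
# The "nbu-policy" label key and "payments" namespace are assumptions,
# not Veritas' documented convention.
from kubernetes import client, config

config.load_kube_config()          # use the current kubectl context
core = client.CoreV1Api()

patch = {"metadata": {"labels": {"nbu-policy": "gold"}}}
core.patch_namespace(name="payments", body=patch)
```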

In addition, backup admins can include or exclude specific K8s namespace(s), pod(s), container(s), labels (tags), or PVs to be backed up with a specific policy. When that policy is triggered, it will go out into the cluster to see if those K8s elements are active and start protecting them, or excluding them from protection, as requested.

NBU K8s has an Operator service, Data Mover services, and other microservices that execute in the cluster. At least one Operator service must be deployed in the cluster (recommended to be in a separate namespace, but this is optional). The Operator service is the control plane for NBU K8s services. It will spin up data movers when needed and spin them down when done.

The Operator service supports a CLI but, more importantly to DevOps, a completely implemented RESTful API service. Turns out the CLI is implemented on top of the NBU (Operator) API. With the NBU API, DevOps CI/CD tools or other automation can perform all the data protection services needed to protect K8s.
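
For flavor, here’s a hedged sketch of what driving such a REST API from CI/CD automation could look like in Python. The host, endpoint path, payload fields, and response field below are invented for illustration only; consult the actual NetBackup API documentation for the real endpoints:

```python
# Hypothetical sketch of calling a backup REST API from CI/CD automation.
# The endpoint path, payload, auth header, and response field are
# illustrative only, NOT the documented NetBackup API.
import os
import requests

PRIMARY = "https://nbu-primary.example.com"           # hypothetical primary server
headers = {"Authorization": os.environ["NBU_API_KEY"]}

resp = requests.post(
    f"{PRIMARY}/api/backups",                         # invented path
    json={"namespace": "payments", "policy": "gold"}, # invented payload
    headers=headers,
    timeout=30,
)
resp.raise_for_status()
print("backup job id:", resp.json().get("jobId"))     # invented response field
```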

One historical issue with backup processing is that it can consume every ounce of network/storage, and sometimes compute power, in an environment. The enterprise-class data movers (or maybe the Operator control plane) have various mechanisms to constrain or limit NBU K8s resource consumption so that this doesn’t become a problem.

But as the Operator and its Data Movers are just microservices, if there’s a need for more throughput, more can be spun up, or if there’s a need to reduce bandwidth, some of them can be spun down, all with no manual intervention whatsoever.

Furthermore, NBU K8s can be used to restore/recover PVs, containers, applications, or namespaces to other, CNCF-compliant K8s infrastructure. So, if you wanted to, say, move your K8s namespace from AKS to GKE, or from on prem to Red Hat OpenShift, it becomes a simple matter of moving the last NBU backup to the target environment, deploying NBU K8s in that environment, and restoring the namespace.

NBU K8s can also operate in the cloud just as well as on prem, and works in any CNCF-compatible K8s environment, which includes AKS, EKS, GKE, VMware Tanzu, and OpenShift.

In the latest NBU K8s, they implemented new, enterprise-class Data Movers as microservices in order to more efficiently protect and recover K8s resources. Enterprise-class Data Movers can perform virus-scanning/ransomware detection, encryption, data compression, and other services that enterprise customers have come to expect from NBU data protection.

NBU K8s accesses PV, container, pod, and namespace data and metadata using standard CSI storage providers and normal K8s API services.

As mentioned earlier, in the latest iteration of NBU K8s, they have completely implemented the NBU infrastructure natively as containers. That adds K8s auto-scaling and full CI/CD automation via APIs to all the rest of the NBU infrastructure operating completely in the K8s cluster.

So, now backup admins can run NBU completely in K8s, or run just the Operator and its data mover services, connecting to other NBU infrastructure (primary server and media servers) executing elsewhere in the data center.

NBU K8s supports all the various disk, dedicated backup appliance, object/cloud storage, and other backup media options that NBU uses. So that means you can store your K8s backup data in the cloud, on secondary storage appliances, or anyplace else that’s supported by NBU.

Licensing for NBU K8s follows currently available Veritas licensing, such as front-end TB protected; both subscription and term licensing options are available.

Reneé Carlisle, Staff Product Manager, Veritas NetBackup (K8S)

Reneé (LinkedIn) has been with Veritas Technologies for eleven years in various focus areas within the NetBackup Product Management Team. In her current role she is the Product Manager responsible for the NetBackup strategic direction of Modern Platforms, including Kubernetes and OpenStack. She has a significant technical background in many of the NetBackup features, including Kubernetes, virtualization, Accelerator, and cloud.

Prior to working for Veritas, she was a customer running a large-scale NetBackup operation as well as a partner implementing, designing, and integrating NetBackup in many different companies.

128: GreyBeards talk containers, K8s, and object storage with AB Periasamy, Co-Founder & CEO, MinIO

Sponsored by:

Once again Keith and I are talking K8s storage, only this time it was object storage. Anand Babu (AB) Periasamy, Co-founder and CEO of MinIO, has been on our show a couple of times now, and it’s always an insightful discussion. He’s got an uncommon perspective on IT today and what needs to change.

Although MinIO is an open source, uber-compatible S3 object store, AB more often talks like a revolutionary, touting the benefits of containerization, scale, and automation with K8s. Object storage is just one of the vehicles to help get there. Listen to the podcast to learn more.

We started our discussion on the changing role of object storage in applications. Object storage started out as an archive solution. But then, over time, something happened: modern database startups adopted object storage to hold primary data, then analytics moved over to objects in a big way, and finally AI/ML came along with an unquenchable thirst for data, and object storage was its only salvation.

Keith questioned the use of objects in analytics. Both AB and I pointed out that Splunk (and Spark) fully support objects. But Keith said R (and Python) data scientists prefer to use the protocols they learned in school, and these were all about files (CSV, JPEGs, JSON). AB said what usually happens is that this data is stored in object storage and then downloaded onto local disk as files to be processed. That’s not to say that R or Python can’t process objects directly, but even when they don’t, the ultimate source of data truth is object storage.
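
As a small example of processing an object directly rather than downloading it to disk first, here’s a Python sketch using the standard boto3 S3 client and pandas (the bucket and key names are made up):

```python
# Read a CSV object straight from S3-compatible storage into a DataFrame,
# skipping the "download to local disk first" step. Bucket/key are made up.
import boto3
import pandas as pd

s3 = boto3.client("s3")
obj = s3.get_object(Bucket="analytics-data", Key="sensors/2022-08.csv")
df = pd.read_csv(obj["Body"])   # obj["Body"] is a streaming file-like object
print(df.describe())
```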

Somehow, we got onto the multi-cloud question. AB said the multi-cloud is really all about containers and K8s. When customers talk multi-cloud, what they really mean is they want applications that can run anywhere, in any cloud, on premise, or anyplace else for that matter.

I thought multi-cloud was a DR solution. But AB reiterated it’s more a solution to vendor lock-in. What containerization gives IT is the option (ability) to run applications anywhere, but IT is not obligated to exercise that option unless it makes sense.

AB said that dev today doesn’t develop apps in the cloud anymore. They develop locally using minikube; once it’s working there, they add CI/CD tool chains and then move it to its final resting place (the cloud or wherever it ultimately needs to run). It turns out containers, YAML files, scripts, etc. are small and trivial to upload, migrate, or move to any internet location. And with ubiquitous K8s support available everywhere, they can move anywhere unchanged.

But where’s the data? AB said anywhere the app executes. It’s never moved; it takes too much time and effort to move that amount of data. But as applications move, any data they generate grows in that location over time.

We next turned to how MinIO is supported in K8s. AB mentioned they have a DirectPV CSI driver that creates distributed PVs to support MinIO services on local disks. In this way, containers needing access to MinIO S3 object storage can directly allocate data to local storage.
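
Here’s a hedged sketch of what requesting such a volume might look like via the Python kubernetes client. The storage class name below (“directpv-min-io”) matches DirectPV’s published default as best I recall, but verify it, and the made-up PVC/namespace names, against the DirectPV docs:

```python
# Sketch: request a PVC backed by DirectPV-managed local drives.
# "directpv-min-io" is believed to be DirectPV's default storage class,
# but should be verified; PVC and namespace names are made up.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="minio-data-0"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="directpv-min-io",
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)
core.create_namespaced_persistent_volume_claim(namespace="minio", body=pvc)
```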

Then we asked about opinionated stacks. AB said most customers don’t want these. They may have some value in preserving an infrastructure environment, but customers are better off transitioning to containerization and building any stack they need within those containers and the K8s cluster services.

On the other hand, MinIO object storage is available, with the same S3 API, on bare metal, on VMware, OpenShift, K8s, every public cloud, and most private clouds as well. The advantage of a single storage interface available everywhere can’t be beat.
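
That portability shows up directly in application code: with S3-compatible storage, pointing an app at MinIO instead of AWS S3 is typically just an endpoint change. A minimal boto3 sketch, where the endpoint, credentials, and bucket are placeholders:

```python
# The same boto3 code talks to AWS S3 or to a MinIO deployment anywhere;
# only the endpoint and credentials change. Values below are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://minio.example.internal:9000",  # omit for AWS S3
    aws_access_key_id="MINIO_ACCESS_KEY",
    aws_secret_access_key="MINIO_SECRET_KEY",
)
s3.put_object(Bucket="backups", Key="hello.txt", Body=b"same API everywhere")
print(s3.list_objects_v2(Bucket="backups")["KeyCount"])
```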

MinIO recently closed a new funding round of $103M. AB mentioned they had new investments from Intel and Softbank, but I was more interested in plans he had for the new cash. And Keith asked where the new funding left MinIO with respect to its competitors in this space.

AB said it was never about the money; it was more about what you did with your team that mattered in the long run. AB’s imperative was to enter an existing market with a better product and succeed with that. Creating a new market plus a new product always costs more, takes longer, and is riskier.

As for the new funds, there are really two ways to go: 1) improve the current product or 2) create a new one. My sense is that AB leans towards improving the current product.

For instance, MinIO is often asked to support a different object storage API. But AB’s perspective is that S3 was an early bet that paid off well by becoming the de facto standard for object storage. Supporting another API would divide his resources and probably make their current product worse, not better. AB mentioned they are getting 1.1M downloads of their Docker container version, so they seem to be succeeding well with the current product.

Anand Babu (AB) Periasamy, Co-founder and CEO

AB Periasamy is the co-founder and CEO of MinIO, an open-source provider of high performance, object storage software. In addition to this role, AB is an active investor and advisor to a wide range of technology companies, from H2O.ai and Manetu where he serves on the board to advisor or investor roles with Humio, Isovalent, Starburst, Yugabyte, Tetrate, Postman, Storj, Procurify, and Helpshift. Successful exits include Gitter.im (Gitlab), Treasure Data (ARM) and Fastor (SMART).

AB co-founded Gluster in 2005 to commoditize scalable storage systems. As CTO, he was the primary architect and strategist for the development of the Gluster file system, a pioneer in software defined storage. After the company was acquired by Red Hat in 2011, AB joined Red Hat’s Office of the CTO. Prior to Gluster, AB was CTO of California Digital Corporation, where his work led to the scaling of commodity cluster computing to supercomputing-class performance. His work there resulted in the development of Lawrence Livermore Laboratory’s “Thunder” supercomputer, which, at the time, was the second fastest in the world.

AB holds a Computer Science Engineering degree from Annamalai University, Tamil Nadu, India.