119: GreyBeards talk distributed cloud file systems with Glen Shok, VP Alliances, Panzura

This month we turn to distributed (cloud) file systems as we talk with Glen Shok (@gshok), VP of Alliances for Panzura. Panzura uses a backend (cloud or on-prem, S3-compatible) object store with a ring of software (VM) or hardware (appliance) gateways that provide caching for local files as well as manage and maintain metadata, creating a global NFS and SMB file system with near-local access times.

Glen is an industry veteran (without the grey beard) with the knowledge to back that up. He’s been in the industry so long that we could probably have spent an hour just talking about where the people we both know have ended up. Listen to the podcast to learn more.

The interesting part about Panzura is their gateway ring. It not only manages local file caching and metadata maintenance/access, but also provides an out-of-(data path)-band file (byte range) lock coordination service, cache coherency (via delta block changes) and other services. All the metadata (and data) is backed up on backend object storage, but it’s the direct access to the metadata, its out-of-band control path, and its caching service that supply the near-local access times for data.
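To make the byte-range locking concrete, here’s what an application sees on any POSIX client of such a file system; a minimal sketch using Python’s standard fcntl module (the mount path is hypothetical). The gateway ring handles the cross-site coordination behind this call; nothing Panzura-specific appears in the code.

```python
import fcntl
import os

# Open a file on the globally shared mount (path is a placeholder).
fd = os.open("/mnt/panzura/projects/design.dat", os.O_RDWR)

try:
    # Request an exclusive lock on bytes 4096-8191 only. On a global file
    # system like Panzura's, the gateway ring coordinates this byte-range
    # lock out-of-band, so writers at other sites are excluded from just
    # this range, not the whole file.
    fcntl.lockf(fd, fcntl.LOCK_EX, 4096, 4096)   # len=4096, start=4096
    os.pwrite(fd, b"\x00" * 4096, 4096)          # update the locked range
finally:
    fcntl.lockf(fd, fcntl.LOCK_UN, 4096, 4096)   # release the range
    os.close(fd)
```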

Panzura supports any public (AWS, Azure, GCP & IBM) cloud object storage for backend data storage as well as a few on-prem solutions (I think Glen mentioned IBM COS & Cloudian, and their website mentions Wasabi, Scality and NetApp StorageGRID). Glen said they are on each of the public clouds’ marketplaces, and with virtual gateways, it’s very easy to spin up and try.

Their system provides global (local, at the gateway) dedupe to reduce backend storage footprint, and delta block changes (both out of band and from backend storage) for local cache updates. So in the event that an old version of a file happens to be present in a local cache gateway, it only needs to retrieve the changed data from the object storage backend (or another gateway). All this local caching, dedupe and changed block tracking helps to reduce cloud egress charges.
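As a conceptual illustration of delta-block cache updating (the block size and hashing below are my assumptions for the sketch, not Panzura’s disclosed internals), a gateway could diff per-block hashes and fetch only the blocks that changed:

```python
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # hypothetical block granularity

def block_hashes(data: bytes) -> list[str]:
    """Hash each fixed-size block so file versions can be diffed block-by-block."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

def changed_blocks(cached: bytes, current_hashes: list[str]) -> list[int]:
    """Return indices of blocks whose hashes differ from the cached copy."""
    cached_hashes = block_hashes(cached)
    return [i for i, h in enumerate(current_hashes)
            if i >= len(cached_hashes) or cached_hashes[i] != h]

# A gateway holding an old cached version would fetch only these block
# indices from the object backend (or a peer gateway), not the whole file,
# which is what keeps egress charges down.
```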

Data written to backend storage is immutable and versioned. So customers can retrieve any version of any file that was ever destaged to their backend. Glen said they write huge objects, presumably to help reduce storage footprint, IO overhead and API calls.
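Panzura didn’t detail its object layout on the show, but the immutable, versioned model behaves much like standard S3 object versioning. Here’s a sketch of retrieving an older version from a generic S3-compatible backend with boto3 (bucket, key, and endpoint are placeholders):

```python
import boto3

# Any S3-compatible backend works the same way; endpoint is a placeholder.
s3 = boto3.client("s3", endpoint_url="https://objects.example.com")

# List every stored version of one key, newest first.
versions = s3.list_object_versions(Bucket="panzura-backend",
                                   Prefix="filesystem/data/report.xlsx")
for v in versions.get("Versions", []):
    print(v["VersionId"], v["LastModified"], v["IsLatest"])

# Fetch a specific, older version by its VersionId.
old = s3.get_object(Bucket="panzura-backend",
                    Key="filesystem/data/report.xlsx",
                    VersionId=versions["Versions"][-1]["VersionId"])
```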

Glen claimed that with 3-way replication within a cloud region and 1-way replication outside the cloud region, customers no longer have to back up data. I respectfully disagreed. He believes that, over time, customers will come to realize their use of backups for restores becomes so rare that they can reduce backup frequency, if not eliminate it altogether. Some follow-on discussion ensued, but in the end we seemed to agree to disagree on this topic.

Panzura also supports cross-cloud mirroring, so one could have their data mirrored from one cloud to another. One of these clouds is used as the primary, and only in the event that a majority of the gateways in the ring agree that the primary is DOWN and the secondary is UP will they all automatically cut over to using the secondary storage cloud. While failover is automated, failback requires operator intervention.
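A toy sketch of the majority rule as described (my own illustration, not Panzura code):

```python
def should_fail_over(ring_votes: dict[str, tuple[bool, bool]]) -> bool:
    """Each gateway reports (primary_down, secondary_up).

    Cut over automatically only when a MAJORITY of gateways agree the
    primary cloud is down AND the secondary is up. Failback is left to
    an operator, so there is deliberately no symmetric function here.
    """
    n = len(ring_votes)
    agree = sum(1 for down, up in ring_votes.values() if down and up)
    return agree > n // 2

votes = {"gw-nyc": (True, True), "gw-lon": (True, True), "gw-sgp": (False, True)}
assert should_fail_over(votes)  # 2 of 3 gateways agree: cut over
```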

Panzura charges based on managed data capacity. But cloud or on-prem object storage is in addition to this and is charged for separately by the object storage provider.

As far as what size file systems they support, Glen mentioned that they use ZFS internally, so any size imaginable. But he did concede that at some point metadata management becomes a problem, and that they often suggest splitting a 20PB file system into two 10PB file systems (gateway rings) to deal with this issue.

As for other solutions offered by Panzura, they have K8s container block storage for persistent volumes that scales in capacity/performance using K8s services/resources.

Glen Shok, VP Alliances, Panzura

Glen Shok has been in the data center and storage industry for over 20 years.

He started his career at Cisco in the late 90s, then moved to a few startups that were acquired by Brocade and Oracle. Glen has held positions in sales, sales leadership, product management and marketing, and the Office of the CTO at Zones prior to coming to Panzura.

He can’t decide what he likes to do, but at Panzura, he’s the VP of Strategic Alliances.

118: GreyBeards talk cloud-native object storage with Greg DiFraia, Scality and Stephen Bacon, HPE

Sponsored By:


Keith and I have talked with Stephen Bacon, Senior Director, Big Data Category, HPE, before (not on our podcast) but not with Greg DiFraia, GM Americas, Scality. Both were very knowledgeable about how containerization is changing IT and the role of object storage in this transition. Scality’s ARTESCA takes this changed worldview to its logical conclusion, with a new, lightweight, cloud-native object storage system for Kubernetes (K8s) that is optimized to store and manage data, edge to core to cloud.

This is a significant joint Scality-HPE solution that has been a long time coming. As evidence of the level of the partnership between the two, ARTESCA will be exclusive to HPE for the next six months. Listen to the podcast to learn more.

We started our discussion on where the new IT world is going. It’s more of an application-centric view that spans multiple distinct infrastructure environments. New applications that live in this new IT world consume lots of data, and more often than not, that data resides on object storage. And just like developers are creating K8s container apps so they can scale easily, any object storage those apps access needs similar scalability.

This means that edge solutions like smart cars, smart drones, smart sensors, etc. are doing some serious work. Some smart cars are producing a TB of data a day, all of which needs to be analyzed to adjust safe driving algorithms or to re-train AI/ML/DL neural networks.

Moreover, edge apps today are increasingly deploying embedded AI/ML/DL inferencing. That means the days of frozen/archived data are going away. With embedded AI, there’s an ongoing need to re-train on new (and old) data which requires all data to be readily (and speedily) accessible.

Scality has always been strong in high-performance, multi-PB environments that need rock-solid reliability and availability. And over time, they expanded their solution to access cloud data storage resources as well. But ARTESCA goes after an entirely new market, with a lightweight, edge-to-core-to-cloud deployable object store that can run anywhere, start small, and grow as large as needed.

ARTESCA, Scality’s new cloud-native object storage solution, comes as a full-stack, distributed collection of container-based microservices that runs on K8s. This includes not only object storage services but also a data management control plane.

The management control plane allows for the ingestion and use of any S3-compatible storage to hold ARTESCA data. But it also includes workflow actions that supply automated data movement between locations, data synchronization between sites, and other services used to deploy, coordinate, and manage an edge, core and cloud solution as a single object storage environment.
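Since ARTESCA presents the standard S3 API, applications don’t need anything Scality-specific; a plain S3 client pointed at the ARTESCA endpoint suffices. A minimal sketch with boto3 (the endpoint URL, credentials, and bucket name are placeholders):

```python
import boto3

# An ordinary S3 client aimed at an ARTESCA endpoint is all an app needs,
# whether the object store runs at the edge, the core, or in the cloud.
s3 = boto3.client(
    "s3",
    endpoint_url="https://artesca.example.edge:8000",  # placeholder
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.create_bucket(Bucket="sensor-data")
s3.put_object(Bucket="sensor-data",
              Key="car-42/2021-04-01.parquet",
              Body=b"...telemetry payload...")
```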

ARTESCA runs on a number of different HPE hardware platforms, from scale-out 1U servers optimized for density and good compute-storage performance for general-purpose workloads, to cluster-in-a-box 2U 4-server solutions that can provide huge amounts of compute-storage horsepower in a small form factor.

Stephen Bacon, Senior Director, Big Data Category, HPE

Stephen leads the Big Data Category, a rapidly growing multi-hundred-million-dollar business within HPE Storage comprised of the Apollo 4000 family of intelligent data storage servers, data analytics solutions, and the Complete Program of software partner-based solutions, including those with Cohesity, Commvault, Qumulo, Scality, and Veeam.

His responsibilities span Product Management, Engineering Program Management, Integration Engineering, Partner Go-To-Market, and Partner Operations.

Stephen has held a variety of worldwide, Asia Pacific and Japan region, and New Zealand country roles spanning software, servers, storage, and partnerships in his more than 20-year IT industry career.

Greg DiFraia, GM Americas, Scality

Greg has been working in the Enterprise IT Solutions Market for over 20 years and brings a unique blend of technical and business leadership to the team at Scality.

Before joining Scality, Greg was VP of Strategic Alliances at Turbonomic, a hybrid cloud workload automation developer. There, he had a front-row seat to see the emerging challenges of multi-cloud; experience that has huge value in today’s multi-cloud world.

Having spent the 13 years prior to that with EMC/Dell EMC as Global Sales leader for Object Storage, Director of Sales Strategy for Mid Market business, and, most recently, CTO, Elastic Cloud Storage (ECS), Greg led the global sales strategy for the ECS software-defined storage platform.

113: GreyBeards talk storage for next gen. workloads with Liran Zvibel, Co-Founder & CEO WekaIO

Sponsored By:

I’ve known Liran Zvibel, Co-founder and CEO of WekaIO, for many years now, and it’s the second time he’s been on our show (see: Episode 56: GreyBeards talk high performance file storage...). In those days, WekaIO was just coming out and hitting the world with this extremely high-performing, scale-out unstructured data solution. Well, since then, they’ve just gotten better.

Keith and I had a great time talking with Liran again. Liran has deep knowledge about unstructured data and how enterprises use it these days. WekaIO’s story over the last two years has gone beyond great performance to real-world, hybrid cloud offerings, as well as going after cloud-native apps’ (read: Kubernetes [K8s]) persistent storage. Listen to the podcast to learn more.

We started with a history lesson on WekaIO. Back in those days (and this persists today, I might add) there were many IO workloads that required companies to purchase different solutions for different work. For example, they needed DAS or SAN for performance, NAS for ease of access, and object for scale. WekaIO came out with an answer to all these problems in a single, scalable storage system. That is, they performed IO as fast as DAS or SAN block, had all the ease of access of NAS, and could scale as much as object.

However, the real culprit holding the world back was NFS. At the outset, NFS was designed (back in the 1990s) for the networking speeds then available (10-100Mbps), and it performed just fine at those speeds. But when 10-100GbE came out in the 2000s, NFS’s metadata overhead was too chatty to sustain wire speeds. Thus, any storage that depended on NFS protocols couldn’t supply (small) files fast enough for modern applications.

This is why WekaIO has moved to not only support NFS and SMB but also POSIX and NVIDIA® GPUDirect® Storage interfaces. By offering POSIX, WekaIO is able to plug into standard Linux and Windows server systems and provide excellent small file performance. Of course applications that demand small file performance today are mostly data analytics and AI/ML/DL workloads.

Consequently, NVIDIA came out with their GPUDirect Storage protocol to address getting small file (data) into GPUs faster. With GPUDirect, storage systems can RDMA data directly from storage to GPU memory and vice versa, with no OS intervention (other than to set up the transfer). If you happen to have a small-file, high-performing storage system attached to your fabric that supports GPUDirect, like WekaIO, you can significantly speed up your AI/ML/DL workloads.
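For a sense of what GPUDirect Storage looks like from application code, here’s a minimal sketch using NVIDIA’s cuFile API via the RAPIDS kvikio Python bindings (the /mnt/weka path is hypothetical, and this assumes a GDS-capable file system, driver and GPU):

```python
import cupy
import kvikio

# Destination buffer in GPU memory (16MB).
buf = cupy.empty(16 << 20, dtype=cupy.uint8)

# Read file data straight into GPU memory; with GPUDirect Storage the
# transfer is DMA'd from storage to the GPU, bypassing a host bounce buffer.
f = kvikio.CuFile("/mnt/weka/train/shard-000017.bin", "r")
try:
    nbytes = f.read(buf)
finally:
    f.close()
print(f"read {nbytes} bytes directly into GPU memory")
```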

Next we started talking K8s storage. WekaIO uses their POSIX interface in their CSI plugin to support K8s container persistent storage. Again, supplying high performance for small files seems tailor-made for the K8s container applications that exist today and will for the foreseeable future.
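In practice, a containerized app claims Weka-backed storage the same way it claims any CSI storage. A hedged sketch using the official kubernetes Python client (the StorageClass name "weka-csi" is a placeholder for whatever your cluster’s Weka CSI plugin actually registers):

```python
from kubernetes import client, config

config.load_kube_config()

# PVC requesting storage from a Weka CSI-provisioned StorageClass.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="training-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],   # POSIX semantics allow shared RW
        storage_class_name="weka-csi",    # placeholder StorageClass name
        resources=client.V1ResourceRequirements(
            requests={"storage": "500Gi"}
        ),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```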

Enter the cloud. Among other things, WekaIO is an AWS primary storage vendor. It also offers snap-to-cloud. And with both of these in tandem, it’s just become a lot easier to move and access your unstructured data in the cloud. Liran mentioned that WekaIO primary storage in AWS operates across AZs. This means it can be configured to support better availability than EBS.

Large biopharma companies are using WekaIO in AWS to store and process field data and research data, so that this work can be done around the world. Some companies have run out of compute in a single AZ (unbelievable, I know, but it’s COVID). By offering multi-AZ unstructured data access with WekaIO, these companies can spread their compute across AZs and regions and still access their data. And when their products are ready for government certification, having all this data in the cloud provides an easy way for the government to access that same data.

Liran Zvibel, Co-founder and CEO WekaIO

As Co-Founder and CEO, Mr. Liran Zvibel guides long-term vision and strategy at WekaIO. Prior to creating the opportunity at WekaIO, he ran engineering at social startups and Fortune 100 organizations, including Fusic, where he managed product definition, design, and development for a portfolio of rich social media applications.

Liran also held principal architectural responsibilities for the hardware platform, clustering infrastructure and overall systems integration for the XIV Storage System, acquired by IBM in 2007.

Mr. Zvibel holds a BSc in Mathematics and Computer Science from Tel Aviv University.

107: GreyBeards talk MinIO’s support of VMware’s new Data Persistence Platform with AB Periasamy, CEO MinIO

Sponsored by:

The GreyBeards have talked with Anand Babu (AB) Periasamy (@ABPeriasamy), CEO MinIO, before (see 097: GreyBeards talk open source S3… episode). And we also saw him earlier this year at their headquarters for Storage Field Day 19 (SFD19), where AB gave a great discussion of what they were doing and how it worked (see MinIO’s SFD19 presentation videos).

The podcast runs ~26 minutes. AB is very technically astute and always a delight to talk with. He’s extremely knowledgeable about the cloud, containerized applications and high performing S3 compatible object storage. And now with MinIO and vSAN Data Persistence under VCF Tanzu, very knowledgeable about the virtualized IT environment as well. Listen to the podcast to learn more. [We’re trying out a new format placing the podcast up front. Let us know what you think; The Eds.]


VMware VCF vSAN Data Persistence Platform with MinIO

Earlier this month VMware announced a new capability available with the next updates of vSAN, vSphere & VCF called the vSAN Data Persistence Platform. The Data Persistence Platform is a VMware framework designed to integrate stateful, independent vendor software defined storage services in vSphere. By doing so, VCF can provide API access to persistent storage services for containerized applications running under Tanzu Kubernetes (k8s) Grid service clusters.

At the announcement, VMware identified three object storage and one database (Cassandra) technical partners that had been integrated with the solution. MinIO was an open-source object storage partner.

VMware’s VCF vSAN Data Persistence framework allows vCenter administrators to use vSphere cluster infrastructure to configure and deploy these new stateful storage services, like MinIO, into namespaces, and gives app developers direct k8s API access to those storage namespaces, providing persistent, stateful object storage for applications.

With VCF Tanzu and the vSAN Data Persistence Platform using MinIO, developers can have full support for their CI/CD pipeline using native k8s tools to deploy and scale containerized apps on prem, in the public cloud, and in hybrid clouds, all using VCF vSphere.

MinIO on the Data Persistence Platform

AB said MinIO with Data Persistence takes advantage of a new capability called vSAN Direct, which gives vSAN almost JBOF-like IO control and performance. With MinIO on vSAN Direct, storage and k8s cluster applications can co-reside on the same ESX node hardware, so IO activity doesn’t have to hop off host to be performed. In addition, admins can now populate ESX server nodes with lots (100s to 1000s?) of storage devices and be assured the storage will be used by applications running on that host.

As a result, MinIO’s object storage IO performance on VCF Tanzu is very good due to its use of vSAN Direct and MinIO’s inherent superior IO performance for S3 compatible object storage.

With MinIO on the VCF vSAN Data Persistence Platform, VMware takes over all the work of deploying MinIO software services on the VCF cluster. This way customers can take advantage of MinIO’s fully compatible S3 object storage system operating in their VCF cluster. App developers get the best of all worlds: infrastructure configured, deployed and managed by admins, but completely controllable, scalable and accessible through k8s API services.

If developers want to take advantage of MinIO specialized services such as data security or replication, they can do so directly using MinIO’s APIs, just like they would when operating on bare metal or in the cloud.
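For example, a developer could enable bucket versioning through MinIO’s own Python SDK, independent of how vSphere deployed the service (the endpoint and credentials below are placeholders):

```python
from minio import Minio
from minio.versioningconfig import VersioningConfig
from minio.commonconfig import ENABLED

# Connect to the MinIO service exposed from the Tanzu namespace.
client = Minio("minio.tanzu.example.com:9000",    # placeholder endpoint
               access_key="ACCESS_KEY",
               secret_key="SECRET_KEY",
               secure=True)

if not client.bucket_exists("analytics"):
    client.make_bucket("analytics")

# Turn on object versioning -- one of the MinIO features developers can
# drive through its own API, regardless of how admins deployed the service.
client.set_bucket_versioning("analytics", VersioningConfig(ENABLED))
```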

AB said the VMware development team was very responsive during development of Data Persistence. AB was surprised to see such a big company, like VMware, operate with almost startup-like responsiveness. Keith mentioned he’s seen this in action, as vSAN has matured very rapidly to a point of almost feature parity with just about any storage system out there today.

With MinIO object storage, container applications that need PBs of data now have a home on VCF Tanzu. And it’s as easily usable as any public cloud storage. And with VCF Tanzu configuring and deploying the storage over its own infrastructure, and then having it all managed and administered by vCenter admins, it’s simple to create and use PBs of object storage.

MinIO is already the most popular S3 compatible object storage provider for applications running in the cloud and on prem. And VMware is easily the most popular virtualization platform on the planet. Now with the two together on VCF Tanzu, there seems to be nothing in the way of conquering containerized applications running in IT as well.

With that, MinIO is available everywhere containers want to run: natively in the cloud, on prem, and in hybrid clouds, or running with VCF Tanzu everywhere as well.


AB Periasamy, CEO MinIO

AB Periasamy is the CEO and co-founder of MinIO, and one of the leading thinkers and technologists in the open source software movement.

AB was a co-founder and CTO of GlusterFS, which was acquired by Red Hat in 2011. Following the acquisition, he served in the office of the CTO at Red Hat prior to founding MinIO in late 2015.

AB is an active angel investor and serves on the board of H2O.ai and the Free Software Foundation of India.

He earned his BE in Computer Science and Engineering from Annamalai University.




105: GreyBeards talk new datacenter architecture with Pradeep Sindhu, CEO & Co-founder, Fungible

Neither Ray nor Keith had met Pradeep before, but Ray was very interested in Fungible’s technology. Turns out Pradeep Sindhu, CEO and Co-founder of Fungible, has had a long and varied career in the industry, starting at Xerox PARC, then co-founding and becoming chief scientist at Juniper, and now rearchitecting the data center with Fungible. As Pradeep mentioned at the end of the podcast, he has always been drawn to hard problems with the potential to open up immense possibilities. What he did at Juniper and what he is planning to accomplish with Fungible both fit that pattern.

Today, in a typical data center, we have servers, networking and storage equipment all connected through a fabric. But from Pradeep’s perspective, none of it works well in support of data-centric computing. What we have today is like turning a screw with a pair of pliers. But if there existed hardware that could execute data-centric computing well (to follow the metaphor, a screwdriver), the data center would operate much more efficiently, with more performance and better resource use.

Fungible was founded in 2015 with the idea that the industry is moving to a data centric computing paradigm and today’s data center is ill equipped to take IT there.

What is data centric computing

The IT industry has been moving to a new type of computing that is focused on short bursts of CPU activity with relatively small packets of data coming off the network (from sensors/the outside world, from storage, from other servers, etc.). These workloads are often transient and short-lived, are intended to be performed quickly, and may not leave any persistent state.

We can see this in the emergence of microservices architectures with Docker and k8s containers. But you don’t have to be using containers. It’s also present in machine learning, where the update cycle of the neural network (with accelerators) takes lots of small bursts of computation while it consumes lots of small data items (pictures, text documents, ticker/status logs, etc.).

Furthermore, the move to commodity hardware has taken the same x86/ARM core CPUs and used them to execute these small bursts of computation. And for some of these operations, that may still make sense. But when the data center uses these same cores to perform data-path packet processing, it bogs down the network. It consumes a lot of power, adds overhead (higher latencies), leads to packet loss, injects network jitter, and causes a host of other problems.

So, in order to get data packets to where they need to be without those problems, networking endpoints need to be changed out for something designed to support data-path-critical workloads. Pradeep calls these data-path-critical work items “run to complete” code.

The critical question is what proportion of IT workloads are “data centric” vs. not. While it might not be that high today, Pradeep and Fungible are betting that it’s going to get much higher over time. If we look at hyperscalers today, they are at the forefront of this computing paradigm change, and much of their workloads are moving to containerized execution.

The DPU enables data centric computing

Fungible plans to add a DPU that supports a power efficient, “run-to-complete” programming engine to the data center. By using DPUs, they can create a true fabric (using IPoE) that’s low latency, low jitter, lossless and provides full cross-sectional bandwidth.

The problem, as Pradeep sees it, is that x86 and ARM cores are just not made to execute run-to-complete workloads well, and this is required to provide a true fabric. Whereas Fungible has designed the DPU from the start to execute run-to-complete work.

Pradeep sees the data center of tomorrow utilizing JBoF(lash) & JBoD(isk) boxes with DPU(s) in front of them providing storage server services (block, file and object), JBoGP(Us) or JBoFP(GAs) boxes with DPU(s) in front of them providing accelerator/graphics server services, and compute boxes with DPU(s) and x86/ARM cores with DRAM-Optane PMEM in them providing CPU server and client services. All the DPUs together in a cluster would in total provide true fabric services.

Essentially, the DPUs would take over all data path operations while the storage, GPUs and CPUs handle everything else; in effect, segregating data path and control path services in the data center.

Greenfield, brownfield or both

Keith and I both assumed this would be great for greenfield deployments. But Pradeep said it’s designed to be incrementally added to servers, JBoFs, JBoDs and JBoGs/JBoFPs, and to start providing data path services within current data center fabric environments, even as the rest of the data center remains unchanged.

At some point we talked about the programming model of the DPU. The DPU offers a bring-your-own Linux OS that can be programmed in any language you choose. But the critical, data-path functionality is coded in “C” to run as fast and as efficiently as possible.

Fungible has designed this hardware themselves. We didn’t get to talk about how they plan to market their product to the data center.

Pradeep also said to stay tuned, as they were just about to announce their first product offering based on the DPU.

The podcast ran ~38 minutes. Pradeep, given his education and experience, is very knowledgeable about today’s data center environment. He’s certainly one of the most interesting IT technologists we have talked with in a while on the GreyBeards podcast. To say what Fungible is trying to do is aggressive and bold is an understatement. But Pradeep feels this is the only way forward to liberate the data center from its data path chains. Both Keith and I thought we needed at least another hour or so to truly understand what they are doing and where they are going with it. Listen to the podcast to learn more.


Pradeep Sindhu, CEO and Co-Founder, Fungible

Pradeep Sindhu is CEO and Co-Founder of Fungible, a Santa Clara-based startup providing at-scale, next-generation solutions for the data center, cloud and IT industries. He has been at the forefront of the network and processing industry for over three decades.

As the co-founder and CTO of Juniper Networks, he played a central role in the architecture, design and development of Juniper’s M40 router – the M series was the first of its kind, offering the industry true decoupling of the control plane and the forwarding plane.

Prior to Juniper, he was a Principal Scientist and Distinguished Engineer at the Computer Science Lab at Xerox’s Palo Alto Research Center (PARC) pushing the envelope on what silicon could do for networking and processing.

He is passionate about new ways to support our growing data-centric world with the right combination of hardware and software to build the infrastructure our future needs.