128: GreyBeards talk containers, K8s, and object storage with AB Periasamy, Co-Founder & CEO, MinIO

Once again Keith and I are talking K8s storage, only this time it's object storage. Anand Babu (AB) Periasamy, Co-founder and CEO of MinIO, has been on our show a couple of times now, and it's always an insightful discussion. He's got an uncommon perspective on IT today and what needs to change.

Although MinIO is an open source, uber-compatible S3 object store, AB more often talks like a revolutionary, touting the benefits of containerization, scale, and automation with K8s. Object storage is just one of the vehicles to help get there. Listen to the podcast to learn more.

We started our discussion on the changing role of object storage in applications. Object storage started out as an archive solution. But then, over time, something happened: modern database startups adopted object storage to hold primary data, analytics moved over to objects in a big way, and finally AI/ML arrived with an unquenchable thirst for data, and object storage was its only salvation.

Keith questioned the use of objects in analytics. Both AB and I pointed out that Splunk (and Spark) fully support objects. But Keith said R (and Python) data scientists prefer to use protocols they learned in school, and those were all about files (CSV, JPEG, JSON). AB said what usually happens is that this data is stored in object storage and then downloaded onto local disk as files to be processed. That's not to say that R or Python can't process objects directly, but even when they don't, the ultimate source of data truth is object storage.
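
As a concrete illustration of that pattern, here's a minimal Python sketch (the endpoint, bucket, and key names are hypothetical): stage an object to local disk as a file, or read it directly as a stream.

```python
import boto3
import pandas as pd

# Hypothetical S3/MinIO endpoint, bucket, and key, just to show the pattern.
s3 = boto3.client("s3", endpoint_url="http://minio.example.com:9000")

# Common data science pattern: download the object to local disk, process as a file.
s3.download_file("datalake", "training/features.csv", "/tmp/features.csv")
df = pd.read_csv("/tmp/features.csv")

# But nothing stops Python from processing the object directly as a stream.
obj = s3.get_object(Bucket="datalake", Key="training/features.csv")
df = pd.read_csv(obj["Body"])
```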

Somehow, we got onto the multi-cloud question. AB said multi-cloud is really all about containers and K8s. When customers talk multi-cloud, what they really mean is they want applications that can run anywhere: in any cloud, on premises, or anyplace else for that matter.

I thought multi-cloud was a DR solution. But AB reiterated it's more a solution to vendor lock-in. What containerization gives IT is the option (ability) to run applications anywhere, but IT is not obligated to exercise that option unless it makes sense.

AB said that dev today doesn't develop apps in the cloud anymore. They develop locally using minikube; once an app works there, they add CI/CD tool chains and move it to its final resting place (the cloud or wherever it ultimately needs to run). It turns out containers, YAML files, scripts, etc. are small and trivial to upload, migrate, or move to any internet location. And with ubiquitous K8s support available everywhere, they can move anywhere unchanged.

But where's the data? AB said anywhere the app executes. It's never moved; it takes too much time and effort to move that amount of data. Instead, as an application moves, any data it generates grows in that location over time.

We next turned to how MinIO is supported in K8s. AB mentioned they have a DirectPV CSI driver that creates distributed PVs out of local disks to back MinIO services. In this way, containers needing MinIO S3 object storage can allocate it directly from local drives.
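
As a sketch of what this looks like from the app side, here's a PVC requesting a DirectPV-backed volume via the official kubernetes Python client; the storage class name, namespace, and size are assumptions, so check your DirectPV install.

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

# "directpv-min-io" is the storage class we believe DirectPV installs;
# verify the name on your cluster before using this.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="minio-data-0"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="directpv-min-io",
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="minio", body=pvc
)
```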

Then we asked about opinionated stacks. AB said most customers don't want these. They may have some value in preserving an existing infrastructure environment, but customers are better off transitioning to containerization and building whatever stack they need from containers and K8s cluster services.

On the other hand, MinIO object storage is available with the same S3 API on bare metal, on VMware, OpenShift, K8s, every public cloud, and most private clouds as well. The advantage of a single storage interface, available everywhere, can't be beat.
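
That portability is easy to demonstrate: the same client code runs against any MinIO deployment with only the endpoint changed. A quick sketch using the MinIO Python SDK (play.min.io is MinIO's public test server; the other endpoints and credentials are placeholders):

```python
from minio import Minio

# Identical code against every deployment: only the endpoint changes.
for endpoint in ("play.min.io",                # public test server
                 "minio.onprem.example.com",   # placeholder: bare metal
                 "minio.k8s.example.com"):     # placeholder: K8s cluster
    mc = Minio(endpoint, access_key="ACCESS", secret_key="SECRET", secure=True)
    print(endpoint, [b.name for b in mc.list_buckets()])
```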

MinIO recently closed a new funding round of $103M. AB mentioned they had new investments from Intel and Softbank, but I was more interested in the plans he had for the new cash. And Keith asked where the new funding left MinIO with respect to its competitors in this space.

AB said it was never about the money; it was more about what you did with your team that mattered in the long run. AB's imperative was to enter an existing market with a better product and succeed with that. Creating a new market plus a new product always costs more, takes longer, and is riskier.

As for the new funds, there are really two ways to go: 1) improve the current product or 2) create a new one. My sense is that AB leans towards improving the current product.

For instance, MinIO is often asked to support a different object storage API. But AB's perspective is that S3 was an early bet that paid off well by becoming the de facto standard for object storage. Supporting another API would divide his resources and probably make their current product worse, not better. AB mentioned they are getting 1.1M downloads of their Docker container version, so they seem to be succeeding well with the current product.

Anand Babu (AB) Periasamy, Co-founder and CEO

AB Periasamy is the co-founder and CEO of MinIO, an open-source provider of high performance, object storage software. In addition to this role, AB is an active investor and advisor to a wide range of technology companies, from H2O.ai and Manetu where he serves on the board to advisor or investor roles with Humio, Isovalent, Starburst, Yugabyte, Tetrate, Postman, Storj, Procurify, and Helpshift. Successful exits include Gitter.im (Gitlab), Treasure Data (ARM) and Fastor (SMART).

AB co-founded Gluster in 2005 to commoditize scalable storage systems. As CTO, he was the primary architect and strategist for the development of the Gluster file system, a pioneer in software defined storage. After the company was acquired by Red Hat in 2011, AB joined Red Hat's Office of the CTO. Prior to Gluster, AB was CTO of California Digital Corporation, where his work led to the scaling of commodity cluster computing to supercomputing-class performance. His work there resulted in the development of Lawrence Livermore Laboratory's "Thunder" supercomputer, which at the time was the second fastest in the world.

AB holds a Computer Science Engineering degree from Annamalai University, Tamil Nadu, India.

126: GreyBeards talk k8s storage with Alex Chircop, CEO, Ondat

Keith and I had an interesting discussion with Alex Chircop (@chira001), CEO of Ondat, a Kubernetes storage provider. They have a high-performing system, laser-focused on providing storage for k8s stateful container applications. Their storage is entirely containerized and has a number of advanced features for data availability, performance, and security that developers need to run stateful container apps. Listen to the podcast to learn more.

We started by asking Alex how Ondat differs from all the other k8s storage solutions out there today (several of which we've talked with lately). He mentioned three crucial capabilities:

  • Ondat was developed from the ground up to run as k8s containers. Doing this allows any k8s distribution to run their storage to support stateful container apps.
  • Ondat was designed to allow developers to run any possible container app. Ondat supports both block and file storage volumes.
  • Ondat provides consistent, superior performance, at scale, with no compromises. Sophisticated data placement ensures that data is located where it is consumed, and their highly optimized data path provides low-latency access to that data.

Ondat creates a data mesh (storage pool) out of all storage cluster nodes. Container volumes are carved out of this data mesh, and at creation time the volumes and the apps that use them are co-located on the same cluster nodes.

At volume creation, dev can specify the number of replicas (mirrors) to be maintained by the system. Alex mentioned that Ondat uses synchronous replication between replica cluster nodes to make sure that all active replicas are up to date with the last IO that occurred to primary storage.
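
Here's a rough sketch of what such a volume request might look like as a PVC manifest built in Python; the replica label key and storage class name are assumptions based on our reading of Ondat's docs, so verify them against your release.

```python
import yaml

# Hypothetical PVC asking Ondat for 2 replicas via a PVC label.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {
        "name": "postgres-data",
        # Assumed Ondat/StorageOS label key for replica count.
        "labels": {"storageos.com/replicas": "2"},
    },
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "ondat",  # assumed storage class name
        "resources": {"requests": {"storage": "50Gi"}},
    },
}
print(yaml.safe_dump(pvc))  # pipe into kubectl apply -f -
```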

Ondat compresses all data that goes over the network and encrypts data in flight. Dev can easily specify that data-at-rest also be compressed and/or encrypted. Compressing data in flight helps supply consistent performance where networks are shared.

Alex also mentioned that they support both single-reader/writer k8s block storage volumes and multi-reader/multi-writer k8s file storage volumes for containers.

In Ondat, each storage volume includes a mini-brain used to determine primary and replica data placement. Ondat also uses disaggregated consensus to decide what happens to primary and replica data after a k8s split cluster occurs. After a split cluster, isolated replicas are invalidated, and replicas are recreated, where possible, in the surviving nodes of the cluster portion that holds the primary copy of the data.

Also, replicas can optionally be located across AZs, if available in your k8s cluster. Ondat doesn't currently support replication across k8s clusters.

Ondat storage works on any hyperscaler k8s solution as well as any on-prem k8s system. I asked if Ondat supports VMware TKG and Alex said yes, but when pushed he mentioned that they have not tested it yet.

Keith asked what happens when things go south, i.e., an application starts to suffer worse performance. Alex said that Ondat supplies system telemetry to k8s logging systems, which can be used to understand what's going on. But he also mentioned they are working on a cloud-based, Management-aaS offering to provide multi-cluster operational views of Ondat storage to help understand, isolate, and fix problems like this.

Keith mentioned he had attended a talk by the Google engineers that developed Kubernetes, and they said stateful containers don't belong under Kubernetes. So why are stateful containers becoming so ubiquitous now?

Alex said that may have been the case originally, but k8s has come a long way since then. Nowadays, as many enterprises lift and shift enterprise applications from their old system environments to run as containers, those apps all require state for processing. Having that stateful information, or storage volumes, accessible directly under k8s makes application re-implementation much easier.

What's a typical Ondat configuration? Alex said there doesn't appear to be one. Current Ondat deployments range from a few hundred to thousands of k8s cluster nodes, and from tens to hundreds of TB of usable data storage.

Ondat has a simple pricing model: licensing costs are determined by the number of nodes in your k8s cluster. There's different node pricing depending on deployment options, but other than that it's pretty straightforward.

Alex Chircop, CEO, Ondat

Alex Chircop is the founder and CEO of Ondat (formerly StorageOS), which makes it possible to easily deploy and manage stateful Kubernetes applications with persistent data volumes. He also serves as co-chair of the CNCF (Cloud Native Computing Foundation) Storage Technical Advisory Group.

Alex comes from a technical background working in IT that includes more than 10 years with Nomura and Goldman Sachs.

125: GreyBeards talk K8s storage with Tad Lebeck, US CTO for ionir

We had some technical difficulties with Matt getting on the podcast, so Ray had to fly solo. This month we continue our investigations into K8s storage with a discussion with Tad Lebeck (@TadLebeck), US CTO of ionir, a software-defined storage system that only runs under K8s. The ionir Kubernetes Data Services platform is an outgrowth of Reduxio, a "tin-wrapped" software-defined storage system; the company pivoted to K8s as the environment to target and left the tin behind.

ionir offers a deduplicating, continuous data protection storage system for PVs (persistent volumes) under K8s that uses 3-way mirroring across data nodes for data protection. Their solution offers a number of unique services that we haven't seen in other K8s storage systems. Listen to the podcast to learn more.

Tad opened with a long spiel on what ionir is and we spent the next 40 minutes unpacking that to understand what exactly they were doing.

Let's start with why stateful containers are all the rage these days. Tad had a slightly different rationale than we've heard before. From his perspective, it all comes from current enterprise applications that used database servers/machines. As these apps are re-factored to run as K8s containerized micro-services, developers need and want their data to be containerized right along with the application.

ionir constructs a block storage system across K8s data nodes, or K8s worker nodes with direct attached storage. In the cloud, this storage can be ephemeral (storage that only exists as long as the compute instance operates) or normal block storage (e.g., EBS in AWS). It's unclear how ephemeral storage works on-prem. But in any case, they cluster together a set of data nodes into one massive block storage pool and map PVs onto it. K8s data nodes can be added to the ionir cluster while it's operating.

As mentioned earlier, they use 3-way mirroring for data protection, and ionir ensures the 3 copies are stored on different data nodes. As such, when one data node goes down, copies of PV data are available from the other 2 nodes, and the data can then be rewritten elsewhere to ensure 3-way mirroring continues. We suppose this means a minimum configuration requires at least 3 data nodes.

ionir also provides deduplicating block storage, which should theoretically reduce the physical storage footprint for any PV. Data blocks are deduplicated across the cluster. ionir also has a metadata service (also 3-way replicated, to different data space) that records the manifest for all blocks associated with a PV, their hashes, and their (logical/physical) locations.
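
To make the mechanics concrete, here's a toy Python sketch of content-addressed, deduplicating block storage with a per-volume manifest. It's our illustration of the general technique, not ionir's implementation:

```python
import hashlib

class DedupeStore:
    """Toy content-addressed block store: one physical copy per unique block."""

    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}     # sha256 digest -> block bytes (physical storage)
        self.manifests = {}  # volume name -> ordered list of block digests

    def write(self, volume, data):
        digests = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)  # store only if unseen
            digests.append(digest)
        self.manifests[volume] = digests           # the volume's manifest

    def read(self, volume):
        return b"".join(self.blocks[d] for d in self.manifests[volume])
```

Two PVs holding identical blocks share one physical copy, and each PV can be rebuilt from its manifest alone.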

There was no mention of data compression or encryption, so those are probably not present. We find deduplication very effective for backup storage but less effective for primary storage. Any deduplication ratio for ionir primary storage is likely specific to the data being stored, e.g., columnar database, row database, text, office files, etc. Each of these would likely have a different dedupe ratio.

Furthermore, ionir supplies continuous data protection (CDP) for PV data. PV data written to ionir is immutable, i.e., never modified, AND they keep previous versions of PV blocks in storage until they age out. This allows ionir to provide any prior version (well, the most recent ones) of a PV. ionir uses a timestamp to distinguish different PV versions. So, if ransomware attacked your site, users could ask for a PV version just prior to the time of the attack, and you'd have that version of the PV to restart operations. Customers can limit how far back ionir saves prior versions of blocks for PVs.
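
Again as a toy illustration (not ionir's code): with immutable, timestamped versions, a point-in-time restore reduces to finding the newest manifest at or before the requested time.

```python
import bisect
import time

class CDPVolume:
    """Toy CDP: every commit creates an immutable, timestamped version."""

    def __init__(self):
        self.timestamps = []  # sorted commit times
        self.manifests = []   # per-commit manifest (e.g., list of block hashes)

    def commit(self, manifest):
        self.timestamps.append(time.time())
        self.manifests.append(list(manifest))  # never modified afterwards

    def as_of(self, t):
        """Newest version at or before time t, e.g. just before an attack."""
        i = bisect.bisect_right(self.timestamps, t)
        if i == 0:
            raise KeyError("no version that far back")
        return self.manifests[i - 1]
```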

Having CDP for PVs makes DevOps qualification and testing significantly faster. Normally DevOps would need to copy production data to test environments in order to validate new app code. But ionir can easily instantiate a separate copy of any PV (at any time in their saved set) in a matter of seconds. This can take DevOps deployment testing down from days to minutes or less.

In addition, ionir can teleport PV data to other, remote K8s clusters running ionir. Essentially, this copies PV metadata and its "hot" blocks over to any remote ionir cluster. During teleportation, the remote cluster can access PV data as soon as all PV metadata has been copied. The remote site accesses this PV data from the originating cluster (albeit much slower than accesses within the cluster) while "hot" blocks are being copied. Any writes at the remote site to PV data are considered new data, deduplicated at the remote site, and only available at the remote site. Somewhat surprisingly, the PV's data is never fully copied to the remote system, leaving the PV in a permanent teleported access mode.

We're not sure we like the implications of teleporting PVs from a data integrity perspective. While it makes for near-instant access to PV data from other clusters and offers a solution to data gravity (it takes forever to move TBs of data across the web), it's incomplete, since the data is never fully copied to the remote site. Once hot blocks have been copied, remote cluster PV access should run faster. But if, say, 20% of requested blocks are not in the heat map, those IOs will take 100s of msec longer, depending on the wire distance between the sites. And the writes at the remote site cause the two copies of the PV (one at the source site and one at the remote site) to diverge.

Their storage system is priced on a per-data-node basis, which makes it easy to price out their various deployment options. And it works in any standard K8s environment, although Tad admitted they haven't tested VMware Tanzu yet; they have tested it on GCP, Microsoft Azure, AWS, and Red Hat OpenShift.

They offer a fully functional free trial of ionir storage, capped only by the number of data nodes in use. So, if you only need a small amount of storage (OK, 3 data nodes with 24 14TB SSDs each makes for a large amount of storage) for your K8s environment, you can probably run forever on the free version.

Tad Lebeck, US CTO, ionir

Tad Lebeck is a global technology executive with over two decades of experience in startups and large vendors. Prior to ionir, he founded and led Nuvoloso, an innovator in Kubernetes data services. Earlier, Lebeck served as CTO at Huawei Symantec Technologies, Vice President at Symantec/Veritas, co-founder/CTO at Invio, and CTO at Legato Systems, where he helped create the modern enterprise data-protection market.

Tad was a founding member of the SNIA Technical Council. He earned an MS/CS from the University of Wisconsin, and a combined MBA from the Columbia, London, and HKU Schools of Business.

121: GreyBeards talk Cloud NAS with Peter Thompson, CEO & George Dochev, CTO, LucidLink

GreyBeards had an amazing discussion with Peter Thompson (@Lucid_Link), CEO & co-founder and George Dochev (@GDochev), CTO & co-founder of LucidLink. Both Peter and George were very knowledgeable and easy to talk with.

LucidLink's Cloud NAS creates a NAS storage system out of cloud (any S3-compatible AND Azure Blob) object storage. LucidLink is made up of client software, the LucidLink SaaS (metadata service), and data on object storage. Their client software runs on any Linux, MacOS, or Windows desktop/laptop. LucidLink provides streaming, collaborative access to remote users for (file) data on object storage.

Just when 90% of the workforce was sent home for the pandemic, LucidLink emerged to provide all those users secure file access to any and all corporate data in the cloud. Peter mentioned one M&E customer who had just sent 300 video editors home with laptops and a disk drive, which would last them all of 2 weeks. But they needed an ongoing solution for after that. The customer started with 300 users and ~100TB of file storage on LucidLink; a few months later, they had 1000 users with a PB+ of LucidLink data and were getting rid of all their NAS boxes. Listen to the podcast to learn more.

They are finding a lot of success in M&E, engineering design, oil & gas exploration, geo-spatial design firms, and just about anywhere user collaboration on file data is required outside a data center.

LucidLink constructs a FileSpace for customer file (object) data, which represents a drive letter or mount point that remote users can use to access files from the cloud. LucidLink supports a POSIX-compliant file service for that data.

LucidLink data and user-generated metadata are encrypted, using client-owned/stored keys. So, data-at-rest (and -in-flight) can always be secure. They also support LDAP security and other standard SSO solutions to secure user access to data.

The LucidLink SaaS (metadata) service runs in a hyperscaler and links clients to file data on object storage. It also supports distributed, byte-range locking of file data across users.

One interesting nuance is that when a client locks a file, the system changes from an eventually consistent to a strongly consistent, POSIX-compliant file system. This ensures that the object storage is always the single source of truth.

The key that differentiates LucidLink from cloud gateways or file sync & share systems is that they 1) are not intended to operate in a data center (yes, object storage can be located on prem, but users are remote) and 2) don't copy files from one user/access point to another.

George said latency is enemy number one. LucidLink's secret is prefetching. Each client uses a customer-configured, local persistent cache, which can range from 5GB to a TB or more. LucidLink maintains a data (and, in the next version, metadata) working set for the user in their local cache.
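
A minimal sketch of the general idea, an LRU block cache with sequential read-ahead; the block numbering and fetch callback are our own simplifications, not LucidLink's design:

```python
from collections import OrderedDict

class StreamingCache:
    """Toy local cache: LRU eviction plus sequential read-ahead (prefetch)."""

    def __init__(self, fetch, capacity_blocks, readahead=4):
        self.fetch = fetch          # callable: block number -> bytes
        self.capacity = capacity_blocks
        self.readahead = readahead
        self.cache = OrderedDict()  # block number -> bytes, in LRU order

    def _insert(self, n, data):
        self.cache[n] = data
        self.cache.move_to_end(n)
        while len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least-recently-used block

    def read(self, n):
        if n in self.cache:
            self.cache.move_to_end(n)       # hit: refresh LRU position
        else:
            self._insert(n, self.fetch(n))  # miss: fetch from object storage
        data = self.cache[n]
        for k in range(n + 1, n + 1 + self.readahead):
            if k not in self.cache:         # prefetch the next few blocks
                self._insert(k, self.fetch(k))
        return data
```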

Customer file data is split across multiple objects, so that LucidLink can stream data from all of them in parallel when needed, supplying extreme throughput.
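
Conceptually that's a set of concurrent object GETs reassembled in order. A rough Python sketch (the bucket name and part-naming convention are invented for illustration):

```python
import concurrent.futures
import boto3

def parallel_read(bucket, keys):
    """Fetch the objects making up one file concurrently, reassemble in order."""
    s3 = boto3.client("s3")
    with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
        parts = pool.map(
            lambda key: s3.get_object(Bucket=bucket, Key=key)["Body"].read(),
            keys,
        )
    return b"".join(parts)  # map() preserves input order

# Hypothetical layout: one file split across three objects.
# data = parallel_read("filespace", ["video.mov/0", "video.mov/1", "video.mov/2"])
```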

As for GDPR and data compliance, the customer controls who has access to the LucidLink SaaS as well as encryption keys.

LucidLink considers their solution "fault tolerant" or DR-ready, because customers can load client software on any device and access any LucidLink file data. They also consider themselves "highly available" because their metadata/LucidLink SaaS service runs in a hyperscaler and the object backing storage can be configured as highly available.

As mentioned earlier, LucidLink customers can use any S3 compatible or Azure Blob object storage, on prem or in the cloud. But when using cloud object storage, one pays egress charges. LucidLink’s local caching can minimize but cannot eliminate egress charges.

LucidLink offers two licensing models: 1) BYO (bring your own) object storage, where LucidLink provides the software to support your Cloud NAS, or 2) LucidLink supplies both the object storage and the LucidLink service that glues it all together. The latter is a combination of IBM COS and LucidLink that offers less expensive egress charges.

The LucidLink service is billed on a capacity-under-management and user-count basis. Capacity is billed in GB/day, summed over a month. Their minimum solution is 5TB/5 users, but they have customers with 1000s of users and PB+ of data. They offer a free 2-week trial period where customers can try LucidLink out.
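
As a worked example of GB/day billing (all the capacities and the per-GB/day rate below are made up for illustration):

```python
# 10TB under management for the first half of the month, 15TB thereafter.
daily_gb = [10_000] * 15 + [15_000] * 15  # GB under management, each day
gb_days = sum(daily_gb)                   # 375,000 GB-days this month

rate_per_gb_day = 0.0007                  # hypothetical rate, $/GB/day
print(f"monthly capacity charge = ${gb_days * rate_per_gb_day:,.2f}")
```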

Peter Thompson, CEO and Co-founder

Peter Thompson, co-founder and CEO of LucidLink, is a passionate and experienced leader and business builder. Thompson has over 30 years of experience in driving business expansion, key programs, and partnerships across regions such as APAC and the Americas, mostly in the storage and file system market.

With over 14 years at DataCore Software, most recently as VP of Emerging and Developing Markets, Thompson drove DataCore's expansion into China, working with key industry partners, technology alliances, and global teams to develop programs and business focused on emerging markets. Thompson also held the role of Managing Director, APAC, responsible for the bottom line of all Asia operations. He also was President and Representative Director of DataCore Japan, acquiring the majority of ownership and running it as a standalone entity with a beachhead of marquee customers in Japan.

Thompson studied Japanese, history, and economics at Kansai Gaidai, has a BA in International Management, Psychology, and Japanese from Gustavus Adolphus College, and is a graduate of Stanford University Business School's MSx program, with a focus on entrepreneurial finance, design thinking, and the soft skills required to build and lead world-class, high-performing teams.

George Dochev, CTO and Co-Founder

George Dochev, co-founder and CTO of LucidLink, is a storage and file system expert with extensive experience in bringing emerging technologies to market. Dochev has over 20 years of success leading the development of complex virtualization products for the storage industry. He specializes in research and development in the fields of high-performance distributed systems, storage infrastructure software, and cloud technologies. 

Dochev was co-founder and principal member of the engineering team at DataCore Software for nearly 17 years. While at DataCore, Dochev helped transform the company from a start-up into a global leader in software-defined storage. Underscoring Dochev's impact as an entrepreneur is the fact that DataCore Software now powers the data centers of 10,000+ large enterprises around the world.

Dochev holds a degree in Mathematics from Sofia University St. Kliment Ohridski in Bulgaria, and an MS in Computer Science from the University of National and World Economy, in Sofia, Bulgaria.

119: GreyBeards talk distributed cloud file systems with Glen Shok, VP Alliances, Panzura

This month we turn to distributed (cloud) file systems as we talk with Glen Shok (@gshok), VP of Alliances for Panzura. Panzura pairs a backend (cloud or on-prem, S3-compatible) object store with a ring of software (VM) or hardware (appliance) gateways. The gateways cache local files and manage and maintain metadata, creating a global NFS and SMB file system with near-local access times.

Glen is an industry veteran (without the grey beard) with the knowledge to back that up. He's been in the industry so long that we could probably have spent an hour just talking about where the people we both know are now. Listen to the podcast to learn more.

The interesting part about Panzura is their gateway ring. It not only manages local file caching and metadata maintenance/access, but also provides an out-of-band (outside the data path) file (byte-range) lock coordination service, cache coherency (via delta block changes), and other services. All the metadata (and data) is backed up on backend object storage, but it's the direct access to the metadata, its out-of-band control path, and its caching service that supply the near-local access times for data.

Panzura supports any public (AWS, Azure, GCP & IBM) cloud object storage for backend data storage as well as a few on-prem solutions (I think Glen mentioned IBM COS & Cloudian, and their website mentions Wasabi, Scality, and NetApp StorageGRID). Glen said they are on each of the public clouds' marketplaces, and with virtual gateways, it's very easy to spin up and try.

Their system provides global dedupe (performed locally, at the gateway) to reduce backend storage footprint, and delta block changes (both out of band and from backend storage) for local cache updates. So in the event that an old version of a file happens to be present in a local cache gateway, it only needs to retrieve the changed data from the object storage backend (or another gateway). All this local caching, dedupe, and changed block tracking helps to reduce cloud egress charges.
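
The changed-block idea can be sketched in a few lines of Python: compare per-block hashes of the cached version against the current version's manifest and fetch only the blocks that differ. This is a fixed-offset comparison for illustration; real delta encoding is more sophisticated.

```python
import hashlib

def block_hashes(data, block_size=4096):
    """Per-block content hashes, the same primitive dedupe relies on."""
    return [hashlib.sha256(data[i:i + block_size]).hexdigest()
            for i in range(0, len(data), block_size)]

def changed_blocks(cached_hashes, current_hashes):
    """Indices of blocks a gateway must fetch to bring its cached copy current."""
    return [i for i, h in enumerate(current_hashes)
            if i >= len(cached_hashes) or cached_hashes[i] != h]
```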

Data written to backend storage is immutable and versioned. So customers can retrieve any version of any file that was ever destaged to their backend. Glen said they write huge objects, presumably to help reduce storage footprint, IO overhead and API calls.

Glen claimed that with 3-way replication within a cloud region and 1-way replication outside the cloud region, customers no longer have to back up data. I respectfully disagreed. He believes that over time, customers will come to realize their use of backups for restores becomes so rare that they can reduce backup frequency, if not eliminate it altogether. Some follow-on discussion ensued, but in the end we seemed to agree to disagree on this topic.

Panzura also supports cross-cloud mirroring, so one could have their data mirrored from one cloud to another. One of these clouds is used as the primary, and only in the event that a majority of the gateways in the ring agree that the primary is DOWN and the secondary is UP will they all automatically cut over to using the secondary storage cloud. While failover is automated, failback requires operator intervention.
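
A toy sketch of that majority-vote cutover decision (our simplification, not Panzura's actual algorithm):

```python
def should_failover(observations):
    """observations: gateway name -> (primary_down, secondary_up) booleans.

    Cut over only when a majority of gateways see the primary DOWN and
    the secondary UP; failback remains a manual, operator-driven step.
    """
    n = len(observations)
    primary_down = sum(1 for down, _ in observations.values() if down)
    secondary_up = sum(1 for _, up in observations.values() if up)
    return primary_down > n // 2 and secondary_up > n // 2
```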

Panzura charges based on managed data capacity. Cloud or on-prem object storage is in addition to this and is charged separately by the object storage provider.

As for what size file systems they support, Glen mentioned that they are ZFS internally, so any size imaginable. But he did concede that at some point metadata management becomes a problem, and that they often suggest splitting a 20PB file system into two 10PB file systems (two gateway rings) to deal with this issue.

As for other solutions offered by Panzura, they have K8s container block storage for persistent volumes that scales in capacity/performance using K8s services/resources.

Glen Shok, VP Alliances, Panzura

Glen Shok has been in the data center and storage industry for over 20 years.

He started his career at Cisco in the late 90s, then moved to a few startups that were acquired by Brocade and Oracle. Glen has held positions in sales, sales leadership, product management and marketing, and the Office of the CTO at Zones, prior to coming to Panzura.

He can’t decide what he likes to do, but at Panzura, he’s the VP of Strategic Alliances.