130: GreyBeards talk high-speed database access using Apache Arrow Flight, with James Duong and David Li

We had been hearing about Apache Arrow and Arrow Flight as a high-performance, in-memory data format with access speeds to match for a while now, and finally got a chance to hear what it was all about from James Duong, Co-Founder of Bit Quill Technologies/Senior Staff Developer at Dremio, and David Li (@lidavidm), Apache Arrow PMC member and software developer at Voltron Data.

First, Apache Arrow is an open source, in-memory format (GitHub repo) for columnar data that enables lightning-fast access and processing. Apache Arrow Flight is a set of interfaces, protocols, and services that parallelizes access to load and unload Arrow data over the network, from storage to memory and back, very fast. Listen to the podcast to learn more.
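To make that concrete, here's a minimal sketch using the pyarrow Python bindings (the column names and values are purely illustrative): building an Arrow table puts each column into its own contiguous, typed, in-memory array.

```python
# A minimal pyarrow sketch: build an in-memory, columnar Arrow table
# from ordinary Python data. Column names/values are illustrative.
import pyarrow as pa

table = pa.table({
    "symbol": ["AAPL", "MSFT", "AAPL", "GOOG"],
    "price":  [189.25, 415.10, 190.02, 172.63],
    "volume": [1200, 800, 950, 400],
})

print(table.schema)           # typed, columnar schema
print(table.column("price"))  # one contiguous column, read without touching the others
```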

Columnar databases are all the rage these days and have more or less taken over from row-oriented databases. In a row-based database, data is stored (and accessed) row by row. In a columnar database, data is stored in columns, i.e., all the data for one column is stored in sequence, then the next column is stored in sequence. Columnar databases can be queried/processed faster than row databases (depending on whether you are accessing a few columns or whole rows). And columnar data should compress better, as all the data in a single column is of the same type.

Also, the fact that columns are stored contiguously in memory means that if you process a column at a time, CPU data caches work better, because they can grab a whole vector (a column's worth of data) with one request.
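A short pyarrow.compute sketch of that column-at-a-time style (values are illustrative): each kernel sweeps one contiguous column buffer rather than hopping row to row.

```python
import pyarrow as pa
import pyarrow.compute as pc

prices = pa.array([189.25, 415.10, 190.02, 172.63])

# Each compute kernel walks one contiguous buffer end to end -- the
# cache-friendly, vectorized access pattern described above.
mean_price = pc.mean(prices)
above_avg = pc.greater(prices, mean_price)
print(mean_price.as_py(), above_avg)
```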

Arrow data is processed and accessed in record batches. These are 2D segments which represent all the columns for a sequence/set of rows. Record batches are the unit of parallelism in Arrow and Arrow Flight, so one CPU thread/core/chip or server can be processing one record batch while another processes a different record batch.
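Here's a small pyarrow sketch of record batches (schema and values are made up): a table is really an ordered collection of record batches, and each batch can be handed to a different thread or server.

```python
import pyarrow as pa

schema = pa.schema([("symbol", pa.string()), ("price", pa.float64())])

batch = pa.RecordBatch.from_arrays(
    [pa.array(["AAPL", "MSFT"]), pa.array([189.25, 415.10])],
    schema=schema,
)

# A table is an ordered collection of record batches; each batch is an
# independent unit of work that can go to its own thread/core/server.
table = pa.Table.from_batches([batch, batch])
for b in table.to_batches():
    print(b.num_rows, b.schema.names)
```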

Arrow Flight (GitHub repo, RPC format doc) is an RPC framework that includes APIs, protocols, standards (for on-storage, on-wire and in-memory formats) and libraries used to transfer Arrow data and metadata (record batches) across the network. A typical system consists of Flight clients and Flight services.
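As a rough sketch of the client side (the server address and ticket value below are hypothetical), pyarrow's flight module lets you connect to a Flight service, list the datasets it exposes, and pull one down as a stream of record batches:

```python
import pyarrow.flight as flight

# Hypothetical Flight service address -- replace with a real endpoint.
client = flight.connect("grpc://flight-server.example.com:8815")

# List the datasets ("flights") the service exposes.
for info in client.list_flights():
    print(info.descriptor, info.total_records)

# Fetch one stream of record batches as an Arrow table.
ticket = flight.Ticket(b"my-dataset")  # hypothetical ticket value
reader = client.do_get(ticket)
table = reader.read_all()
```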

Arrow Flight currently uses Google's gRPC for data transfers. gRPC is an open source remote procedure call (RPC) framework that supports services within a data center, across data centers and out to the edge. Although Arrow Flight is currently implemented on top of gRPC, other network protocols will be supported in the future.

What makes Arrow Flight so fast is its ability to support parallel transfers. That is, customers can configure Arrow (Flight) clients across clusters of servers, with Arrow (Flight) services residing on one or more other servers. Any client can request metadata and record batches from any endpoint (Flight service) in the data center. And yes, Arrow data can be supplied from multiple endpoints by being mirrored/replicated. All data transfers can operate in parallel across all Flight clients and services, with no known bottleneck other than the network.
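A sketch of what that parallelism looks like from a client, assuming a hypothetical planner address and dataset path: the FlightInfo returned by the service lists one endpoint per partition, and each endpoint can be fetched concurrently.

```python
from concurrent.futures import ThreadPoolExecutor
import pyarrow as pa
import pyarrow.flight as flight

# Hypothetical planner/coordinator address and dataset path.
client = flight.connect("grpc://flight-planner.example.com:8815")
info = client.get_flight_info(flight.FlightDescriptor.for_path("my-dataset"))

def fetch(endpoint):
    # Each endpoint names one or more locations holding that partition;
    # any of them can serve the record batches for its ticket.
    ep_client = flight.connect(endpoint.locations[0])
    return ep_client.do_get(endpoint.ticket).read_all()

# Pull all partitions in parallel, one stream per endpoint.
with ThreadPoolExecutor() as pool:
    tables = list(pool.map(fetch, info.endpoints))
result = pa.concat_tables(tables)
```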

A single stream of Arrow Flight data was able to deliver 20GB/sec. The fact that you can have (seemingly) any number of Arrow Flight data streams in operation at the same time makes that a very interesting number.

Also, Arrow data can be stored on or sourced from typical data lakes such as Azure Data Lake, AWS S3, Google Cloud Storage, etc.

Another advantage of Arrow Flight is that it uses the same format on the wire and in storage. Normally, JDBC (and ODBC) use different on-storage and on-wire formats, which requires a format conversion (serialization) to move data from storage/memory to the wire and another conversion (deserialization) to move data from the on-wire format back to the in-storage/memory format. Arrow Flight does away with serialization and deserialization altogether by using the same format on the wire and in storage.
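You can see the no-serialization property in pyarrow's IPC API: the bytes written to the stream are laid out exactly like the in-memory columnar data, so reading them back is effectively a zero-copy reinterpretation rather than a row-by-row decode. A small sketch:

```python
import pyarrow as pa

table = pa.table({"a": [1, 2, 3], "b": ["x", "y", "z"]})

# Write the table in Arrow's IPC stream format -- the same layout Flight
# puts on the wire -- into an in-memory buffer.
sink = pa.BufferOutputStream()
with pa.ipc.new_stream(sink, table.schema) as writer:
    writer.write_table(table)
buf = sink.getvalue()

# Reading it back needs no row-by-row deserialization: the buffer's
# bytes are interpreted in place as columnar Arrow data.
roundtrip = pa.ipc.open_stream(buf).read_all()
assert roundtrip.equals(table)
```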

Arrow Flight SQL allows Arrow processing of SQL database data. My understanding is that customers using non-Arrow databases such as Oracle, SQL Server, Postgres, etc. can use Arrow Flight SQL to provide Arrow in-memory database processing/query execution for their data.
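One way to try this from Python is the ADBC Flight SQL driver (pip install adbc-driver-flightsql); in this hedged sketch, the server URI, table and columns are all hypothetical:

```python
# Sketch using the ADBC Flight SQL driver; the server URI and the
# orders table are hypothetical placeholders.
import adbc_driver_flightsql.dbapi as flightsql

conn = flightsql.connect("grpc://flightsql-server.example.com:31337")
cur = conn.cursor()
cur.execute("SELECT customer_id, SUM(total) FROM orders GROUP BY customer_id")

# Results come back as Arrow columnar data, not row-at-a-time tuples.
table = cur.fetch_arrow_table()
print(table.schema)
```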

Arrow and Arrow Flight are primarily used for data analytics workloads, but Arrow also has a new execution engine, the Arrow Gandiva project, that enables vectorized processing of Arrow data. This is a special execution engine for Arrow that supports x86 cores with AVX instructions, (NVIDIA) GPUs, and FPGAs.
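For flavor, here's a hedged sketch using pyarrow's optional gandiva bindings (not included in every pyarrow build, and the API has shifted across versions): Gandiva JIT-compiles the expression via LLVM into vectorized code, then evaluates it batch by batch.

```python
# Hedged sketch of the optional pyarrow.gandiva bindings; module
# availability and API details vary by pyarrow build/version.
import pyarrow as pa
import pyarrow.gandiva as gandiva

table = pa.table({"a": [1.0, 2.0, 3.0], "b": [4.0, 5.0, 6.0]})

builder = gandiva.TreeExprBuilder()
node_a = builder.make_field(table.schema.field("a"))
node_b = builder.make_field(table.schema.field("b"))
sum_node = builder.make_function("add", [node_a, node_b], pa.float64())
expr = builder.make_expression(sum_node, pa.field("a_plus_b", pa.float64()))

# The projector JIT-compiles the expression (via LLVM) to vectorized code.
projector = gandiva.make_projector(table.schema, [expr], pa.default_memory_pool())
for batch in table.to_batches():
    print(projector.evaluate(batch))
```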

There’s also an open source package, Fletcher, used to create HDLs for Arrow and Arrow Flight processing, so that customers can add Arrow data processing and Arrow Flight data transfer functionality to custom-built FPGAs.

One challenge with open source software is support for the problems/bugs that crop up. An active developer community helps, but enterprise customers require professional, on-call 7×24 (5×12?) support for all their critical (and most non-critical) software. Voltron Data (David’s company) provides paid support for Arrow Flight and Arrow data services.

The other major problem with open source software has been complexity of use. At the moment the Arrow Flight team is very responsive in clarifying documentation and is trying to make it easier to use. But for now Arrow Flight is mostly a set of APIs, libraries and connectors that end users can use to stand up Arrow (Flight) clients and servers and transfer Arrow data between them.

James Duong, Co-Founder Bit Quill Technologies & Sr. Staff Developer at Dremio

An Apache Arrow contributor, co-founder at Bit Quill Technologies, and contributor to Dremio Corporation projects, James Duong has worked with databases for over 15 years, from backend query engines to drivers and protocols. He’s worked with a variety of relational, big data, and cloud databases, including Dremio, SQL Server, Redshift, and Hive.

Previously at Simba Technologies, James architected and built connectors for data sources, as well as designing the Simba Engine SDK for developing connectivity solutions for any data source.

Bit Quill Technologies, the company James helped co-found, builds back-end software in the data and cloud space. Bit Quill has built a name for itself as a producer of high-quality software, with a collaborative approach to design and development and a love for good tech and happy people.

Balancing his passion for the data ecosystem with a young family, James occasionally steps away from it all to go hiking.

David Li, Apache Arrow PMC and software engineer at Voltron Data

David is a PMC member for Apache Arrow and a software engineer at Voltron Data (formerly known as Ursa Computing). Prior to that, he worked on data services and Apache Arrow at Two Sigma.

David holds an M.Eng. in Computer Science from Cornell University.

129: GreyBeards talk composable infrastructure with GigaIO’s Matt Demas, Field CTO

We haven’t talked composable infrastructure in a while now but it’s been heating up lately. GigaIO has some interesting tech and I’ve been meaning to have them on the show but scheduling never seemed to work out. Finally, we managed to sync schedules and have Matt Demas, field CTO at GigaIO (@giga_io) on our show.

Also, please welcome Jason Collier (@bocanuts), a long-time friend, technical guru and innovator, to our show as another co-host. We used to have these crazy discussions in front of financial analysts where we disagreed completely on the direction of IT. We don’t do these anymore, probably because the complexities of this industry can be hard to grasp for some. From now on, Jason will be added to our gaggle of GreyBeard co-hosts.

GigaIO has taken a different route to composability than some other vendors we have talked with. For one, they seem inordinately focused on speed of access and reducing latencies. For another, they’re the only ones out there, to our knowledge, demonstrating how today’s technology can compose and share memory across servers, storage, GPUs and just about anything with DRAM hanging off a PCIe bus. Listen to the podcast to learn more.

GigaIO started out with pooling/composing memory across PCIe devices. Their current solution is built around a ToR (currently Gen4) PCIe switch with logic, plus a variety of pooling appliances (JBoG[PUs], JBoF[lash], JBoM[emory], …). They use their FabreX fabric to supply rack-scale composable infrastructure that can move (attach) PCIe componentry (GPUs, FPGAs, SSDs, etc.) to any server on the fabric, to service workloads.

We spent an awfully long time talking about composing memory. I didn’t think this was currently available, at least not until the next version of CXL, but Matt said GigaIO, together with their partner MemVerge, is doing it today over FabreX.

We’ve talked with MemVerge before (see: 102: GreyBeards talk big memory … episode). But when last we met, MemVerge had a memory appliance that virtualized DRAM and Optane into an auto-tiering, dual-tier memory. Apparently, with GigaIO’s help they can now attach a third tier of memory to any server that needs it. I asked Matt what the response time to extended-DRAM memory requests was, and he said ~300ns. And then he said that the next-gen PCIe technology will take this down considerably.

Matt and Jason started talking about High Bandwidth Memory (HBM), which is internal to GPUs, AI boards, HPC servers and some select CPUs, and stacks synchronous DRAM (SDRAM) into a 3D package. 2nd-gen HBM silicon is capable of 256 GB/sec per package. Given this level of performance, Matt indicated that GigaIO is capable of sharing this memory across the fabric as well.

We then started talking about software and how users can control FabreX and their technology to compose infrastructure. Matt said GigaIO has no GUI but rather uses Redfish management, a fully RESTful interface and API. Redfish has been around for ~6 years now and has become the de facto standard for management of server infrastructure. GigaIO composable infrastructure support has been natively integrated into a couple of standard cluster managers, for example CIQ Singularity & Fuzzball, Bright Computing cluster managers and SLURM cluster scheduling. Matt also mentioned they are well plugged into OCP.
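Since Redfish is a standard REST API, a few lines of Python are enough to walk a server inventory. In this hedged sketch the BMC address and credentials are hypothetical; the /redfish/v1 service root and Systems collection are part of the DMTF standard:

```python
# Hedged Redfish sketch using python-requests; the management endpoint
# and credentials below are hypothetical.
import requests

BMC = "https://10.0.0.42"  # hypothetical management endpoint
auth = ("admin", "password")

# /redfish/v1 is the standard DMTF service root; walk it to the
# Systems collection and read each system's inventory.
root = requests.get(f"{BMC}/redfish/v1", auth=auth, verify=False).json()
systems_uri = root["Systems"]["@odata.id"]
systems = requests.get(f"{BMC}{systems_uri}", auth=auth, verify=False).json()

for member in systems["Members"]:
    system = requests.get(f"{BMC}{member['@odata.id']}", auth=auth, verify=False).json()
    print(system["Id"], system.get("Model"), system.get("PowerState"))
```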

Composable infrastructure seems to have generated new interest with HPC customers that are deploying bucketfuls of expensive GPUs alongside their congregations of compute cores. Using GigaIO, HPC environments like these can go, overnight, from maybe 30% average GPU utilization to 70%, substantially reducing acquisition and operational costs for GPU infrastructure. One would think the cloud guys might be interested as well.

Matt Demas, Field CTO, GigaIO

Matt’s career spans two decades of experience in architecting innovative IT solutions, starting with the US Air Force. He has built federal, healthcare, and education-based vertical solutions at companies like Dell, where he was a Senior Solutions Architect. Immediately prior to joining GigaIO, he served as Field CTO at Liqid. 

Matt holds a Bachelor’s degree in Information Technology from American InterContinental University, and an MBA from Concordia University Austin.

128: GreyBeards talk containers, K8s, and object storage with AB Periasamy, Co-Founder & CEO, MinIO

Sponsored by:

Once again Keith and I are talking K8s storage, only this time it was object storage. Anand Babu (AB) Periasamy, Co-founder and CEO of MinIO, has been on our show a couple of times now and it’s always an insightful discussion. He’s got an uncommon perspective on IT today and what needs to change.

Although MinIO is an open source, uber-compatible, S3 object store, AB more often talks like a revolutionary, touting the benefits of containerization, scale and automation with K8s. Object storage is just one of the vehicles to help get there. Listen to the podcast to learn more.

We started our discussion on the changing role of object storage in applications. Object storage started out as an archive solution. But then, over time, something happened: modern database startups adopted object storage to hold primary data, then analytics moved over to objects in a big way, and finally AI/ML came along with an unquenchable thirst for data, and object storage was its only salvation.

Keith questioned the use of objects in analytics. Both AB and I pointed out that Splunk (and Spark) fully support objects. But Keith said R (and Python) data scientists prefer to use the protocols they learned in school, and these were all about files (CSV, JPEG, JSON). AB said what usually happens is this data is stored in object storage and then downloaded onto local disk as files to be processed. That’s not to say that R or Python can’t process objects directly, but even when they don’t, the ultimate source of data truth is object storage.
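A small sketch of that download-then-process pattern using the MinIO Python SDK (pip install minio); the endpoint, bucket and object names are hypothetical:

```python
# Object storage is the source of truth; pull the object down as a
# local file, then analyze it with the usual file-based tooling.
from minio import Minio
import pandas as pd

client = Minio("minio.example.com:9000",          # hypothetical endpoint
               access_key="ACCESS_KEY", secret_key="SECRET_KEY",
               secure=True)

client.fget_object("datasets", "measurements.csv", "/tmp/measurements.csv")
df = pd.read_csv("/tmp/measurements.csv")
print(df.describe())
```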

Somehow, we got onto the multi-cloud question. AB said the multi-cloud is really all about containers and K8s. When customers talk multi-cloud, what they really mean is they want applications that can run anywhere, in any cloud, on premise, or anyplace else for that matter.

I thought multi-cloud was a DR solution. But AB reiterated it’s more a solution to vendor lock-in. What containerization gives IT is the option (ability) to run applications anywhere, but IT is not obligated to exercise that option unless it makes sense.

AB said that dev today doesn’t develop apps in the cloud anymore. They develop locally using minikube; once it’s working there, they add CI/CD tool chains and then move it to its final resting place (the cloud or wherever it ultimately needs to run). It turns out containers, YAML files, scripts, etc. are small and trivial to upload, migrate, or move to any internet location. And with ubiquitous K8s support available everywhere, they can move anywhere unchanged.

But where’s the data? AB said anywhere the app executes. It’s never moved; it takes too much time and effort to move that amount of data. But as an application moves, the data it generates grows in its new location over time.

We next turned to how MinIO is supported in K8s. AB mentioned they have a DirectPV CSI driver that creates distributed PVs on local disks to back MinIO services. In this way, containers needing access to MinIO S3 object storage can allocate data directly on local drives.

Then we asked about opinionated stacks. AB said most customers don’t want these. They may have some value in preserving an infrastructure environment, but customers are better off transitioning to containerization and building any stack they need within those containers and the K8s cluster services.

On the other hand, MinIO object storage is available with the same S3 API, in bare metal, on VMware, OpenShift, K8s, every public cloud and most private clouds, as well. The advantage of the same, single storage interface, available everywhere can’t be beat.
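Because the interface is plain S3, even a generic S3 client like boto3 works unchanged against MinIO; only the endpoint URL differs per deployment (the one below is hypothetical):

```python
# MinIO speaks the standard S3 API, so generic S3 clients work as-is;
# point boto3 at a hypothetical on-prem MinIO via endpoint_url.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://minio.example.com:9000",  # hypothetical endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# The same calls work against AWS S3 by dropping endpoint_url.
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
```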

MinIO recently closed a new funding round of $103M. AB mentioned they had new investments from Intel and Softbank, but I was more interested in plans he had for the new cash. And Keith asked where the new funding left MinIO with respect to its competitors in this space.

AB said it was never about the money; it was more about what you did with your team that mattered in the long run. AB’s imperative was to enter an existing market with a better product and succeed with that. Creating a new market plus a new product always costs more, takes longer and is riskier.

As for the new funds, there are really two ways to go: 1) improve the current product or 2) create a new one. My sense is that AB leans towards improving the current product.

For instance, MinIO is often asked to support a different object storage API. But AB’s perspective is that S3 was an early bet that paid off well by becoming the de facto standard for object storage. Supporting another API would divide his resources and probably make their current product worse, not better. AB mentioned they are getting 1.1M downloads of their Docker container version, so they seem to be succeeding well with the current product.

Anand Babu (AB) Periasamy, Co-founder and CEO

AB Periasamy is the co-founder and CEO of MinIO, an open-source provider of high-performance object storage software. In addition to this role, AB is an active investor and advisor to a wide range of technology companies, from H2O.ai and Manetu, where he serves on the board, to advisor or investor roles with Humio, Isovalent, Starburst, Yugabyte, Tetrate, Postman, Storj, Procurify, and Helpshift. Successful exits include Gitter.im (GitLab), Treasure Data (ARM) and Fastor (SMART).

AB co-founded Gluster in 2005 to commoditize scalable storage systems. As CTO, he was the primary architect and strategist for the development of the Gluster file system, a pioneer in software-defined storage. After the company was acquired by Red Hat in 2011, AB joined Red Hat’s Office of the CTO. Prior to Gluster, AB was CTO of California Digital Corporation, where his work led to the scaling of commodity cluster computing to supercomputing-class performance. His work there resulted in the development of Lawrence Livermore Laboratory’s “Thunder” system, which at the time was the second fastest in the world.

AB holds a Computer Science Engineering degree from Annamalai University, Tamil Nadu, India.

127: Annual year end wrap up podcast with Keith, Matt & Ray

[Ray’s sorry about his audio, it will be better next time he promises, The Eds] This was supposed to be the year where we killed off COVID for good. Alas, it was not to be and it’s going to be with us for some time to come. However, this didn’t stop that technical juggernaut we call the GreyBeards on Storage podcast.

Once again we got Keith, Matt and Ray together to discuss the past year’s top 3 technology trends that would most likely impact the year(s) ahead. Given our recent podcasts, Kubernetes (K8s) storage was top of the list. To this we added AI-MLops in the enterprise, and we continued our discussion from last year on how COVID & WFH are remaking the world, including offices, data centers and downtowns. Listen to the podcast to learn more.

K8s rulz

For some reason, we spent many of this year’s podcasts discussing K8s storage. K8s was never meant to provide (storage) state and, as a result, any K8s data storage has had to be shoehorned in.

Moreover, why would any IT group even consider containerizing enterprise applications, let alone deploying them onto K8s? The most common answers seem to be automatic scalability, cloud-like automation and run-anywhere portability.

Keith chimed in that enterprise applications aren’t going anywhere, and we were off. Just like the mainframe, client-server and OpenStack applications before them, enterprise apps will likely outlive most developers, continuing to run on their current platforms forever.

But any new apps will likely be born, live a long life and eventually fade away on the latest runtime environment, which today is K8s.

Matt mentioned hybrid and multi-cloud as becoming the raison d’être for enterprise apps to migrate to containers and K8s. Further, enterprises have a pressing need to move their apps to the hybrid- & multi-cloud model. AWS’s recent hiccups notwithstanding, multi-cloud’s time has come.

Ray and Keith then discussed which is bigger: K8s container apps or enterprise “normal” (meaning virtualized/bare metal) apps. It all comes down to how you define bigger. By sheer number of unique applications, enterprise wins; by compute power devoted to running those apps, it’s a much more difficult race to call. But even Keith had to agree that, based on compute power, containerized apps are inching ahead.

AI-MLops coming on strong

AI/MLops in the enterprise was up next. For me, the most significant indicator of heightened interest in AI-ML was VMware’s announcement of native support for NVIDIA’s AI-MLops management and orchestration technologies.

Just like K8s before it and VMware’s move to Tanzu and its predecessors, their move to natively support NVIDIA AI tools signals that the enterprise is starting to seriously consider adding AI to their apps.

We think VMware’s crystal ball is based on:

  • Clouds rolling out more and more AI and MLops technologies for enterprises to use on their infrastructure
  • GPUs becoming more and more pervasive in enterprise AND in cloud infrastructure
  • Data to drive training and inferencing coming out of the woodwork like never before

We had some discussion as to where AMD and Intel will end up in this AI trend. Consensus is that there’s still space for CPU inferencing and “some” specialized training, which is unlikely to go away. And of course AMD has their own GPUs, and Intel is coming out with their own shortly.

COVID & WFH impacts the world (again)

And then there was COVID and WFH. COVID will be here for some time to come. As a result, WFH is not going away, at least not totally, any time soon; it’s just becoming another way to do business.

WFH works well for some things (like IT office work) and not so well for others (K-12 education). If the GreyBeards were into (non-crypto) investing, we’d be shorting office real estate. What could move into those millions of square feet (meters) of downtown office space is anyone’s guess. But just like the factories of old, cities, and downtowns in particular, can take anything and make it usable for other purposes.

That’s about it. 2021 was another “interesting” year for infrastructure technology. It just goes to show you: “May you live in interesting times” is actually an old (Chinese) curse.

Keith Townsend, (@TheCTOadvisor)

Keith is an IT thought leader who has written articles for many industry publications, interviewed many industry heavyweights, worked with Silicon Valley startups, and engineered cloud infrastructure for large government organizations. Keith is the co-founder of The CTO Advisor, blogs at Virtualized Geek, and can be found on LinkedIn.

Matt Leib, (@MBLeib)

Matt Leib has been blogging in the storage space for over 10 years, with work experience in both engineering and presales/product marketing. His blog is at Virtually Tied to My Desktop and he’s on LinkedIn.

Ray Lucchesi, (@RayLucchesi)

Ray is the host and co-founder of GreyBeardsOnStorage and is President/Founder of Silverton Consulting, and a prominent (AI/storage/systems technology) blogger at RayOnStorage.com. Sign up for SCI’s free, monthly industry e-newsletter here, published continuously since 2007. Ray can also be found on LinkedIn.

126: GreyBeards talk k8s storage with Alex Chircop, CEO, Ondat

Keith and I had an interesting discussion with Alex Chircop (@chira001), CEO of Ondat, a Kubernetes storage provider. They have a high-performing system, laser-focused on providing storage for k8s stateful container applications. Their storage is entirely containerized and has a number of advanced features for data availability, performance and security that developers need to run stateful container apps. Listen to the podcast to learn more.

We started by asking Alex how Ondat differs from all the other k8s storage solutions out there today (which we’ve been talking with lately). He mentioned three crucial capabilities:

  • Ondat was developed from the ground up to run as k8s containers. Doing this allows any k8s distribution to run their storage to support stateful container apps.
  • Ondat was designed to allow developers to run any possible container app. Ondat supports both block as well as file storage volumes.
  • Ondat provides consistent, superior performance, at scale, with no compromises. Sophisticated data placement ensures that data is located where it is consumed, and their highly optimized data path provides low-latency access to that data.

Ondat creates a data mesh (storage pool) out of all storage cluster nodes. Container volumes are carved out of this data mesh and, at creation time, data and the apps that use it are co-located on the same cluster nodes.

At volume creation, dev can specify the number of replicas (mirrors) to be maintained by the system. Alex mentioned that Ondat uses synchronous replication between replica cluster nodes to make sure that all active replicas are up to date with the last IO that occurred to primary storage.
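As a hedged illustration of how a dev might request replicas, here's a sketch using the official kubernetes Python client; note the storageos.com/replicas label comes from StorageOS-era documentation and may differ in current Ondat releases, and the StorageClass name is hypothetical.

```python
# Hedged sketch: create a PVC requesting two synchronous replicas.
# The storageos.com/replicas label is from StorageOS-era docs and may
# have changed; the "ondat" StorageClass name is hypothetical.
from kubernetes import client, config

config.load_kube_config()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(
        name="orders-db",
        labels={"storageos.com/replicas": "2"},  # ask for two replicas
    ),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="ondat",
        resources=client.V1ResourceRequirements(requests={"storage": "50Gi"}),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim("default", pvc)
```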

Ondat compresses all data that goes over the network as well as encrypts data in flight. Dev can easily specify that the data-at-rest also be compressed and/or encrypted. Compressing data in flight helps supply consistent performance where networks are shared.

Alex also mentioned that they support both single-reader/writer k8s block storage volumes and multi-reader/multi-writer k8s file storage volumes for containers.

In Ondat, each storage volume includes a mini-brain used to determine primary and replica data placement. Ondat also uses disaggregated consensus to decide what happens to primary and replica data after a k8s split cluster occurs. After a split cluster, isolated replicas are invalidated and replicas are recreated, where possible, in the surviving nodes of the cluster portion that holds the primary copy of the data.

Also, replicas can optionally be located across AZs, if available in your k8s cluster. Ondat doesn’t currently support replication across k8s clusters.

Ondat storage works on any hyperscaler k8s solution as well as any on-prem k8s system. I asked if Ondat supports VMware TKG and Alex said yes, but when pushed mentioned that they have not tested it yet.

Keith asked what happens when things go south, i.e., an application starts to suffer worse performance. Alex said that Ondat supplies system telemetry to k8s logging systems which can be used to understand what’s going on. But he also mentioned they are working on a cloud based, Management-aaS offering, to provide multi-cluster operational views of Ondat storage in operation to help understand, isolate and fix problems like this.

Keith mentioned he had attended a talk by Google engineers who developed Kubernetes, and they said stateful containers don’t belong under Kubernetes. So why are stateful containers becoming so ubiquitous now?

Alex said that may have been the case originally, but k8s has come a long way since then, and nowadays, as many enterprises move their enterprise applications from their old system environments to run as containers, they all require state for processing. Having that stateful information or storage volumes accessible directly under k8s makes application re-implementation much easier.

What’s a typical Ondat configuration? Alex said there doesn’t appear to be one. Current Ondat deployments range from a few hundred to thousands of k8s cluster nodes and 10 to 100s of TB of usable data storage.

Ondat has a simple pricing model, licensing costs are determined by the number of nodes in your k8s cluster. There’s different node pricing depending on deployment options but other than that it’s pretty straightforward.

Alex Chircop, CEO Ondat

Alex Chircop is the founder and CEO of Ondat (formerly StorageOS), which makes it possible to easily deploy and manage stateful Kubernetes applications with persistent data volumes. He also serves as co-chair of the CNCF (Cloud Native Computing Foundation) Storage Technical Advisory Group.

Alex comes from a technical background working in IT that includes more than 10 years with Nomura and Goldman Sachs.