133: GreyBeards talk trillion row databases/data lakes with Ocient CEO & Co-founder, Chris Gladwin

We saw a recent article in Blocks and Files (Storage facing trillion-row db apocalypse) about a couple of companies trying to answer trillion row database queries without taking weeks to respond. One of those companies was Ocient (@Ocient), a Chicago startup whose CEO and Co-Founder, Chris Gladwin, is an old friend from Cleversafe (now IBM Cloud Object Storage).

Chris and team have been busy creating a new way to perform data analytics on massive data lakes. It has a lot to do with extreme parallelism, high core counts, NVMe SSDs, and sophisticated network and compute flow control. Listen to the podcast to learn more.

One key to Ocient’s approach is NVMe SSDs, which have become ubiquitous over the last couple of years and can be deployed to deal with large data problems. Another key to Ocient is multi-core CPUs, which again seem to be everywhere and, if anything, are nearly doubling core counts with every new generation of CPU chip.

We let Chris wax a little too long on the SSD revolution in IOPS, especially as it pertains to random 4K reads. Put 20 or so NVMe SSDs in a server with dual 50-core CPU chips and you have one fast random IO machine.
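To put rough numbers on that claim (our back-of-the-envelope figures, not Chris’s), assume each NVMe SSD can deliver on the order of a million random 4K read IOPS:

```python
# Back-of-the-envelope math for the "one fast random IO machine" claim.
# Assumptions (ours, not Ocient's): ~1M random 4K read IOPS per NVMe SSD,
# 20 SSDs per server, and 2 x 50-core CPUs to drive them.
ssds_per_server = 20
iops_per_ssd = 1_000_000          # assumed per-drive random 4K read IOPS
cores_per_server = 2 * 50

raw_iops = ssds_per_server * iops_per_ssd
print(f"Raw random-read ceiling: {raw_iops:,} IOPS per server")
print(f"That's ~{raw_iops // cores_per_server:,} IOPS per core to service")

# At 4KB per IO, the data rate the server must move:
bytes_per_io = 4 * 1024
print(f"~{raw_iops * bytes_per_io / 1e9:.0f} GB/s of 4K reads, "
      "which is why PCIe and network flow control matter")
```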

Another key to Ocient is very sophisticated network and bus data flow management. With all this data, running any query involves consuming lots of data that all has to be brought into the CPU. PCIe bandwidth helps, as do NVMe SSDs, but you still need to ensure that nothing gets bottlenecked moving all that data around a system/server.

Yet another key to Ocient is parallelism. With one server holding 20 NVMe SSDs and two 50-core CPUs you’ve got a lot of capability, but when you are talking about trillion row databases you need more. So in order to respond to queries in anything like a second or so, they throw a lot of NVMe servers at the problem.

I asked how they split the data across all these servers and Chris mentioned that at the moment that’s part of their secret sauce and involves professional services.

Ocient supports full ANSI SQL queries against trillion row databases and replies to those queries in a matter of seconds. And we aren’t just talking about SQL selects; Ocient can do splits, joins and updates to a trillion row database at the same time the SQL selects are going on. Chris mentioned that Ocient can be loading 100K JSON files each second, while still performing SQL queries in near real time against the trillion row database.
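We don’t have Ocient’s API in front of us, so the sketch below is purely illustrative: a generic ANSI SQL aggregation of the sort described, issued over a hypothetical Python DB-API connection (the connection, table and column names are our inventions, not Ocient’s interface).

```python
# Illustrative only: a generic ANSI SQL pattern of the kind described in the
# podcast, run through a hypothetical DB-API connection object.
def top_talkers(conn, since):
    sql = """
        SELECT device_id, COUNT(*) AS events, SUM(bytes) AS total_bytes
        FROM telemetry                -- imagine a trillion-row table
        WHERE event_ts >= ?
        GROUP BY device_id
        ORDER BY total_bytes DESC
        LIMIT 100
    """
    cur = conn.cursor()
    cur.execute(sql, (since,))
    return cur.fetchall()

# Per the podcast, loads (e.g. ~100K JSON records/sec) and updates can run
# against the same table while queries like this return in seconds.
```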

Ocient supports Reed-Solomon error correction on database data as well as data compression and encryption.

In addition to SQL queries, Chris mentioned that Ocient supports data load and transform activities. He said that most of this data is being generated from IoT applications and often needs to be cleaned up before it can be processed. Doing this in real time, while handling queries to the database is part of their secret sauce.

Chris said there probably aren’t that many organizations that need trillion row databases. But ad auctions, telecom routers, and financial services already use trillion row databases, and they all want to be able to process queries on this data faster. Ocient is betting that there will be plenty more like this over time.

Ocient is available on AWS and GCP as a cloud service, can run in their own Ocient Cloud, or can be deployed on premises. Ocient services are billed on a per core pack (500 cores, I think) subscription model.

Chris Gladwin, CEO and Co-founder, Ocient

Chris is the CEO and Co-Founder of Ocient whose mission is to provide the leading platform the world uses to transform, store, and analyze its largest datasets.

In 2004, Chris founded Cleversafe which became the largest and most strategic object storage vendor in the world (according to IDC.)  He raised $100M and then led the company to over a $1.3B exit in 2015 when IBM acquired the company.  The technology Cleversafe created is used by most people in the U.S. every day and generated over 1,000 patents granted or filed, creating one of the ten most powerful patent portfolios in the world. 

Prior to Cleversafe, Chris was the Founding CEO of startups MusicNow and Cruise Technologies and led product strategy for Zenith Data Systems.  He started his career at Lockheed Martin as a database programmer and holds an engineering degree from MIT. 

106: Greybeards talk Intel’s new HPC file system with Kelsey Prantis, Senior Software Eng. Manager, Intel

We had talked with Intel at Storage Field Day 20 (SFD20), about a month ago. At the virtual event, Intel’s focus was on their Optane PMEM (persistent memory) technology. Kelsey Prantis (@kelseyprantis), Senior Software Engineering Manager, Intel, was on the show and gave an introduction to Intel’s DAOS (Distributed Asynchronous Object Storage, DAOS.io), a new HPC (high performance computing, supercomputers) file system they developed from scratch to use leading edge Intel technologies, and Optane PMEM was one of them.

Kelsey has worked on Lustre and other HPC file systems for a long time now and came into the company with the acquisition of Whamcloud. Currently, she manages the development team working on DAOS. DAOS is a new HPC object storage file system which is completely open source (available on GitHub).

DAOS was designed from the start to take advantage of NVMe SSDs and Optane PMEM. With PMEM, current servers can support up to 20TB of memory. Besides the large memory sizes, Optane PMEM also offers non-volatile memory and byte addressability (just like DRAM). These two characteristics open up new functionality that allows DAOS to move beyond the legacy, block oriented storage architectures that have been the only storage solution for HPC (and the enterprise) for decades now.

What’s different about DAOS

DAOS uses PMEM for all metadata and for storing small files. HPC IO has always been heavy bandwidth (large block IO) oriented, but lately newer applications have emerged, such as AI/ML/DL, data analytics and others, that use smaller files/blocks. Indeed, most new HPC clusters and supercomputers are deploying almost as many GPUs as CPUs in their configurations to support AI activities.

The problem is that these newer applications typically consume much smaller files. Matt mentioned one HPC client he worked with was processing small batches of seismic data, to predict, in real time, earthquakes that were happening around the world.

By using PMEM for metadata and small files, DAOS can be much more responsive to file requests (open, close, delete, status) as well as provide higher performing IO for small files. All this leads to a much better performing system for the new HPC workloads as well as great sustainable performance for the more traditional large file workloads.
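A toy illustration of the idea (our simplification, not DAOS source code): route metadata and small writes to byte-addressable PMEM, and large writes to NVMe SSD.

```python
# Toy placement policy: metadata and small writes land in PMEM, bulk data on NVMe.
SMALL_IO_THRESHOLD = 4096   # assumed cutoff, purely illustrative

def place_write(key: str, data: bytes, pmem: dict, nvme: dict) -> str:
    if len(data) <= SMALL_IO_THRESHOLD:
        pmem[key] = data            # small file / metadata: persistent memory
        return "pmem"
    nvme[key] = data                # large file: NVMe SSD
    return "nvme"

pmem_tier, nvme_tier = {}, {}
place_write("dir/.meta", b'{"owner": "hpc"}', pmem_tier, nvme_tier)          # -> pmem
place_write("dir/checkpoint.bin", b"\0" * 1_000_000, pmem_tier, nvme_tier)   # -> nvme
```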

DAOS storage

DAOS provides a cluster storage system that can be configured with as little as 1 node (no data protection), more normally a minimum of 3 nodes (with data protection), and up to 512 nodes (lab tested). Data protection in DAOS is currently based on mirroring data and can use from 0 up to the number of nodes in a cluster as data mirrors.

DAOS system nodes are homogeneous. That is, they all come with the same amount of PMEM and NVMe SSDs. Note, DAOS doesn’t support disk drives. Kelsey mentioned DAOS node hardware can be tailored to suit any particular application environment, but they typically require an average of 6% of overall DAOS system capacity in PMEM for metadata and small file activity.

DAOS currently supports their own API, POSIX, HDF5, MPI-IO and Apache Spark storage protocols. Kelsey mentioned that standard POSIX uses a pessimistic conflict resolution mode, which leads to performance bottlenecks during parallel access. In contrast, DAOS’s version of POSIX uses optimistic conflict resolution, which means DAOS starts writes assuming there’s no conflict, but if one occurs it handles the conflict in real time. Of course, with all the metadata byte addressable and in PMEM, this doesn’t take up a lot of (IO) time.
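Here is a minimal sketch of the difference, using generic optimistic concurrency rather than DAOS internals: no lock is taken up front, and a conflict is detected after the fact by a version check and resolved by retrying.

```python
# Generic optimistic concurrency sketch (not DAOS code): write first,
# detect conflicts by version, retry on conflict.
class Record:
    def __init__(self):
        self.version = 0
        self.value = None

def optimistic_write(rec: Record, new_value, expected_version: int) -> bool:
    """Apply the write only if nobody else got there first."""
    if rec.version != expected_version:
        return False            # conflict detected; caller retries
    rec.value, rec.version = new_value, rec.version + 1
    return True

rec = Record()
v = rec.version
# No lock was taken up front; with metadata in fast PMEM a retry is cheap.
while not optimistic_write(rec, "block-42", v):
    v = rec.version
```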

As mentioned earlier, DAOS data protection uses mirror-replicas. However, unlike most other major file systems, DAOS mirroring can be done at the object level. DAOS internally is an object store. Data organization on DAOS starts at the pool level, underneath that are data containers, and under those are objects. Any object in DAOS can have its own mirroring configuration. DAOS is working towards supporting erasure coding as another form of data protection in a future release.
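A rough sketch of that data model, with a per-object replica count standing in for DAOS’s real object class settings (the names and fields are ours):

```python
# Rough data-model sketch: pools -> containers -> objects, each object
# carrying its own protection level (our simplification).
from dataclasses import dataclass, field

@dataclass
class DaosObject:
    oid: str
    replicas: int = 1            # per-object mirroring configuration

@dataclass
class Container:
    name: str
    objects: list = field(default_factory=list)

@dataclass
class Pool:
    label: str
    containers: list = field(default_factory=list)

pool = Pool("hpc-pool")
cont = Container("checkpoints")
cont.objects.append(DaosObject("ckpt-0001", replicas=3))   # 3-way mirrored
cont.objects.append(DaosObject("scratch-tmp", replicas=1)) # unprotected
pool.containers.append(cont)
```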

DAOS performance

There’s a new storage benchmark that was developed specifically for HPC, called the IO500. The IO500 benchmark simulates a number of different HPC workloads, measures performance for each of them, and computes an (aggregate) performance score to rank HPC storage systems.

IO500 ranks system performance using two lists: one is for any size configuration, which typically ranges from 50 to 1000s of nodes, and the other list limits configurations to 10 nodes. The first performance ranking can sometimes be gamed by throwing more hardware into a cluster. The 10 node rankings are much harder to game this way and, from our perspective, show a fairer comparison of system performance.
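As we understand it, the IO500 score is a geometric mean of a bandwidth score (GiB/s) and a metadata score (kIOPS), each itself a geometric mean of its sub-benchmark results. A small sketch with made-up numbers:

```python
# IO500-style scoring sketch (our understanding of the method; the
# sub-benchmark values below are invented for illustration).
from math import prod

def geomean(xs):
    return prod(xs) ** (1.0 / len(xs))

bandwidth_gib_s = [35.0, 12.0, 60.0, 18.0]     # e.g. ior easy/hard write/read
metadata_kiops  = [900.0, 150.0, 700.0, 120.0] # e.g. mdtest phases

bw_score    = geomean(bandwidth_gib_s)
md_score    = geomean(metadata_kiops)
io500_score = geomean([bw_score, md_score])
print(f"BW={bw_score:.1f} GiB/s  MD={md_score:.1f} kIOPS  IO500={io500_score:.1f}")
```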

As presented (virtually) at ISC 2020, DAOS took the top spot on the IO500 any size configuration list and performed more than 2X better than the next best solution. And on the IO500 10 node list, Intel’s DAOS configuration, the Texas Advanced Computing Center (TACC) DAOS configuration, and the Argonne National Labs DAOS configuration took the top 3 spots and had 3X better performance than the next best, non-DAOS storage system.

Argonne National Labs has already stated that they will be using DAOS in their new HPC system to be deployed in the near future. Early specifications for storage at the new Argonne system required support for 230PB of data and 25TB/sec of bandwidth.

The podcast ran ~43 minutes. Kelsey was great to talk with and very knowledgeable about HPC systems and HPC IO in particular. Matt has worked at Argonne in the past so understood these systems better than I. Sadly, we lost Matt’s end of the conversation about 1/2 way into the recording. Both Matt and I thought that DAOS represents the birth of a new generation of HPC storage. Listen to the podcast to learn more.



Kelsey Prantis, Senior Software Engineering Manager, Intel

 Kelsey Prantis heads the Extreme Storage Architecture and Development division at Intel Corporation. She leads the development of Distributed Asynchronous Object Storage (DAOS), an open-source, low-latency and high IOPS object store designed from the ground up for massively distributed Non-Volatile Memory (NVM).

She joined Intel in 2012 with the acquisition of Whamcloud, where she led the development of the Intel Manager for Lustre* product.

Prior to Whamcloud, she was a software developer at personal genomics and biotechnology company 23andMe.

Prantis holds a Bachelor’s degree in Computer Science from Rochester Institute of Technology.

104: GreyBeards talk new cloud defined (shared) storage with Siamak Nazari, CEO Nebulon

Ray has known Siamak Nazari (@NebulonInc), CEO of Nebulon, across three companies now but has rarely had a one (two) on one discussion with him. With Nebulon just emerging from stealth (a gutsy move during the pandemic), the GreyBeards felt it was a good time to get Siamak on the show to tell us what he’s been up to. Turns out he and Nebulon decided it was time to completely rethink/rearchitect shared storage for the new data center.

At his prior company, Siamak spent a lot of time with many customers discussing the problems they had dealing with the complexity of managing, provisioning and maintaining multiple shared storage arrays. Somewhere in all those discussions Siamak saw this as a problem that needed a radical solution. If we could just redo shared storage from the ground up, there might be a solution to all these problems.

Redefining shared storage

Nebulon’s new approach to shared storage starts with an SPU card which replaces SAS RAID cards in a server. But instead of creating SAS RAID groups, the SPU creates a shareable, enterprise class, pool of storage across a throng of servers.

They call this collection of SPU-equipped servers Cloud Defined Storage (CDS), and together they create a Nebulon nPod. An nPod essentially consists of multiple servers with SPU cards, with or without attached SSD storage, that are provisioned, managed and monitored via the cloud. Nebulon nPod servers are elements or nodes of a shared storage pool spanning all interconnected SPU servers in a data center.

In an SPU server with local (SAS, SATA, NVMe) SSD storage, the SPU creates an erasure coded pool of storage which can be used to serve (SAS) LUNs to this or any other SPU-attached server in the nPod. In an SPU server without local SSD storage, the SPU provides access to any other SPU server’s shared storage in the nPod. Nebulon nPods only work with flash storage; they don’t support spinning media.

The SPU can supply boot storage for its server. There’s no need to have the CPU running OS code to use nPod shared storage. Yes, the SPU needs power and an active PCIe bus to work, but the functionality of an SPU doesn’t require an operational OS to work. The SPU provides a SAS LUN interface to server CPUs.

Each SPU has dual port access to an inter-cluster (25GbE) interconnect that connects all SPUs to the nPod. The nPod inter-cluster protocol is proprietary but takes advantage of standard TCP/IP services across the network with standard 25GbE switching.

The SPU firmware ensures that it stays connected as long as power is available to the server. Customers can have more than one SPU in a server, but these would be used for additional IO performance. Each SPU also has 32GB of NVRAM for caching purposes, which is also used for power fail fault tolerance.

In the unlikely case that the server and SPU are completely down (e.g. power outage), clients can still access that SPU’s data storage, if it was mirrored (see below). When the SPU server comes back up, it will be resynched with any data that had been changed.

Other Nebulon storage features

Nebulon supports data-at-rest encryption, compression and deduplication for customer data. That way customer data is never in plain text as it travels across the nPod or even within the server from the SPU to SSD storage. Also any customer data written to an nPod can be optionally mirrored and as noted above, is protected via erasure coding.

The SPU also supports snapshotting of customer LUN data. So clients can take copies of LUNs and use these for backups, test, dev, etc. SPUs also support asynchronous or synchronous replication between nPods. For synchronous replication and mirrored data, the originating host only sees the IO complete after the data has been received at the target SPU or nPod.
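The replication semantics here are the standard ones, so here is a generic sketch (not Nebulon’s protocol): synchronous replication completes the host IO only after the remote copy lands, while asynchronous completes it immediately and ships the write in the background.

```python
# Generic sync vs. async replication sketch (illustrative, not Nebulon's
# wire protocol or API).
import queue, threading

class Replicator:
    def __init__(self, remote_write, synchronous: bool):
        self.remote_write = remote_write
        self.synchronous = synchronous
        self._pending = queue.Queue()
        threading.Thread(target=self._drain, daemon=True).start()

    def write(self, lun, offset, data):
        local_store[(lun, offset)] = data          # local (mirrored) copy
        if self.synchronous:
            self.remote_write(lun, offset, data)   # wait for the target SPU/nPod
        else:
            self._pending.put((lun, offset, data)) # complete now, replicate later
        return "IO complete"

    def _drain(self):
        while True:
            self.remote_write(*self._pending.get())

local_store, remote_store = {}, {}
repl = Replicator(lambda l, o, d: remote_store.update({(l, o): d}), synchronous=True)
repl.write("lun0", 0, b"hello")   # returns only after the remote copy is written
```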

Metadata for the nPod that defines LUN configurations and which server has LUN data is kept across the cluster in each SPU. But metadata on the location of user data within a server is only kept in that server’s SPU.

We asked Siamak whether nPods support SCM (storage class memory). He said not yet, but they’re looking at SCM NVMe storage for use as a potential metadata and data cache for SPUs.

Nebulon Application Centric storage

All the above storage features are present in most enterprise class storage systems. But what sets Nebulon apart from all other shared storage arrays is that their control plane is entirely in the cloud. That is, customers point their browser to Nebulon’s control plane and use it to configure, provision and manage the nPod storage pool. Nebulon supports application templates that can be used to configure nPod storage to support standardized applications, such as VMware VMs, MongoDB, persistent storage for K8S containers, bare metal Linux apps, etc.

With the nPod’s control plane in the cloud, provisioning, managing and monitoring storage services becomes much more agile. Nebulon can literally roll out new control plane updates to their install base on an almost daily basis, just like any other cloud based or SaaS application. Customers receive the updated nPod control plane functionality by simply refreshing their browser page.

Nebulon’s GoToMarket

Near the end of our podcast, we asked Siamak how Nebulon was going to access the market. Nebulon’s go-to-market is to use server OEMs. That is, they have signed agreements with two (and are working on a third) server vendors to sell SPU cards with Nebulon control plane access.

During server purchases, customers configure their servers, but now along with SAS RAID card options they will see a Nebulon SPU option. OEM server vendors will bundle SPU hardware and Nebulon control plane access along with all other server components such as CPUs, SSDs, NICs, etc. This way, the customer will receive a pre-installed SPU card in their server and will be ready to configure nPod LUNs as soon as the server powers on in their network.

Nebulon will go GA in the 3rd quarter.

The podcast ran ~43 minutes. Siamak has always been a pleasure to talk with and is very knowledgeable about the problems customers have in today’s data center environments. Nebulon has given him and his team the way to rethink storage and address these serious issues. Matt and I had a good time talking with Siamak. Listen to the podcast to learn more.


Siamak Nazari, CEO Nebulon

Siamak Nazari is the CEO and Co-founder of Nebulon. Siamak has over 25 years of experience working on distributed and highly available systems.

In his position as HPE Fellow and VP, he was responsible for setting technical direction for HPE 3PAR and its portfolio of software and hardware. He worked on HPE 3PAR technology from 2000 to 2018, responsible for designing and implementing distributed memory management and the high availability features of the system.

Prior to joining 3PAR, Siamak was the technical lead for distributed highly available Proxy Filesystem (pxfs) of Sun Cluster 3.0.

0100: GreyBeards talk with Colin Gallagher, VP Dig. Infra. Prod. Mkt. @ Hitachi Vantara

Sponsored By:

We have known Colin Gallagher (@worldc3), VP, Digital Infrastructure Product Marketing at Hitachi Vantara, for a long time and he has always been an all around smart storage guy. Colin’s team at Hitachi Vantara are bringing out a brand new, midrange storage system and we thought it would be a good time to catch up with him and learn about it.

The new Hitachi Vantara VSP E990 Storage System is an all NVMe SSD array for medium sized enterprises that need predictable, high IOPS-low latency performance with enterprise class functionality and world class reliability/availability. We asked Colin why they needed all NVMe levels of performance. Colin replied that many of these data centers are starting to use advanced HPC, AI, and data analytics applications together with their standard Oracle, SAP and Microsoft solutions. These combined workloads have an acute need for predictable, high end performance and enterprise class functionality in order to work well.

The VSP E990 comes from a long heritage of enterprise storage at Hitachi, most recently embodied in the Hitachi VSP 5000. In fact, the VSP E990 uses the same storage OS as the VSP 5000, with changes made to streamline it for use with higher performing, all NVMe storage on a dual controller architecture.

This means all the advanced storage functionality of the high end enterprise VSP 5000 is available on the VSP E990 midrange system, minus some items not pertinent to midrange, such as mainframe attach.

Many of the software changes involved cache and cache management. In the VSP E990, cache is now automatically shared and distributed across controllers, reducing the performance impact of mirroring. Further, Hitachi has added more cores and higher performing processors as well. As a result, the VSP E990 all NVMe array can provide up to 5.8M IOPS and an IO response time as low as 64 µsec, the best of any networked storage system. Colin also mentioned that they have reduced flash drive rebuild times by 80%.

The VSP E990 comes in a 4U base configuration and can offer from ~6TB up to over 6PB of virtual capacity with drive expansion. In 8U plus controller (on the audio, it was incorrectly stated as 6U, The Eds.), the VSP E990 provides slots for up to 96 NVMe SSDs. Just like all VSP storage, the VSP E990 also offers the Hitachi 100% Data Availability Guarantee, the world’s oldest. Further, the VSP E990 supports 6-9s (99.9999%) reliability.

In addition the VSP E990 also supports Hitachi Adaptive Data Reduction, which compresses and deduplicates data to increase virtual capacity and reduce physical footprint. In the VSP E990, Adaptive Data Reduction uses AI to determine the best time to deduplicate data while at the same time optimizing host IO performance and effective storage capacity.

Hitachi Ops Center

During the last year or so Hitachi Vantara introduced its new Hitachi Ops Center solution to better administer and manage storage and other digital infrastructure. Ops Center now comes with 4 components: Administrator, Protector (copy data management), Automator and Analyzer.

  • Administrator supplies an element manager for VSP, other storage, and digital infrastructure in the data center.
  • Protector provides enterprise class, copy data management to protect, migrate, and archive VSP data storage.
  • Analyzer supports AI analysis of the data center’s storage operations to monitor SLAs, troubleshoot problems, and improve storage performance, covering 3rd party compute, network and storage as well.
  • Automator supplies a series of templates and services to automate mundane, manual storage and other digital infrastructure tasks required to configure, operate and manage these systems in the data center. Automator provides a number of templates which customers can tailor to automate infrastructure operations, such as provisioning an ESXi data store (a rough illustration follows this list). The templates together with Automator services automatically carry out all the OS, fabric and storage/digital infrastructure tasks and activities required to perform these functions.
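Purely as an illustration of what such a template might capture (the field and step names below are ours, not Hitachi’s actual template format):

```python
# Hypothetical sketch of an Automator-style template for provisioning an
# ESXi datastore; illustrative only.
esxi_datastore_template = {
    "name": "provision-esxi-datastore",
    "parameters": {"capacity_gb": 2048, "host_group": "esxi-cluster-01",
                   "fabric_zone": "zone-a"},
    "steps": [
        "create_volume",         # carve the volume on the VSP E990
        "map_to_host_group",     # present the LUN to the ESXi hosts
        "zone_fabric",           # update FC fabric zoning
        "create_vmfs_datastore"  # format and mount on the cluster
    ],
}

def run_template(template, execute_step):
    for step in template["steps"]:
        execute_step(step, template["parameters"])

run_template(esxi_datastore_template,
             lambda step, p: print(f"{step}: {p['capacity_gb']}GB for {p['host_group']}"))
```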

Hitachi EverFlex consumption models

Hitachi Vantara is also introducing EverFlex, a new series of consumption models, that any customer can use to provide more financial flexibility in their data center digital infrastructure acquisitions, deployments, and management.

EverFlex offers customers the option to purchase, lease or buy on a pay-as-you-go, cloud-like basis any Hitachi Vantara storage or digital infrastructure. Colin mentioned there were two ways that pay-as-you-go can operate (a toy comparison follows the list):

  1. Customers pay on a pure capacity-over-time basis. Here the customer would contract for a certain capacity, and Hitachi Vantara would install that storage/digital infrastructure capacity and bill them monthly for it.
  2. Customers pay on an SLA-over-time basis. Here they would contract for a specific SLA, such as IOPS or another performance characteristic, and Hitachi Vantara would install and maintain whatever storage/digital infrastructure is needed to meet that SLA and bill them monthly for it.
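Here is that toy comparison of the two models; all the rates and quantities are made up for illustration, not Hitachi pricing:

```python
# Toy comparison of the two pay-as-you-go models (numbers are invented).
def capacity_bill(contracted_tb: float, rate_per_tb_month: float) -> float:
    """Model 1: bill on contracted capacity per month."""
    return contracted_tb * rate_per_tb_month

def sla_bill(base_fee: float, target_iops: int, rate_per_k_iops: float) -> float:
    """Model 2: bill on a contracted SLA (here, an IOPS target)."""
    return base_fee + (target_iops / 1000) * rate_per_k_iops

print(f"Capacity model: ${capacity_bill(500, 20):,.0f}/month")       # 500 TB contracted
print(f"SLA model:      ${sla_bill(2000, 250_000, 15):,.0f}/month")  # 250K IOPS contracted
```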

Colin said that all of Hitachi’s world-class services are also now available to be purchased under EverFlex.

The podcast ran ~24 minutes. Colin has always been easy to talk with and very knowledgeable about storage. We were very impressed with the performance and innovation in the VSP E990 as well as Ops Center and EverFlex. Keith and I had fun discussing these solutions with Colin. Listen to the podcast to learn more.


Colin Gallagher, VP Digital Infrastructure Product Marketing at Hitachi Vantara

Colin is Vice President for Digital Infrastructure Product Marketing at Hitachi Vantara where he leads product marketing for storage systems, storage software, and converged/hyper-converged solutions.

Over his 25-year career he has led marketing and product management teams at several major storage companies. Colin has a passion for telling compelling stories about technical products that help customers solve both business and personal pain – and he enjoys the challenge of telling them in creative ways.

He holds a bachelor’s degree from Georgetown University and an MBA from Northeastern University. Colin tries to put as many miles on his bike as possible, “hangs out” on twitter as @worldc3, and (unlike the GreyBeards) is team Oxford comma.

096: GreyBeards YE2019 IT Industry Trends podcast

In this, our year-end industry wrap up episode, the GreyBeards discuss trends and technologies impacting the IT industry in 2019 and what’s ahead for 2020. This year we have Matt and Keith on the podcast along with Ray. Just like last year, we start off with NVMeoF.

NVMeoF unleashed

This year just about every major storage vendor announced new systems that either have support for NVMeoF or currently offer NVMeoF on their storage systems. Most offer FC based NVMeoF but a few offer NVMeoF/Ethernet, fewer still offer both.

All of the NVMeoF/Ethernet offerings seem to be using RoCE or iWARP. It’s unclear if one is more often used than the other, so for now both continue to be used in the market. Some storage vendors are offering NVMeoF as an internal fabric to access storage while still using iSCSI or FC/SCSI for host access to the data. This works better than SAS but won’t provide all the performance you can get from end-to-end NVMeoF.

NVMeoF is all about increasing IOPS and reducing response times. That, and getting ready for SCM SSDs. In the meantime, the SSD industry has introduced some very attractive NVMe (NAND) SSDs that, in an NVMeoF storage system, can increase IOPS and reduce latencies.

We talked last year about NVMeoF standards finally stabilizing and this year the rollout across enterprise storage systems is testament to that.

SCM hits the enterprise

Most of us attended an Intel Data Center Event earlier this past year, where Optane DC PM was introduced. Optane DC PM is the memory version of Optane SCM (3D XPoint) technology. Intel offers two distinct modes of accessing Optane DC PM as memory: 1) App Direct mode, where data in Optane DC PM persists across power cycles but requires one to use a special API; and 2) Memory mode, where Optane DC PM is cleared during a power cycle (see our RayOnStorage post Need memory, Intel’s Optane DC PM…).
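A minimal sketch of App Direct-style usage (our illustration; production code would typically use Intel’s PMDK libraries): memory-map a file from a DAX-mounted PMEM filesystem and update it with ordinary loads and stores, so the data survives a power cycle. In Memory mode, by contrast, applications need no changes at all since they simply see a larger (volatile) memory pool.

```python
# App Direct-style access sketch (illustrative). The PMEM_PATH below is a
# hypothetical DAX mount point; any ordinary file works for trying the code.
import mmap, os

PMEM_PATH = "/mnt/pmem0/counter.bin"   # hypothetical DAX-mounted PMEM file

fd = os.open(PMEM_PATH, os.O_RDWR | os.O_CREAT)
os.ftruncate(fd, 8)
buf = mmap.mmap(fd, 8)                 # byte-addressable access, no read()/write() syscalls
count = int.from_bytes(buf[:8], "little")
buf[:8] = (count + 1).to_bytes(8, "little")   # ordinary store into persistent memory
buf.flush()                            # make sure the update is persisted
buf.close(); os.close(fd)
```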

Vendors seem to be using Optane memory and SCM technology in different ways. Pure is using Optane SSDs plugged into their FlashArray as a sort of read cache for customer IO. They suggest that for well behaved applications this can reduce IO response times considerably.

Dell EMC introduced SCM as a storage tier and are using their automated storage tiering to move the hottest data to SCM. Oracle’s latest Exadata appliance uses Optane DC PM as both a read and write caching layer.

It won’t be long before every enterprise vendor offers SCM drives in their storage systems with a few offering Optane DC PM as in memory caching technology.

Of course, the big news for Optane DC PM is its use in in-memory databases, specifically SAP HANA. HANA can take advantage of the (6) TB of memory to handle larger databases. Keith mentioned that even Microsoft SQL Server can take advantage of the additional memory to provide faster responses to queries.

Keith also mentioned that there are some systems out there that can be configured to share Optane memory (or storage). When SAP or other databases use this solution they are able to amortize the cost of the technology over more use cases.

Of course, Optane DC PM is only available on the latest generation Intel processors. None of us have heard anything from AMD (or Micron) on providing a second source for support of Optane DC PM (or the memory technology itself). Presumably most customers would want a second source for Optane DC PM processor support (as well as the technology).

Cloud enterprise storage hits mainstream

The other thing we saw more of this year is enterprise vendors offering versions of storage in public cloud environments. NetApp was an early proponent of doing this.

We saw at Pure that they have a new Cloud Block Store, which is a re-architected version of FlashArray//X storage using AWS hardware and networking services. We were very impressed with what they have accomplished, and it was the subject of more than one late night discussion. Listen to the Keith & Ray show at Pure//Accelerate2019 podcast to learn more.

Matt mentioned Nimble’s cloud volume storage which is cloud adjacent. Most enterprise vendors offer something similar today. They differentiate on how easy it is to configure, use and where (which regions) it’s available in.

NetApp has arguably been at this the longest and has the deepest offerings available from cloud adjacent file and block storage, to offering native enterprise file services for all public cloud environments, to supplying a suite of dedicated data services to surround all of their storage technology operating in public clouds and on premises.

While Dell EMC may have missed the turn to the cloud, they are quickly trying to catch up. Keith mentioned Faction, a Dell partner that offers cloud storage services using VMware with VMC. With Faction and vSAN customers have access to software defined storage that uses cloud hardware to support data services.

What’s driving data growth

There seems to be no end to the need for storage to store data. The GreyBeards point to three trends driving data growth today.

  1. IoT seems to have no bounds. A recent RayOnStorage post Internet of Tires discussed how tire companies were tying their tires to the internet. And that’s just the start, pretty soon every artifact, every device, every manufactured item will have a number of sensors attached all of which will be creating massive amounts of data.
  2. AI/ML/DL has an insatiable appetite for data. IoT is being used largely to optimize products and services. But it’s DL, with a large dollop of data, that is behind much of that optimization.
  3. SaaS is a relatively new application approach that’s being rolled out to more arenas, and as it’s online and user oriented, it seems to generate lots of data.

Containers storage debate

We closed the podcast with a heavy debate on whether container applications have a need for storage. Keith was adamant that containers by their very nature are stateless and that Kubernetes’ ability to stop and start container applications at will almost requires stateless operation.

Ray was a bit more theoretical on the topic and believed that most container applications today take advantage of some sort of database or other services to store state and that state is just another word for storage.

Keith mentioned encoding as a typical container app. Encoding containers can be fired up and taken down at will without hurting anything but throughput. Yes, but those encoder container apps must access some database or other state information to find out what work is left to do and as they complete their work they update this data as well as store their newly encoded segments. This all involves the use of state information.
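A sketch of the point Ray was making: the encoder worker itself is stateless and disposable, but the work queue and the encoded results it touches are state that has to live somewhere (the in-process queue and dict below stand in for an external queue service, database or object store).

```python
# Stateless worker, external state: the container can be killed and
# restarted at will because all of its state lives outside it.
import queue

work_queue = queue.Queue()     # stand-in for an external queue/database
results = {}                   # stand-in for an external object store

for segment in ("seg-001", "seg-002", "seg-003"):
    work_queue.put(segment)

def encoder_worker():
    """Pulls remaining work, stores encoded segments, updates progress."""
    while not work_queue.empty():
        seg = work_queue.get()
        results[seg] = f"{seg}.encoded"   # persist the newly encoded segment
        work_queue.task_done()

encoder_worker()
print(results)
```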

In the end, I think we were talking about the same thing but using different terminology. Keith believes that persistent state information is needed and Ray says that this is just another word for (containers) storage. Matt said we probably need Nigel (@NigelPoulton) on the podcast to straighten us both out.

The podcast ran a bit long and could have run longer. Keith and Matt bring a systems level perspective to what’s happening in the storage market, but they come at it from different sides. Ray seems to frame everything from a storage perspective. Diverse perspectives lead to a fuller and more interesting discussion. Listen to the podcast to learn more.



Ray Lucchesi ( @RayLucchesi) is the host of GreyBeardsOnStorage and is President/Founder of Silverton Consulting, and a prominent blogger at RayOnStorage.com.

Keith Townsend (@CTOAdvisor) is an IT thought leader who has written articles for many industry publications, interviewed many industry heavyweights, worked with Silicon Valley startups, and engineered cloud infrastructure for large government organizations. Keith is the co-founder of The CTO Advisor and blogs at Virtualized Geek.

Matt Leib (@MBLeib), one of our co-hosts, has been blogging in the storage space for over 10 years, with work experience in both engineering and presales/product marketing. His blog is at Virtually Tied to My Desktop.