0101: Greybeards talk with Howard Marks, Technologist Extraordinary & Plenipotentiary at VAST

As most of you know, Howard Marks (@deepstoragenet), Technologist Extraordinary & Plenipotentiary at VAST Data, used to be a Greybeards co-host and is still on our roster as a co-host emeritus. When I started to schedule this podcast, it was going to be our 100th episode and we wanted to invite Howard and the rest of the co-hosts onto the call to discuss the podcast itself. But alas, the 100th Greybeards podcast came and went before we could get it done. So we decided to refocus this episode back on VAST Data.

We talked with Howard last year about VAST, and some of this podcast covers the same ground (see last year’s podcast with Howard on VAST Data), but below I’ve highlighted different aspects of their product that we also discussed.

For starters, VAST just finalized a recent funding round which, if I recall correctly, valued them at over $1B USD, making them yet another data storage unicorn.

VAST is a scale-out, disaggregated, unstructured data platform that takes advantage of the economics of QLC SSDs (from Intel) combined with the speed of 3D XPoint storage class memory (Optane SSDs, also from Intel) to hold customer data. Intel is an investor in VAST.

VAST uses multiple front-end (controller) servers connected to one or more HA NVMe drive modules over a dual InfiniBand or 100Gbps Ethernet RDMA cluster interconnect. Each HA NVMe drive module has two IO module adapter cards, one per connection, which take IO and data requests and transfer them across a PCIe bus to the QLC and Optane SSDs. VAST also uses a Mellanox (another investor) switch on the back end, with round-robin DNS routing to connect hosts to the storage (front-end) servers.

Each backend HA NVMe drive module has 12 1.5TB Optane U.2 SSDs and 44 15.4TB QLC SSDs, for a total of 56 drives. Customer data is first written to Optane and then destaged to QLC SSD.

QLC has the advantage of storing 4 bits per cell (for a lower $/GB stored), but its endurance, measured in drive writes per day (dw/d), is significantly worse than TLC's. So VAST has had to work to increase QLC endurance in their system.

Natively, QLC offers ~0.2 dw/d when doing random 4K writes. However, if your system does 128KB sequential writes, it offers 4.0 dw/d. VAST destages data from Optane SSDs to QLC in 1MB chunks which both optimizes endurance and reduces garbage collection write amplification within the drive.
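
As a rough back-of-the-envelope illustration (not VAST's sizing math), here is what those endurance figures translate into for one HA drive enclosure's QLC tier, using the drive counts and capacities quoted above:

```python
# Rough endurance arithmetic based on the figures quoted above.
# Per-enclosure drive counts come from the text; this is illustrative only,
# not VAST sizing guidance.

QLC_CAPACITY_TB = 15.4          # per QLC SSD
QLC_DRIVES = 44                 # QLC SSDs per HA NVMe drive enclosure
DWPD_RANDOM_4K = 0.2            # native QLC endurance, random 4K writes
DWPD_LARGE_SEQ = 4.0            # endurance with large sequential writes

def writable_tb_per_day(dwpd: float) -> float:
    """Total TB/day the enclosure's QLC tier can absorb at a given DWPD."""
    return dwpd * QLC_CAPACITY_TB * QLC_DRIVES

print(f"Random 4K writes:  {writable_tb_per_day(DWPD_RANDOM_4K):8.1f} TB/day")
print(f"Large seq. writes: {writable_tb_per_day(DWPD_LARGE_SEQ):8.1f} TB/day")
# Destaging from Optane in large (1MB) chunks is what moves the drives
# from the first line toward the second.
```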

Howard mentioned their front-end servers are stateless, i.e., they maintain no state information about any IO activity going on. Any IO state information is maintained by the system in Optane SSDs. Each server maintains a work-log-like structure on Optane that describes what it is doing in support of host IO and other activities. That way, if one front-end server goes down, another one can access its log and take over its activity.
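
To make the pattern concrete, here is a minimal, hypothetical sketch of a front end that appends intent records to a shared log before acting on them, so a surviving server can replay whatever was left unfinished. The record layout and method names are invented for illustration and are not VAST's implementation:

```python
# Hypothetical sketch of a stateless front end with a shared intent log.
# Record layout and replay logic are illustrative, not VAST's design.
import json, time

class IntentLog:
    """Append-only log that a peer server can read back if this one dies."""
    def __init__(self, path):
        self.path = path

    def append(self, op: dict):
        op["ts"] = time.time()
        with open(self.path, "a") as f:
            f.write(json.dumps(op) + "\n")   # persisted before the op runs

    def replay_pending(self, apply_fn):
        """A surviving server re-drives any operations not marked done."""
        done, pending = set(), []
        for line in open(self.path):
            rec = json.loads(line)
            if rec["kind"] == "done":
                done.add(rec["op_id"])
            else:
                pending.append(rec)
        for rec in pending:
            if rec["op_id"] not in done:
                apply_fn(rec)

# Usage: log.append({"op_id": 1, "kind": "write", "obj": "file_a", "off": 0})
#        ... perform the write ...
#        log.append({"op_id": 1, "kind": "done"})
```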

Metadata is also maintained only on Optane SSDs. Howard called their metadata structure a V-tree (a B-tree-like structure). VAST mirrors all metadata and customer data to two Optane SSDs, so if one Optane SSD goes down, its pair can be used to continue operations.

In last year's podcast we talked at length about VAST's data protection and data reduction capabilities, so we won't discuss these any further here.

However, one thing worth noting is that VAST uses a very large RAID (erasure code protection) stripe. Data is written to the QLC SSDs in a VAST-designed, locally decodable erasure coding format.

One problem with large stripes is rebuild time. VAST's locally decodable parity codes help with this, and distributing rebuild IO activity across all the front-end servers in the system helps as well.

The other problem with large stripe sizes is garbage collection. VAST segregates customer data by "temporariness" based on its best guess, so that all the data in one stripe should have similar lifetimes. When it's time for stripe garbage collection, having all-temporary data lets VAST jettison the whole stripe (or most of it) rather than having to collect and re-write the old stripe's remaining live data to a new stripe.
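
Here is a minimal sketch of that idea, with an invented "lifetime class" tag: if writes are binned into stripes by expected lifetime, a whole stripe tends to expire together and garbage collection can drop it outright instead of copying survivors forward. The classifier, stripe size and structure are hypothetical, not VAST's code:

```python
# Illustrative sketch: segregate writes into stripes by expected lifetime
# so GC can often free a whole stripe at once. Not VAST's actual classifier.
from collections import defaultdict

STRIPE_CAPACITY = 4        # blocks per stripe, tiny for the example

open_stripes = defaultdict(list)   # lifetime class -> blocks being filled
sealed_stripes = []                # (lifetime class, blocks)

def write_block(block_id: str, lifetime_class: str):
    """lifetime_class might be 'temporary', 'warm' or 'long-lived'."""
    stripe = open_stripes[lifetime_class]
    stripe.append(block_id)
    if len(stripe) == STRIPE_CAPACITY:
        sealed_stripes.append((lifetime_class, list(stripe)))
        stripe.clear()

def garbage_collect(expired: set):
    """Drop stripes whose blocks have all expired; copy the rest forward."""
    old = sealed_stripes[:]
    sealed_stripes.clear()            # re-sealed stripes land here again
    for cls, blocks in old:
        live = [b for b in blocks if b not in expired]
        if not live:
            continue                  # whole stripe freed, nothing rewritten
        for b in live:                # mixed stripe: copy live data forward
            write_block(b, cls)
```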

VAST came out supporting NFSv3 and S3 object storage protocols. Their next release adds support for SMB 2.2, data-at-rest encryption and snapshotting to an external S3 store. As you may recall, SMB is a stateful protocol. In VAST's home-grown SMB implementation, front-end servers can take over SMB transactions from failed servers without having to fail the whole transaction and start over again.

VAST uses a fail-in-place maintenance policy. That is, failed SSDs are not normally replaced in customer deployments; rather, blocks, pages, or whole SSDs are marked as failed and the spare capacity available in the drive enclosure is used to provide space for any rebuilt data.

VAST offers a 10 year maintenance option where the customer keeps the same storage for 10 full years. That way customers don’t have to migrate data from one system to another until their 10 years are up.

The podcast runs a little under 44 minutes. Howard and I can talk forever. He is always a pleasure to talk with, as well as extremely knowledgeable about (VAST) storage and other industry solutions. The co-hosts and I had a great time talking with him again. Listen to the podcast to learn more.


Howard Marks, Technologist Extraordinary and Plenipotentiary, VAST Data, Inc.

Howard Marks brings over forty years of experience as a technology architect for hire and industry observer to his role as VAST Data's Technologist Extraordinary and Plenipotentiary. In this role, Howard demystifies VAST's technologies for customers and customer requirements for VAST's engineers.

Before joining VAST, Howard ran DeepStorage, an industry test lab and analyst firm. An award-winning speaker, he has appeared at events on three continents, including Comdex, Interop and VMworld.

Howard is the author of several books (all gratefully out of print) and hundreds of articles since Bill Machrone taught him journalism at PC Magazine in the 1980s.

Listeners may also remember that Howard was a founding co-host of the Greybeards-on-Storage Podcast.

098: GreyBeards talk data protection & visualization for massive unstructured data repositories with Christian Smith, VP Product at Igneous

Sponsored By:

Even before COVID-19 there was a lot of file data being created and mined, but with the advent of the pandemic this has accelerated considerably. As such, it seemed an appropriate time to talk with Christian Smith, VP of Product at Igneous (@IgneousIO), a company focused on the protection and visibility of massive quantities of unstructured data, on premises, in the cloud, or just about anywhere else it may live.

Let me state at the outset that my belief had always been that you don't back up 10PB of data; rather, you bite the (big expense) bullet to replicate it and hope for the best. After talking with Christian and Igneous, I am going to have to move that threshold up by a couple more orders of magnitude.

All this data is coming from LIDAR, RADAR, audio, video, pictures, medical film, MRI/CAT scans, etc., and as noted above, it's exploding. Christian talked about one customer of theirs that supplies aerial photography/LIDAR/RADAR scans of areas on request. These can be used to better understand crop, forest, wildlife and land health and use. One surprise Igneous found with this customer is that the data is typically archived after first use, but within a month or so it's moved back online for some other purpose.

Igneous heritage

Many of the people who started up and currently work at Igneous have been around file storage for some time, coming primarily from (Dell EMC) Isilon, NetApp, Qumulo and other industry heavyweights. When they started Igneous, they realized the world didn't need another NAS box or file system. Rather, with the advent of 10-100PB unstructured data farms, what was needed was an effective way to protect and understand that data.

When they considered how to protect and visualize 100PB of unstructured data, the only way they found to do this was to build a scale-out solution that used on-premises and cloud infrastructure and was offered as a service.

Igneous DataProtect solution

With 10PB or 100PB of files spread across a gaggle of heterogeneous file servers, billions of files across hundreds of servers, each of which has ~1K or more file shares, just scanning all the file servers would take weeks if not longer, and then you still need to move the data someplace to protect it. Seems like an impossible task.

Igneous immediately figured out that the first thing they needed was a radically new, scale-out architecture to rapidly scan the file servers. Thus was born ActiveScan. Christian said it was designed to scan a trillion files, and they have customers with a billion files using the service today. ActiveScan doesn't use NFS/SMB/Object (S3) access protocols to talk with file servers; rather it uses the vendors' internal APIs to access file metadata. DataProtect currently supports APIs for NetApp, Dell EMC Isilon, Pure FlashBlade, Qumulo, Gluster, Lustre, & GPFS (IBM Spectrum Scale) file systems. They use ActiveScan to build a file index database.
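
To illustrate the general scan-and-index pattern (only the pattern: ActiveScan uses vendor metadata APIs, not a directory walk, and its scale is far beyond this), here is a hedged sketch with a worker pool and a local SQLite index; the function names and schema are invented:

```python
# Illustrative only: Igneous's ActiveScan uses vendor metadata APIs, not a
# directory walk. This sketch just shows the scan-and-index pattern with a
# worker pool and a small SQLite index.
import os, sqlite3
from concurrent.futures import ThreadPoolExecutor

def scan_share(root: str):
    """Yield (path, size, mtime) for every file under one share."""
    stack = [root]
    while stack:
        d = stack.pop()
        try:
            with os.scandir(d) as it:
                for e in it:
                    if e.is_dir(follow_symlinks=False):
                        stack.append(e.path)
                    elif e.is_file(follow_symlinks=False):
                        st = e.stat(follow_symlinks=False)
                        yield e.path, st.st_size, st.st_mtime
        except OSError:
            continue   # unreadable directory, skip it

def build_index(shares, db_path="file_index.db", workers=8):
    db = sqlite3.connect(db_path)
    db.execute("CREATE TABLE IF NOT EXISTS files (path TEXT, size INT, mtime REAL)")
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # each worker scans one share; results are inserted in the main thread
        for rows in pool.map(lambda s: list(scan_share(s)), shares):
            db.executemany("INSERT INTO files VALUES (?,?,?)", rows)
    db.commit()
    return db
```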

Their other major concern was how to move PBs of data rapidly to the cloud and other locations. Again they created a scale-out, multi-threaded service to do this, and again made use of internal APIs rather than standard file or object protocols. This became IntelliMove. That same customer above, with billions of files, has 6PB of file data to protect.

Normal data movement is fine for largish files but bogs down with lots of small files or extremely large files to back up. DataProtect gathers small files together into large chunks and splits extremely large files into smaller chunks, then moves these chunks to secondary storage.
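
Here is a hedged sketch of that packing/splitting idea: small files get coalesced into one large chunk, oversized files get split into fixed-size pieces. The 64MB chunk target is an arbitrary stand-in, not DataProtect's actual value:

```python
# Sketch of grouping small files into large chunks and splitting very large
# files into pieces for movement. The 64MB target is an arbitrary example.
CHUNK_TARGET = 64 * 1024 * 1024   # bytes

def plan_chunks(files):
    """files: list of (name, size). Returns a list of chunk manifests,
    each manifest being a list of (name, start_offset, end_offset)."""
    chunks, current, current_size = [], [], 0
    for name, size in files:
        if size >= CHUNK_TARGET:
            # split an extremely large file into CHUNK_TARGET-sized pieces
            for off in range(0, size, CHUNK_TARGET):
                end = min(off + CHUNK_TARGET, size)
                chunks.append([(name, off, end)])
            continue
        if current_size + size > CHUNK_TARGET and current:
            chunks.append(current)          # seal the current packed chunk
            current, current_size = [], 0
        current.append((name, 0, size))     # small file packed whole
        current_size += size
    if current:
        chunks.append(current)
    return chunks

# e.g. plan_chunks([("a.jpg", 2_000_000), ("b.mov", 300_000_000)])
```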

Data expiration is another problem, especially when you chunk files together. Here they came up with an intelligent garbage collection algorithm that removes access to data at the time of expiration but only reclaims the freed space when it makes the most sense.

DataProtect uses a cloud-based SaaS control plane that manages and coordinates its activities across data centers, sites and cloud instances. It also has a client VM (an OVA with 8 CPU cores, 32GB DRAM, ~100MB) that runs in the customer's infrastructure, on site, in a colo or in the cloud, and is used to scan, move and protect customer unstructured data. If more scan and data movement performance is needed, the VM can spawn additional threads automatically, and more VMs can be added to provide even more throughput.

DataDiscover solution

The other service Igneous offers is DataDiscover, a data visualization tool. DataDiscover uses ActiveScan and its database to give customers a way to understand the file data that resides in their massive unstructured data farms, across the data center, cloud or wherever else it resides.

We didn't discuss this solution as much, but having a way to better understand the files in a 10-100PB unstructured data farm could be very useful, and a great way to keep that 100PB from growing to 1EB faster than it has to.

As part of their outreach to the world, Igneous is giving away free DataProtect services to organizations that are focused on COVID-19 research. Check out their offer here.

The podcast ran ~24 minutes. Christian was extremely knowledgeable about the problems that come with very large unstructured data farms and how Igneous's solutions can provide a better way to protect and visualize that data. Matt and I had a fun time discussing Igneous's approach with Christian. Listen to the podcast to learn more.


Christian Smith, VP Product at Igneous

Christian is VP of Product, responsible for product management, solutions, and customer success. Prior to Igneous, Christian spent 15 years running field engineering organizations at EMC, Isilon Systems, NetApp and Silicon Graphics.

Christian has been working with organizations that manage file data since his days at Silicon Graphics. Before that, Christian was co-founder of a small management consulting company associated with Y2K and deregulation.

Christian received dual bachelor's degrees in Chemistry and Computer Science from the University of Missouri-Columbia. Christian is an avid camper, skier and traveler and has long since traveled through all 48 continental states.

096: GreyBeards YE2019 IT Industry Trends podcast

In this, our year-end industry wrap-up episode, the GreyBeards discuss trends and technologies impacting the IT industry in 2019 and what's ahead for 2020. This year we have Matt and Keith on the podcast along with Ray. Just like last year, we start off with NVMeoF.

NVMeoF unleashed

This year just about every major storage vendor announced new systems that support NVMeoF or added NVMeoF to their existing storage systems. Most offer FC-based NVMeoF, a few offer NVMeoF/Ethernet, and fewer still offer both.

All of the NVMeoF/Ethernet implementations seem to use RoCE or iWARP. It's unclear whether one is used more often than the other, so for now both continue to be used in the market. Some storage vendors offer NVMeoF as an internal fabric to access storage while still using iSCSI or FC/SCSI for host access to the data. This works better than SAS but won't provide all the performance you can get from end-to-end NVMeoF.

NVMeoF is all about increasing IOPS and reducing response times. That, and getting ready for SCM SSDs. In the meantime, the SSD industry has introduced some very attractive NVMe (NAND) SSDs that, in an NVMeoF storage system, can increase IOPS and reduce latencies.

We talked last year about NVMeoF standards finally stabilizing and this year the rollout across enterprise storage systems is testament to that.

SCM hits the enterprise

Most of us attended an Intel Data Center event earlier this past year, where Optane DC PM was introduced. Optane DC PM is the memory version of Intel's Optane SCM (3D XPoint) technology. Intel offers two distinct modes of accessing Optane DC PM as memory: 1) App Direct mode, where data in Optane DC PM persists across power cycles but requires applications to use a special API; and 2) Memory mode, where Optane DC PM is cleared during a power cycle (see our RayOnStorage post Need memory, Intel's Optane DC PM…).
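
For a feel of what App Direct mode looks like to an application: persistent memory is typically exposed as a DAX filesystem and mapped straight into the address space, with explicit flushes to make stores durable. The sketch below uses plain mmap/msync as a rough stand-in for Intel's PMDK libraries; the /mnt/pmem0 mount path is a hypothetical example:

```python
# Rough sketch of App Direct-style access: map a file on a DAX-mounted
# persistent memory filesystem and flush stores explicitly. Real apps would
# normally use Intel's PMDK libraries; /mnt/pmem0 is a hypothetical mount.
import mmap, os

PMEM_FILE = "/mnt/pmem0/appdata"
SIZE = 4096

fd = os.open(PMEM_FILE, os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, SIZE)
buf = mmap.mmap(fd, SIZE)         # loads/stores go straight to the mapping

buf[0:11] = b"hello pmem\n"       # an ordinary memory write
buf.flush(0, mmap.PAGESIZE)       # make it durable (msync under the hood)

buf.close()
os.close(fd)
```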

Vendors seem to be using Optane, in both its memory and SCM (SSD) forms, differently. Pure is using Optane SSDs plugged into their FlashArray as a sort of read cache for customer IO. They suggest that for well-behaved applications this can reduce IO response times considerably.

Dell EMC introduced SCM as a storage tier and are using their automated storage tiering to move the hottest data to SCM. Oracle’s latest Exadata appliance uses Optane DC PM as both a read and write caching layer.

It won't be long before every enterprise vendor offers SCM drives in their storage systems, with a few offering Optane DC PM as an in-memory caching technology.

Of course, the big news for Optane DC PM is its use with in-memory databases, specifically SAP HANA. HANA can take advantage of the (6) TB of memory to handle larger databases. Keith mentioned that even Microsoft SQL Server can take advantage of the additional memory to provide faster responses to queries.

Keith also mentioned that there are some systems out there that can be configured to share Optane memory (or storage). When SAP or other databases use this kind of solution, they are able to amortize the cost of the technology over more use cases.

Of course, Optane DC PM is only available on the latest generation of Intel processors. None of us have heard anything from AMD (or Micron) about providing a second source for Optane DC PM support (or for the memory technology itself). Presumably most customers would want a second source for Optane DC PM processor support (as well as for the technology).

Cloud enterprise storage hits mainstream

The other thing we saw more of this year is enterprise vendors offering versions of storage in public cloud environments. NetApp was an early proponent of doing this.

We saw at Pure that they have a new Cloud Block Store, which is a re-architected version of FlashArray//X storage using AWS hardware and networking services. We were very impressed with what they have accomplished and it was the subject of more than one late night discussion. Listen to the Keith & Ray show at Pure//Accelerate2019 podcast to learn more.

Matt mentioned Nimble's cloud volume storage, which is cloud adjacent. Most enterprise vendors offer something similar today. They differentiate on how easy it is to configure and use, and on where (which regions) it's available.

NetApp has arguably been at this the longest and has the deepest offerings available from cloud adjacent file and block storage, to offering native enterprise file services for all public cloud environments, to supplying a suite of dedicated data services to surround all of their storage technology operating in public clouds and on premises.

While Dell EMC may have missed the turn to the cloud, they are quickly trying to catch up. Keith mentioned Faction, a Dell partner that offers cloud storage services using VMware with VMC. With Faction and vSAN, customers have access to software-defined storage that uses cloud hardware to support data services.

What’s driving data growth

There seems to be no end to the need for storage to hold all this data. The GreyBeards point to three trends driving data growth today.

  1. IoT seems to have no bounds. A recent RayOnStorage post, Internet of Tires, discussed how tire companies are connecting their tires to the internet. And that's just the start; pretty soon every artifact, every device, every manufactured item will have a number of sensors attached, all of which will be creating massive amounts of data.
  2. AI/ML/DL has an insatiable appetite for data. IoT is being used largely to optimize products and services. But it's DL, with a large dollop of data, that is behind much of that optimization.
  3. SaaS applications are a relatively new application approach that's being rolled out to more arenas, and since they're online and user oriented, they seem to generate lots of data.

Containers storage debate

We closed the podcast with a heated debate on whether container applications need storage. Keith was adamant that containers by their very nature are stateless and that Kubernetes' ability to stop and start container applications at will almost requires stateless operation.

Ray was a bit more theoretical on the topic and believed that most container applications today take advantage of some sort of database or other service to store state, and that state is just another word for storage.

Keith mentioned encoding as a typical container app. Encoding containers can be fired up and taken down at will without hurting anything but throughput. Yes, but those encoder container apps must access some database or other state information to find out what work is left to do, and as they complete their work they update this data as well as store their newly encoded segments. This all involves the use of state information.

In the end, I think we were talking about the same thing but using different terminology. Keith believes that persistent state information is needed and Ray says that this is just another word for (containers) storage. Matt said we probably need Nigel (@NigelPoulton) on the podcast to straighten us both out.

The podcast ran a bit long and could have run longer. Keith and Matt bring a systems-level perspective to what's happening in the storage market, but they come at it from different sides. Ray seems to frame everything from a storage perspective. Diverse perspectives lead to a fuller and more interesting discussion. Listen to the podcast to learn more.



Ray Lucchesi (@RayLucchesi) is the host of GreyBeardsOnStorage and is President/Founder of Silverton Consulting, and a prominent blogger at RayOnStorage.com.

Keith Townsend (@CTOAdvisor) is an IT thought leader who has written articles for many industry publications, interviewed many industry heavyweights, worked with Silicon Valley startups, and engineered cloud infrastructure for large government organizations. Keith is the co-founder of The CTO Advisor and blogs at Virtualized Geek.

Matt Leib (@MBLeib), one of our co-hosts, has been blogging in the storage space for over 10 years, with work experience on both the engineering and presales/product marketing sides. His blog is at Virtually Tied to My Desktop.


095: GreyBeards talk file sync&share with S. Azam Ali, VP Customer Success at CentreStack

We haven't talked with a file synch and share vendor in a while now, and Matt was interested in the technology. He had been talking with CentreStack and found that they had been making some inroads in the enterprise. So we contacted S. Azam Ali, VP of Customer Success at CentreStack, and asked if he wanted to talk about their product on our podcast.

File synch and share is part collaboration tool, part productivity tool. With file synch & share, many users share the same files across many different environments and endpoint devices. It's especially popular with road warriors who need access on the road to the same files that reside in corporate data centers. With this technology, files updated anywhere become available to all.

Most file synch&share systems require you to use their storage. But CentreStack just provides synch and share access to NFS and SMB storage that’s already in the data center.

CentreStack doesn't use VPNs to access data, as many other vendors do. With CentreStack, one just logs into a website (with AD credentials) and has immediate browser access to files.

CentreStack uses a gateway VM that runs in the corporate data center and is configured to share files/file directories/shares. We asked whether they were in the data path and Azam said no. However, the gateway does register for file system notifications (e.g., when files are updated outside CentreStack, it gets notified).

CentreStack maintains metadata on the files, directories and shares under its control. Presumably, once an admin sets it up, it goes out, accesses the file systems that have shared files and populates its metadata for those files.

CentreStack works with any NFS and SMB file system, as well as NAS servers that support these two protocols. It's unclear whether customers can have more than one gateway server in a data center supporting synch and share, but Azam did say it isn't unusual for customers with multiple data centers to have a gateway in each, to support the synch&share requirements of each data center.

They use client software on endpoint devices, which presents the shared files as an external drive (on Macs), presumably a cloud drive for Windows PCs, and similar services (in an app) for other systems (iOS and Android phones, iPads, etc.). We believe Azam said Linux support was coming soon.

The client software can be configured in cache mode or offline mode:

  • Cache mode – the admin configures how much space to use on the endpoint device, and the software caches the most recently used files in that space for faster access
  • Offline mode – the software moves all files that the endpoint login can access onto the device.

In cache mode, when a user opens a file that isn't in the most-recently-used cache, there will be some delay as the system retrieves the data from the internet and copies it to the endpoint device. It's unclear what the delay might be, but it's probably a function of internet speed and load on the gateway, with possibly some overhead for the NFS/SMB/NAS system to supply the data. If there's not enough space to hold the file, the oldest non-open file is erased from the cache.
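
As a concrete illustration of that eviction behavior, here is a minimal, hypothetical LRU cache sketch: bounded space, refresh on access, evict the oldest file that isn't currently open. The class and method names are invented, not CentreStack's client code:

```python
# Illustrative LRU cache for cache mode: bounded space, evict the oldest
# file that isn't open. Names and structure are hypothetical.
from collections import OrderedDict

class EndpointCache:
    def __init__(self, capacity_bytes: int):
        self.capacity = capacity_bytes
        self.used = 0
        self.files = OrderedDict()        # path -> size, oldest first
        self.open_files = set()

    def access(self, path: str, size: int):
        """Called when a user opens a file; fetch and cache it if needed."""
        if path in self.files:
            self.files.move_to_end(path)  # refresh recency
        else:
            self._make_room(size)         # fetching may take a while
            self.files[path] = size
            self.used += size
        self.open_files.add(path)

    def close(self, path: str):
        self.open_files.discard(path)

    def _make_room(self, size: int):
        for victim in list(self.files):   # oldest entries first
            if self.used + size <= self.capacity:
                break
            if victim in self.open_files:
                continue                  # never evict an open file
            self.used -= self.files.pop(victim)
```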

In both modes, CentreStack supports cross-domain locking. That is, if one client has a file open for update, all other systems/endpoints may only access the file in read-only mode. After the file is closed, it can then be opened for update by other users.
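
A hedged sketch of that locking rule: the first writer gets exclusive update rights and everyone else falls back to read-only until the file is closed. The lock table and function names are invented for illustration, not CentreStack's implementation:

```python
# Sketch of cross-domain locking: one writer at a time, others read-only.
# Hypothetical structure, not CentreStack's implementation.
locks = {}   # path -> client_id currently holding the update lock

def open_file(path: str, client: str, want_write: bool) -> str:
    holder = locks.get(path)
    if want_write and holder in (None, client):
        locks[path] = client
        return "read-write"
    return "read-only"          # someone else holds the update lock

def close_file(path: str, client: str):
    if locks.get(path) == client:
        del locks[path]          # the next opener may take the write lock
```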

When CentreStack clients are used to update files, the data is stored back in the original file systems with versioning. This way, if the data is corrupted, admins can easily roll back to a known good version.

CentreStack also offers a cloud backup and DR service. Gateway admins can request that synch&share files be backed up to cloud storage (AWS S3, Azure Blob and Wasabi). When CentreStack backs up file data to the cloud, it also includes metadata about the files so they can be reconstituted anywhere.

A CentreStack cloud gateway VM can be activated in the cloud to supply access to backed-up files. It's unclear whether the CentreStack cloud backup has to be restored to block or file storage first, or whether the gateway just accesses the data on cloud storage directly. But customers using CentreStack cloud DR would need to run the client software in the applications accessing these files.

Wasabi seemed an odd choice to have on their list of supported cloud storage providers, but Azam said that for their market, the economics of Wasabi storage were hard to ignore. See our previous podcast with David Friend, Co-Founder & CEO of Wasabi, to learn more about Wasabi.

CentreStack is licensed on a per-user basis, not storage capacity, bucking industry trends. But since they don't actually own the storage, this makes sense. For CentreStack cloud backup, customers also have to supply the cloud storage.

They also offer a 30-day free trial with unlimited users on their website. We assume this uses CentreStack's cloud gateway and that customers bring their own cloud storage to support it.

The podcast runs about 35 minutes. Azam was a bit more marketing-oriented than we are used to, but he warmed up once we started asking questions. Listen to the podcast to learn more.


S. Azam Ali, VP of Customer Success, CentreStack

S. Azam Ali is VP of Customer Success at CentreStack and an executive with extensive experience managing global teams, including sales, support and consulting services.

Azam's channel experience includes on-boarding new partners, including creating marketing and training collateral for them. Azam is an executive with a passion for customer success and for establishing long-term relationships and partnerships.

Azam is also an advisor to startups as well as established technology companies.

93: GreyBeards talk HPC storage with Larry Jones, Dir. Storage Prod. Mngmt. and Mark Wiertalla, Dir. Storage Prod. Mkt., at Cray, an HPE company

Supercomputing Conference 2019 (SC19) comes to Denver next week, and in anticipation of that show we thought it would be good to talk with an HPC storage group. We contacted HPE and, given their recent acquisition of Cray, they offered up Larry and Mark to talk about their new ClusterStor E1000 storage system.

There are a number of components that go into Cray supercomputers, and besides ClusterStor, Larry and Mark mentioned their new Slingshot cluster interconnect, which is Ethernet based with significant enhancements for congestion handling. But the call focused on ClusterStor.

What is ClusterStor

ClusterStor is a Lustre file system hardware appliance. Lustre has always been popular with the HPC crowd as it offers high-bandwidth file services. But Lustre often took a team of (PhD) scientists to configure, deploy and run properly because of all the parameters that had to be set up for optimum performance.

Cray's ClusterStor was designed to make configuring, deploying and running Lustre a lot simpler, with a GUI and system defaults that provide an optimal running environment. But if customers still want access to all Lustre features and functionality, all the Lustre parameters can still be tweaked to personalize it.

What sort of appliance

The ClusterStor team has created a Lustre storage appliance using two building blocks: a 2U-24 NVMe SSD system and a 4U-106 disk drive system. Both use PCIe Gen 4 buses, which offer 2X the bandwidth of Gen 3, and NVMe Gen 4 SSDs. Each ClusterStor E1000 appliance comes with two servers for HA plus the storage behind them.

Larry said the 2U NVMe Gen 4 appliance offers 80GB/sec of read and 60GB/sec of write data bandwidth, and a full rack of these could support ~2.5TB/sec of data bandwidth. One TB/sec seems like an awful lot to the GreyBeards; 2.5TB/sec is out of this world.

We asked if it supports InfiniBand interconnects. Yes, they said it supports the latest generation of InfiniBand, but it also offers Cray's own Slingshot Ethernet interconnect, unusual for HPC environments. And as in any Lustre parallel file system, servers accessing storage use the Lustre client software.

ClusterStor Data Services

But on the back end, where normally one would see only LDISKFS as the backing store, ClusterStor also offers ZFS. Larry and Mark said that LDISKFS is faster but ZFS offers more functionality, like snapshots and data compression.

Many of the Top 100 & Top 500 supercomputing environments are starting to deploy ML/DL (machine learning-deep learning) workloads along with their normal HPC activities. But whereas HPC work has historically depended on bandwidth to read, write and move large files around, ML/DL deals with small files and needs high IOPS. ClusterStor was designed to satisfy both high-bandwidth and high-IOPS workloads.

In previous HPC Lustre flash solutions, customers had to deal with the complexity of deciding where to place data, such as on flash or on disk. But with the new ClusterStor E1000, the system can do all this for you. That is, it will move data from disk to flash when it sees an advantage to doing so and move it back again when that advantage is gone. But, just as with the Lustre configuration parameters above, customers can still pre-stage data to flash.

The other challenge for HPC environments is extreme size. Cray and others are starting to see requirements for exascale (exabyte, 10**18 bytes) storage systems. In fact, Cray already has a couple of ClusterStor E1000 configurations of 400PB or more. As these systems age they may indeed grow to exceed an exabyte.

With an exabyte of data, systems need to support billions of files/inodes as well as better metadata services and indexing. ClusterStor offers optimized inode indexing and search to enable HPC users to quickly find the data they are looking for. Further, ClusterStor offers data-at-rest encryption and supports virtual file systems for multi-tenancy.

With a ZFS backend, ClusterStor can supply data compression and snapshots. Cray has tested ZFS compression on HPC scientific data (most of it already compressed by the application) and still sees a ~30% reduction in storage footprint. At an exabyte of storage, 30% can be a significant cost reduction.
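
A quick back-of-the-envelope sense of what that 30% means at exabyte scale (the $/PB figure below is a made-up placeholder for illustration, not a Cray price):

```python
# Back-of-the-envelope: what a ~30% compression savings means at exabyte
# scale. The $/PB figure is a hypothetical placeholder, not a Cray price.
EXABYTE_PB = 1000           # 1 EB expressed in PB
SAVINGS = 0.30              # ~30% footprint reduction from the text
COST_PER_PB = 100_000       # hypothetical $/PB, illustration only

saved_pb = EXABYTE_PB * SAVINGS
print(f"Capacity avoided:   {saved_pb:.0f} PB")
print(f"Rough cost avoided: ${saved_pb * COST_PER_PB:,.0f}")
```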

The podcast ran long, ~46 minutes. Larry and Mark had a good knowledge of the HPC storage space and were easy to talk with. Matt's an old ZFS hand, so he wanted to talk even more about ZFS. I had a good time discussing ClusterStor and Lustre features/functionality and how HPC workloads are changing. Listen to the podcast to learn more. [The podcast was recorded on November 6th, not the 5th as mentioned in the lead in, Ed.]


Larry Jones, Director Storage Product Management

Larry Jones is a director of storage product management for Cray, a Hewlett Packard Enterprise company.

Jones previously held senior product management roles at Seagate, DDN and Panasas.

Mark Wiertalla, Director Storage Product Marketing

Mark Wiertalla is a product marketing director for Cray, a Hewlett Packard Enterprise company.

Prior to Cray, Wiertalla held product manager roles at EMC and SGI.