49: Greybeards talk open convergence with Brian Biles, CEO and Co-founder of Datrium


In this episode we talk with Brian Biles, CEO and Co-founder of Datrium. We last talked with Brian and Datrium in May of 2016 and at that time we called it deconstructed storage. These days, Datrium offers a converged infrastructure (C/I) solution, which they call “open convergence”.

Datrium C/I

Datrium’s C/I solution stores persistent data off-server on data nodes and uses onboard flash as a local, host read-write IO cache. It also uses host CPU resources for data services such as compression and local deduplication.

In contrast to the hyperconverged infrastructure solutions available on the market today, customer data is never split across host nodes. That is, data residing on a host has only been created and accessed by that host.

Datrium uses on host SSD storage/flash as a fast access layer for data accessed by the host. As data is (re-)written, it’s compressed and locally deduplicated before being persisted (written) down to a data node.

A data node is a relatively lightweight, dual-controller/HA storage solution with 12 high capacity disk drives. Data node storage is global to all hosts running Datrium storage services in the cluster. Besides acting as a permanent repository for data written by the cluster of hosts, it also performs global deduplication of data across all hosts.
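The local-dedupe-then-global-dedupe flow can be sketched with a toy content-addressed block store. The block size, SHA-256 fingerprints and zlib compression here are my own illustrative assumptions, not Datrium's actual design:

```python
import hashlib
import zlib

class DedupeStore:
    """Toy content-addressed block store: only unique blocks are kept,
    keyed by fingerprint, and each block is compressed before storing."""

    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}        # fingerprint -> compressed block
        self.refcounts = {}     # fingerprint -> reference count

    def write(self, data: bytes) -> list:
        """Split data into blocks, dedupe by fingerprint, return the block map."""
        fingerprints = []
        for off in range(0, len(data), self.block_size):
            block = data[off:off + self.block_size]
            fp = hashlib.sha256(block).hexdigest()
            if fp not in self.blocks:                 # only new content is stored
                self.blocks[fp] = zlib.compress(block)
            self.refcounts[fp] = self.refcounts.get(fp, 0) + 1
            fingerprints.append(fp)
        return fingerprints

    def read(self, fingerprints) -> bytes:
        return b"".join(zlib.decompress(self.blocks[fp]) for fp in fingerprints)

store = DedupeStore()
fps = store.write(b"A" * 8192 + b"B" * 4096)   # three logical blocks, two distinct
print(len(store.blocks))                        # 2 unique blocks stored, not 3
```

The same fingerprints computed on the host for local dedupe can then be compared on the data node to dedupe globally across hosts, which is why doing the hashing host-side is such a natural split.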

The nice thing about their approach to C/I is that it’s easily scalable: if you need more IO performance, just add more hosts or more SSDs/flash to servers already connected in the cluster. And if a host fails, it doesn’t impact cluster IO or data access for any other host.

Datrium originally came out supporting VMware virtualization and acts as an NFS datastore for VMDKs.

Recent enhancements

In July, Datrium released new support for Red Hat and KVM virtualization alongside VMware vSphere. They also added Docker persistent volume support. Now you can have mixed KVM, VMware and Docker container environments, all accessing the same persistent storage.

KVM offered an opportunity to grow the user base and support Red Hat enterprise accounts. Red Hat is a popular software development environment in non-traditional data centers. Also, much of the public cloud is KVM based, which provides a great way to someday support Datrium storage services in public cloud environments.

One challenge with Docker support is that there are just a whole lot more Docker volumes than VMDKs in vSphere. So Datrium added sophisticated volume directory search capabilities and naming convention options for storage policy management. Customers can define a naming convention for application/container volumes and use it to define group storage policies, which then apply to any volume that matches the naming convention. This is a lot easier than doing policy management at the volume level with 100s, 1,000s or even 10,000s of distinct volume IDs.
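Matching volumes to group policies by naming convention can be sketched in a few lines with glob-style patterns. The pattern syntax, policy fields and names below are my own illustration, not Datrium's actual interface:

```python
import fnmatch

# Hypothetical group storage policies keyed by volume-name patterns,
# checked in order; the last catch-all pattern is the default policy.
POLICY_GROUPS = [
    ("pgsql-prod-*", {"snapshots": "hourly", "replication": True}),
    ("ci-build-*",   {"snapshots": "none",   "replication": False}),
    ("*",            {"snapshots": "daily",  "replication": False}),
]

def policy_for(volume_name: str) -> dict:
    """Return the first group policy whose naming pattern matches the volume."""
    for pattern, policy in POLICY_GROUPS:
        if fnmatch.fnmatch(volume_name, pattern):
            return policy
    raise LookupError(volume_name)

print(policy_for("pgsql-prod-03"))   # hourly snapshots, replicated
print(policy_for("scratch-vol-9"))   # falls through to the default policy
```

One policy table then covers any number of volumes, which is the point: a new container volume picks up the right policy at creation time just by following the naming convention.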

Docker is being used today to develop most cloud based applications. And many development organizations have adopted Docker containers for their development and application deployment environments. Many shops do development under Docker and production on vSphere. So now these shops can use Datrium to access development as well as production data.

More recently, Datrium also scaled the number of data nodes available in a cluster. Previously you could only have one data node, with 12 drives or about 29TB of raw, protected capacity, which when deduped and compressed gave you an effective capacity of ~100TB. With this latest release, Datrium now supports up to 10 data nodes in a cluster, for a total of 1PB of effective capacity for your storage needs.
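Back-of-the-envelope, the quoted figures work out as follows (using only the numbers stated above):

```python
raw_per_node_tb = 29          # 12 high-capacity drives, protected capacity per data node
effective_per_node_tb = 100   # after dedupe + compression, per the announcement

reduction_ratio = effective_per_node_tb / raw_per_node_tb
print(f"implied data reduction: {reduction_ratio:.1f}x")   # ~3.4x

max_nodes = 10
cluster_effective_tb = max_nodes * effective_per_node_tb
print(f"cluster effective capacity: {cluster_effective_tb} TB")  # 1000 TB ~= 1PB
```

So the ~1PB cluster figure assumes the same ~3.4x data reduction holds across all ten nodes; your actual ratio will of course depend on the data.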

The podcast runs ~25 minutes. Brian is very knowledgeable about the storage industry, has been successful at many other data storage companies and is always a great guest to have on our show. Listen to the podcast to learn more.

Brian Biles, Datrium CEO & Co-founder

Prior to Datrium, Brian was Founder and VP of Product Mgmt. at EMC Backup Recovery Systems Division. Prior to that he was Founder, VP of Product Mgmt. and Business Development for Data Domain (acquired by EMC in 2009).

46: Greybeards discuss Dell EMC World 2017 happenings on vBrownBag

In this episode, Howard and I were both at Dell EMC World 2017 this past month, and Alastair Cooke (@DemitasseNZ) asked us to do a talk at the show for the vBrownBag group (YouTube video here). The GreyBeards asked for a copy of the audio for this podcast.

Sorry about the background noise, but we recorded live at the show, with a huge teleprompter in the background that was re-broadcasting keynotes/interviews from the show.

At the show

Howard was at Dell EMC World 2017 on a media pass and I was at the show on an industry analyst pass. There were parts of the show that he saw that I didn’t, and vice versa, but all keynotes and major industry outreach were available to both of us.

As always, the Dell EMC team put on a great show, and kudos have to go to their AR and PR teams for having both of us there and creating a great event. There was lots of news at the show, and both of us were impressed by how well Dell and EMC have come together in such a short time.

In addition, there were a number of Dell partners at the show. Howard met Datadobi on the show floor, who have a file migration tool that walks a filesystem tree, migrates files and reports on the files it can’t migrate. And we both saw Datrium (who we talked with last year).

Servers and other news

We both liked Dell’s new 14th generation servers. But Howard objected to the lack of technical specs on them. Apparently, Intel won’t let specs be published until they announce their new CPU chipsets, sometime later this year. On the other hand, a few server specs were discussed. For example, I was impressed that the new servers would support many more NVMe cards. Howard liked the new servers’ support for NV-DIMMs, mainly for the latency reduction they could provide for software-defined storage.

That led us on a tangent discussion about whether there is a place for non-software-defined storage anymore. Howard mentioned the downside of HCI/software-defined storage when it comes to upgrading server (DIMM, PCIe card) hardware.

However, appliance hardware seems to be getting easier to upgrade. The new Unity AFA storage can be upgraded non-disruptively from the low-end to the high-end appliance just by swapping out controller hardware canisters.

Howard was also interested in Dell EMC’s new CloudFlex purchasing model for HCI solutions. This supplies an almost cloud-like purchasing option for customers: with a one-year commitment, you pay as you go (no money down, just monthly payments) rather than making an up-front capital purchase. After the year’s commitment expires, you can send the hardware back to Dell EMC and stop paying.

We talked about Tier 0 storage. EMC DSSD was an early attempt to provide Tier 0, but it came with lots of special purpose hardware. When commodity hardware and software emerged last year with NVMe SSD speed, DSSD was no longer viable at the premium pricing needed for all that hardware, and it was shut down. Howard and I discussed how special hardware has to be much faster (10-100X) than commodity hardware solutions to succeed, and that gap has to be maintained over time.

The other big storage news was the new VMAX 950F AFA and its performance numbers. Dell EMC said the new VMAX could do 6.7M IOPS of RRH (random read hit) with a 350µsec response time. Howard noted that Dell EMC didn’t say at what IO load they achieved the 350µsec response time. I told him it almost didn’t matter; even if it was a single IO at that response time, it was significant.

The podcast runs about 40 minutes. It’s just Howard and me talking about what we saw and heard at the show, plus the occasional tangential topic. Listen to the podcast to learn more.


Howard Marks, DeepStorage

Howard Marks is the Founder and Chief Scientist of DeepStorage, a prominent blogger at Deep Storage Blog, and can be found on twitter @DeepStorageNet.

Ray Lucchesi, Silverton Consulting

Ray Lucchesi is the President and Founder of Silverton Consulting, a prominent blogger at RayOnStorage Blog, and can be found on twitter @RayLucchesi.

GreyBeards deconstruct storage with Brian Biles and Hugo Patterson, CEO and CTO, Datrium

In this, our 32nd episode, we talk with Brian Biles (@BrianBiles), CEO & Co-founder, and Hugo Patterson, CTO & Co-founder, of Datrium, a new storage startup. We like to call it storage deconstructed, a new view of what storage could be based on today’s and future storage technologies. If I had to describe it succinctly, I would say it’s a hybrid between software-defined storage, server-side flash and external disk storage. We have discussed server-side flash before, but this takes it to a whole other level.

Their product, the DVX, consists of Hyperdriver host software and a NetShelf external disk storage unit. The DVX was designed from the ground up around the use of host/server-side flash or non-volatile memory as a given, with everything else built around that. I hesitate to say this, but the DVX NetShelf backend storage is pretty unintelligent: just dual-controller disk storage with a multi-task coordinator. In contrast, the DVX Hyperdriver host software used to access the storage system is pretty smart and is installed as a VIB in vSphere. Customers can assign up to 8TB of host-based, server-side flash/non-volatile memory to the storage system per server. The Datrium DVX does the rest.

The Hyperdriver leverages host flash, DRAM and compute cores to act as a caching layer for read and write IO and as a data management engine. Write data is write-thru straight from the server-side flash to the NetShelf storage system, which has non-volatile DRAM (NVRAM) caching. Once write data is in NetShelf cache, it’s in two places: on the host’s server-side flash and in storage NVRAM. Reads are easier to handle, simply being cached from NetShelf storage into the server-side flash. There’s no unique data residing in the hosts.
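A minimal model of that write path might look like the following. The class and method names are mine, and the two dicts stand in for real flash and NVRAM; the point is only that a write is acknowledged after it exists in both places, and that reads are served from local flash when possible:

```python
class WriteThruCache:
    """Toy model of a write-thru host cache backed by shared NVRAM:
    every acked write is in two places, so no unique data lives on the host."""

    def __init__(self):
        self.host_flash = {}     # host-side read/write cache (lba -> data)
        self.backend_nvram = {}  # backend NVRAM staging area (lba -> data)

    def write(self, lba: int, data: bytes) -> bool:
        self.host_flash[lba] = data       # cache locally for later reads
        self.backend_nvram[lba] = data    # write-thru: persist before acking
        return True                       # ack only once both copies exist

    def read(self, lba: int) -> bytes:
        if lba in self.host_flash:        # hot path: served from server-side flash
            return self.host_flash[lba]
        data = self.backend_nvram[lba]    # miss: fetch from backend, warm the cache
        self.host_flash[lba] = data
        return data

cache = WriteThruCache()
cache.write(7, b"hello")
del cache.host_flash[7]        # simulate a host cache eviction (or host failure)
print(cache.read(7))           # still served from the backend, re-warms the cache
```

Because the host never holds the only copy, losing a host loses only cache state, which matches the earlier point that a host failure doesn't impact data access.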

The Hyperdriver looks like a NFS mount to vSphere and the DVX uses a proprietary protocol to talk with the backend DVX NetShelf. Datrium supports up to 32 hosts and you can define the amount of Flash, DRAM and host compute allocated to the DVX Hyperdriver activity.

But the other interesting part about DVX is that much of the storage management functionality and storage control logic is partitioned between the host Hyperdriver and the NetShelf, with both participating to do what they do best.

For example, disk rebuilds are done in combination with the host Hyperdrivers. A DVX RAID rebuild brings data from the backend into host cache, computes the rebuild data, and writes the reconstructed data back out to the NetShelf backend. This way, rebuild performance scales with the number of hosts active in the cluster.
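A toy RAID-5-style sketch shows why this scales: each stripe's reconstruction is independent, so stripes can simply be dealt out across hosts. The XOR reconstruction is standard RAID math, while the round-robin assignment and function names are my own simplification:

```python
from functools import reduce

def xor_blocks(blocks):
    """Reconstruct a lost RAID block as the XOR of the surviving blocks in its stripe."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks))

def rebuild(stripes, n_hosts):
    """Deal stripes round-robin across hosts. Each host reads its stripes'
    surviving blocks into cache, computes the missing block, and writes it
    back, so aggregate rebuild throughput grows with n_hosts."""
    assignments = [[] for _ in range(n_hosts)]
    for i, surviving in enumerate(stripes):
        assignments[i % n_hosts].append((i, xor_blocks(surviving)))
    return assignments

d0, d1 = b"\x01\x0f", b"\x06\x09"
parity = xor_blocks([d0, d1])
print(xor_blocks([d0, parity]) == d1)   # True: lost d1 rebuilt from d0 + parity
```

With a monolithic array, rebuild speed is capped by the controllers; spreading the per-stripe work across every active host removes that cap.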

DVX data are compressed and deduplicated at the host before being sent to the NetShelf. The NetShelf backend also does global deduplication across the host data. Hashing computations and data compression are all done on the host, with the results passed on to the NetShelf. Brian and Hugo were formerly with EMC Data Domain, and know all about data deduplication.

At the moment DVX is missing some storage functionality, but they have an extensive roadmap with engineering resources to match and are plugging away at all of it. On the other hand, very few disk storage devices offer deduped/compressed data storage and warm server-side caches during vMotion. They also support QoS functionality to limit the amount of host resources consumed by the DVX Hyperdriver software.

The podcast runs ~41 minutes, and the episode covers a lot of ground: how the new DVX product came about, how they separated storage functionality between host and backend, and other aspects of DVX storage. Listen to the podcast to learn more.

Brian Biles, Datrium CEO & Co-founder

Prior to Datrium, Brian was Founder and VP of Product Mgmt. at EMC Backup Recovery Systems Division. Prior to that he was Founder, VP of Product Mgmt. and Business Development for Data Domain (acquired by EMC in 2009).

Hugo Patterson, Datrium CTO & Co-founder

Prior to Datrium, Hugo was an EMC Fellow serving as CTO of the EMC Backup Recovery Systems Division, and the Chief Architect and CTO of Data Domain (acquired by EMC in 2009), where he built the first deduplication storage system. Prior to that he was the engineering lead at NetApp, developing SnapVault, the first snap-and-replicate disk-based backup product. Hugo has a Ph.D. from Carnegie Mellon.