GreyBeards talk VMware agentless backup with Chris Wahl, Tech Evangelist, Rubrik

In this edition we discuss Rubrik’s converged data backup with Chris Wahl (@ChrisWahl), Tech Evangelist for Rubrik. You may recall Chris as a blogger at a number of Tech, Virtualization and Storage Field Days (VFD2, the TFD Extra at VMworld 2014, SFD4, etc.), which is where I met him. Chris is one of the bloggers who complain about me pounding on my laptop keyboard so loudly at SFDs. :/

Chris had only been with Rubrik about three weeks when we talked with him, but both Howard and I thought it was time to find out what Rubrik was up to.

Rubrik provides an agentless, scale-out backup appliance for VMware vSphere clusters. It uses VADP to tap into VM datastores and obtain changed blocks for backup data. Rubrik deduplicates and compresses VM backup data, and customers define an SLA policy at the VM, folder or vSphere cluster level to determine when to back up VMs.
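To make the "most specific level wins" idea concrete, here's a toy Python sketch of SLA resolution across the VM/folder/cluster hierarchy. The function and policy names are purely illustrative; this is not Rubrik's actual API.

```python
# Hypothetical sketch of most-specific-wins SLA resolution.
# An SLA assigned directly to a VM overrides one on its folder,
# which in turn overrides one on the vSphere cluster.

def effective_sla(vm, folder, cluster, policies):
    """Return the SLA policy assigned at the most specific scope, or None."""
    for scope in (vm, folder, cluster):   # VM beats folder beats cluster
        if scope in policies:
            return policies[scope]
    return None                           # no SLA anywhere: VM is unprotected

policies = {"prod-folder": "gold", "vsphere-cluster-1": "bronze"}
print(effective_sla("vm-42", "prod-folder", "vsphere-cluster-1", policies))
# a VM in prod-folder inherits "gold", not the cluster's "bronze"
```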

Rubrik supports cloud storage (any S3 or Swift provider) for long-term archive storage of VM backups. With Rubrik, customers can search the backup catalog (for standard VM, NFS file, and backup metadata) spanning both Rubrik cluster data and S3/Swift storage backups. Moreover, Rubrik can generate compliance reports to indicate how well your Rubrik-vSphere backup environment has met requested backup SLAs over time.

Aside from the standard recovery facilities, Rubrik offers some interesting recovery options such as “instant restore”, which pauses a VM and reconfigures its storage to come up on the Rubrik cluster (as a set of NFS VMDKs). Another option is “instant mount”, which runs a completely separate copy of a VM using Rubrik storage as its primary storage. In this case the VM’s NIC is disconnected, so the VM gets an error when it fires up that has to be resolved before the VM will run.

Rubrik hardware comes in a 2U package with 4 nodes. Each node has one flash SSD and three 4TB or 8TB SATA disks for customer data. The SSD is used for ingest caching and metadata. Data is triple mirrored across SATA disks in different nodes.
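A quick sketch of what "triple mirrored across disks in different nodes" implies: each chunk of data needs three copies, and no two copies may land on the same node. The placement scheme below is a simplified stand-in (Rubrik's actual algorithm isn't public); it just rotates deterministically around the node list so load spreads evenly.

```python
import zlib

def place_replicas(chunk_id, nodes, copies=3):
    """Pick `copies` distinct nodes for one chunk so mirrors never share a
    node. Illustrative only; not Rubrik's actual placement logic."""
    if len(nodes) < copies:
        raise ValueError("need at least as many nodes as copies")
    # Stable hash of the chunk id picks a starting node; taking the next
    # `copies` nodes around the ring guarantees three distinct nodes.
    start = zlib.crc32(chunk_id.encode()) % len(nodes)
    ring = nodes[start:] + nodes[:start]
    return ring[:copies]

brik = ["node1", "node2", "node3", "node4"]   # the 4 nodes in one 2U chassis
print(place_replicas("chunk-007", brik))      # three distinct nodes
```

With four nodes, any single-node failure leaves at least two intact copies of every chunk.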

The latest release of Rubrik supports (compressed/deduped) data replication to other Rubrik clusters located up to asynchronous distances away.

This month’s edition runs just under 42 minutes and gets somewhat technical in places. We had fun with Chris on our call and hope you enjoy the podcast.

Chris Wahl, Tech Evangelist, Rubrik


Chris Wahl, author of the award winning Wahl Network blog and Technical Evangelist at Rubrik, focuses on creating content that revolves around virtualization, automation, infrastructure, and evangelizing products and services that benefit the technology community.

In addition to co-authoring “Networking for VMware Administrators” for VMware Press, he has published hundreds of articles and was voted the “Favorite Independent Blogger” by vSphere-Land three years in a row (2013–2015).

Chris also travels globally to speak at industry events, provide subject matter expertise, and offer perspectives to startups and investors as a technical adviser.

GreyBeards talk VVOLs with “Father of VVOLs”, Satyam Vaghani, CTO PernixData

In this podcast we discuss VMware VVOLs with Satyam Vaghani, Co-Founder and CTO of PernixData. In Satyam’s previous job, he was VMware’s Principal Engineer and Storage CTO and worked extensively on VVOLs and other VMware storage enhancements. He also happens to be the GreyBeards’ second repeat guest.

With vSphere 6 coming out by the end of this quarter, it’s a good time to talk about VVOLs and VASA 2.0.

In the podcast, Ray and Howard got a bit wild on the terminology we used to describe how VMware VVOLs work. Satyam wanted to be sure that we at least provided a decoder ring to get us back to proper VMware terminology.

  • So in the podcast, when we discuss the magic LUN, control LUN or container LUN, VMware calls this the Protocol Endpoint (PE). VMware uses the PE as a message passing interface to inform a storage system what IO to perform. Although technically in block storage the PE is a LUN, it has no data storage behind it; rather, it’s only used as a message box to perform IO on other storage objects.
  • In the podcast, when we talk about micro-LUNs, sub-LUNs or VM data objects, VMware calls these items Virtual Volumes (VVOLs). VVOLs represent a new version of the VMDK. But because VVOLs no longer have to reside with other VVOLs (VMDKs) on the same LUN, they can be replicated, snapshotted, cloned, etc., all by themselves, without having to impact other VVOLs in the storage system.
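The decoder ring above can be sketched in code. The toy model below shows the key property of a block-protocol PE: the PE LUN itself stores no data; each IO carries a secondary identifier that routes it to the bound VVOL, which is its own storage object. Class and method names are illustrative, loosely echoing the VASA bind operation, not any real API.

```python
# Toy model of a Protocol Endpoint as a message box for VVOL IO.

class VVol:
    """One per-VM storage object (the new-style VMDK)."""
    def __init__(self, vvol_id):
        self.vvol_id = vvol_id
        self.blocks = {}               # LBA -> data, owned by this VVOL alone

class ProtocolEndpoint:
    """The 'magic LUN': holds bindings, never data."""
    def __init__(self):
        self.bindings = {}             # secondary id -> bound VVol

    def bind(self, vvol):              # rough analog of a VASA bind call
        self.bindings[vvol.vvol_id] = vvol

    def write(self, vvol_id, lba, data):
        self.bindings[vvol_id].blocks[lba] = data   # PE just forwards the IO

    def read(self, vvol_id, lba):
        return self.bindings[vvol_id].blocks[lba]

pe = ProtocolEndpoint()
vmdk = VVol("data-vvol-1")
pe.bind(vmdk)
pe.write("data-vvol-1", 0, b"boot sector")
print(pe.read("data-vvol-1", 0))       # the data lives in the VVOL, not the PE
```

Because each VVol owns its own block map, snapshotting or cloning one of them touches nothing else, which is exactly the per-VM granularity argument above.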

VMware is also releasing VASA 2.0 to provide an easier, more standardized approach to provisioning VVOLs. Together, VVOLs and VASA 2.0 should theoretically greatly reduce the burden on VMware storage administration.

We go into more detail how block storage VVOLs work, the benefits of VVOLs-VASA 2.0, and many other items in our discussions with Satyam.  Listen to the podcast to learn more…

This month’s episode runs about 45 minutes.

Satyam Vaghani Bio

Satyam Vaghani, Co-founder and CTO, PernixData
Satyam Vaghani is Co-Founder and CTO at PernixData, a company that leverages server flash to enable scale-out storage performance that is independent of capacity. Earlier, he was VMware’s Principal Engineer and Storage CTO where he spent 10 years delivering fundamental storage innovations in vSphere. He is most known for creating the Virtual Machine File System (VMFS) that set a storage standard for server virtualization. He has authored 50+ patents, led industry-wide changes in storage systems and standards via VAAI, and has been a regular speaker at VMworld and other industry and academic forums. Satyam received his Masters in CS from Stanford and Bachelors in CS from BITS Pilani.

Greybeards talk server DRAM IO caching with Peter Smith, Director, Product Management at Infinio

Welcome to our sixth episode. We once again dive into the technical end of the pool with an in-depth discussion of DRAM-based server-side caching with Peter Smith, Director of Product Management at Infinio. Unlike PernixData (check out Episode 2, with Satyam Vaghani, CTO of PernixData) and others in the server-side caching business, Infinio supplies VMware server-side storage caching using DRAM for NFS VMDKs. It got a bit technical fairly fast in the podcast, sorry about that.

This month’s podcast comes in at a little over 40 minutes and was recorded on 20 February 2014. The overall sound quality is much better than Episode 5’s, but we are still working out some of the kinks, so bear with us.

Peter comes from a background in IT infrastructure and co-location services and brings a wealth of knowledge on IO caching within a VMware server environment. With all the DRAM supplied in ESX servers these days and the increasing compute power that’s now available, the time seems ripe to implement a deduplicated DRAM cache for VMware IO.

Infinio clusters together segments of ESX DRAM across nodes in a VMware cluster to supply an IO cache. The software installs across the VMware cluster non-disruptively (akin to a vMotion), and Infinio clusters can be expanded without operational impact.
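Here's one way a deduplicated, cluster-wide DRAM cache like the one described above could hang together: key cached blocks by a content fingerprint, and let the fingerprint also choose which node's DRAM segment holds the single shared copy. This is an illustrative sketch under those assumptions, not Infinio's actual implementation.

```python
import hashlib

class DedupDramCache:
    """Toy deduplicated read cache spanning several nodes' DRAM segments.
    Identical blocks hash to the same fingerprint, so the cluster stores
    exactly one copy no matter how many VMs read that block."""

    def __init__(self, node_count):
        self.nodes = [dict() for _ in range(node_count)]  # one DRAM segment per node

    def _home(self, digest):
        return digest[0] % len(self.nodes)    # fingerprint picks the home node

    def put(self, block):
        digest = hashlib.sha1(block).digest()
        self.nodes[self._home(digest)][digest] = block
        return digest

    def get(self, digest):
        return self.nodes[self._home(digest)].get(digest)

cache = DedupDramCache(node_count=4)
d1 = cache.put(b"some 4KB guest block")
d2 = cache.put(b"some 4KB guest block")       # dedupes to the same slot
print(d1 == d2, cache.get(d1) == b"some 4KB guest block")
```

Routing by fingerprint means every node agrees on where a block lives without any central lookup table, which is what lets the cache grow just by adding nodes.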

There was some discussion on the odds of a (random) SHA-1 hash collision happening in our lifetimes (as Greybeards, our lifetimes may be shorter than yours). I tried to get Peter or Howard to give me commensurate odds on this happening but alas, no takers.
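For the curious, the standard birthday-bound estimate puts some numbers on those odds. With a 160-bit hash, the probability that any two of n random blocks collide is roughly 1 − e^(−n²/2¹⁶¹):

```python
import math

def collision_probability(n_blocks, hash_bits=160):
    """Birthday-bound estimate: P(collision) ~= 1 - exp(-n^2 / 2^(bits+1))."""
    return -math.expm1(-n_blocks**2 / 2**(hash_bits + 1))

# Even fingerprinting 10**15 unique 4KB blocks (~4 exabytes of data), a
# random SHA-1 collision remains vanishingly unlikely:
print(collision_probability(10**15))   # ~3.4e-19
```

So the no-takers were right: at DRAM-cache scales, a random collision is effectively impossible (deliberately engineered collisions are a separate question).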

Listen to the podcast to learn more…

Peter Smith

Director of Product Management. Peter brings more than 10 years of expertise as an infrastructure architect and IT operations director. In previous companies such as Harvard Business School and Endeca Technologies, Peter managed full-service datacenters and colocation spaces. Most recently Peter led infrastructure services for Endeca, and has also directed operations of customer-hosting infrastructure for clients including American Express, Fidelity UK, Bank of America, and Nike.

GreyBeards talk scale-out storage with Coho Data CTO/Co-founder, Andy Warfield

Welcome to our fifth episode. We return now to an in-depth technical discussion of leading edge storage systems, this time with Andrew Warfield, CTO and Co-founder of Coho Data. Coho Data supplies VMware scale-out storage solutions with PCIe SSDs and disk storage using the NFS protocol. Howard and I talked with Andy and Coho Data at Storage Field Day 4 last November but we thought he was so interesting, he deserved a second conversation.

This month’s podcast comes in at a little over 40 minutes. I apologize for the occasional poor sound quality; I used WiFi to record the call while recuperating from foot surgery. Hopefully, next month I will be back in my normal office and on a wired LAN.

Andy comes at storage from a stint at XenSource and Citrix Systems and sees many parallels between server virtualization and storage. In the case of servers, CPUs had become so powerful that in order to take advantage of all that speed you needed to run multiple independent workloads, using a non-intrusive hypervisor to coordinate it all. In storage, the case can be made that PCIe SSDs can now supply more IOPS and throughput than any single application can possibly use, and the way to take effective advantage of that performance is to support multiple IO workloads, using a non-intrusive storage hyper/supervisor to coordinate it all. For Coho Data, all IO lands on PCIe SSD first and is only migrated to disk if it’s cold enough not to warrant flash residency.
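The flash-first policy at the end of that paragraph can be sketched in a few lines: every write lands in flash, reads refresh a block's "heat", and a periodic sweep demotes anything untouched for too long. This is a minimal illustration of the idea, not Coho Data's actual tiering algorithm, and the threshold is an invented parameter.

```python
import time

class FlashFirstTier:
    """Minimal flash-first tiering sketch: ingest to flash, demote cold
    blocks to disk. Illustrative only."""

    def __init__(self, cold_after=300):
        self.cold_after = cold_after    # seconds of idleness before demotion
        self.flash = {}                 # lba -> (data, last_access_time)
        self.disk = {}                  # lba -> data

    def write(self, lba, data, now=None):
        now = time.time() if now is None else now
        self.flash[lba] = (data, now)   # all IO lands on PCIe SSD first

    def read(self, lba, now=None):
        now = time.time() if now is None else now
        if lba in self.flash:
            data, _ = self.flash[lba]
            self.flash[lba] = (data, now)   # access refreshes the block's heat
            return data
        return self.disk[lba]           # cold blocks are served from disk

    def sweep(self, now=None):
        now = time.time() if now is None else now
        for lba, (data, last) in list(self.flash.items()):
            if now - last > self.cold_after:
                self.disk[lba] = data   # cold enough to lose flash residency
                del self.flash[lba]

tier = FlashFirstTier(cold_after=300)
tier.write(0, b"hot block", now=1000)
tier.write(1, b"cold block", now=0)
tier.sweep(now=1000)
print(sorted(tier.disk))                # [1]: only the untouched block demoted
```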

The other interesting thing about Coho Data was their inclusion of an OpenFlow SDN switch in their scale-out storage system. They use SDN switching to help implement the NFS presentation layer, balance IO workload across different nodes, and direct IO to the appropriate node.
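In spirit, that kind of SDN steering works like a match/action flow table: the switch matches an incoming request's object identifier and forwards the flow to the port of the node that owns the object, punting unknown flows to a controller. The sketch below is a toy rendering of that idea, not Coho Data's actual OpenFlow rules.

```python
# Toy SDN-style request steering: object id -> owning node's switch port.

def install_flow_rules(objects_by_port):
    """Flatten {port: [object ids]} into a match->action table."""
    rules = {}
    for port, object_ids in objects_by_port.items():
        for oid in object_ids:
            rules[oid] = port
    return rules

def steer(rules, object_id, default="controller"):
    # Unknown flows punt to the controller, like an OpenFlow table-miss.
    return rules.get(object_id, default)

rules = install_flow_rules({"port-1": ["fh-a", "fh-b"], "port-2": ["fh-c"]})
print(steer(rules, "fh-c"))     # port-2
print(steer(rules, "fh-zzz"))   # controller
```

Rebalancing workload then amounts to rewriting a few table entries rather than moving a mount point, which is part of what makes the switch a natural fit for scale-out NFS.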

Although, I may have made mention of using 8″ floppies to gather data from storage systems in the old days, contrary to popular myth I never played frisbee with them.

Listen to the podcast to learn more…

Andrew Warfield, CTO/Co-founder Coho Data


Andy is an established researcher in computer systems, specializing in storage, virtualization, and security. At Coho Data, Andy leads the technology vision and directs the engineering team in building elegant and functional systems that enable customers to focus on the data and applications instead of the underlying infrastructure that drives them. As a PhD student at the University of Cambridge, he was one of the original authors of the Xen hypervisor, and has since done award-winning research in virtualization and high availability. At XenSource and Citrix Systems, he was the Technical Director for Storage & Emerging Technologies.