34: GreyBeards talk Copy Data Management with Ash Ashutosh, CEO Actifio

In this episode, we talk with Ash Ashutosh (@ashashutosh), CEO of Actifio, a copy data virtualization company. Howard met up with Ash at Tech Field Day 11 (TFD11) a couple of weeks back and wanted another chance to talk with him. Ash seems to have been around forever; the first time we met, I was at a former employer and he was with AppIQ (later purchased by HP). Actifio is populated by a number of industry veterans and, since being founded in 2009, is doing really well, with over 1000 customers.

So what’s copy data virtualization (management) anyway? At my former employer, we did an industry study which determined that IT shops (back in the '90s) were making 9-13 copies of their data. These days, IT is making even more copies of the exact same data.

Data copies proliferate like weeds

Engineers use snapshots for development, QA and validation. Analysts use data copies to better understand what’s going on in their customer-partner interactions, manufacturing activities, industry trends, etc. Finance, marketing, legal, etc. all have similar needs, which just makes the number of data copies grow out of sight. And we haven’t even started to discuss backup.

Ash says things reached a tipping point when server virtualization became the dominant approach to running applications, which led to an ever-increasing need for data copies as apps started being developed and run all over the place. Then data deduplication came along and displaced tape in IT’s backup process, so that backup data (copies) could now reside on disk. Finally, with the advent of disk deduplication, backups no longer had to be in TAR (backup) formats but could be left in app-native formats. In native formats, any app/developer/analyst can access the backup data copy.

Actifio Copy Data Virtualization

So what is Actifio? It’s essentially massively distributed object storage with a global namespace and a file system on top of it. Application hosts/servers run agents in their environments (VMware, SQL Server, Oracle, etc.) to provide change block tracking and other metadata about what’s going on with the primary data to be backed up. So when a backup is requested, only changed blocks have to be transferred to Actifio and deduped. From that deduplicated changed-block backup, a full copy can be synthesized, in native format, for any and all purposes.

With change block tracking, backups become very efficient, and deduplication only has to work on changed data, so that also becomes more effective. Data copying can also be done more efficiently, since they’re only tracking deduplicated data. If necessary, changed blocks can also be applied to data copies to bring them up to date.
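To make the changed-block/synthetic-full idea concrete, here’s a minimal sketch in Python (my own illustration, not Actifio’s code or on-disk formats): changed blocks are deduplicated by content hash, and a full, native-format copy is synthesized by overlaying them on the prior copy.

```python
import hashlib

dedupe_store = {}  # content hash -> block data (stands in for the deduped repository)

def ingest_changed_blocks(changed_blocks):
    """Store only the changed blocks, deduplicating by content hash.
    Returns a recipe mapping block offset -> content hash."""
    recipe = {}
    for offset, data in changed_blocks.items():
        digest = hashlib.sha256(data).hexdigest()
        dedupe_store.setdefault(digest, data)   # each unique block is stored once
        recipe[offset] = digest
    return recipe

def synthesize_full_copy(base_image, recipe):
    """Overlay the changed blocks onto the prior full copy to produce a new,
    full, native-format image without re-copying unchanged data."""
    full = dict(base_image)                     # start from the previous full copy
    for offset, digest in recipe.items():
        full[offset] = dedupe_store[digest]
    return full

# Example: only two of three blocks changed since the last backup
base = {0: b"AAAA", 1: b"BBBB", 2: b"CCCC"}
recipe = ingest_changed_blocks({1: b"bbbb", 2: b"cccc"})
latest = synthesize_full_copy(base, recipe)     # full copy, built from changed blocks only
```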

With Actifio, one can apply SLAs to copy data. These SLAs can take the form of data governance, such that some copies can’t be viewed outside the country, or by certain users. They can also provide analytics on data copies. Both of these capabilities take copy data to a whole new level.

We didn’t get into all of Actifio’s offerings on the podcast, but Actifio CDS is a high availability appliance which runs their object/file system and contains data storage. Actifio also comes as a virtual appliance, Actifio SKY, which runs as a VM under VMware, using anyone’s storage. Actifio supports NFS, SMB/CIFS, FC, and iSCSI access to data copies, depending on the solution chosen. There’s a lot more information on their website.

It sounds a little bit like PrimaryData but focused on data copies rather than data migration and mostly tier 2 data access.

The podcast runs ~46 minutes and covers a lot of ground. I spent most of the time asking Ash to explain Actifio (for Howard, TFD11 filled this in). Howard had some technical difficulties during the call which caused him to go offline, but he came back on later. Ash and I never missed him :). Listen to the podcast to learn more.

Ash Ashutosh, CEO Actifio

Ash Ashutosh brings more than 25 years of storage industry and entrepreneurship experience to his role of CEO at Actifio. Ashutosh is a recognized leader and architect in the storage industry where he has spearheaded several major industry initiatives, including iSCSI and storage virtualization, and led the authoring of numerous storage industry standards. Ashutosh was most recently a Partner with Greylock Partners where he focused on making investments in enterprise IT companies. Prior to Greylock, he was Vice President and Chief Technologist for HP Storage.

Ashutosh founded and led AppIQ, a market leader of Storage Resource Management (SRM) solutions, which was acquired by HP in 2005. He was also the founder of Serano Systems, a Fibre Channel controller solutions provider, acquired by Vitesse Semiconductor in 1999. Prior to Serano, Ashutosh was Senior Vice President at StorageNetworks, the industry’s first Storage Service Provider. He previously worked as an architect and engineer at LSI and Intergraph.

33: GreyBeards talk HPC storage with Frederic Van Haren, founder HighFens & former Sr. Director of HPC at Nuance

In episode 33 we talk with Frederic Van Haren (@fvha), founder of HighFens, Inc. (@HighFens), a new HPC consultancy, and former Senior Director of HPC at Nuance Communications. Howard and I got a chance to talk with Frederic at a recent HPE storage deep dive event. I met up with him again during SFD10, where he was presenting on behalf of Kaminario, and he was also at the HPE Discover conference last week.

Nuance is the backend speech recognition engine for a number of popular service offerings. Nuance looks very similar to a lot of other hyper-scale customers and ultimately, we feel, may be the way of the future for all of IT over the coming decades. Nuance’s data storage journey during Frederic’s tenure with the company holds many lessons for all of us in the storage industry.

Nuance currently has ~6PB usable (~16PB raw) of speech wave files as well as uncountable text and other files, all inside IBM Spectrum Scale (GPFS). They have both lots of big files and lots of small files. These days, Spectrum Scale is processing 2-3M files/second. They have doubled capacity each of the last 9 years and today handle a billion new files a month. GPFS stripes data across storage and provides data protection, migration, snapshotting and storage tiering across a diverse mix of storage. At the end of the podcast we discussed some open source alternatives to Spectrum Scale, but at the time Nuance started down this path, GPFS was found to be the only thing that could do the job. This proved to be a great solution, as they have completely swapped out the underlying storage at least 3 times and all their users were none the wiser.

The first storage that Frederic talked about was Coraid (no longer in business) and their ATA over Ethernet storage solution. This used SuperMicro chassis with 24 SATA drives per shelf, and they bought 40 shelves. Over time this grew to 1000s of SATA drives and was easily scalable but hard to manage, as it was pretty dumb storage. In fact, they had to deploy video cameras, focused on the drive shelves, to detect when drives failed!

Over time, Nuance came to the realization that they had to do something more manageable and brought in HPE MSA storage to replace their Coraid storage. The MSA was a great solution for them: it had 96 SAS drives, could support both faster “SCRATCH” storage using fast 300GB/15K RPM SAS drives and slower “STATIC” storage using slower 760GB/7.2K RPM SATA drives, and was much more manageable than the Coraid solution.

Although MSA storage worked great, after a while Nuance’s sprawling FC environment, which was doubling yearly, caused them to rethink their storage once again. This led them to swap out all their HPE MSA storage for HPE 3PAR, to consolidate their FC network and storage footprint.

For metadata, Nuance uses a 76-node Hadoop cluster for sophisticated search queries, as doing an ls on the GPFS file system would take days. Their file metadata is essentially a textual, row-by-row database, and they use queries over the Hadoop cluster to determine things like which files contain American English, spoken by females, recorded at 8KHz. Not sure when, but eventually Nuance deployed HPE Vertica SQL over Hadoop for their metadata engine and dropped the average query time from 12 minutes to 73 seconds(!!).
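As a rough illustration of the kind of metadata query involved (field names and values are hypothetical, not Nuance’s actual schema), the problem boils down to filtering an enormous row-by-row attribute table:

```python
# Hypothetical metadata rows: one dict per file, field names invented for illustration.
file_metadata = [
    {"path": "/gpfs/wav/0001.wav", "language": "en-US", "speaker_gender": "F", "sample_rate_hz": 8000},
    {"path": "/gpfs/wav/0002.wav", "language": "en-GB", "speaker_gender": "M", "sample_rate_hz": 16000},
    # ... roughly a billion new rows per month in the real system
]

def find_files(rows, language, gender, sample_rate_hz):
    """Return paths of files matching the requested attributes."""
    return [r["path"] for r in rows
            if r["language"] == language
            and r["speaker_gender"] == gender
            and r["sample_rate_hz"] == sample_rate_hz]

matches = find_files(file_metadata, "en-US", "F", 8000)
```

At Nuance’s scale that scan has to be distributed across the Hadoop cluster (and later expressed as SQL in Vertica), but the shape of the query is the same.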

Nuance, because of their extreme growth and more open environment to storage innovation, had become a favorite for storage startups and major vendors to do Proofs of Concept (PoCs) on new storage offerings. One PoC Nuance did was for Kaminario storage. There is a standard metric that says a CPU core requires so many IOPS, so that as CPU cores increase, you need to supply more IOPS. They went with Kaminario for their test-dev environment and more performance-intensive storage. Nuance appreciates Kaminario’s reliability, high availability and highly predictable performance. (See the SFD10 video feed for Frederic’s session.)
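That rule of thumb is just linear arithmetic; a quick sketch with made-up numbers (neither Nuance’s nor Kaminario’s actual figures):

```python
def required_iops(cpu_cores, iops_per_core):
    """Rule-of-thumb storage sizing: total IOPS the storage must supply
    scales linearly with the number of CPU cores it feeds."""
    return cpu_cores * iops_per_core

# Illustrative only: assume each core needs ~500 IOPS
for cores in (16, 64, 256, 1024):
    print(f"{cores:5d} cores -> {required_iops(cores, 500):8,d} IOPS")
```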

We talked a bit about how speech recognition’s Hidden Markov Model statistical approach was heavily dependent on CPU cores. Originally, if you wanted to do a recognition task, you assigned it to one core and waited until it was done, a serial process dependent on the number of CPU cores you had available. This turned out to be quite a problem, as you had to scale CPU cores if you wanted to do more concurrent speech recognition activities. Then came GPUs, and you could do speech recognition work on a GPU core. With the new GPU cards, instead of a server having ~16 CPU cores, you could have a server with multiple graphics cards providing ~3000 GPU cores. This scaled a lot more easily. Machine learning and deep neural nets have the potential to parallelize this so that it will scale even better.

In the end, HPC trials, tribulations and ways of doing business are starting to become mainstream. I was recently talking to one vendor who said most HPC groups start out in isolation to support one application, but over time they either subsume corporate IT, get absorbed into corporate IT, or continue as a standalone group (while waiting for one of the other two to happen).

The podcast runs ~41 minutes and covers a lot of ground about one HPC organization’s evolution of their storage environment over time, what was driving some of that evolution, and the tools they chose to master it. Listen to the podcast to learn more.

Frederic Van Haren, founder HighFens, Inc.

Frederic Van Haren is the Chief Technology Officer at HighFens and is known for his insights into the HPC and storage industry. He has over 20 years of experience in high tech, providing technical leadership and strategic direction in the telecom and speech markets. Frederic spent the last decade at Nuance Communications building large HPC environments from the ground up. He is frequently invited to speak at events to provide his insights on the HPC and storage markets. He has played leading roles as president of a variety of technology user groups promoting the use of innovative technology. As an engineer, he enjoys working with the engineering teams from technology vendors, providing feedback on new and upcoming products.

Frederic lives in Massachusetts, USA, but grew up in the northern part of Belgium, where he received his Master’s in Electrical Engineering, Electronics and Automation.

GreyBeards deconstruct storage with Brian Biles and Hugo Patterson, CEO and CTO, Datrium

In this, our 32nd episode, we talk with Brian Biles (@BrianBiles), CEO & Co-founder, and Hugo Patterson, CTO & Co-founder, of Datrium, a new storage startup. We like to call it storage deconstructed, a new view of what storage could be, based on today’s and future storage technologies. If I had to describe it succinctly, I would say it’s a hybrid between software defined storage, server side flash and external disk storage. We have discussed server side flash before, but this takes it to a whole other level.

Their product, the DVX, consists of Hyperdriver host software and a NetShelf external disk storage unit. The DVX was designed from the ground up taking host/server side flash or non-volatile memory as a given and building everything else around that. I hesitate to say this, but the DVX NetShelf backend storage is pretty unintelligent, just dual-controller disk storage with a multi-task coordinator. In contrast, the DVX Hyperdriver host software used to access their storage system is pretty smart and is installed as a VIB in vSphere. Customers can assign up to 8TB of host-based, server side flash/non-volatile memory to the storage system per server. The Datrium DVX does the rest.

The Hyperdriver leverages host flash, DRAM and compute cores to act as a caching layer for read and write IO and as a data management engine. Write data goes write-thru straight from the server side flash to the NetShelf storage system, which has non-volatile DRAM (NVRAM) caching. Once write data is in NetShelf cache, it’s in two places: one copy on the host’s server side flash and the other in storage NVRAM. Reads are easier to handle, just being cached from the NetShelf storage in the server side flash. There’s no unique data residing in the hosts.
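Here’s a highly simplified sketch of that IO path (my own Python illustration, not Datrium’s implementation): writes pass through host flash straight to the backend, so no unique data ever lives only on the host, while reads are served from host flash whenever possible.

```python
class Backend:
    """Stand-in for the shared, durable storage unit (the NetShelf role)."""
    def __init__(self):
        self.store = {}
    def write(self, block_id, data):
        self.store[block_id] = data
    def read(self, block_id):
        return self.store[block_id]


class HostCache:
    """Toy model of a host-side, write-thru read/write cache backed by shared storage."""
    def __init__(self, backend):
        self.flash = {}        # host server-side flash cache: block id -> data
        self.backend = backend

    def write(self, block_id, data):
        self.flash[block_id] = data          # cache on host flash
        self.backend.write(block_id, data)   # write-thru: also persisted on the backend
        # After this returns, the data exists in two places; nothing unique on the host.

    def read(self, block_id):
        if block_id in self.flash:           # cache hit: served from local flash
            return self.flash[block_id]
        data = self.backend.read(block_id)   # cache miss: fetch from the backend
        self.flash[block_id] = data          # populate the local cache
        return data


netshelf = Backend()
host = HostCache(netshelf)
host.write("blk-1", b"data")                 # lands in host flash and on the backend
assert host.read("blk-1") == b"data"         # subsequent read is a local cache hit
```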

The Hyperdriver looks like an NFS mount to vSphere, and the DVX uses a proprietary protocol to talk with the backend DVX NetShelf. Datrium supports up to 32 hosts, and you can define the amount of flash, DRAM and host compute allocated to DVX Hyperdriver activity.

But the other interesting part about the DVX is that much of the storage management functionality and storage control logic is partitioned between the host Hyperdriver and the NetShelf, with both participating to do what they do best.

For example, disk rebuilds are done in combination with the host Hyperdriver. A DVX RAID rebuild brings data from the backend into host cache, computes the rebuild data and writes the reconstructed data back out to the NetShelf backend. This way rebuild performance can scale up with the number of hosts active in a cluster.

DVX data are compressed and deduplicated at the host before being sent to the NetShelf. The NetShelf backend also does global deduplication across the host data. Hashing computations and data compression activities are all done on the host, with the results passed on to the NetShelf. Brian and Hugo were formerly with EMC Data Domain and know all about data deduplication.
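A rough sketch of how that division of labor could look (hypothetical, not Datrium’s actual protocol): the host fingerprints and compresses each block, and the backend uses the fingerprints to keep a single copy of identical blocks across all hosts.

```python
import hashlib
import zlib

def host_prepare(block):
    """Host side: fingerprint and compress a data block before sending it."""
    fingerprint = hashlib.sha256(block).hexdigest()
    return fingerprint, zlib.compress(block)

class GlobalDedupeStore:
    """Backend side: keep one compressed copy per fingerprint, across all hosts."""
    def __init__(self):
        self.blocks = {}   # fingerprint -> compressed data

    def ingest(self, fingerprint, compressed):
        if fingerprint not in self.blocks:   # new data: store it
            self.blocks[fingerprint] = compressed
        return fingerprint                   # duplicates just reference the existing copy

store = GlobalDedupeStore()
for block in (b"hello world" * 100, b"hello world" * 100, b"something else"):
    fp, comp = host_prepare(block)
    store.ingest(fp, comp)
# Only two unique blocks are stored, despite three writes from (possibly different) hosts.
```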

At the moment the DVX is missing some storage functionality, but they have an extensive roadmap with engineering resources to match and are plugging away at all of it. On the other hand, very few disk storage devices offer deduped/compressed data storage and warm server side caches during vMotion. They also support QoS functionality to limit the amount of host resources consumed by the DVX Hyperdriver software.

The podcast runs ~41 minutes and the episode covers a lot of ground about how the new DVX product came about, how they separated storage functionality between host and backend, and other aspects of DVX storage. Listen to the podcast to learn more.

Brian Biles, Datrium CEO & Co-founder

Prior to Datrium, Brian was Founder and VP of Product Mgmt. at EMC Backup Recovery Systems Division. Prior to that he was Founder, VP of Product Mgmt. and Business Development for Data Domain (acquired by EMC in 2009).

Hugo Patterson, Datrium CTO & Co-founder

Prior to Datrium, Hugo was an EMC Fellow serving as CTO of the EMC Backup Recovery Systems Division, and the Chief Architect and CTO of Data Domain (acquired by EMC in 2009), where he built the first deduplication storage system. Prior to that he was the engineering lead at NetApp, developing SnapVault, the first snap-and-replicate disk-based backup product. Hugo has a Ph.D. from Carnegie Mellon.

 

GreyBeards talk with Lee Caswell and Dave Wright of NetApp

In our 30th episode, we talk with Dave Wright (@JungleDave), SolidFire founder and now VP & GM of SolidFire at NetApp, and Lee Caswell (@LeeCaswell), VP of Product, Solutions & Services Marketing at NetApp. Dave’s been on before, as CEO of SolidFire, back in May of 2014, but this is the first time for Lee. Dave’s also been a prominent guest at Storage Field Day, most recently at SFD9 with Dave Hitz from NetApp. Unclear how Lee has managed to avoid TFD/SFD duty, but it’s only a matter of time.

SolidFire was recently acquired by NetApp in their largest acquisition ever, signaling a new direction for them (the acquisition closed 2 Feb. 2016). Since we had spent a prior podcast on another recent storage acquisition, we thought it only appropriate to talk with these two as well. We started the discussion with Dave and how it feels to be under the NetApp umbrella.

Another topic that came up was how flash gets used in the cloud. Old school had it that flash was just for high IO performance, but nowadays next gen application development has a range of IO requirements which all need consistent performance access to data. Flash with scale out and QoS can handle this wide range of requirements across cloud applications. Lee mentioned how flash adoption is changing from application-specific to more general purpose storage, which is removing the “IO bottleneck”.

Google had written a study saying that for the next decade there will not be a flash-disk crossover, but the differences are small enough that you almost have to be a hyper-scale customer to see significant economic advantages.

We discussed why not a lot of AFAs do well on throughput-intensive benchmarks. Dave mentioned that throughput was one of disk’s better performing modes, and that in the past, 3Gbps-6Gbps storage interfaces hid a lot of flash performance. But benchmarks of synthesized, pure workloads aren’t the real world; workloads in real data centers are much messier.

IO density (IOPS/GB) came up as another discussion topic. At low IO density, disk may still make sense, but as IO density increases, all flash makes much more sense.
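IO density is simply a workload’s IOPS divided by its capacity; a quick worked example with illustrative numbers (not from the podcast):

```python
def io_density(iops, capacity_gb):
    """IO density in IOPS per GB of stored data."""
    return iops / capacity_gb

# Illustrative workloads only:
archive = io_density(iops=2_000, capacity_gb=500_000)     # 0.004 IOPS/GB -> disk still makes sense
oltp_db = io_density(iops=200_000, capacity_gb=20_000)    # 10 IOPS/GB    -> all-flash territory
```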

Google also mentioned the importance of tail-end IO latency (IO latency at the 99.9th percentile). Poor tail IO latency has been an ongoing problem holding back the adoption of hybrid storage. All flash has some advantages here, but not all AFAs are immune to tail-end latency problems.

The podcast runs just over 39 minutes and the episode covers a lot of ground about their products, flash technology advantages, and market dynamics. Listen to the podcast to learn more.

Dave Wright, SolidFire Founder, Vice President, and GM

Dave Wright left Stanford in 1998 to help start GameSpy Industries, a leader in online video game media, technology, and software. While at GameSpy, Dave led the team that created a backend infrastructure powering thousands of games and millions of gamers. GameSpy merged with IGN Entertainment in 2004 to create one of the largest Internet gaming & entertainment media companies. Dave served as Chief Architect for IGN and led technology integration with FIM / MySpace after IGN was acquired by NewsCorp in 2005.

In 2007 Dave founded Jungle Disk, a pioneer and early leader in cloud-based storage and backup solutions for consumers and businesses. Jungle Disk was acquired by leading cloud provider Rackspace in 2008 and Dave worked closely with the Rackspace Cloud division to build a cloud platform supporting tens of thousands of customers. In December 2009 Dave left Rackspace to start SolidFire.

Lee Caswell, Vice President Product, Solutions, and Services Marketing

Lee Caswell is vice president of Product, Solutions and Services Marketing at NetApp, where he leads a team that speeds the customer adoption of new products, partnerships, and integrations. Lee joined NetApp in 2014 and has extensive experience in executive leadership within the storage, flash and virtualization markets.

Lee was previously vice president of Marketing at Fusion-IO (now SanDisk). Prior to Fusion-IO, Lee was a founding member of Pivot3, a company considered to be an early innovator in hyper-converged systems, where he served as CEO and CMO. Earlier in his career, Lee held marketing leadership positions at VMware, Adaptec, and SEEQ Technology (now LSI Logic). He started his career at General Electric in Corporate Consulting.

Lee holds a bachelor of arts degree in economics from Carleton College and a master of business administration degree from Dartmouth College. Lee is a New York native and has lived in northern California for many years. He and his wife live in Palo Alto and have two children. In his spare time Lee enjoys cycling, playing guitar, and hiking the local hills.

Disclaimer: NetApp and SolidFire have been clients of DeepStorageNet and NetApp is a current client of Silverton Consulting.

GreyBeards talk car videos, storage and IT trends with Marc Farley

In our 30th episode, we talk with 3rd-time guest star Marc Farley (@GoFarley), formerly with Datera and Tegile. Marc has recently gone on sabbatical, and we wanted to talk to him about what was keeping him busy and what was going on in the storage/IT industry these days.

Marc is currently curating a car comedy vlog called theridecast.com. Apparently people, at least in California, are making comedy videos in their cars. They can be quite hilarious; check out this episode of Comedians in Cars Getting Coffee.

Meanwhile, the storage industry is getting battered by a number of trends: shrinking IT budgets, vendor proliferation, migration to the cloud, and flash becoming old hat. Marc makes multiple points as to why the storage market is undergoing such a major transition these days:

  • Death to tech refresh, long live the cloud – yes, the cloud does upgrade hardware, but planned storage system obsolescence doesn’t happen in the cloud anymore. Cloud providers are buying new SSDs, disks, white box servers, memory, etc., but not enterprise class storage, server or networking hardware.
  • AFA is boring, but selling – every vendor’s got one, two or sometimes three, and they all know how to provide flash storage services. Customers pay extra for AFA, whether they need to or not, because they are swapping out old, expensive, enterprise class storage for AFAs that often cost less but still provide better performance.
  • Tail IO latency is becoming more important but it’s not well understood – when IO response times go from 100µsec to 10msec, it hurts. It doesn’t matter if it’s every 1,000 or 10,000 IOs; customers want less performance variability, which is a main reason they move to AFA in the first place. But not all AFAs perform the same in tail latency, and SSD controller/system architecture makes a big difference (see the sketch after this list).
  • Hybrid storage survives but only if you go big – hybrid storage economics make sense only for large, diverse data repositories that mix user directories, non-performance-sensitive apps, and other structured and unstructured data in one data store.
  • Greenfield apps & secondary storage are moving to the cloud but migrating current apps to the cloud is difficult – for new app development and archive storage, moving to or starting in the cloud is a no-brainer. Transitioning running enterprise class apps to the cloud is tough to do; it requires multiple skill sets and may never be successful. Hybrid (cloud-on-premises) enterprise class apps are too arduous to even contemplate.
  • Realtime analytics is emerging but the data needs to be on flash – yes, MapReduce is a batch activity which can use lots of slow disk, but there’s more to analytics than MR, and to do log analysis in anything approaching realtime, one needs flash performance.
  • Optical’s persistence is great but who leaves data on the same technology for 20 years – with magnetic and electronic storage densities going up every couple of years, who could afford to keep data on the same optical technology that was 20 years old? Imagine using microfiche to keep PBs of data today; inconceivable.
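To make the tail-latency point above concrete, here’s a minimal sketch of computing the 99.9th-percentile latency for a synthetic set of IO response times (numbers are purely illustrative):

```python
# Deterministic synthetic latencies: 998 fast IOs (~100 usec) and 2 slow ones (~10 msec)
# out of every 1,000, repeated to one million samples.
latencies_us = ([100.0] * 998 + [10_000.0] * 2) * 1_000

def percentile(values, pct):
    """Return the pct-th percentile (0-100) of values (simplified nearest-rank method)."""
    ordered = sorted(values)
    index = min(len(ordered) - 1, round(pct / 100 * (len(ordered) - 1)))
    return ordered[index]

mean_us = sum(latencies_us) / len(latencies_us)
print(f"mean      : {mean_us:8.1f} usec")                        # ~119.8 usec, looks healthy
print(f"p99.9 tail: {percentile(latencies_us, 99.9):8.1f} usec")  # 10,000 usec, the pain users feel
```

The mean looks perfectly fine at roughly 120µsec, while the 99.9th percentile exposes the 10msec outliers that users actually notice.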

As for IT in general, one limiter of IT activity will become the lack of skilled engineers, specifically full-stack engineers and data scientists.

We ended our discussion on the economics of Samsung 3D NAND and Intel-Micron (IM) 3D XPoint non-volatile memories. New semiconductor technologies like these are always long term investments. Today, Samsung is probably losing money on each 3D TLC NAND SSD it sells, but over time, as fab yields improve, it should become cheap enough to make a profit. Similarly, 3D XPoint may be costly to produce early on, but as IM perfects its fab processes, the technology should become inexpensive enough to make oodles of $s for them. And there are more technology changes to come.

The podcast runs just over 40 minutes and covers a lot of ground. Marc’s been in IT almost as long as the GreyBeards and has a unique perspective on what’s happening today, having been with so many diverse vendors, both major and minor (startups), throughout his tenure in the industry. Listen to the podcast to learn more.

Marc Farley


Marc is a storage greybeard who has worked for many storage companies and is currently on sabbatical. He has written three books on storage, including his most recent, Rethinking Enterprise Storage: A Hybrid Cloud Model, and his previous books, Building Storage Networks and Storage Networking Fundamentals.

In addition to writing books, he has been a blogger and podcaster about storage topics while working for EqualLogic, Dell, 3PAR, HP, StorSimple, Microsoft, and others.

When he is not working, Marc likes to ride bicycles, listen to music, spend time with his family and dote on his cats. Of course there’s that car video curation…