35: GreyBeards talk Flash Memory Summit wrap-up with Jim Handy, Objective Analysis

In this episode, we talk with Jim Handy (@thessdguy), memory and flash analyst at Objective Analysis. Jim’s been on our podcast before, and last time we had a great talk on flash trends. As Jim, Howard and Ray were all at Flash Memory Summit (FMS 2016) last week, we thought it appropriate to get together and discuss what we found interesting at the summit.

Flash is undergoing significant change. We started our discussion with which vendor had the highest density flash device. It’s not that easy to answer, given all the vendors at the show. For example, Micron is shipping a 32GB chip and Samsung announced a 1TB BGA package. And as for devices, Seagate announced a monster 3.5″ 60TB SSD.

MicroSD cards pack 16-17 NAND chips plus a mini-controller. At that level, with 32GB chips, we could see a ~0.5TB MicroSD card in the near future. There was no discussion of pricing, but Howard’s expectation is that they will be expensive.
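
As a back-of-the-envelope check on that ~0.5TB figure (a rough sketch using only the die count and per-die capacity mentioned above, nothing vendor-confirmed):

```python
# Back-of-the-envelope MicroSD capacity from stacked NAND die (figures from the discussion above).
die_per_card = 16          # MicroSD cards reportedly stack 16-17 NAND die
capacity_per_die_gb = 32   # Micron's 32GB chip mentioned above
card_capacity_gb = die_per_card * capacity_per_die_gb
print(f"{card_capacity_gb} GB, i.e. about {card_capacity_gb / 1000:.1f} TB")  # 512 GB, ~0.5 TB
```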

NVMe over fabric push

One main topic of conversation at FMS was how NVMe over fabric is emerging. There were a few storage vendors at FMS taking advantage of this, including E8 Storage and Mangstor, both showing off NVMe over Ethernet flash storage. But there were plenty of others talking NVMe over fabric and all the major NAND manufacturers couldn’t talk enough about NVMe.

Facebook’s keynote had a couple of surprises. One was their request for WORM (write once, read many) QLC flash. It appears that Facebook plans on keeping user data forever. Another item of interest was their Open Compute Project Lightning JBOF (just a bunch of flash) device using NVMe over Ethernet (see Ray’s post on Facebook’s move to JBOF). They were also interested in ganging multiple M.2 SSDs together into a single package. And finally, they discussed their need for SCM (storage class memory).

Storage class memory

The other main topic was storage class memory (SCM), and all the vendors talked about it. Sadly, the timeline for Intel-Micron 3D XPoint has them supplying sample chips/devices by the end of next year (YE2017) and releasing SCM devices to market the following year (2018). They did have one (hand-built) SSD at the show with remarkable performance.

On the other hand, there are other SCMs on the market, including EverSpin (MRAM) and CrossBar (ReRAM). Both of these vendors had products on display, but their capacities were on the order of Mbits rather than Gbits.

It turns out they’re both using ~90nm fab technology and need to get their volumes up before they can shrink their technologies to hit higher densities. However, now that everyone’s talking about SCM, they are starting to see some product wins.  In fact, Mangstor is using EverSpin as a non-volatile write buffer.

Jim explained that 90nm is where DRAM was in 2005, though EverSpin’s and CrossBar’s bit density is better than DRAM’s was at the time. But DRAM is now on 15-10nm class technologies, and the industry sells ~10B DRAM chips/year; EverSpin and CrossBar (together?) are doing more like 10M chips/year. The cost to shrink to the latest technology is ~$100M just to generate the masks required. So for these vendors, volumes have to go up drastically before capacity can increase significantly.
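
To see why volume matters so much, here’s a rough amortization of that mask cost over annual chip volume (our own illustrative arithmetic using the figures above, not numbers from Jim):

```python
# Amortizing a ~$100M mask set over annual chip shipments (illustrative only).
mask_cost = 100e6               # ~$100M for the masks needed at a leading-edge node
dram_chips_per_year = 10e9      # ~10B DRAM chips/year, industry-wide
scm_chips_per_year = 10e6       # ~10M chips/year for EverSpin/CrossBar

print(f"DRAM: ${mask_cost / dram_chips_per_year:.2f} of mask cost per chip")  # $0.01
print(f"SCM:  ${mask_cost / scm_chips_per_year:.2f} of mask cost per chip")   # $10.00
```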

Also, at the show Toshiba mentioned they’re focusing on ReRAM for their SCM.

As Jim recounted, the whole SCM push has been driven by Intel and their need to keep improving the performance of memory and storage, otherwise they felt their processor sales would stall.

3D NAND is here

Just about every NAND manufacturer talked about their 3D NAND chips, ranging from 32 layers to 64 layers. From Jim’s perspective, 3D NAND was inevitable, as it was the only way to continue scaling in density and reducing bit costs for NAND.

Samsung was first to market with 3D NAND as a way to show technological leadership. But now everyone’s got it and is talking up future bit densities and layer counts. What their yields are is another question. But planar NAND’s days are over.

Toshiba’s FlashMatrix

Toshiba’s keynote discussed a new flash storage system called the FlashMatrix, but at press time they had yet to share their slides with the FMS team, so information on FlashMatrix was sketchy at best.

However, they had one on the floor, and it looked like a bunch of M.2 flash across an NVMe (over Ethernet?) mesh backplane with compute engines connected at the edge.

We had a hard time understanding why Toshiba would do this. Our best guess is perhaps they want to provide OEMs an alternative to SanDisk’s Infiniflash.

The podcast runs over 50 minutes and covers flash technology on display at the show and the history of SCM. I think Howard and Ray could easily spend a day with Jim and not exhaust his knowledge of flash, and we haven’t really touched on DRAM. Listen to the podcast to learn more.

Jim Handy, Memory and Flash analyst at Objective Analysis.


Jim Handy of Objective Analysis has over 35 years in the electronics industry including 20 years as a leading semiconductor and SSD industry analyst. Early in his career he held marketing and design positions at leading semiconductor suppliers including Intel, National Semiconductor, and Infineon.

A frequent presenter at trade shows, Mr. Handy is known for his technical depth, accurate forecasts, widespread industry presence and volume of publication. He has written hundreds of market reports, articles for trade journals, and white papers, and is frequently interviewed and quoted in the electronics trade press and other media.  He posts blogs at www.TheMemoryGuy.com, and www.TheSSDguy.com.

34: GreyBeards talk Copy Data Management with Ash Ashutosh, CEO Actifio

In this episode, we talk with Ash Ashutosh (@ashashutosh), CEO of Actifio, a copy data virtualization company. Howard met up with Ash at TechFieldDay11 (TFD11) a couple of weeks back and wanted another chance to talk with him. Ash seems to have been around forever; the first time we met, I was at a former employer and he was with AppIQ (later purchased by HP). Actifio is populated by a number of industry veterans and, since being founded in 2009, is doing really well, with over 1000 customers.

So what’s copy data virtualization (management) anyway? At my former employer, we did an industry study that determined that IT shops (back in the ’90s) were making 9-13 copies of their data. These days, IT is making even more copies of the exact same data.

Data copies proliferate like weeds

Engineers use snapshots for development, QA and validation. Analysts use data copies to better understand what’s going on in their customer-partner interactions, manufacturing activities, industry trends, etc. Finance, marketing, legal, etc. all have similar needs, which just makes the number of data copies grow out of sight. And we haven’t even started to discuss backup.

Ash says things reached a tipping point when server virtualization became the dominant approach to running applications, which led to an ever-increasing need for data copies as apps started being developed and run all over the place. Then along came data deduplication, which displaced tape in IT’s backup process, so that backup data (copies) could now reside on disk. Finally, with the advent of disk deduplication, backups no longer had to be in TAR (backup) formats but could be left in native application formats. In native formats, any app/developer/analyst could access the backup data copy.

Actifio Copy Data Virtualization

So what is Actifio? It’s essentially a massively distributed object store with a global namespace and a file system on top of it. Application hosts/servers run agents in their environments (VMware, SQL Server, Oracle, etc.) to provide change block tracking and other metadata about what’s going on with the primary data to be backed up. So when a backup is requested, only changed blocks have to be transferred to Actifio and deduped. From that deduplicated changed-block backup, a full copy can be synthesized, in native format, for any and all purposes.
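
To make the “synthesized full copy” idea concrete, here’s a minimal sketch (ours, not Actifio’s implementation) of materializing a full image from a baseline plus the changed blocks reported by change block tracking:

```python
# Minimal sketch of synthesizing a full copy from a baseline plus changed blocks.
# Block size and data structures are illustrative, not Actifio's actual design.
BLOCK_SIZE = 4096

def synthesize_full_copy(baseline, changed_blocks):
    """Every block comes from the latest backup if it changed, otherwise from
    the baseline, so no unchanged data ever has to be re-transferred."""
    full = dict(baseline)          # start from the previous full image
    full.update(changed_blocks)    # overlay only the blocks change tracking reported
    return full

# Example: only block 2 changed since the last backup.
baseline = {0: b"A" * BLOCK_SIZE, 1: b"B" * BLOCK_SIZE, 2: b"C" * BLOCK_SIZE}
changes = {2: b"D" * BLOCK_SIZE}
current = synthesize_full_copy(baseline, changes)   # a full, native-format image
```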

With change block tracking, backups become very efficient, and deduplication only has to work on changed data, so that also becomes more effective. Data copying can also be done more efficiently, since they’re only tracking deduplicated data. If necessary, changed blocks can also be applied to data copies to bring them up to date.

With Actifio, one can apply SLAs to copy data. These SLAs can take the form of data governance, such that some copies can’t be viewed outside the country or by certain users. They can also provide analytics on data copies. Both of these capabilities take copy data to a whole new level.

We didn’t get into all of Actifio’s offerings on the podcast, but Actifio CDS is a high-availability appliance which runs their object/file system and contains data storage. Actifio also comes as a virtual appliance, Actifio SKY, which runs as a VM under VMware, using anyone’s storage. Actifio supports NFS, SMB/CIFS, FC, and iSCSI access to data copies, depending on the solution chosen. There’s a lot more information on their website.

It sounds a little bit like PrimaryData but focused on data copies rather than data migration and mostly tier 2 data access.

The podcast runs ~46 minutes and covers a lot of ground. I spent most of the time asking Ash to explain Actifio (for Howard, TFD11 filled this in). Howard had some technical difficulties during the call which caused him to go offline, but he then came back on. Ash and I never missed him :). Listen to the podcast to learn more.

Ash Ashutosh, CEO Actifio

Ash Ashutosh brings more than 25 years of storage industry and entrepreneurship experience to his role of CEO at Actifio. Ashutosh is a recognized leader and architect in the storage industry where he has spearheaded several major industry initiatives, including iSCSI and storage virtualization, and led the authoring of numerous storage industry standards. Ashutosh was most recently a Partner with Greylock Partners where he focused on making investments in enterprise IT companies. Prior to Greylock, he was Vice President and Chief Technologist for HP Storage.

Ashutosh founded and led AppIQ, a market leader of Storage Resource Management (SRM) solutions, which was acquired by HP in 2005. He was also the founder of Serano Systems, a Fibre Channel controller solutions provider, acquired by Vitesse Semiconductor in 1999. Prior to Serano, Ashutosh was Senior Vice President at StorageNetworks, the industry’s first Storage Service Provider. He previously worked as an architect and engineer at LSI and Intergraph.

33: GreyBeards talk HPC storage with Frederic Van Haren, founder HighFens & former Sr. Director of HPC at Nuance

In episode 33 we talk with Frederic Van Haren (@fvha), founder of HighFens, Inc. (@HighFens), a new HPC consultancy, and former Senior Director of HPC at Nuance Communications. Howard and I got a chance to talk with Frederic at a recent HPE storage deep dive event. I met up with him again during SFD10, where he was talking on behalf of Kaminario, and he was also at the HPE Discover conference last week.

Nuance is the backend speech recognition engine for a number of popular service offerings. Nuance looks very similar to a lot of other hyper-scale customers and, ultimately, we feel may be the way of the future for all IT over the coming decades. Nuance’s data storage journey during Frederic’s tenure with the company holds many lessons for all of us in the storage industry.

Nuance currently has ~6PB usable (~16PB raw) of speech wave files as well as uncountable text and other files, all inside IBM SpectrumScale (GPFS).  They have both lots of big files and lots of small files. These days, Spectrum Scale is processing 2-3M files/second. They have doubled capacity for each of the last 9 years, and today handle a billion new files a month. GPFS stripes data across storage, provides data protection, migration, snapshotting and storage tiering across a diverse mix of storage. At the end of the podcast we discussed some open source alternatives to Spectrum Scale but at the time Nuance started down this path,  GPFS was found to be the only thing that could do the job. This proved to be a great solution as they have completely swapped out the underlying storage at least 3 times and all their users were none the wiser.

The first storage that Frederic talked about was Coraid (no longer in business) and their ATA-over-Ethernet storage solution. This used SuperMicro chassis with 24 SATA drives per shelf, and they bought 40 shelves. Over time this grew to 1000s of SATA drives and was easily scalable but hard to manage, as it was pretty dumb storage. In fact, they had to deploy video cameras, focused on the drive shelves, to detect when drives failed!

Over time, Nuance came to the realization that they had to do something more manageable and brought in HPE MSA storage to replace their Coraid storage. The MSA was a great solution for them: it had 96 SAS drives, was able to support both faster “SCRATCH” storage using fast 300GB/15K RPM SAS drives and slower “STATIC” storage using 760GB/7.2K RPM SATA drives, and was much more manageable than the Coraid solution.

Although the MSA storage worked great, after a while Nuance’s sprawling FC environment, which was doubling yearly, caused them to rethink their storage once again. This led them to swap out all their HPE MSA storage for HPE 3PAR to consolidate their FC network and storage footprint.

For metadata, Nuance uses a 76-node Hadoop cluster for sophisticated search queries, as doing an ls on the GPFS file system would take days. Their file metadata is essentially a textual, row-by-row database, and they use queries over the Hadoop cluster to determine things like which files contain American English, spoken by females, recorded at 8KHz. Not sure when, but eventually Nuance deployed HPE Vertica SQL over Hadoop for their metadata engine and dropped the average query time from 12 minutes to 73 seconds(!!).
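
To illustrate the kind of metadata query being described (the field names and values below are hypothetical; the real schema and the Vertica SQL Nuance runs weren’t discussed):

```python
# Hypothetical metadata filter: find American English, female-speaker, 8KHz recordings.
catalog = [
    {"path": "/gpfs/waves/0001.wav", "language": "en-US", "gender": "female", "sample_rate_hz": 8000},
    {"path": "/gpfs/waves/0002.wav", "language": "de-DE", "gender": "male",   "sample_rate_hz": 16000},
]

matches = [rec["path"] for rec in catalog
           if rec["language"] == "en-US"
           and rec["gender"] == "female"
           and rec["sample_rate_hz"] == 8000]
print(matches)   # ['/gpfs/waves/0001.wav']
```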

Nuance, because of their extreme growth and more open environment for storage innovation, had become a favorite for storage startups and major vendors to do proofs of concept (PoCs) on new storage offerings. One PoC Nuance did was for Kaminario storage. There is a standard metric that says a CPU core requires so many IOPS, so as CPU cores increase, you need to supply more IOPS. They went with Kaminario for their test/dev environment and more performance-intensive storage. Nuance appreciates Kaminario’s reliability, high availability and highly predictable performance. (See the SFD10 video feed for Frederic’s session.)

We talked a bit about how speech recognition’s hidden Markov model (HMM) statistical approach was heavily dependent on CPU cores. With CPUs, if you want to do a recognition task, you assign it to one core and wait until it’s done, a serial process whose concurrency depends on the number of CPU cores you have available. This turned out to be quite a problem, as you had to scale CPU cores if you wanted to do more concurrent speech recognition activities. Then came GPUs, and you could do speech recognition work on a GPU core. With the new GPU cards, instead of a server having ~16 CPU cores, you could have a server with multiple graphics cards having ~3000 GPU cores. This scaled a lot more easily. Machine learning and deep neural nets have the potential to parallelize this, so that it will scale even better.
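
For a rough sense of the concurrency difference (using the core counts quoted above and the one-task-per-core simplification from the discussion; real throughput depends on much more than core count):

```python
# One recognition task per core, so concurrency is bounded by core count (a simplification).
cpu_cores_per_server = 16     # the ~16 CPU cores mentioned above
gpu_cores_per_server = 3000   # the ~3000 GPU cores mentioned above (more with multiple cards)

print(f"~{gpu_cores_per_server // cpu_cores_per_server}x more concurrent recognition tasks")  # ~187x
```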

In the end, HPC trials, tribulations and ways of doing business are starting to become mainstream. I was recently talking to one vendor who said most HPC groups start out in isolation to support one application, but over time they either subsume corporate IT, get absorbed into corporate IT, or continue as a standalone group (while waiting for one of the other two to happen).

The podcast runs ~41 minutes and  covers a lot of ground about one HPC organization’s evolution of their storage environment over time, what was driving some of that evolution and the tools they chose to master it.  Listen to the podcast to learn more.

Frederic Van Haren, founder HighFens, Inc.

Frederic Van Haren is the Chief Technology Officer at HighFens (@HighFens) and is known for his insights into the HPC and storage industry. He has over 20 years of experience in high tech, providing technical leadership and strategic direction in the telecom and speech markets. Frederic spent the last decade at Nuance Communications building large HPC environments from the ground up. He is frequently invited to speak at events to provide his insights on the HPC and storage markets. He has played leading roles as President of a variety of technology user groups promoting the use of innovative technology. As an engineer, he enjoys working with the engineering teams from technology vendors, providing feedback on new and upcoming products.

Frederic lives in Massachusetts,  USA but grew up in the northern part of Belgium where he received his Masters in Electrical Engineering, Electronics and Automation.

GreyBeards deconstruct storage with Brian Biles and Hugo Patterson, CEO and CTO, Datrium

In this, our 32nd episode, we talk with Brian Biles (@BrianBiles), CEO & Co-founder, and Hugo Patterson, CTO & Co-founder, of Datrium, a new storage startup. We like to call it storage deconstructed, a new view of what storage could be, based on today’s and future storage technologies. If I had to describe it succinctly, I would say it’s a hybrid between software-defined storage, server-side flash and external disk storage. We have discussed server-side flash before, but this takes it to a whole other level.

Their product, the DVX, consists of Hyperdriver host software and a NetShelf, an external disk storage unit. The DVX was designed from the ground up with host/server-side flash or non-volatile memory as a given, and everything else was built around that. I hesitate to say this, but the DVX NetShelf backend storage is pretty unintelligent, just dual-controller disk storage with a multi-task coordinator. In contrast, the DVX Hyperdriver host software used to access their storage system is pretty smart and is installed as a VIB in vSphere. Customers can assign up to 8TB of host-based, server-side flash/non-volatile memory to the storage system per server. The Datrium DVX does the rest.

The Hyperdriver leverages host flash, DRAM and compute cores to act as a caching layer for read and write IO and as a data management engine. Write data is written through straight from the server-side flash to the NetShelf storage system, which has non-volatile DRAM (NVRAM) caching. Once write data is in NetShelf cache, it’s in two places: one on the host’s server-side flash and the other in storage NVRAM. Reads are easier to handle, just being cached from the NetShelf storage in the server-side flash. There’s no unique data residing in the hosts.
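
Here’s a minimal write-through cache sketch of the behavior described above (ours, for illustration; not Datrium’s code), which shows why no unique data ever lives only on the host:

```python
# Minimal write-through cache sketch (illustrative; not Datrium's implementation).
class NetShelf:
    """Stands in for the backend with NVRAM-protected, persistent storage."""
    def __init__(self):
        self.store = {}
    def write(self, block_id, data):
        self.store[block_id] = data
    def read(self, block_id):
        return self.store[block_id]

class HostCache:
    """Stands in for the server-side flash cache managed by the Hyperdriver."""
    def __init__(self, netshelf):
        self.flash = {}            # holds cached copies only, never the sole copy
        self.netshelf = netshelf
    def write(self, block_id, data):
        self.netshelf.write(block_id, data)   # write-through: the backend gets it too
        self.flash[block_id] = data           # keep a local copy for fast reads
    def read(self, block_id):
        if block_id in self.flash:            # hit in server-side flash
            return self.flash[block_id]
        data = self.netshelf.read(block_id)   # miss: fetch from the backend and cache it
        self.flash[block_id] = data
        return data
```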

The Hyperdriver looks like an NFS mount to vSphere, and the DVX uses a proprietary protocol to talk with the backend DVX NetShelf. Datrium supports up to 32 hosts, and you can define the amount of flash, DRAM and host compute allocated to DVX Hyperdriver activity.

But the other interesting part about DVX is that much of the storage management functionality and storage control logic is partitioned between the host  Hyperdriver and NetShelf, with both participating to do what they do best.

For example,  disk rebuilds are done in combination with the host Hyperdriver. DVX RAID rebuild brings data from the backend into host cache, computes rebuild data and writes the reconstructed data back out to the NetShelf backend. This way rebuild performance can scale up with the number of hosts active in a cluster.

DVX data are compressed and deduplicated at the host before being sent to the NetShelf. The NetShelf backend also does a global deduplication on the host data. Hashing computations and data compression activities are all done on the host and passed on to the NetShelf.  Brian and Hugo were formerly with EMC Data Domain, and know all about data deduplication.
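
A simplified sketch of host-side dedup and compression as described above; the chunking, hash choice and wire format are our assumptions, not Datrium specifics:

```python
# Simplified host-side dedup/compression sketch (illustrative only).
import hashlib
import zlib

def send_block(block, backend_index, wire):
    """Hash and compress on the host; ship data only if the backend hasn't seen it."""
    digest = hashlib.sha256(block).hexdigest()               # fingerprint computed on the host
    if digest in backend_index:                              # backend already holds this content
        wire.append(("ref", digest))                         # send only a reference
    else:
        wire.append(("data", digest, zlib.compress(block)))  # compress on the host, then send
        backend_index.add(digest)

backend_index, wire = set(), []
send_block(b"x" * 4096, backend_index, wire)
send_block(b"x" * 4096, backend_index, wire)   # duplicate block: only a reference goes over the wire
```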

At the moment the DVX is missing some storage functionality, but they have an extensive roadmap with engineering resources to match and are plugging away at all of it. On the other hand, very few disk storage devices offer deduped/compressed data storage and warm server-side caches during vMotion. They also support QoS functionality to limit the amount of host resources consumed by the DVX Hyperdriver software.

The podcast runs ~41 minutes, and the episode covers a lot of ground about how the new DVX product came about, how they separated storage functionality between host and backend, and other aspects of DVX storage. Listen to the podcast to learn more.

Brian Biles, Datrium CEO & Co-founder

Prior to Datrium, Brian was Founder and VP of Product Mgmt. at EMC Backup Recovery Systems Division. Prior to that he was Founder, VP of Product Mgmt. and Business Development for Data Domain (acquired by EMC in 2009).

Hugo Patterson, Datrium CTO & Co-founder

Prior to Datrium, Hugo was an EMC Fellow serving as CTO of the EMC Backup Recovery Systems Division, and the Chief Architect and CTO of Data Domain (acquired by EMC in 2009), where he built the first deduplication storage system. Prior to that he was the engineering lead at NetApp, developing SnapVault, the first snap-and-replicate disk-based backup product. Hugo has a Ph.D. from Carnegie Mellon.

 

GreyBeards talk with Lee Caswell and Dave Wright of NetApp

In our 30th episode, we talk with Dave Wright (@JungleDave), SolidFire founder and VP & GM of SolidFire at NetApp, and Lee Caswell (@LeeCaswell), VP of Products, Solutions & Services Marketing at NetApp. Dave’s been on before as CEO of SolidFire back in May of 2014, but this is the first time for Lee. Dave’s also been a prominent guest at Storage Field Day, most recently at SFD9 with Dave Hitz from NetApp. It’s unclear how Lee has managed to avoid TFD/SFD duty, but it’s only a matter of time.

SolidFire was recently acquired by NetApp in their largest acquisition ever, signaling a new direction for them (the acquisition closed 2 Feb. 2016). Since we had spent a prior podcast on another recent storage acquisition, we thought it only appropriate to talk with these two as well. We started the discussion with Dave and how it feels to be under the NetApp umbrella.

Another topic that came up was how flash gets used in the cloud. Old school had it that flash was just for high IO performance, but nowadays next-gen application development has a range of IO requirements, which all need consistent data access performance. Flash with scale-out and QoS can handle this wide range of requirements across cloud applications. Lee mentioned how flash adoption is changing from application-specific to more general-purpose storage, which is removing the “IO bottleneck”.

Google had written a study saying that for the next decade there will not be a flash-disk crossover, but the differences are small enough that you almost have to be a hyper-scale customer to see significant economic advantages.

We discussed the lack of AFAs doing well on throughput-intensive benchmarks. Dave mentioned that throughput was one of disk’s better performing modes, and that in the past, 3Gbps-6Gbps storage interfaces hid a lot of flash performance. But benchmarks of synthesized, pure workloads aren’t real world; workloads in real data centers are much messier.

IO density (IOPS/GB) came up as another discussion topic. At low IO density, disk may still make sense, but as IO density increases, all flash makes much more sense.
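
A worked IO-density example with assumed but typical device numbers (ours, not figures from the podcast):

```python
# IO density (IOPS per GB) for an assumed nearline disk vs. an assumed SSD.
disk_capacity_gb, disk_iops = 4000, 150     # ~4TB 7.2K RPM disk, ~150 random IOPS
ssd_capacity_gb, ssd_iops = 1000, 50000     # ~1TB SSD, tens of thousands of IOPS

print(f"disk: {disk_iops / disk_capacity_gb:.3f} IOPS/GB")  # ~0.04 IOPS/GB
print(f"ssd:  {ssd_iops / ssd_capacity_gb:.0f} IOPS/GB")    # ~50 IOPS/GB
```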

Google also mentioned the importance of tail-end IO latency (IO latency at the 99.9th percentile). Poor tail IO latency has been an ongoing problem holding back the adoption of hybrid storage. All flash has some advantages here, but not all AFAs are immune to tail-end latency problems.
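
To show what “latency at the 99.9th percentile” means in practice, here’s a small sketch computing a tail percentile from latency samples (the sample values are made up):

```python
# Computing tail latency (p99.9) from IO latency samples; a handful of slow IOs sets the tail.
def percentile(samples_ms, pct):
    ordered = sorted(samples_ms)
    idx = min(len(ordered) - 1, max(0, round(pct / 100 * len(ordered)) - 1))
    return ordered[idx]

samples = [0.5] * 9980 + [50.0] * 20    # 99.8% of IOs at 0.5ms, 0.2% stuck at 50ms
print(percentile(samples, 50))          # median: 0.5 ms
print(percentile(samples, 99.9))        # p99.9: 50.0 ms, dominated by the slow outliers
```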

The podcast runs just over 39 minutes, and the episode covers a lot of ground about their products, flash technology advantages, and market dynamics. Listen to the podcast to learn more.

Dave Wright, SolidFire Founder, Vice President, and GM

Dave Wright left Stanford in 1998 to help start GameSpy Industries, a leader in online video game media, technology, and software. While at GameSpy, Dave led the team that created a backend infrastructure powering thousands of games and millions of gamers. GameSpy merged with IGN Entertainment in 2004 to create one of the largest Internet gaming & entertainment media companies. Dave served as Chief Architect for IGN and led technology integration with FIM / MySpace after IGN was acquired by NewsCorp in 2005.

In 2007 Dave founded Jungle Disk, a pioneer and early leader in cloud-based storage and backup solutions for consumers and businesses. Jungle Disk was acquired by leading cloud provider Rackspace in 2008 and Dave worked closely with the Rackspace Cloud division to build a cloud platform supporting tens of thousands of customers. In December 2009 Dave left Rackspace to start SolidFire.

Lee Caswell, Vice President Product, Solutions, and Services Marketing

Lee Caswell is vice president of Product, Solutions and Services Marketing at NetApp, where he leads a team that speeds the customer adoption of new products, partnerships, and integrations. Lee joined NetApp in 2014 and has extensive experience in executive leadership within the storage, flash and virtualization markets.

Lee was previously vice president of Marketing at Fusion-IO (now SanDisk). Prior to Fusion-IO Lee was a founding member of Pivot3, a company considered to be an early innovator in hyper-converged systems, where he served as the CEO and CMO. Earlier in his career, Lee held marketing leadership positions at VMware, Adaptec, and SEEQ Technology (now LSI Logic). He started his career at General Electric in Corporate Consulting.

Lee holds a bachelor of arts degree in economics from Carleton College and a master of business administration degree from Dartmouth College. Lee is a New York native and has lived in northern California for many years. He and his wife live in Palo Alto and have two children. In his spare time Lee enjoys cycling, playing guitar, and hiking the local hills.

Disclaimer: NetApp and SolidFire have been clients of DeepStorageNet and NetApp is a current client of Silverton Consulting.