39: Greybeards talk deep storage/archive with Matt Starr, CTO Spectra Logic

In this episode, we talk with Matt Starr (@StarrFiles), CTO of Spectra Logic, the deep storage experts. Matt has been around a long time and Ray's shared many a meal with Matt, as we're both in NW Denver. Howard has a minor quibble with Spectra Logic over the use of his company's name (DeepStorage) in their product line, but he's also known Matt for a while now.

The Pearl

Matt and Spectra Logic have a number of customers with multi-PB to over-an-EB data repositories, and how to take care of these ever-expanding storage stashes is an ongoing concern. One of the solutions Spectra Logic offers is BlackPearl Deep Storage, which provides an object storage, RESTful interface front end to a storage tiering/archive backend that uses flash, (spin-down) disk, (LTFS) tape (libraries) and the (AWS) cloud as backend storage.

Major portions of BlackPearl are open sourced and available on GitHub. I see several (DS3) SDKs for Java, Python, C, and others. Open sourcing the product provides an easy way for client customization. In fact, one customer running Ceph modified their Ceph backup client to send a copy of data off to the Pearl.
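Since DS3 is a RESTful, S3-flavored interface, client code follows a familiar object-store pattern. Here's a minimal sketch using a generic S3 client (boto3) against a hypothetical BlackPearl endpoint; the endpoint URL, bucket and credentials are made up, and the real DS3 SDKs on GitHub layer bulk-job semantics on top of this, so treat it as the flavor of the interface rather than working BlackPearl code:

```python
import boto3

# Hypothetical BlackPearl endpoint and credentials -- illustrative only.
# The actual DS3 SDKs (Java, Python, C, ...) wrap this style of REST
# interface and add bulk put/get job primitives for tape-backed transfers.
client = boto3.client(
    "s3",
    endpoint_url="https://blackpearl.example.com",
    aws_access_key_id="DS3_ACCESS_ID",
    aws_secret_access_key="DS3_SECRET_KEY",
)

# Land an object in a deep-storage bucket; backend policy then decides
# whether it lives on flash, spun-down disk, LTFS tape or cloud tiers.
with open("results.dat", "rb") as f:
    client.put_object(Bucket="archive-bucket", Key="project42/results.dat", Body=f)
```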

We talk a bit about BlackPearl's data integrity. It uses a checksum, computed over the object at creation time, which is then verified anytime the object is retrieved, copied, moved or migrated, and which can also be validated periodically (scrubbed), even when the object hasn't been touched.
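The pattern itself is generic, even if BlackPearl's internals differ: fingerprint at ingest, store the digest with the object's metadata, re-verify on every touch and on a scrub schedule. A sketch of that idea (function names are illustrative, not BlackPearl internals):

```python
import hashlib

def checksum(data: bytes) -> str:
    """Fingerprint computed once, at object-creation time."""
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, stored_digest: str) -> None:
    """Run on every retrieve/copy/move/migrate, and periodically when scrubbing."""
    if hashlib.sha256(data).hexdigest() != stored_digest:
        raise IOError("object failed integrity check -- repair from another copy")

# At ingest: persist the digest alongside the object's metadata.
obj = b"experiment telemetry ..."
digest = checksum(obj)

# Later, on retrieval or during a background scrub pass:
verify(obj, digest)
```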

Super Computing’s interesting (storage) problems

Matt had just returned from SC16 (the Supercomputing Conference 2016), held in Salt Lake City last month. At the conference there were plenty of multi-PB customers looking for better storage alternatives.

One customer Matt mentioned was the Square Kilometre Array, the world's largest radio telescope, which will be transmitting 700TB/hour, over 1EB per year. All that data has to land somewhere, and for this quantity (>EB) of data, tape becomes a necessary choice.
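The back-of-envelope arithmetic is sobering: 700 TB/hour sustained around the clock works out to roughly 6 EB/year, so even with duty cycles and data reduction trimming what actually gets archived, the ">1 EB per year" figure is easy to believe:

```python
TB_PER_HOUR = 700
HOURS_PER_YEAR = 24 * 365

tb_per_year = TB_PER_HOUR * HOURS_PER_YEAR    # 6,132,000 TB
eb_per_year = tb_per_year / 1_000_000         # ~6.1 EB at a 100% duty cycle
print(f"{eb_per_year:.1f} EB/year sustained")
# Even at a fraction of that duty cycle, the archive grows by >1 EB/year.
```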

Matt likened Spectra’s  archive solutions to warehouses vs. factories. For the factory floor,  you need responsive (AFA or hybrid) primary storage but for the warehouse, you just want cheap, bulk storage (capacity).

The podcast runs long, over 51 minutes, and reveals a different world from the GreyBeards' everyday enterprise environments: customers with extra-large data repositories and how they manage to survive under the data deluge. Matt's an articulate spokesperson for Spectra Logic and their archive solutions, and we could have talked about >EB data repositories for hours. Listen to the podcast to learn more.

Matt Starr, CTO, Spectra Logic

Matt Starr’s tenure with Spectra Logic spans 24 years and includes experience in service, hardware design, software development, operating systems, electronic design and management. As CTO, he is responsible for helping define the company’s product vision, and serves as the executive representative for the voice of the market. He leads Spectra’s efforts in high-performance computing, private cloud and other vertical markets.

Matt served as the lead engineering architect for the design and production of Spectra's T-Series tape library family. Spectra Logic has secured more than 50 patents under Matt's direction, establishing the company as an innovative technology leader in the data storage industry. He holds a BS in electrical engineering from the University of Colorado at Colorado Springs.

36: GreyBeards discuss VMworld2016 with Andy Banta, Storage Janitor, NetApp SolidFire

Thanks Andy Warfield, Coho Data

In this episode, we talk with Andy Banta (@andybanta), Storage Janitor (Principal Virtualization Architect), NetApp SolidFire. Andy's been involved in Virtual Volumes (VVOLs) and other VMware API implementations at SolidFire, and worked at VMware and other storage/system vendors before that.

Howard and I were at VMworld2016 late last month and we thought Andy would be a good person to discuss what went on there this year.

No VVOLs & VSAN news at the show

Although we all thought there'd be another release of VVOLs and VSAN announced at the show, VMware instead announced Cloud Foundation and Cross-Cloud Services. If anything, the show was a bit mum about VMware Virtual Volumes (VVOLs) and Virtual SAN™ (VSAN) this year compared to last.

On the other hand, Andy’s and other VVOL technical sessions were busy at the conference. And one of them ended up having standing room only and was repeated at the show, due to the demand. Customer interest in VVOLs seems to be peaking.

Our discussion begins with why VVOLs were sidelined this year. One reason was VMware's and their ecosystem's focus on Hyper-Converged Infrastructure (HCI), and HCI doesn't use storage arrays or VVOLs.

Howard and I suspected that with VMware's ecosystem growing ever larger, validation and regression testing is starting to consume more resources. But Andy suggested that's not the issue, as VMware uses self-certification, where vendors run tests that VMware supplies to show they meet API requirements. VMware does bring in a handful of vendor solutions (5 for VVOLs) for reference architectures and to ensure the APIs meet (major) vendor requirements, but after that, it's all self-certification.

Another possibility was that the Dell-EMC acquisition (closed 9/6) could be a distraction. But Andy said VMware has been and will continue on as an independent company, and the fact that EMC owned ~84% of the stock never impacted VMware's development before, so Dell's acquisition shouldn't either.

Finally, we suggested that executive churn at VMware could be the problem. But Andy debunked that too, saying the pace of executive transitions hasn't really accelerated over the years.

After all that, we concluded that maybe the schedule had just slipped, and perhaps we'll see something new for VVOLs and VMware APIs for Storage Awareness (VASA) at VMworld2016 Europe in Barcelona.

Cloud Foundation and Cross-Cloud Services

What VMware did announce was VMware Cloud Foundation and Cross-Cloud Services. This seems to signal a shift in philosophy, to be more accommodating to the public cloud rather than just competing with it.

VMware Cloud Foundation is a repackaging of the VMware Software Defined Data Center (SDDC), NSX®, VSAN and vSphere® into a single bundle that customers can use to spin up a private cloud with ease.

VMware Cross-Cloud Services is a set of targeted software for public cloud deployment that eases management and migration of services. They showed how NSX could be deployed over your cloud instances to control IP addresses and provide micro-segmentation services, and how other software allows data to be easily migrated between public cloud and VMware private cloud implementations. Cross-Cloud Services was tech-previewed at the show, and Ray wrote a post describing it in more detail (please see the VMworld2016 Day 1 Cloud Foundation & Cross-Cloud Services post).

Cloud services

Howard talked about how difficult it can be to move workloads to the cloud and back again. Most enterprise application data is just too large to transfer quickly and too complex to be a simple file transfer. And then there are the data governance, compliance and regulatory regimens that have to be adhered to, which can make it almost impossible to use public cloud services.

On the other hand, Andy talked about work SolidFire had done using the cloud in development. They moved some testing to the cloud to spin up 1000s of (SolidFire simulator) instances to try to catch an infrequent bug (occurring once every ~10K runs). They just couldn't do this in their lab. In the end they were able to catch and debug the problem much more effectively using public cloud services.
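The back-of-envelope math shows why cloud scale matters here: if a bug trips once every ~10,000 runs, a handful of lab machines will almost never see it, but thousands of parallel instances will. A quick sketch of the probability of reproducing it at least once:

```python
p = 1 / 10_000          # chance a single run trips the bug

def p_caught(runs: int) -> float:
    """Probability that at least one of `runs` independent runs hits the bug."""
    return 1 - (1 - p) ** runs

for runs in (100, 1_000, 10_000, 50_000):
    print(f"{runs:>6} runs -> {p_caught(runs):.1%} chance of reproducing it")
# ~1% at 100 runs, ~63% at 10,000, ~99% by ~46,000 runs -- feasible only
# when you can spin up thousands of simulator instances in parallel.
```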

Howard mentioned that he is also using AWS as an IO trace repository for benchmark development work he is doing. AWS S3 as a data repository has been a great solution for his team, as anyone can upload their data that way. By the way, he is looking for a data scientist to help analyze this data, if anyone's interested.

In general, workloads are becoming more transient these days. Public cloud services are encouraging this movement, but Docker and microservices are also having an impact.

VVOLs

One can even see this trend in VMware VVOLs, which are another way to enable more transient workloads. VVOLs can be created and destroyed a lot quicker than vdisks in the past. In fact, some storage vendors are starting to look at VVOLs as transient storage and are improving their storage and metadata garbage collection accordingly.

Earlier this year Howard, Andy and I were all at a NetApp SolidFire analyst event in Boulder. At that time, SolidFire said they had implemented VVOLs so well they considered it "VVOLs done right". I asked Andy what was different about SolidFire's VVOL implementation. One thing they did was completely separate the protocol endpoints from the storage side. Another was to provide QoS at the VM level, which could be applied to a single VM or 1000s of VMs.

Andy also said that SolidFire had implemented a bunch of scripts to automate VVOL policy changes across 1000s of objects (the shape of the problem is sketched below). SolidFire wanted to use these scripts for their own VVOL implementation, but since they could apply to any vendor's implementation of VVOLs, they decided to open source them.
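The scripts themselves aren't reproduced here, but the task is easy to picture: iterate one policy change across thousands of VVOL objects through a management API. A hypothetical sketch; the endpoint, URL paths and payload fields are invented for illustration (SolidFire's min/max/burst IOPS knobs are real, but this is not their API), and the open-sourced scripts target the actual vendor APIs:

```python
import requests

# Hypothetical management endpoint and QoS payload -- illustrative only.
API = "https://array.example.com/api/v1"
qos = {"minIOPS": 500, "maxIOPS": 15_000, "burstIOPS": 20_000}

session = requests.Session()
session.auth = ("admin", "password")  # placeholder credentials

# Fetch every VVOL the array knows about, then push the same policy to each.
vvols = session.get(f"{API}/vvols").json()["vvols"]
for vvol in vvols:
    session.put(f"{API}/vvols/{vvol['id']}/qos", json=qos).raise_for_status()

print(f"Applied QoS policy to {len(vvols)} VVOLs")
```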

The podcast runs over 42 minutes and covers a broad discussion of the VMware ecosystem, the goings on at VMworld and SolidFire’s VVOL implementation. Listen to the podcast to learn more.

Andy Banta, Storage Janitor, NetApp SolidFire


Andy is currently a Storage Janitor acting as a Principal Virtualization Architect at NetApp SolidFire, focusing on VMware integration and Virtual Volumes. Andy was part of the Virtual Volumes development team at SolidFire.

Prior to SolidFire, he was the iSCSI Tech Lead at VMware, as well as being on the engineering teams at DataGravity and Sun Microsystems.

Andy has presented at numerous VMworlds, as well as several VMUGs and other industry conferences. Outside of work, he enjoys racing cars, hiking and wine. Find him on Twitter at @andybanta.

GreyBeards talk with Lee Caswell and Dave Wright of NetApp

In this episode, we talk with Dave Wright (@JungleDave), SolidFire founder and VP & GM of SolidFire at NetApp, and Lee Caswell (@LeeCaswell), VP Products, Solutions & Services Marketing at NetApp. Dave's been on before, as CEO of SolidFire back in May of 2014, but this is the first time for Lee. Dave's also been a prominent guest at Storage Field Day, most recently at SFD9 with Dave Hitz of NetApp. It's unclear how Lee has managed to avoid TFD/SFD duty, but it's only a matter of time.

SolidFire was recently acquired by NetApp in their largest acquisition ever, signaling a new direction for them (the acquisition closed 2 Feb. 2016). Since we had spent a prior podcast on another recent storage acquisition, we thought it only appropriate to talk with these two as well. We started the discussion with Dave and how it feels to be under the NetApp umbrella.

Another topic that came up was how flash gets used in the cloud. The old school view was that flash was just for high IO performance, but nowadays next-gen application development has a range of IO requirements, all of which need consistent performance access to data. Flash with scale-out and QoS can handle this wide range of requirements across cloud applications. Lee mentioned how flash adoption is changing from application-specific to more general-purpose storage, which is removing the "IO bottleneck".

Google had published a study saying that for the next decade there will not be a flash-disk crossover, but the differences are small enough that you almost have to be a hyper-scale customer to see significant economic advantages.

We discussed the lack of AFAs doing well on throughput-intensive benchmarks. Dave mentioned that throughput was one of disk's better-performing modes and that, in the past, storage interfaces (3Gbps-6Gbps) hid a lot of flash performance. But benchmarks of synthesized, pure workloads aren't the real world; workloads in real data centers are much messier.

IO density (IOPS/GB) came up as another discussion topic. At low IO density, disk may still make sense, but as IO density increases, all-flash makes much more sense.
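A rough worked example shows where the crossover lies. Take a 100 TB repository: at 0.1 IOPS/GB it needs ~10K IOPS, within reach of a modest disk pool, but at 5 IOPS/GB it needs ~500K IOPS, which is flash territory (the per-device IOPS figure below is an illustrative ballpark, not a spec):

```python
CAPACITY_GB = 100_000                 # 100 TB repository
DISK_IOPS = 150                       # ballpark IOPS per 7.2K RPM HDD (assumed)

for density in (0.1, 1.0, 5.0):       # IOPS per GB
    iops_needed = CAPACITY_GB * density
    disks = iops_needed / DISK_IOPS
    print(f"{density:>4} IOPS/GB -> {iops_needed:>9,.0f} IOPS "
          f"(~{disks:,.0f} disks if HDD-only)")
# 0.1 IOPS/GB ->    10,000 IOPS (~67 disks)    -- disk still plausible
# 5.0 IOPS/GB ->   500,000 IOPS (~3,333 disks) -- flash territory
```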

Google also mentioned the importance of tail IO latency (IO latency at the 99.9th percentile). Poor tail IO latency has been an ongoing problem holding back the adoption of hybrid storage. All-flash has an advantage here, but not all AFAs are immune to tail latency problems.
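Percentile, not average, is the right lens here: a hybrid array can look flash-fast on the mean while its occasional disk-speed IOs dominate the tail. A quick demonstration on synthetic latencies (the distributions are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic hybrid-array latencies: 99.5% of IOs served from flash (~100 usec),
# 0.5% miss to disk (~10 msec). Units: microseconds.
flash = rng.normal(100, 20, 99_500).clip(min=20)
disk = rng.normal(10_000, 2_000, 500).clip(min=1_000)
latencies = np.concatenate([flash, disk])

print(f"mean  : {latencies.mean():8.0f} usec")
print(f"p99.9 : {np.percentile(latencies, 99.9):8.0f} usec")
# The mean looks flash-like (~150 usec), but p99.9 lands in disk territory --
# the variability customers feel, and a main reason they move to all-flash.
```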

The podcast runs just over 39 minutes, and the episode covers a lot of ground about their products, flash technology advantages, and market dynamics. Listen to the podcast to learn more.

Dave Wright, SolidFire Founder, Vice President, and GM

Dave Wright left Stanford in 1998 to help start GameSpy Industries, a leader in online video game media, technology, and software. While at GameSpy, Dave led the team that created a backend infrastructure powering thousands of games and millions of gamers. GameSpy merged with IGN Entertainment in 2004 to create one of the largest Internet gaming & entertainment media companies. Dave served as Chief Architect for IGN and led technology integration with FIM / MySpace after IGN was acquired by NewsCorp in 2005.

In 2007 Dave founded Jungle Disk, a pioneer and early leader in cloud-based storage and backup solutions for consumers and businesses. Jungle Disk was acquired by leading cloud provider Rackspace in 2008 and Dave worked closely with the Rackspace Cloud division to build a cloud platform supporting tens of thousands of customers. In December 2009 Dave left Rackspace to start SolidFire.

Lee Caswell, Vice President Product, Solutions, and Services Marketing

Lee Caswell is vice president of Product, Solutions and Services Marketing at NetApp, where he leads a team that speeds the customer adoption of new products, partnerships, and integrations. Lee joined NetApp in 2014 and has extensive experience in executive leadership within the storage, flash and virtualization markets.

Lee was previously vice president of Marketing at Fusion-IO (now SanDisk). Prior to Fusion-IO Lee was a founding member of Pivot3, a company considered to be an early innovator in hyper-converged systems, where he served as the CEO and CMO. Earlier in his career, Lee held marketing leadership positions at VMware, Adaptec, and SEEQ Technology (now LSI Logic). He started his career at General Electric in Corporate Consulting.

Lee holds a bachelor of arts degree in economics from Carleton College and a master of business administration degree from Dartmouth College. Lee is a New York native and has lived in northern California for many years. He and his wife live in Palo Alto and have two children. In his spare time Lee enjoys cycling, playing guitar, and hiking the local hills.

Disclaimer: NetApp and SolidFire have been clients of DeepStorageNet and NetApp is a current client of Silverton Consulting.

Greybeards talk car videos, storage and IT trends with Marc Farley

In our 30th episode, we talk with 3rd-time guest star Marc Farley (@GoFarley), formerly of Datera and Tegile. Marc has recently gone on sabbatical, and we wanted to talk with him about what's keeping him busy and what's going on in the storage/IT industry these days.

Marc is currently curating a car comedy vlog called theridecast.com. Apparently people, at least in California, are making comedy videos in their cars, and they can be quite hilarious; check out this episode of Comedian in Cars Getting Coffee.

Meanwhile, the storage industry is getting battered by a number of trends: shrinking IT budgets, vendor proliferation, migration to the cloud, and flash becoming old hat. Marc makes multiple points as to why the storage market is undergoing such a major transition these days:

  • Death to tech refresh, long live the cloud – yes, the cloud does upgrade hardware, but planned storage system obsolescence doesn't happen in the cloud anymore. Cloud providers are buying new SSDs, disks, white box servers, memory, etc., but not enterprise-class storage, server or networking hardware.
  • AFA is boring, but selling – every vendor's got one, two or sometimes three, and they all know how to provide flash storage services. Customers pay extra for AFA, whether they need to or not, because they are swapping out old, expensive, enterprise-class storage for AFAs that often cost less but still provide better performance.
  • Tail IO latency is becoming more important, but it's not understood – when IO response times go from 100µsec to 10msec, it hurts. It doesn't matter if it's every 1,000 or 10,000 IOs; customers want less performance variability, which is a main reason they move to AFA in the first place. But not all AFAs perform the same on tail latency, and SSD controller/system architecture makes a big difference.
  • Hybrid storage survives, but only if you go big – hybrid storage economics only make sense for large, diverse data repositories that mix user directories, non-performance-sensitive apps, and other structured and unstructured data in one data store.
  • Greenfield apps & secondary storage are moving to the cloud, but migrating current apps to the cloud is difficult – for new app development and archive storage, moving to or starting in the cloud is a no-brainer. Transitioning running enterprise-class apps to the cloud is tough to do; it requires multiple skill sets and may never be successful. Hybrid (cloud-on-premises) enterprise-class apps are too arduous to even contemplate.
  • Realtime analytics is emerging, but the data needs to be on flash – yes, MapReduce is a batch activity which can use lots of slow disk, but there's more to analytics than MR, and to do log analysis in anything approaching realtime, one needs flash performance.
  • Optical's persistence is great, but who leaves data on the same technology for 20 years – with magnetic and electronic storage densities going up every couple of years, who could afford to keep data on the same optical technology for 20 years? Imagine using microfiche to keep PBs of data today; inconceivable.

As for IT in general, one limiter of IT activity will become the lack of skilled engineers, specifically full-stack engineers and data scientists.

We ended our discussion on the economics of Samsung 3D NAND and Intel-Micron (IM) 3D XPoint non-volatile memories. New semiconductor technologies are always long-term investments. Today, Samsung is probably losing money on each 3D TLC NAND SSD it sells, but over time, as fab yields improve, the technology should become cheap enough to make a profit. Similarly, 3D XPoint may be costly to produce early on, but as IM perfects its fab processes, the technology should become inexpensive enough to make oodles of $s for them. And there's more technology change to come.
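The yield effect Marc describes is easy to model: the cost of a good die is the wafer cost spread over only the dies that work, so early low yields make each sellable part expensive. A stylized calculation (wafer cost and die count are invented for illustration):

```python
WAFER_COST = 9_000      # $ per processed wafer (illustrative)
DIES_PER_WAFER = 500    # candidate 3D NAND dies per wafer (illustrative)

for yield_pct in (30, 50, 70, 90):
    good_dies = DIES_PER_WAFER * yield_pct / 100
    print(f"yield {yield_pct:>2}% -> ${WAFER_COST / good_dies:6.2f} per good die")
# yield 30% -> $60.00 per good die   (early ramp: likely selling at a loss)
# yield 90% -> $20.00 per good die   (mature fab: comfortable margins)
```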

The podcast runs just over 40 minutes and covers a lot of ground. Marc's been in IT almost as long as the GreyBeards and has a unique perspective on what's happening today, having been with so many diverse vendors, major and minor startups alike, throughout his tenure in the industry. Listen to the podcast to learn more.

Marc Farley


Marc is a storage greybeard who has worked for many storage companies and is currently on sabbatical. He has written three books on storage, including his most recent, Rethinking Enterprise Storage: A Hybrid Cloud Model, and his previous books, Building Storage Networks and Storage Networking Fundamentals.

In addition to writing books, he has been a blogger and podcaster on storage topics while working for EqualLogic, Dell, 3PAR, HP, StorSimple, Microsoft, and others.

When he is not working, Marc likes to ride bicycles, listen to music, spend time with his family and dote on his cats. Of course there’s that car video curation…

GreyBeards talk HPC storage with Molly Rector, CMO & EVP, DDN

In our 27th episode, we talk with Molly Rector (@MollyRector), CMO & EVP of Product Management/Worldwide Marketing for DDN. Howard and I have known Molly since her days at Spectra Logic. Molly is also on the BoD of SNIA and the Active Archive Alliance (AAA), so she's very active in the storage industry, on multiple dimensions, and a very busy lady.

We (or maybe just I) didn't know that DDN has a 20-year history in storage, servicing high performance computing (HPC) customers. It turns out that more enterprise IT organizations are starting to take on workloads that look like HPC activity.

In HPC, there are 1000s of compute cores crunching on PBs of data. For oil & gas companies, it's seismic and wellhead analysis; in bio-informatics, it's genomic/proteomic analysis; and in financial services, it's economic modeling and backtesting trading strategies. For today's enterprises, such as retailers, it's customer activity analytics; for manufacturers, it's machine sensor/log analysis; and for banks/financial institutions, it's credit/financial viability assessments. Enterprise IT might not have 1000s of cores at their disposal just yet, but it's not far off. Molly thinks one way to help enterprise IT is to provide a supercomputer-as-a-service (ScaaS?) offering, where top-10 supercomputers can be rented by the hour, sort of like a supercomputing compute/data cloud.

We start out talking about DDN WOS, an object store which can handle archive to cloud or to backend tape libraries. Later we discuss DDN ExaScaler and GridScaler, which are scale-out NAS appliances offering Lustre and massively parallel file system storage, respectively.

Another key supercomputing storage requirement is predictable performance. Aside from sophisticated QoS offerings across their products, DDN also offers the IME solution, a "bump in the cable" caching system that can optimize large and small file IO activity for backend DDN NAS scalers. DDN IME is stateless and can be removed from the data path while still allowing IT access to all their data.
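"Stateless" is what makes the removability claim work: if the cache only ever holds copies of data that already lives on the backend (write-through rather than write-back), pulling it out of the data path loses performance but never data. A toy sketch of that distinction (class and names are illustrative; this is not IME's actual design):

```python
class WriteThroughCache:
    """Toy stateless cache: every write lands on the backend immediately,
    so the cache can vanish from the data path without losing data."""

    def __init__(self, backend: dict):
        self.backend = backend   # backend is always authoritative
        self.cache = {}          # cached copies are just accelerators

    def write(self, key, value):
        self.backend[key] = value
        self.cache[key] = value

    def read(self, key):
        if key in self.cache:
            return self.cache[key]
        return self.backend[key]

backend = {}
fast_path = WriteThroughCache(backend)
fast_path.write("file.dat", b"payload")

# "Remove the bump in the cable": all data is still reachable on the backend.
assert backend["file.dat"] == b"payload"
```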

While we were discussing DDN storage interfaces, Molly mentioned they were working on Omni-Path fabric support. Intel's new Omni-Path fabric is intended as a next-generation, low-latency cluster interconnect for HPC.

This month's episode is not too technical and runs just over 45 minutes. We only got to SNIA and AAA at the tail end, and just for a minute or two. Molly's always fun to talk to, with enough technical smarts to keep Howard and me at bay, at least for a while :). Listen to the podcast to learn more.

Molly Rector, CMO and EVP Product Management & Worldwide Marketing, DDN

With 15 years of experience working in the HPC, Media and Entertainment, and Enterprise IT industries running global marketing programs, Molly Rector serves as DDN’s Chief Marketing Officer (CMO) responsible for product management and worldwide marketing. Rector’s role includes providing customer and market input into the company’s product roadmap, raising the Corporate brand visibility outside traditional markets, expanding the partner ecosystem and driving the end-to-end customer experience from definition to delivery.

Rector is a founding member of, and currently serves as Chairman of the Board for, the Active Archive Alliance. She is also the Storage Networking Industry Association's (SNIA) Vice Chairman of the Board and Vice Chairman of its Analytics and Big Data committee. Prior to joining DDN, Rector was responsible for product management and worldwide marketing as CMO at Spectra Logic. During her tenure at Spectra Logic, the company consistently grew revenues by double digits year-over-year while maintaining profitability. Rector holds certifications as a CommVault Certified System Administrator, Veritas Certified Data Protection Administrator, and Oracle Certified Enterprise DBA: Backup and Recovery. She earned a Bachelor of Science degree in biology and chemistry.