92: Ray talks AI with Mike McNamara, Sr. Manager, AI Solution Mkt., NetApp

Sponsored By: NetApp

NetApp has been working in the AI deep learning (DL) space for a long time and announced its partnership with NVIDIA on DGX systems back in August 2018. At NetApp Insight this week, they were showing off their new NVIDIA DGX reference architectures, which use NetApp AFF A800 storage (for more info on AI DL, check out Ray’s Learning Machine (deep) Learning posts – part 1, part 2 and part 3).

Besides the ONTAP AI systems, NetApp also offers

  • FlexPod AI, a solution based on their partnership with Cisco, using UCS C480 ML M5 rack servers that include 8 NVIDIA Tesla V100 GPUs and also feature NetApp AFF A800 storage, for use in core AI DL.
  • NetApp HCI, which has two configurations with 2 or 3 NVIDIA GPUs in 1U or 2U rack servers, running VMware vSphere or Red Hat OpenStack/OpenShift hypervisor software, suitable for edge or core AI DL.
  • An E-series reference architecture that uses the BeeGFS parallel file system and offers InfiniBand data access, for HPC or core AI DL.

On the conference floor, NetApp showed AI DL demos for the automotive, financial services, public sector and healthcare verticals. They also had a facial recognition application running that could estimate your age and emotional state (I didn’t try it, but Mike said they were hedging the model so it predicted a lower age).

Mike said one healthcare solution was focused on radiological image scans, identifying pathologies from X-ray, MRI, or CAT scan images. He mentioned there’s been a lot of radiology technologist burnout due to the volume of work caused by the medical imaging explosion over the last decade or so. Mike said image analysis is something AI DL can perform very effectively, and doing so would improve accuracy and reduce the volume of work done by technologists.

He also mentioned another healthcare application that uses an AI DL app to count TB cells in blood samples and estimate the extent of TB infection. Historically, this has been time consuming, error prone and hard to do in the field. The app uses a microscope attached to a smartphone and can be deployed and run anywhere in the world.

Mike mentioned a genomics AI DL application that examined DNA sequences and tried to determine their functionality. He also mentioned a retail AI DL facial recognition application that would help women “see” what they would look like with different makeup on.

There was a lot of discussion on NetApp Cloud services at the show, such as Cloud Volumes Service and Azure NetApp Files (ANF). Both of these could easily be used to implement an AI DL application or be part of an edge-to-core-to-cloud data flow for an AI DL application deployment using NetApp Data Fabric.

NetApp also announced a new, all-flash StorageGRID appliance targeted at IO-intensive uses of object storage, like AI DL model training and data analytics.

Finally, Mike mentioned NetApp’s ecosystem of partners working in the AI space to help customers deploy AI DL algorithms in their industries. Some of these include:

  1. Flexential, Try and Buy AI, so that customers can bring them in to supply AI DL expertise to build an AI DL application using customer data and deploy it on customer cloud or on-prem infrastructure.
  2. Core Scientific, AI-as-a-Service, so that customers can purchase a service to implement an AI DL application using customer data, running on Core Scientific infrastructure.
  3. Scale Matrix, Mobile data center AI, so that customers can create an AI DL application and run it on Scale Matrix infrastructure transported to wherever the customer wants it to run.

We recorded the podcast on the show floor in a glass booth, so there’s some background noise (sorry about that, but it can’t be helped). The podcast is ~27 minutes. Mike is a long-time friend and NetApp product expert, recently working on AI DL solutions at NetApp. When I saw Mike at Insight, I just had to ask him what NetApp’s been doing in the AI DL space. Listen to the podcast to learn more.


Mike McNamara, Senior Manager AI Solution Marketing, NetApp

With over 25 years of data management product and solution marketing experience, Mike’s background includes roles of increasing responsibility at NetApp (10+ years), Adaptec, EMC and Digital Equipment Corporation. 

In addition to his past role as marketing chairperson for the Fibre Channel Industry Association, he was a member of the Ethernet Technology Summit Conference Advisory Board and the Ethernet Alliance, a regular contributor to industry journals, and a frequent speaker at events.

90: GreyBeards talk K8s containers storage with Michael Ferranti, VP Product Marketing, Portworx

At VMworld 2019 USA there was a lot of talk about integrating Kubernetes (K8s) into vSphere’s execution stack and operational model. We had heard that Portworx was a leader in K8s storage services and persistent volume support, and thought it might be instructive to hear from Michael Ferranti (@ferrantiM), VP of Product Marketing at Portworx, about just what they do for K8s container apps and their need for state information.

Early on, Michael worked for Rackspace on their SaaS team and over time saw how developers and system engineers loved container apps. But they had great difficulty using them for mission-critical applications, and containers at the time completely lacked storage support. Michael joined Portworx to help address these and other limitations in using containers for mission-critical workloads.

Portworx is essentially a SAN designed specifically for containers. It’s a software defined storage system that creates a cluster of storage nodes across K8s clusters and provides standard storage services at container-level granularity.

As a software defined storage system, Portworx sits right in the middle of the data path, so they must provide high availability, RAID protection and other standard storage system capabilities. But we talked only a little about basic storage functionality on the podcast.

Portworx was designed from the start for containers, so it can easily handle provisioning and de-provisioning 100s to 1000s of volumes without breaking a sweat. Not many storage systems, software defined or not, can handle this level of operations without impacting storage services.

Portworx supports both synchronous and asynchronous (snapshot-based) replication. As with all synchronous replication, system write performance depends on how far apart the storage nodes are, but it can provide RPO=0 (recovery point objective) for mission-critical container applications.

Portworx takes this a step beyond just data replication. They also replicate container configuration (YAML) files. We’re no experts, but YAML files encapsulate everything needed to run containers and container apps in a K8s cluster. When one combines replicated container YAML files, replicated persistent volume data AND an appropriate external registry, one can start running mission-critical container apps at a disaster site in minutes.
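To make this concrete, here’s a minimal sketch of the kind of K8s YAML involved: a storage class backed by Portworx plus a persistent volume claim a container app would reference. The provisioner name and the `repl` parameter follow Portworx documentation, but treat the specifics (names, sizes) as illustrative, not authoritative.

```yaml
# Hypothetical sketch: a Portworx-backed StorageClass with 3-way replication
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: px-replicated
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "3"          # keep 3 replicas of each volume across the cluster
---
# A persistent volume claim a container app would mount for its state
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-data
spec:
  storageClassName: px-replicated
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

Replicating a manifest like this alongside the volume data it describes is what lets a DR site recreate both the app and its state.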

Their asynchronous replication for container data and configuration files uses Portworx snapshots, which are sent to an alternate site. They also support async replication to any S3-compatible storage via CloudSnap.

Portworx also supports KubeMotion, which replicates/copies namespaces, container app volume data and container configuration YAML files from one K8s cluster to another. This way customers can move their K8s namespaces and container apps to any other Portworx K8s cluster site. This works across on-prem K8s clusters, cloud K8s clusters, between public cloud provider K8s clusters, or between on-prem and cloud K8s clusters.

Michael also mentioned that data-at-rest encryption, for Portworx, is merely a tick box on a storage class specification in the container’s YAML file. They make use of KMIP services to provide customer-generated keys for encryption.
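That “tick box” looks roughly like the following storage class sketch. The `secure` parameter is how Portworx documentation describes per-class encryption; consider the rest of the manifest illustrative.

```yaml
# Hypothetical sketch: data-at-rest encryption enabled per storage class
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: px-secure
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "2"
  secure: "true"   # volumes provisioned from this class are encrypted at rest
```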

This is all offered as part of their Data Security/Disaster Recovery (DSDR) service, which supports any K8s cluster service, whether AWS, Azure, GCP, OpenShift, bare metal, or VMware vSphere running K8s VMs.

Like any software defined storage system, customers needing more performance can add nodes to the Portworx (and K8s) cluster or add more/faster storage to speed up IO.

It appears they have most if not all the standard storage system capabilities covered, but their main differentiator, besides container app DR, is that they support volumes on a container-by-container basis. Unlike other storage systems that tend to use a VM or a higher level of granularity to hold container state, with Portworx each persistent volume in use by a container is mapped to a provisioned volume.

Michael said their focus from the start was to provide high-performing, resilient and secure storage for container apps. They ended up with a K8s-native storage and backup/DR solution to support mission-critical container apps running at scale. Licensing for Portworx is on a per-host (K8s node) basis.

The podcast ran long, ~48 minutes. Michael was easy to talk with and knew K8s and their technology/market very well. Matt and I had a good time discussing K8s and Portworx’s unique features for K8s container apps. Listen to the podcast to learn more.


Michael Ferranti, VP of Product Marketing, Portworx

Michael (@ferrantiM) is VP of Product Marketing at Portworx, where he is responsible for communicating the value of containerization and digital transformation to global architects and CIOs.

Prior to joining Portworx, Michael was VP of Marketing at ClusterHQ, an early leader in the container storage market, and spent five years at Rackspace in a variety of product and marketing roles.

87: Matt & Ray show at VMworld 2019

Matt and Ray were both at VMworld 2019 in San Francisco this past week, and we did an impromptu podcast on recent news at the show.

VMware announced a number of new projects and just prior to the show they announced the intent to acquire Pivotal and Carbon Black. Pat’s keynote the first day was about a number of new products and features but he also spent time discussing how they were going to incorporate these acquisitions.

One thing that caught a lot of attention was the Tanzu portfolio, which was all about how VMware is adopting Kubernetes as an integral, native part of vSphere moving forward. Project Pacific is their working name for integrating Kubernetes as a native feature of vSphere, and Tanzu Mission Control is a new multi-cloud/hybrid-cloud management solution for Kubernetes clusters wherever they run.

VMware has had a rather lengthy history with container support, from Project Photon, to VIC, to running PKS on top of vSphere. But with Project Pacific, Kubernetes is now being brought under the covers of vSphere, and any ESXi cluster becomes a Kubernetes cluster.

We also talked a little bit about Carbon Black and its endpoint security. Neither of us is a security expert, but Matt mentioned another company he talked with at the show that based their product on workload profiling to determine when something has gone amiss.

It’s Ray’s belief that Carbon Black does much the same profiling, only for endpoint devices: desktops, laptops, and mobile devices (maybe not thin clients).

Pat also talked a bit about IoT and edge processing at the show and they have a push to support more forms of edge computing.

Ray mentioned he talked with HiveCell at the show, who had a standalone Arm server about the size of a big book that can be stood up just about anywhere there’s power and Ethernet.

Unfortunately there’s some background noise on the podcast, and it happens to be a short one, at ~16.5 minutes. This podcast represents a departure for us, as the GreyBeards have never done a live recording at a conference before. We plan to do more of these, so we hope you enjoy it. Please let us know what you think and whether there’s anything we could do to improve our live recording shows. There’s more on the recording, so listen to the podcast to learn more.

Matt Leib

Matt Leib (@MBLeib), one of our co-hosts, has been blogging in the storage space for over 10 years, with work experience on both the engineering and presales/product marketing sides. His blog is at Virtually Tied to My Desktop and he’s on LinkedIn.

86: Greybeards talk FMS19 wrap up and flash trends with Jim Handy, General Director, Objective Analysis

This is our annual Flash Memory Summit podcast with Jim Handy, General Director, Objective Analysis. It’s the 5th time we have had Jim on our show. Jim is also an avid blogger writing about memory and SSD at TheMemoryGuy and TheSSDGuy, respectively.

NAND market trends

Jim started off our discussion with the significant price drop in the NAND market over the last two years. He said that prices ($/GB) dropped 60% last year and are projected to drop about 30% this year.

The problem is overproduction, and as vendors are prohibited from dropping prices below cost, prices tend to flatten out at production cost. NAND pricing will remain there until supplies start tightening again. Jim doesn’t see that happening until 2021.

He says that although these NAND price drops don’t always show up as lower SSD prices, they do allow us to buy more SSD storage for the same price. Back earlier this century NAND cost ~$10K/GB; now it’s around $0.05/GB.
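As a rough sanity check on the numbers quoted above (these are the figures from the discussion, not independent data):

```python
# Rough arithmetic on the NAND price declines quoted above.
# A 60% drop followed by a projected 30% drop compounds multiplicatively.
start = 1.00                            # normalized $/GB two years ago
after_year1 = start * (1 - 0.60)        # 60% drop -> 0.40
after_year2 = after_year1 * (1 - 0.30)  # further 30% drop -> 0.28
print(f"price vs. 2 years ago: {after_year2:.2f}x")  # 0.28x

# Long-run decline: ~$10K/GB early this century to ~$0.05/GB today
decline = 10_000 / 0.05
print(f"~{decline:,.0f}x cheaper per GB")  # ~200,000x
```

So roughly a 72% cumulative drop over the two years, on top of a ~200,000x decline since the early 2000s.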

Jim also mentioned that Chinese NAND fabs should start coming online in 2021 too. They have been spending lots of money trying to get their own NAND manufacturing running. Jim said the reason they want to do this is that the Chinese spend more on chips than they do on oil.

Computational storage, a bright spot

At the show, computational storage (for more hear our GBoS podcast with Scott Shadley, NGD Systems) was hot again this year. Jim took a shot at defining computational storage and talked about the proliferation of ARM cores in SSDs. Keith mentioned that Moore’s law is making the incremental cost of adding more cores close to zero.

Jim said Samsung already has 6 ARM cores in their SSDs, but most other vendors use 3 cores. I met with NetInt at the show, who are focused on computational storage for video transcoding. Keith doesn’t think this would be a good fit because it takes a lot of computation. But maybe, as it’s easily distributable (out to a gaggle of SSDs) and data intensive, it might work OK. Jim also mentioned that while adding cores may be cheap, increasing memory (DRAM) is not.

According to Jim, hyper-scalars are starting to buy computational storage technology. He’s not sure if they are just trying it out or have some real work running on the technology.

SCM news

We talked about Toshiba’s new XL Flash and SSDs. Jim said this is just SLC NAND (expensive $/GB and high endurance) with increased parallelism and reduced-latency data paths. Samsung’s Z-NAND is similar. Toshiba claims XL Flash SSDs are another storage class memory (SCM, see our 3DX blog post). Toshiba is pricing XL Flash SSDs at about 10X the $/GB of 3D TLC NAND, or roughly the same as Optane SSDs.

We next turned to Optane DC PM, which Intel is selling at a loss, but as it works only with Cascade Lake CPUs, it can help increase CPU adoption. So Intel can absorb Optane DC PM losses by selling more (highly profitable) Cascade Lake systems.

Keith mentioned that SAP HANA now works with Cascade Lake-Optane DC PM. This is driving up demand for the new DC PM and new CPUs. Keith said that with the larger in-memory database sizes DC PM allows, HANA is able to do more work, increasing Cascade Lake-Optane DC PM-SAP HANA adoption.

Micron also manufactures 3DX. Jim said they are in an enviable position, as they supply the chips (at cost) to Intel, so they know chip volumes and can see what Intel is charging for the technology. So if, at some point, the technology has runway to become profitable, they can easily enter as the sole secondary source for it.

Other NAND news

How high can 3D TLC NAND go? Jim said most 3D NAND sold on the market is 64 layers high, but suppliers are already shipping more layers than that. All NAND suppliers, bar one, have said their next-generation 3D TLC NAND will be over 100 layers. Some years back, one vendor said the technology could go up to 500 layers. This year Samsung said they see the technology going to 800 layers.

We’ve heard of SLC, MLC, TLC and QLC, but at the show there was talk of PLC, or penta-level cell (5 bits per cell), NAND technology. If they can make the technology successful, PLC should reduce manufacturing costs ($/GB) by another 10%.
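For reference, each extra bit per cell doubles the number of charge levels the cell must reliably distinguish, which is why each step (QLC to PLC) gets harder while yielding a smaller capacity gain. A quick sketch (nothing here is vendor data):

```python
# Bits per cell vs. charge levels the NAND cell must discriminate.
cells = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4, "PLC": 5}
for name, bits in cells.items():
    levels = 2 ** bits  # distinguishable charge levels per cell
    print(f"{name}: {bits} bit(s)/cell -> {levels} levels")

# PLC must distinguish 32 levels but only adds 1 bit over QLC's 4,
# i.e. a 25% capacity gain for double the sensing difficulty.
gain = (5 - 4) / 4
print(f"capacity gain over QLC: {gain:.0%}")  # 25%
```

That diminishing return is consistent with the modest ~10% cost reduction mentioned above.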

We discussed a lot more that was highlighted at the show, including PCIe fabric/composable infrastructure, zoned NVMe namespaces (a redux of SMR disks) and the ongoing success of the show. We had a brief discussion of when, if ever, NAND costs ($/GB) will be less than disk.

The podcast is a little under 40 minutes. Jim is an old friend who is extremely knowledgeable about NAND & DRAM technology, as well as semiconductor markets in general. Jim’s always a kick to talk with. Listen to the podcast to learn more.

Jim Handy, General Director, Objective Analysis

Jim Handy of Objective Analysis has over 35 years in the electronics industry including 20 years as a leading semiconductor and SSD industry analyst. Early in his career he held marketing and design positions at leading semiconductor suppliers including Intel, National Semiconductor, and Infineon.

A frequent presenter at trade shows, Mr. Handy is known for his technical depth, accurate forecasts, widespread industry presence and volume of publication.

He has written hundreds of market reports, articles for trade journals, and white papers, and is frequently interviewed and quoted in the electronics trade press and other media. 

He posts blogs at www.TheMemoryGuy.com, and www.TheSSDguy.com

84: GreyBeards talk ultra-secure NAS with Eric Bednash, CEO & Co-founder, RackTop Systems

We were at a recent vendor conference where Steve Foskett (@SFoskett) introduced us to Eric Bednash (@ericbednash), CEO & Co-Founder, RackTop Systems. They have taken ZFS and made it run as an ultra-secure NAS system. Matt Leib, my co-host for this episode, has on-the-job experience with ZFS and was a great co-host for this episode.

It turns out that Eric and his CTO (and perhaps other RackTop employees) have extensive experience with intelligence and other government agencies that depend on data security. These agencies deal with cyber security threats an order of magnitude larger than what corporations see.

All that time in intelligence gave Eric a unique perspective on what it takes to build secure, bulletproof NAS systems. Nine years or so ago, he and his CTO took OpenZFS (and OpenSolaris) and used it as the foundation for their new highly available and ultra-secure NAS system.

Most storage systems support user access data protection based on authorization. If a user is authorized to see/write data, they have unrestricted access to the data. Perhaps if an organization is paranoid, they might also use data at rest encryption. But RackTop takes all this to a whole other level.

Data security to the Nth degree

RackTop offers dual encryption for data at rest. Most organizations would say single encryption is enough: the data’s encrypted, so how would another level of encryption make it more secure?

It all depends on how one secures the keys (and, just my thoughts here, on how easily quantum computing might decrypt singly encrypted data). So RackTop systems use self-encrypting drives (1st level of encryption) as well as software encryption (2nd level of encryption). Each has its own unique keys, which RackTop can maintain either in their own system or in a KMIP service provided by the data center.

They also supply user profiling. User data access can be profiled with a dataset heat map and other statistical/logging information. When users go outside their usual access profiles, it may signal a security breach. At the moment, when this happens, RackTop notifies security administrators, but Eric mentioned a future release will have the option to automatically shut that user down.

And with all the focus on GDPR and similar regulations coming to a state near you, having user access profiles and access logs can easily satisfy any regulatory auditing requirements.

Eric said that any effective security has to be multi-layered. With RackTop, their multi-layer approach goes way beyond just data-at-rest encryption and user access authentication. RackTop also offers their appliance hardware sourced from secure supply chains and manufactured inside secured facilities. They have also modified OpenSolaris to be more secure, hardening it and its OS against cyber threats.

RackTop even supports cloud tiering with an internally developed secure data mover. Their data mover can securely migrate data (retaining meta-data on their system) to any S3 compatible object storage.

As proof of the security available from a RackTop NAS system, an unnamed US government agency had a “red team” attack their storage. Although Eric shared only a few details on what the red team attempted, he did say the RackTop NAS survived the assault without a security breach.

He also mentioned that they are trying to create a Zero Trust storage environment. Zero Trust implies constant verification and authentication, rather like going beyond one-time login credentials and making users re-authenticate every time they access data. Eric didn’t say when, if ever, they’d reach this level of security, but it’s a clear indication of the direction for their products.

ZFS based NAS system

A RackTop NAS supplies a ZFS-based file system. As such, it inherits all the features and advanced functionality of OpenZFS, but within a more secure, hardened and highly available storage system.

ZFS has historically had issues with usability and its multiplicity of tuning knobs. RackTop has worked hard to make ZFS easier to operate and removed much of the manual tuning required to make it perform well.

The podcast runs long, over ~44 minutes. We spent most of our time talking about security and less on the storage functionality of RackTop NAS. The security of RackTop systems takes some getting used to, but the need exists today and not many storage systems implement security quite to their level. Much of what RackTop does to improve data security blew Matt and me away. Eric is a very smart security expert in addition to being a storage vendor CEO. Listen to the podcast to learn more.

Eric Bednash, CEO & Co-founder, RackTop Systems

Eric Bednash is the co-founder and CEO of RackTop Systems, the pioneer of CyberConverged™ data security, a new market that fuses data storage with advanced security and compliance into a single platform.

A serial entrepreneur and innovator, Bednash has more than 20 years of experience in solving the most complex and challenging data problems through designing products and solutions for the U.S. Intelligence Community and commercial enterprises.

Bednash co-founded RackTop in 2010 with partner and current CTO Jonathan Halstuch. Prior to co-founding RackTop, he served as co-founder and CTO of a mid-sized consulting firm, focused on developing mission data systems within the Department of Defense and U.S. intelligence communities.

Bednash started his professional career in data center systems at Time-Warner, and spent the better part of the dot-com boom in the Washington, D.C. area connecting businesses to the internet. His career path began while still in high school, where Bednash contracted with small businesses and individuals to write software and build computers.

Bednash attended Rochester Institute of Technology and Penn State University, and completed both undergrad and graduate coursework in Business and Technology Management at Stevenson University. A Forbes Technology Council member, he regularly hosts thought leadership & technology video blogs, and is a technology writer and speaker. He is a multi-instrument musician, recreational athlete and a die-hard Pittsburgh Steelers fan. He currently resides in Fulton, Md. with his wife Laura and two children.