42: GreyBeards talk next gen, tier 0 flash storage with Zivan Ori, CEO & Co-founder, E8 Storage.

In this episode, we talk with Zivan Ori (@ZivanOri), CEO and Co-founder of E8 Storage, a new storage startup out of Israel. E8 Storage provides a tier 0, next generation all flash array storage solution for HPC and high end environments that need extremely high IO performance with high availability and modest data services. We first saw E8 Storage at last year's Flash Memory Summit (FMS 2016) and have wanted to talk with them since.

Tier 0 storage

The Greybeards discussed new tier 0 solutions in our annual year-end industry review podcast. As we saw it then, tier 0 provides lightning-fast (~100s of µsec) read and write IO operations and millions of IO/sec. Not many applications need this level of speed and quantity of IOs, but for those that do, tier 0 storage is their only solution.

In the past, Tier 0 was essentially SSDs sitting on a PCIe bus, isolated to a single server. But today, with the emergence of NVMe protocols and SSDs, 40/50/100GBE NICs and switches, and RDMA protocols, this sort of solution can be shared across racks of servers.

There were a few shared Tier 0 solutions available in the past, but their challenge was that they all used proprietary hardware. With today's new hardware and protocols, these new Tier 0 systems often perform as well as or much better than the old generation, but with off-the-shelf hardware.

E8 came to market (emerged from stealth and GA'd in September 2016) after NVMe protocols, SSDs and RDMA were available in commodity hardware, and has taken advantage of all these new capabilities.

E8 Storage system hardware & software

E8 Storage offers a 2U HA appliance with 24 hot-pluggable NVMe SSDs and 8 client or host ports. The hardware appliance has two controllers, two power supplies, and two batteries. The batteries hold up a DRAM write cache until it can be flushed to internal storage in the event of a power failure. They don't do any DRAM read caching because the performance of the NVMe SSDs is more than fast enough.

The 24 NVMe SSDs are all dual ported for fault tolerance and provide hot-pluggable replacement for better servicing in the field. One E8 Storage system can supply up to 180TB of usable, shared NVMe flash storage.

E8 Storage uses RDMA (RoCE) NICs between client servers and their storage system, which support 40GBE, 50GBE or 100GBE networking.

E8 does not do data reduction (thin provisioning, data deduplication or data compression) on their storage, so usable capacity = effective capacity. Their belief is that these services consume a lot of compute and IO, limiting IO/sec and increasing response times, and as the price of NVMe SSD capacity comes down over time, these services become less useful.
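As a back-of-envelope illustration (my own sketch, not E8's math), an array's "effective" capacity is just its usable capacity multiplied by whatever data-reduction ratio the vendor claims; with no reduction, the ratio is 1:1 and the two numbers are identical:

```python
def effective_capacity_tb(usable_tb: float, reduction_ratio: float = 1.0) -> float:
    """Effective capacity = usable capacity x data-reduction ratio."""
    return usable_tb * reduction_ratio

# E8 does no reduction, so 180 TB usable is exactly 180 TB effective
print(effective_capacity_tb(180))        # 180.0
# A hypothetical array claiming 3:1 reduction would market 540 TB "effective"
print(effective_capacity_tb(180, 3.0))   # 540.0
```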

They also have client software that provides a fault tolerant initiator for their E8 storage. This client software supports MPIO and failover across controllers in the event of a controller outage. The client software currently runs on just about any flavor of Linux available today and E8 is working to port this to other OSs based on customer requests.

Storage provisioning and management is through a RESTful API, CLI or web based GUI management portal. Hardware support is supplied by E8 Storage and they offer a 3 year warranty on their system with the ability to extend this to 5 years, if needed.

One problem with today’s standard NVMe over Fabric solutions is that they lack any failover capabilities and have no real support for data protection. By developing their own client software, E8 provides fault tolerance and data protection for Tier 0 storage. They currently support RAID 0 and 5, with RAID 6 in development.

Performance

Everyone wants native DAS-NVMe SSD storage, but unlike server Tier 0 solutions, E8 Storage's 180TB of NVMe capacity can be shared across up to 100 servers (one customer currently has 96 servers talking to a single E8 Storage appliance). By moving this capacity out to a shared storage device, it can be made more fault tolerant, more serviceable, and amortized over more servers. However, the problem with doing this has always been the lack of DAS-like performance.

Talking to Zivan, he revealed that a single E8 Storage system is capable of 5M IO/sec, and at that rate it delivers an average response time of 300µsec; at a more reasonable 4M IO/sec, the system can deliver ~120µsec response times. He said they can saturate a 100GBE network by operating at 10M IO/sec. He didn't say what the response time was at 10M IO/sec, but with network saturation, response times probably climbed steeply.
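Little's law gives a feel for what these rates imply about queue depth: the average number of outstanding IOs equals the IO rate times the average response time. This is my own back-of-envelope arithmetic using the figures above, not a number from the podcast:

```python
def outstanding_ios(iops: float, avg_latency_sec: float) -> float:
    """Little's law: mean number in system = arrival rate x mean time in system."""
    return iops * avg_latency_sec

# 5M IO/sec at 300 usec average response time -> ~1,500 IOs in flight
print(outstanding_ios(5_000_000, 300e-6))
# 4M IO/sec at ~120 usec -> ~480 IOs in flight
print(outstanding_ios(4_000_000, 120e-6))
```

Sustaining queue depths in the hundreds to low thousands across ~100 client servers is quite plausible, which is part of why shared NVMe storage at these rates hangs together.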

The other thing Zivan mentioned was that the system delivered these response times with very small variance (standard deviation). I believe he mentioned 1.5 to 3% standard deviations, which at 120µsec is only 1.8 to 3.6µsec, and even at 300µsec just 4.5 to 9µsec. We have never seen this level of response time, response time variance and IO/sec in a single shared storage system before.

E8 Storage

Zivan and many of his team previously came from IBM XIV storage. As such, they have been involved in developing and supporting enterprise class storage systems for quite a while now. So E8 Storage knows what it takes to create products that can survive in 7X24, high end, highly active and demanding environments.

E8 Storage currently has customers in production in the US. They are seeing primary interest in their system from the HPC, FinServ, and Retail industries, but any large customer could have a need for something like this. They sell their storage for $2 to $3/GB.

The podcast runs ~42 minutes, and Zivan was easy to talk with and has a good grasp of the storage industry technologies.  Listen to the podcast to learn more.

Zivan Ori CEO & Co-Founder, E8 Storage

Mr. Zivan Ori is the co-founder and CEO of E8 Storage. Before founding E8 Storage, Mr. Ori held the position of IBM XIV R&D Manager, responsible for developing the IBM XIV high-end, grid-scale storage system, and served as Chief Architect at Stratoscale, a provider of hyper-converged infrastructure.

Prior to IBM XIV, Mr. Ori headed Software Development at Envara (acquired by Intel) and served as VP R&D at Onigma (acquired by McAfee).

35: GreyBeards talk Flash Memory Summit wrap-up with Jim Handy, Objective Analysis

In this episode, we talk with Jim Handy (@thessdguy), memory and flash analyst at Objective Analysis. Jim's been on our podcast before, and last time we had a great talk on flash trends. As Jim, Howard and Ray were all at Flash Memory Summit (FMS 2016) last week, we thought it appropriate to get together and discuss what we found interesting at the summit.

Flash is undergoing significant change. We started our discussion with which vendor had the highest density flash device. That's not easy to answer, given all the vendors at the show. For example, Micron's shipping a 32GB chip and Samsung announced a 1TB BGA. And as for devices, Seagate announced a monster 3.5″ 60TB SSD.

MicroSD cards hold 16-17 NAND chips plus a mini-controller; at that level, with 32GB chips, we could have a ~0.5TB MicroSD card in the near future. There was no discussion of pricing, but Howard's expectation is that they will be expensive.

NVMe over fabric push

One main topic of conversation at FMS was how NVMe over fabric is emerging. There were a few storage vendors at FMS taking advantage of this, including E8 Storage and Mangstor, both showing off NVMe over Ethernet flash storage. But there were plenty of others talking NVMe over fabric and all the major NAND manufacturers couldn’t talk enough about NVMe.

Facebook’s keynote had a couple of surprises. One was their request for WORM (QLC) flash.  It appears that Facebook plans on keeping user data forever. Another item of interest was their Open Compute Project Lightning JBOF (just a bunch of flash) device using NVMe over Ethernet (see Ray’s post on Facebook’s move to JBOF). They were also interested in ganging up M.2 SSDs into a single package. And finally they discussed their need for SCM.

Storage class memory

The other main topic was storage class memory (SCM), and all the vendors talked about it. Sadly, the timeline for Intel-Micron 3D XPoint has them supplying sample chips/devices by year-end 2017 and releasing SCM devices to market the following year (2018). They did have one (hand-built) SSD at the show with remarkable performance.

On the other hand, there are other SCM’s on the market, including EverSpin (MRAM) and CrossBar (ReRAM). Both of these vendors had products on display but their capacities were on the order of Mbits rather than Gbits.

It turns out they’re both using ~90nm fab technology and need to get their volumes up before they can shrink their technologies to hit higher densities. However, now that everyone’s talking about SCM, they are starting to see some product wins.  In fact, Mangstor is using EverSpin as a non-volatile write buffer.

Jim explained that 90nm is where DRAM was in 2005, though EverSpin's and CrossBar's bit density is better than DRAM's was at the time. But DRAM is now on 15-10nm class technologies, and the industry sells 10B DRAM chips/year; EverSpin and CrossBar (together?) are doing more like 10M chips/year. The cost to shrink to the latest technology is ~$100M, just to generate the masks required. So for these vendors, volumes have to go up drastically before capacity can increase significantly.
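To put the volume argument in perspective (my own arithmetic, using the figures above), consider amortizing a ~$100M mask set over a year's production at each vendor's scale:

```python
def mask_cost_per_chip(mask_set_usd: float, chips_per_year: int) -> float:
    """Amortize a one-time mask-set cost over one year of chip volume."""
    return mask_set_usd / chips_per_year

# EverSpin/CrossBar scale: ~10M chips/year -> $10 of mask cost per chip
print(mask_cost_per_chip(100e6, 10_000_000))
# DRAM scale: ~10B chips/year -> about a penny per chip
print(mask_cost_per_chip(100e6, 10_000_000_000))
```

At $10/chip of mask overhead on Mbit-class parts, the shrink simply doesn't pay for itself until volumes climb by orders of magnitude.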

Also, at the show Toshiba mentioned they’re focusing on ReRAM for their SCM.

As Jim recounted, the whole SCM push has been driven by Intel and their need to keep improving the performance of memory and storage, otherwise they felt their processor sales would stall.

3D NAND is here

Just about every NAND manufacturer talked about their 3D NAND chips, ranging from 32 to 64 layers. From Jim's perspective, 3D NAND was inevitable, as it was the only way to continue scaling density and reducing bit costs for NAND.

Samsung was first to market with 3D NAND as a way to show technological leadership. But now everyone's got it, and future discussions will focus on bit density and number of layers. What their yields are is another question. But planar NAND's days are over.

Toshiba’s FlashMatrix

Toshiba's keynote discussed a new flash storage system called the FlashMatrix, but at press time they had yet to share their slides with the FMS team, so information on FlashMatrix was sketchy at best.

However, they had one on the floor, and it looked like a bunch of M.2 flash across an NVMe (over Ethernet?) mesh backplane with compute engines connected at the edge.

We had a hard time understanding why Toshiba would do this. Our best guess is perhaps they want to provide OEMs an alternative to SanDisk’s Infiniflash.

The podcast runs over 50 minutes and covers flash technology on display at the show and the history of SCM. I think Howard and Ray could easily spend a day with Jim and not exhaust his knowledge of flash, and we haven't really touched on DRAM. Listen to the podcast to learn more.

Jim Handy, Memory and Flash analyst at Objective Analysis.

Jim Handy of Objective Analysis has over 35 years in the electronics industry including 20 years as a leading semiconductor and SSD industry analyst. Early in his career he held marketing and design positions at leading semiconductor suppliers including Intel, National Semiconductor, and Infineon.

A frequent presenter at trade shows, Mr. Handy is known for his technical depth, accurate forecasts, widespread industry presence and volume of publication. He has written hundreds of market reports, articles for trade journals, and white papers, and is frequently interviewed and quoted in the electronics trade press and other media.  He posts blogs at www.TheMemoryGuy.com, and www.TheSSDguy.com.