132: GreyBeards talk fast embedded k-v stores with Speedb’s Co-Founder & CEO Adi Gelvan

We’ve been talking a lot about K8s of late, so we thought it was time to get back down to earth and spend some time with Adi Gelvan (@speedb_io), Co-founder and CEO of Speedb, an embedded key-value store that is a drop-in replacement for RocksDB and significantly improves on its IO performance for large metadata databases.

At Adi’s last job, his team was searching for a key-value store or database to manage the substantial metadata they needed. After looking at RocksDB, they found it had a number of performance problems, especially as the amount of metadata grew. Speedb was specifically designed to address the problems they found. Listen to the podcast to learn more.

RocksDB is a key-value store engine that manages the metadata for just about every open source project in existence that uses metadata. RocksDB is Facebook’s open source fork of Google’s LevelDB database.
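To make “embedded” concrete, here’s a minimal sketch of what using RocksDB from an application looks like; the engine runs inside the application process rather than as a separate database server. The database path and keys below are our own illustration, not anything from the episode.

```cpp
// Minimal RocksDB usage sketch: the engine is linked into the application
// process; there is no separate database server. Path and keys are
// illustrative only.
#include <cassert>
#include <string>
#include "rocksdb/db.h"

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;

  rocksdb::DB* db = nullptr;
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/metadata_db", &db);
  assert(s.ok());

  // Store and retrieve a piece of metadata as a key-value pair.
  s = db->Put(rocksdb::WriteOptions(), "volume:42:block:7", "physical-location-xyz");
  assert(s.ok());

  std::string value;
  s = db->Get(rocksdb::ReadOptions(), "volume:42:block:7", &value);
  assert(s.ok() && value == "physical-location-xyz");

  delete db;
  return 0;
}
```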

The main issue with RocksDB is that when you have a lot of metadata (key-value pairs), its performance suffers from highly variable latency and write stalls.

Most RocksDB users are aware of these problems and turn to sharding the database to address them (essentially shrinking the amount of metadata under management within any single node/instance).
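As a rough illustration of what that sharding means in practice, an application might hash keys across several independent RocksDB instances so each one holds a smaller slice of the metadata. The hashing scheme below is our own sketch, not something RocksDB or Speedb prescribes.

```cpp
// Illustrative sharding sketch: spread keys across several independent
// RocksDB instances so each one manages less metadata.
#include <functional>
#include <string>
#include <vector>
#include "rocksdb/db.h"

class ShardedStore {
 public:
  explicit ShardedStore(const std::vector<rocksdb::DB*>& shards) : shards_(shards) {}

  rocksdb::Status Put(const std::string& key, const std::string& value) {
    return Pick(key)->Put(rocksdb::WriteOptions(), key, value);
  }

  rocksdb::Status Get(const std::string& key, std::string* value) {
    return Pick(key)->Get(rocksdb::ReadOptions(), key, value);
  }

 private:
  // Route each key to one shard by hashing it.
  rocksdb::DB* Pick(const std::string& key) {
    return shards_[std::hash<std::string>{}(key) % shards_.size()];
  }
  std::vector<rocksdb::DB*> shards_;
};
```

Of course, the application then owns the work of routing requests, rebalancing shards and scanning across them, which is exactly the burden on customers that comes up again later in the discussion.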

Historically, key-value stores used B+-trees to store data. B+-trees are great for reading but bad for writing. Namely, a B+-tree usually has to be rebalanced when entries are added and potentially when they are updated. This can cause a cascade of read-write IO throughout the tree, delaying the original IO.

Log Structured Merge trees (LSM-trees) were created to reduce these write problems while at the same time providing B+-tree speed for reading. Essentially, an LSM-tree is an in-memory sequence of (sometimes sorted) key-value pairs that can be written (destaged) to multiple sequential, sorted string table (SST) files on some backing store. A hierarchical index is maintained in memory to identify which SSTs hold which key-value data.
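A toy sketch of the idea (greatly simplified compared to any real LSM-tree implementation): writes land in a sorted in-memory buffer, and when the buffer fills it is destaged as one sequential, sorted SST-like file.

```cpp
// Toy LSM-tree sketch (not RocksDB code): a sorted in-memory memtable is
// flushed to an append-only, sorted "SST" file when it fills up.
#include <fstream>
#include <map>
#include <string>

class ToyLsm {
 public:
  void Put(const std::string& key, const std::string& value) {
    memtable_[key] = value;  // std::map keeps keys sorted in memory
    if (memtable_.size() >= kFlushThreshold) Flush();
  }

 private:
  void Flush() {
    // Destage the memtable as one sequential, sorted file (an "SST").
    std::ofstream sst("sst_" + std::to_string(next_sst_++) + ".txt");
    for (const auto& [key, value] : memtable_) sst << key << '\t' << value << '\n';
    memtable_.clear();
    // A real engine would also record this file in an in-memory index
    // (key ranges per SST) so reads know which files to consult.
  }

  static constexpr size_t kFlushThreshold = 4096;
  std::map<std::string, std::string> memtable_;
  int next_sst_ = 0;
};
```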

RocksDB uses LSM-tree, in-memory data structures to buffer writes. When memory becomes full, the LSM-tree can be destaged to backing store as one or more SST files. However, SSTs, when first written, aren’t necessarily in sorted key order, and they may contain key-value entries that duplicate what’s already in other SSTs.
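The knobs that control this buffering and destaging are exposed as RocksDB options. The option names below are RocksDB’s; the values are arbitrary examples, not recommendations from the episode.

```cpp
// RocksDB options governing write buffering and destaging to L0 SSTs.
// Values are illustrative only.
#include "rocksdb/options.h"

rocksdb::Options MakeWriteBufferOptions() {
  rocksdb::Options options;
  options.write_buffer_size = 64 << 20;          // 64MB memtable before it is flushed
  options.max_write_buffer_number = 4;           // memtables kept in memory at once
  options.min_write_buffer_number_to_merge = 2;  // merge memtables before flushing to L0
  return options;
}
```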

So earlier versions of SSTs need to be read back in, compacted (duplicate key-value entries deleted), sorted and written back out. The earliest version of the SSTs is considered Level 0 (L0), the next (1st level compacted and sorted) is considered L1, and this process can go on, generating L2 to Ln SSTs. We would call this garbage collection; the metadata world calls it compaction.
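At its core, a compaction step is just a merge of sorted runs that keeps only the newest version of each duplicated key. A toy sketch (again, not RocksDB’s actual implementation):

```cpp
// Toy compaction sketch: merge two sorted runs, with the newer run winning
// on duplicate keys, producing one sorted, de-duplicated output run.
#include <map>
#include <string>

using Run = std::map<std::string, std::string>;  // sorted key -> value

Run Compact(const Run& older, const Run& newer) {
  Run out = older;
  for (const auto& [key, value] : newer) out[key] = value;  // newer overwrites older
  return out;
}
```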

But each time an SST is written out, that’s another read of all its key-value pairs AND another write to storage. In SSDs we would call these repeated writes write amplification. It turns out that RocksDB can have up to a 30X write amplification for a key-value entry. This means that instead of being written just once or twice, it’s written (and reread) up to 30 times. This IO takes away bandwidth and processing power from normal metadata read and write activity, which impacts IO performance.
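Write amplification here is simply the bytes physically written to storage divided by the bytes the application asked to write. A quick back-of-the-envelope sketch of what 30X means (all numbers illustrative):

```cpp
// Rough arithmetic behind a 30x write amplification: every byte the
// application writes costs ~30 bytes of physical writes (plus the rereads
// feeding each compaction pass), so the application only sees roughly
// 1/30th of the device's write bandwidth. Numbers are illustrative.
#include <cstdio>

int main() {
  const double write_amp = 30.0;
  const double app_gb_written = 1.0;
  const double device_write_bw_mbps = 2000.0;  // hypothetical SSD write bandwidth

  std::printf("%.0f GB of application writes -> ~%.0f GB written to storage\n",
              app_gb_written, app_gb_written * write_amp);
  std::printf("usable application write bandwidth: ~%.0f MB/s of %.0f MB/s\n",
              device_write_bw_mbps / write_amp, device_write_bw_mbps);
  return 0;
}
```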

As GreyBeards know, storage (and flash) garbage collection can lead to unpredictable latencies and system busy times. Intense garbage collection (for SSDs) can seemingly hold off or stall all other IO for some amount of time during this activity. This is the main reason why RocksDB has highly variable latencies and write stalls.

Garbage collection is not an issue when you have a limited number of metadata entries (key-value pairs), but as you get more entries, ongoing garbage collection can become a serious impediment to performing IO. When we say “large metadata stores” we are talking 30GB of metadata, with potentially billions of key-value pair entries.

There appear to be two dimensions to (RocksDB) LSM-tree/SST file performance tuning. One is the number of levels allowed and the other is the size of the SST files.
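In RocksDB, these two dimensions show up as tuning options like the ones below (the option names are RocksDB’s, the values are arbitrary examples, not recommendations):

```cpp
// The two classic LSM tuning dimensions expressed as RocksDB options.
// Values here are arbitrary examples.
#include "rocksdb/options.h"

rocksdb::Options MakeLevelingOptions() {
  rocksdb::Options options;
  options.num_levels = 7;                        // how many levels the tree may have
  options.target_file_size_base = 64 << 20;      // target SST file size at L1 (64MB)
  options.target_file_size_multiplier = 1;       // how SST file size grows per level
  options.max_bytes_for_level_base = 256 << 20;  // total data allowed at L1
  return options;
}
```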

Speedb determined that two dimensions weren’t sufficient to solve RocksDB performance problems. And sharding the database seemed to be putting the burden on the customer to fix the issue. So Speedb restructured their LSM-trees and SSTs to create 3 or more dimensions to tune for database performance.

With Speedb’s restructured LSM-tree and SST files, they reduce write amplification for large metadata databases from 30X to 5X. That alone could easily increase system performance by a factor of 6.

Adi mentioned that for one cloud-based customer, they were able to double performance with 1/4 the (cloud instance) server hardware, essentially providing an ~8X improvement in performance over RocksDB.

Adi also mentioned that they are targeting system developers with large metadata stores. Luckily, Speedb is a fully RocksDB-compatible replacement. This means it should only take developers ~30 minutes to convert a system to use Speedb.
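We haven’t verified the exact conversion steps, but “drop-in” generally means the application code and its rocksdb:: API calls stay the same and only the library linked at build time changes; a hypothetical sketch (the -lspeedb library name is our assumption, not taken from Speedb’s docs):

```cpp
// Hypothetical illustration of a "drop-in" swap: the application code and
// rocksdb:: API calls stay exactly the same; only the library linked at
// build time changes (for example -lrocksdb -> -lspeedb; the library name
// is an assumption, check Speedb's docs for the actual target).
#include <cassert>
#include "rocksdb/db.h"

void WriteOne(rocksdb::DB* db) {
  // This call compiles and runs unchanged whether the symbol behind it
  // comes from RocksDB or from an API-compatible replacement.
  rocksdb::Status s = db->Put(rocksdb::WriteOptions(), "key", "value");
  assert(s.ok());
}
```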

We also asked about pricing. Adi said there are two current pricing models: 1) OEMs pay a revenue share to use Speedb and 2) non-OEMs can license the product on a per-node, per-month basis. Given Speedb’s node efficiency over RocksDB, fewer nodes should be required to support the same performance for any given metadata store.

Adi also mentioned they are in the process of releasing an open source version of Speedb that incorporates some of the enterprise product. This way developers can try Speedb for free to see how it works. It won’t be the complete product, but it’s better than native RocksDB.

Adi Gelvan, Co-Founder and CEO Speedb

Adi Gelvan is co-founder and CEO of Speedb, a data management startup that provides a drop-in replacement for the RocksDB embedded storage engine.

Adi is a former IT infrastructure manager with over two decades of experience in management, commercialization and executive sales positions. Adi specializes in leading global software technology companies like Infinidat and SQream to outstanding growth.

Adi holds a double academic degree in mathematics & computer science.