In this episode, we talk with a new vendor that has created a new method to back up database information. Our guest for this podcast is Tarun Thakur, CEO of Datos IO. Datos IO was started in 2014 with the express purpose of providing a better way to back up and recover databases in the cloud. They started with NoSQL, cloud-based databases such as MongoDB and Cassandra.
The problem with backing up NoSQL databases, and any SQL database for that matter, is that they are big files that always have some changes in them. So, to most typical backup systems, databases are always flagged as changed files that need to be backed up. Each incremental backup thus copies the whole database file, even if only a single row has changed. All this results in a tremendous waste of storage.
Deduplication can help, but there are problems deduplicating databases. Many databases store their data in compressed form, and deduplication based on fixed-length blocks doesn't work well for variable-length, compressed data (see my RayOnStorage Poor deduplication … post).
Also, variable-length deduplication algorithms usually rely on known start-of-record triggers to determine where a chunk of data begins. Some databases don't use these start-of-row or start-of-table indicators, which throws off variable-length deduplication algorithms.
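To see why boundaries matter, here's a minimal, hypothetical sketch (not any vendor's actual algorithm) comparing fixed-length chunking with content-defined chunking, where cut points are derived from the data itself. A one-byte insert shifts every fixed-length block boundary, while content-defined boundaries realign:

```python
import hashlib
import random

def fixed_chunks(data, size=64):
    # Fixed-length blocks: a one-byte insert shifts every later
    # boundary, so none of the downstream blocks match anymore.
    return [data[i:i + size] for i in range(0, len(data), size)]

def content_chunks(data, window=8, mask=0x3F):
    # Content-defined chunking (toy version): cut wherever a hash of
    # the last `window` bytes hits a boundary condition. Because each
    # cut depends only on local content, boundaries realign after an
    # insert. Real systems use a rolling hash (e.g. Rabin fingerprints).
    chunks, start = [], 0
    for i in range(window, len(data) + 1):
        h = int.from_bytes(hashlib.sha256(data[i - window:i]).digest()[:4], "big")
        if (h & mask) == mask:
            chunks.append(data[start:i])
            start = i
    if start < len(data):
        chunks.append(data[start:])
    return chunks

def hashes(chunks):
    return {hashlib.sha256(c).hexdigest() for c in chunks}

random.seed(42)
original = bytes(random.randrange(256) for _ in range(4096))
shifted = b"X" + original        # one-byte insert at the front

fixed_hit = len(hashes(fixed_chunks(original)) & hashes(fixed_chunks(shifted)))
cdc_hit = len(hashes(content_chunks(original)) & hashes(content_chunks(shifted)))
# fixed-length matches essentially nothing; content-defined matches most chunks
print(fixed_hit, cdc_hit)
```

The same locality argument explains why databases without start-of-record markers defeat variable-length dedup: with no content trigger to anchor on, the chunker has nothing to realign against.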
So, with traditional backup systems, most databases don't deduplicate very well and are backed up in full all the time, resulting in lots of wasted storage space.
How’s Datos IO different?
Datos IO identifies and backs up only changed data, not changed (database) files. Their Datos IO RecoverX product extracts rows from a database, identifies whether each row's data has changed, and then backs up just the changed data.
As more customers create applications for the cloud, backups become a critical component of cloud operations. Most cloud-based applications are developed from the start using NoSQL databases.
Traditional backup packages don't work well with NoSQL cloud databases, if they work at all. And data center customers are reluctant to move their expensive, enterprise backup packages to the cloud, even if they could operate effectively there.
Datos IO saw backing up NoSQL MongoDB and Cassandra databases in the cloud as a major new opportunity, if it could be done properly.
How does Datos IO backup changed data?
Essentially, RecoverX takes a point-in-time snapshot of the database and then reads each table, row by row, comparing (hashes of) each row's data with the row data it previously backed up; if a row has changed, its new data is added to the current backup. This provides a semantic deduplication of database data.
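As an illustration only (this is a sketch, not Datos IO's code), row-level change detection might look something like this: hash each row's serialized data and keep only rows whose hash differs from the hash recorded by the previous backup.

```python
import hashlib
import json

def backup_changed_rows(rows, prev_hashes):
    # `rows` maps a primary key to that row's column data;
    # `prev_hashes` maps the same keys to hashes recorded by the
    # previous backup run (empty on the first backup).
    changed, new_hashes = {}, {}
    for key, row in rows.items():
        # Serialize deterministically so the same row always hashes the same.
        digest = hashlib.sha256(
            json.dumps(row, sort_keys=True).encode()
        ).hexdigest()
        new_hashes[key] = digest
        if prev_hashes.get(key) != digest:
            changed[key] = row          # new or modified row
    return changed, new_hashes

# First backup: everything counts as changed.
rows_v1 = {1: {"name": "ann", "qty": 3}, 2: {"name": "bob", "qty": 7}}
changed, hashes_v1 = backup_changed_rows(rows_v1, {})
print(sorted(changed))    # [1, 2]

# Second backup: only row 2 was modified, so only row 2 is backed up.
rows_v2 = {1: {"name": "ann", "qty": 3}, 2: {"name": "bob", "qty": 8}}
changed, hashes_v2 = backup_changed_rows(rows_v2, hashes_v1)
print(sorted(changed))    # [2]
```

Because the comparison is on row content rather than file blocks, the incremental backup scales with how much data actually changed, not with the size of the database file.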
Furthermore, because RecoverX looks at the data rather than at files, compressed data works just as well as uncompressed. Datos IO uses standardized database APIs to extract the row data, so they remain compatible with each release of the database software.
RecoverX backups reside in S3 objects on the public cloud.
New in RecoverX Version 2
Many customers liked this approach so much they wanted RecoverX to do the same for regular SQL databases. Major customers are not just developing new applications for the cloud; they also want to do enterprise application development, test, and QA in the cloud, and these applications almost always use SQL databases.
So, Datos IO RecoverX Version 2 now supports migration and cloud backups for standardized SQL databases. They are starting with MySQL, with plans to support other SQL databases used by the enterprise. Migration works by backing up the data center MySQL databases to the cloud and then recovering them in the cloud.
They have also added backup and recovery support for Apache Hadoop HDFS from Cloudera and Hortonworks. Another change: Datos IO originally offered only a 3-node solution, but Version 2 will support up to a 5-node cluster.
They have also added more backup management and policy support. Admins can now change backup policies to add or subtract database/table backups on the fly, even while backups are taking place.
The podcast runs ~30 minutes. Tarun has been in the storage industry for a number of years, from microcoding storage control logic to managing major industry development organizations. He has an in-depth knowledge of storage and backup systems that's hard to come by, and he was a pleasure to talk with. Listen to the podcast to learn more.
Podcast: Play in new window | Download (Duration: 30:52 — 14.2MB) | Embed
Subscribe: Apple Podcasts | Google Podcasts | Spotify | Stitcher | Email | RSS
Tarun Thakur, CEO, Datos IO
Tarun Thakur is co-founder and CEO of Datos IO, where he leads overall business strategy, day-to-day execution, and product efforts. Prior to founding Datos IO, he held senior product and technical roles at several leading technology companies.
Most recently, he worked at Data Domain (EMC), where he led and delivered multiple product releases and new products. Prior to EMC, Thakur was at Veritas (Symantec), where he was responsible for inbound and outbound product management for their clustered NAS appliance products.
Prior to that, he worked at the IBM Almaden Research Center where he focused on distributed systems technology. Thakur started his career at Seagate, where he developed advanced storage architecture products.
Thakur has more than 10 patents granted from USPTO and holds an MBA from Duke University.