WIP Rust/Native Code disaster recovery tool


I recently had a disaster recovery scenario after a particularly nasty power cut took out my NAS, corrupted the databases on my backups, and, due to a PEBKAC error I’m going to attribute to lack of sleep, damaged the filesystem.

Of course I also happened to run into the glitch with database rebuilds taking inordinately long to run.

I am personally not a fan of relying on .NET for disaster recovery, so I initially used the Python script from the Git repository to do the restore. This worked great for my smaller jobs, but for the one involving the majority of the files on the NAS, it was taking WAY too long to run.

Seeing an opportunity for improvement, I took it upon myself to follow the great Rustacean tradition and rewrite it in Rust. A dash of rayon later and it’s now able to throw every core the machine has at the problem, running quite a bit faster than the Python script.

You can find the project on my github here:

It’s currently very limited: full of .unwrap() instead of proper error handling, no AES support inside the application yet, next to nothing for UI, and substantial room for improvement. But I do plan on continuing to improve it, so feedback, suggestions, and contributions are welcome and appreciated.
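To illustrate the .unwrap() situation for non-Rust folks (hypothetical function names, not the tool's actual code): the current style panics on any failure, while the planned fix is just to return a `Result` and let the caller decide.

```rust
use std::fs;
use std::io;

// Current style: any IO error crashes the whole program with a panic.
fn read_manifest_unwrap(path: &str) -> String {
    fs::read_to_string(path).unwrap()
}

// Planned style: propagate the error to the caller instead of panicking,
// so a single unreadable file doesn't abort an otherwise working restore.
fn read_manifest(path: &str) -> io::Result<String> {
    fs::read_to_string(path)
}
```

For a recovery tool this matters more than usual, since the whole point is operating on data that may be partially damaged.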


Just so I understand, does this allow you to restore your protected files directly from the back end data when you lost your local Duplicati sqlite database? Or does this provide a different method of rebuilding the local sqlite database?


It doesn’t rely on the Duplicati database, and doesn’t actually use SQLite at all after the update I just finished.

It directly indexes the .dblock/data files each time it runs, which takes a bit: it took about an hour for a 500GB backup with 150MB volumes and 400 versions on my server. (Indexing seems to be slower with larger volume sizes; it’s IO-limited and appears to be doing a lot of random-ish seeking.)

My next step is going to be attempting to use the dindex files to speed things up, if they’re intact, but I don’t see this tool directly interacting with the Duplicati database anytime soon.