Is Duplicati 2 ready for production?

--dblock-size="1GB" --blocksize="64MB"
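
Just to illustrate where those options sit, here's roughly the kind of command line I mean. This is a sketch only: the storage URL, source path and passphrase are placeholders, not my actual setup.

```
:: Sketch -- URL, source path and passphrase are placeholders
Duplicati.CommandLine.exe backup ^
  "ftp://backup.example.com/duplicati?auth-username=user&auth-password=***" ^
  "D:\Data" ^
  --dblock-size="1GB" --blocksize="64MB" ^
  --passphrase="***"
```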

The ridiculously / insanely slow restore usually starts with “Registering missing blocks”… And at that point you can throw any wishful thinking about RTO out of the window, because it can take days or weeks, as mentioned here earlier.

Related to the previous statement: it starts with registering missing blocks… And if you’re able to wait through the code smell and the insanely slow single-threaded code (which, btw, is NOT limited by I/O, it’s just burning CPU cycles), then you’ll end up with exit code 2, if you’re lucky. But code 100 is also a likely result, and restore failure after that.
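
For anyone scripting around this, here's roughly what I mean by the codes. As I read the docs, 2 is “completed, but with warnings” and 100 is “an error occurred”; double-check against your own version. Paths and URL below are placeholders.

```
:: Sketch -- storage URL and restore path are placeholders
:: Exit code meanings per my reading of the docs: 0 = success, 2 = warnings, 100 = error
Duplicati.CommandLine.exe restore "ftp://backup.example.com/duplicati" "*" --restore-path="D:\RestoreTest"
if %ERRORLEVEL% EQU 0   echo Restore OK
if %ERRORLEVEL% EQU 2   echo Restore completed, but with warnings
if %ERRORLEVEL% GEQ 100 echo Restore failed
```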

I’m not over-warning here, when it’s down to random chance whether a backup restore will ever complete successfully.

But let’s hope the newer version is less likely to constantly corrupt data. We’ll see. As mentioned, today I’m testing all the backups and confirming that the backup sets are now all good. Then I’ll just keep monitoring the failure rate, watching for the classic Duplicati silent backup set corruption problem I’ve been so frustrated with. Hopefully the latest canary version handles the situation better, also with compact. At least I’m hopeful, because now it seems to at least sometimes start recovery correctly after an abort, as well as perform some recovery steps that were missing earlier.
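
For the verification round, I’m basically running something along these lines per backup set (sketch only, URL and credentials are placeholders; “all” tests every volume and full remote verification downloads and checks everything, so it’s slow):

```
:: Sketch -- storage URL and credentials are placeholders
Duplicati.CommandLine.exe test ^
  "ftp://backup.example.com/duplicati?auth-username=user&auth-password=***" ^
  all ^
  --full-remote-verification=true
```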

Ref: FTP delete not atomic / verified / transactional (?)

I’ve also updated absolutely everything related: Duplicati, .NET, Windows, the back-end server software, and so on.