Five Days of Recreate Database

I’m a long-time Duplicati user, and donate as well. Recently, I started getting the dreaded “System.Exception: Detected non-empty blocksets with no associated blocks!” error and no backups would run. I’m on 2.0.5.0_experimental_2020-01-03, Windows 10.

Researching here and elsewhere led me to the conclusion that Delete and Recreate was my only recourse (repair didn’t work). I didn’t want to clone the backup into a new one, because I sometimes need to go back to old files and restore them; I didn’t want to lose that history and availability.

The recreate has been running non-stop for over five days, and it’s hard to get any work done. I have Windows updates that I can’t apply because I’m afraid to pause the backup, install them, reboot, and possibly lose all of this time. Below is my current log message. Is there any way to speed this up? Is it safe to pause, reboot, and resume? My internet connection is solid, and not much else is happening on the home network.

Mar 22, 2020 11:33 AM: Pass 3 of 3, processing blocklist volume 342 of 4133
Mar 22, 2020 10:45 AM: Pass 3 of 3, processing blocklist volume 341 of 4133

That’s not a good sign. It seems Duplicati thinks it needs to download many (or all?) dblocks in order to recreate the database. How many dblock files are on the back end? Is it 4133, or is that a subset of the total? We have been optimistic that recent versions of Duplicati would not require all dblocks to recreate the database.

48 minutes to process a single dblock, with almost 3800 still to go, adds up to a crazy amount of time (roughly four months at that pace; see the estimate below). And no, there is no way that I know of to pause and resume this process to allow for a reboot.
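
For a sense of scale, here’s a quick back-of-envelope estimate, assuming the ~48-minute pace from your log holds for the remaining volumes:

```python
# Back-of-envelope estimate of the remaining recreate time, assuming the
# ~48 minutes per blocklist volume from the posted log stays constant.
minutes_per_volume = 48
volumes_total = 4133
volumes_done = 342

remaining = volumes_total - volumes_done
total_minutes = remaining * minutes_per_volume
print(f"{remaining} volumes left = ~{total_minutes / 60:.0f} hours = ~{total_minutes / (60 * 24):.0f} days")
# -> 3791 volumes left = ~3033 hours = ~126 days
```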

How do I duplicate my existing (broken) backup into a new, fresh one? Export it, then import it? It sucks that I might lose all my history. I just upgraded to 2.0.5.1, so I’m looking for next steps. Should I retry the Delete and Recreate on the old one, just to see if it works better in 2.0.5.1?

That won’t get you anywhere, because Duplicati would still have to recreate a local database before the new job could run backups.

There is no functional difference between 2.0.5.0 and 2.0.5.1, so I doubt the upgrade by itself will make a difference. But if you upgraded, that presumably means you canceled the recreate operation that was in progress. You have no local database right now, correct? I would go ahead and try the recreate option again.

How many dblock files do you have on your back end storage?

I’m currently running a second backup job that I had previously set up in Duplicati. This one backs up to a large local USB drive. The primary backup, the one that doesn’t work anymore, was backing up to my Google Drive. The USB one seems to be working; I was under the impression that only the Google Drive backup was broken. Do they share a local database, or is there one for each backup job configured within Duplicati? I don’t know how to check the number of dblock files on the backend storage, other than going into Google Drive and getting a file count for that folder. Thanks for your help!

Each backup job has its own local database - they are not shared.

I’m not a Google Drive user, so I’m not sure how easy it is to see the files manually; maybe they are hidden from the normal interface. I was just curious whether it’s 4133 dblock files or more, not that it matters a whole lot. Recreating the database by downloading and processing 4133 dblocks is a no-go: it would take months. Database recreation is not supposed to require dblocks at all.

OK, so I upgraded to 2.0.5.1 and tried the Delete and Recreate again. Same result: Duplicati thinks it needs to download over 4000 dblocks. Are my years of backups on that Google Drive gone for good?

The backup data is there (at least 4133 dblocks of it, anyway), but we need to get your database recreated. I’m not sure what would cause a modern version of Duplicati to download so many dblocks. Perhaps your dlist or dindex files got damaged or deleted?

Have you found a way to view the Duplicati files in Google Drive?
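
If the web UI won’t cooperate, here is a rough sketch of counting the file types with the Google Drive API v3. It assumes you already have OAuth credentials saved (the `token.json` path and `FOLDER_ID` are placeholders you’d fill in yourself), so treat it as an outline rather than a drop-in script:

```python
# Tally Duplicati dblock/dindex/dlist files in a Google Drive folder.
# Assumes the google-api-python-client package and a valid token.json from a
# prior OAuth flow; FOLDER_ID is the ID of the Duplicati backup folder.
from collections import Counter
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

FOLDER_ID = "your-folder-id-here"  # placeholder
creds = Credentials.from_authorized_user_file(
    "token.json", ["https://www.googleapis.com/auth/drive.readonly"])
drive = build("drive", "v3", credentials=creds)

counts = Counter()
page_token = None
while True:
    resp = drive.files().list(
        q=f"'{FOLDER_ID}' in parents and trashed = false",
        fields="nextPageToken, files(name)",
        pageSize=1000,
        pageToken=page_token,
    ).execute()
    for f in resp.get("files", []):
        name = f["name"]
        for kind in ("dblock", "dindex", "dlist"):
            if f".{kind}." in name:
                counts[kind] += 1
                break
        else:
            counts["other"] += 1
    page_token = resp.get("nextPageToken")
    if not page_token:
        break

print(dict(counts))  # e.g. {'dblock': ..., 'dindex': ..., 'dlist': ...}
```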

Thanks for your help. Google Drive is notorious for this ridiculous flaw (it’s hard to count files in a folder, especially when there are a lot of them). I tried some tricks and workarounds; one of them shows 8688 files in the directory used for the Duplicati backup, and I’m pretty sure that’s right. That’s dblock.zip.aes and dindex.zip.aes files combined. They date back to August 16, 2016. My backups run automatically at night, so that would be most nights. What a mess. Thanks again for any help.

Was it hard to get work done because Duplicati slowed the system? It’s certainly quite slow itself.
Any idea via Task Manager (or otherwise) what system resources were under stress at the time?
Whatever’s being slow here might also affect Direct restore from backup files if it’s ever required.

Duplicati.CommandLine.RecoveryTool.exe

This tool can be used in very specific situations, where you have to restore data from a corrupted backup. The procedure for recovering from this scenario is covered in Disaster Recovery.

Restoring files using the Recovery Tool

So you can probably still get files out that way, if it’s worth tying up the local storage for a full download, but it’d be nicer to work normally, which basically means getting your database back together somehow. Slowness makes this worse.
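
For reference, as I recall the Disaster Recovery sequence boils down to download, index, then list/restore. Below is a minimal sketch of that sequence driven from Python; the tool path, backend URL, working folder, and passphrase are placeholders, and the exact RecoveryTool arguments should be double-checked against the Disaster Recovery article before running anything, since the download step pulls every dblock to local disk:

```python
# Sketch of the RecoveryTool sequence from the Disaster Recovery article.
# Paths, the backend URL, and the passphrase below are placeholders; verify
# the exact subcommands/options against the manual before running.
import subprocess

TOOL = r"C:\Program Files\Duplicati 2\Duplicati.CommandLine.RecoveryTool.exe"
REMOTE = "googledrive://Duplicati?authid=PLACEHOLDER"  # placeholder backend URL
WORKDIR = r"D:\duplicati-recovery"                     # needs space for all dblocks
PASSPHRASE = "PLACEHOLDER"                             # only if the backup is encrypted

def run(*args):
    """Run one RecoveryTool step and stop on failure."""
    subprocess.run([TOOL, *args], check=True)

run("download", REMOTE, WORKDIR, f"--passphrase={PASSPHRASE}")  # fetch and decrypt volumes
run("index", WORKDIR)                                           # build a local index of blocks
run("list", WORKDIR)                                            # show restorable versions
# run("restore", WORKDIR, ...)  # restore options are described in the manual
```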

Do you have a copy of the database before Recreate, or any other information on backup history?
Sometimes one may try to back out the latest changes to the remote, in the hope that an earlier state is still OK.
The error you got is a database error, but the unknown is what the recreate is looking for (and trying so hard to find).

If you have nothing else, Google Drive itself can let you make guesses, if your usage is predictable.

Here’s a generally predictable backup, viewed at drive.google.com:

On the right of that is a helpful Activity display:

Seeing no deletes is a good sign, because deletes might indicate a compact operation, which would complicate things.
Regardless, the theory (I think) is that dlist files are your backup file lists, and they say which blocks to use.
Blocks live in dblock files and are indexed by dindex files one-for-one. If somehow a block is missing, extensive searching occurs. That can take a while even when each download is fast, and the rate you posted was far from fast…

So, if you copied the DB before the Recreate, there’s a chance it can be fixed, or at least that it can shed some light.
Otherwise, there’s a chance that hiding the most recent dlist (add a prefix to its name) may help Recreate; see the sketch below for picking out the newest one. However, if it gets to Pass 3 of 3 again, you probably don’t want to wait until the end if it’s still this slow.
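
If you try that, the UTC timestamp baked into each dlist name tells you which one is newest. Here’s a quick sketch (the file names shown are made-up examples; swap in your real listing, and double-check the naming pattern against what’s actually in the folder):

```python
# Pick out the newest dlist from a pasted list of remote file names, using the
# UTC timestamp Duplicati embeds in dlist names, e.g.
# duplicati-20200322T023000Z.dlist.zip.aes. The names below are examples only.
import re
from datetime import datetime

names = [
    "duplicati-20200320T023000Z.dlist.zip.aes",
    "duplicati-20200321T023000Z.dlist.zip.aes",
    "duplicati-20200322T023000Z.dlist.zip.aes",
]

def dlist_time(name):
    m = re.search(r"duplicati-(\d{8}T\d{6}Z)\.dlist", name)
    return datetime.strptime(m.group(1), "%Y%m%dT%H%M%SZ") if m else None

stamped = [(dlist_time(n), n) for n in names if dlist_time(n)]
newest = max(stamped)[1]
print("Newest dlist (candidate for a rename before retrying Recreate):", newest)
```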

There are some errors, though, that actually benefit from searching all the way through. Did you ever run Canary? Canary had dindex and other problems for a while. Beta is safest for important backups (but is not perfect). Experimental has been pretty good recently, because it’s been effectively a pre-Beta (largely to test upgrading).