“Recreating database” problem

I had a database issue on one of my large backups. I started a repair, and the “Recreating database” phase is still running after 2 days. The green progress bar got to what might be the end in less than a day. It’s a little unclear where the end really is; the bar has bumped into the circled X, with a little white space following it, but if it’s still increasing, it’s not visibly obvious.

It’s still making dup-xxxxx files in /tmp. Two of them are persistent but growing, currently at 18 MB; they list an increasing number of backend GET events of 150 MB each (started and completed). I can see each 150 MB transfer appear as a dup- temp file and disappear quickly. There are about 50 thousand transfers at this point in each of these logs.
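(A minimal sketch of one way to watch that temp-file churn; the polling interval and path pattern are illustrative:)

```python
# Poll /tmp for Duplicati's dup-* temp files and print their sizes, to
# see whether downloads are still coming and going. Stop with Ctrl-C.
import glob, os, time

while True:
    for path in sorted(glob.glob("/tmp/dup-*")):
        try:
            size_mb = os.path.getsize(path) / (1024 * 1024)
            print(time.strftime("%H:%M:%S"), path, f"{size_mb:.1f} MB")
        except FileNotFoundError:
            pass  # the file vanished between listing and stat
    time.sleep(5)
```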

So will this ever end?

The short answer is “yes”.

But if it takes forever, then it does not help.

The recreate should download the .dlist and .dindex files and rebuild the database from just those.

However, in some cases information is missing from the .dindex files, and Duplicati starts hunting for the missing information in the .dblock files that contain the actual data. Those files are of course much larger, so the recreate will take significantly longer if that is what is happening.
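As a rough illustration (this is not Duplicati’s actual code; every name and remote filename below is hypothetical), the logic amounts to something like:

```python
# Illustrative sketch only -- NOT Duplicati's real implementation; the
# "backend" here is a stand-in so the shape of the logic is runnable.
from fnmatch import fnmatch

REMOTE_FILES = [  # hypothetical remote listing
    "duplicati-20250321T000000Z.dlist.zip",
    "duplicati-i1234.dindex.zip",
    "duplicati-b5678.dblock.zip",
]

def remote(pattern):
    return [f for f in REMOTE_FILES if fnmatch(f, pattern)]

def recreate_database():
    # Normal path: the small metadata files are enough.
    for name in remote("*.dlist.*"):
        print("importing file lists from", name)
    for name in remote("*.dindex.*"):
        print("importing block index from", name)

    # Fallback path: if the index left gaps, hunt through the much
    # larger .dblock files that hold the actual data blocks.
    missing = {"blocklist-hash-1"}  # hypothetical leftover gap
    for name in remote("*.dblock.*"):
        if not missing:
            break  # everything accounted for; stop downloading
        print("searching", name, "for", missing)
        missing.clear()  # pretend this volume supplied the gap

recreate_database()
```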

You can view the “Live logs” under the “About” menu to see what is happening.

What issue? What other steps were tried before the DB recreate, and with what result?

You seem to be looking at something, but it’s unclear what logs you have.

The kinds of lines that Duplicati produces are shown below.
These are pulled from my profiling-level log file; the live log omits the tags at the front.

2025-03-21 18:55:02 -04 - [Verbose-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-ProcessingAllBlocklistVolumes]: Processing all of the 4 volumes for blocklists: duplicati-b5f7c6ba16ee7427c91e8c3aad27a0ceb.dblock.zip, duplicati-b83ca5bac408f40a7abe744909d350c32.dblock.zip, duplicati-b89a632dc4c22495abd98308c4680cc63.dblock.zip, duplicati-bb1c4aa1f84434836a4e5799e6c6bff27.dblock.zip
2025-03-21 18:55:02 -04 - [Information-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-ProcessingAllBlocklistVolumes]: Processing all of the 4 volumes for blocklists
2025-03-21 18:55:02 -04 - [Verbose-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-ProcessingBlocklistVolumes]: Pass 3 of 3, processing blocklist volume 1 of 4
2025-03-21 18:55:03 -04 - [Verbose-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-ProcessingBlocklistVolumes]: Pass 3 of 3, processing blocklist volume 2 of 4
2025-03-21 18:55:03 -04 - [Verbose-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-ProcessingBlocklistVolumes]: Pass 3 of 3, processing blocklist volume 3 of 4
2025-03-21 18:55:03 -04 - [Verbose-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-ProcessingBlocklistVolumes]: Pass 3 of 3, processing blocklist volume 4 of 4
2025-03-21 19:21:59 -04 - [Verbose-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-ProcessingAllBlocklistVolumes]: Processing all of the 1 volumes for blocklists: duplicati-bfeb48795b2fb4df2b4a060e74822fc0e.dblock.zip
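(For reference, a profiling-level log file like the one above can be produced with Duplicati’s standard log options; the path here is illustrative:)

```
--log-file=/var/log/duplicati-backup.log
--log-file-log-level=Profiling
```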

Verbose level seems to be where it becomes very clear how much searching is left to do.
Even the exhaustive (pass 3) search will possibly still not find all the data, so please note any errors.

The pass 3 “volume X of Y” lines in the log will show what’s probably the remaining downloads; a sketch of turning those lines into a time estimate follows.
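A minimal sketch, assuming a --log-file capture in the timestamped format shown above (the live log in the web UI omits the leading tags and would need a slightly different pattern); it estimates the time left from the observed rate:

```python
# Parse "processing blocklist volume X of Y" lines from a Duplicati log
# file and extrapolate the remaining time at the observed rate.
import re
from datetime import datetime

PATTERN = re.compile(
    r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) .*"
    r"processing blocklist volume (\d+) of (\d+)")

samples = []  # (timestamp, volume number, total volumes)
with open("duplicati.log") as log:  # illustrative path
    for line in log:
        m = PATTERN.match(line)
        if m:
            ts = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S")
            samples.append((ts, int(m.group(2)), int(m.group(3))))

if len(samples) >= 2 and samples[-1][1] > samples[0][1]:
    (t0, v0, total), (t1, v1, _) = samples[0], samples[-1]
    seconds = max((t1 - t0).total_seconds(), 1)
    rate = (v1 - v0) / seconds  # volumes per second
    left = total - v1
    print(f"{left} volumes left, roughly {left / rate / 3600:.1f} hours to go")
```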

There were actually two identical backups, differing only in going to different cloud servers. The problem was that the partition holding the sqlite files filled up, so the first backup failed and then the second. I moved the sqlite files to a new, bigger disk partition and ran the backups again. They each failed with “Unable to determine database format: database disk image is malformed”. I then ran repair on the first one, which succeeded after half a day. The one I’m asking about in this thread is the repair of the second one, which is at 3 days and counting. The last live verbose log entry with a count was

May 12, 2025 11:18 AM: Pass 3 of 3, processing blocklist volume 23138 of 30767

Up until that point it was doing about 10 GETs a minute. The final two entries following that (newest first) are

  • May 12, 2025 12:17 PM: Backend event: Get - Started: duplicati-bbfea73626ed24a66b06ba6c4ef090413.dblock.zip.aes (149.969 MB)

  • May 12, 2025 11:18 AM: Backend event: Get - Completed: duplicati-bbf966d7750cc4e729dd97e8818724cab.dblock.zip.aes (149.969 MB)

So there was an hour’s gap between the last completed GET and the next started one, with no completion or anything else in the log, which as of this writing is 4 hours later. I presume that it’s stuck permanently.

All I can think to do is delete the backup and the related cloud files, and start a new one. Any other suggestions?
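As an aside, for anyone hitting the same “database disk image is malformed” error: SQLite itself can confirm whether a database file is really corrupt before anything gets deleted. A minimal sketch in Python, with an illustrative path:

```python
# Ask SQLite directly whether the file is corrupt. "ok" means the file
# structure is sound; anything else backs up the "malformed" error.
import sqlite3

db_path = "/path/to/backup-database.sqlite"  # illustrative path

try:
    with sqlite3.connect(db_path) as conn:
        print(conn.execute("PRAGMA integrity_check").fetchone()[0])
except sqlite3.DatabaseError as err:
    print("corrupt:", err)  # e.g. "database disk image is malformed"
```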

I have identified an issue that causes these long recreate runs, so at least now I understand why it is fetching the .dblock files.

Looking at your data, you still need to process ~7,600 volumes (30,767 - 23,138), so it will take a while.
It should not slow down the way you are seeing, though. I realize I have been slow to respond, but did you see any drop in network speed, high CPU or memory load, or any other symptoms that could explain why it would do nothing for 4 hours?
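As a back-of-envelope on “a while”, assuming the ~10 GETs per minute rate reported earlier in the thread holds:

```python
# Rough estimate only; assumes the earlier observed rate stays constant.
remaining = 30767 - 23138  # volumes left, from the "23138 of 30767" log line
rate = 10                  # GETs per minute, as observed earlier in the thread
print(f"{remaining} volumes at {rate}/min is about {remaining / rate / 60:.1f} hours")
# -> 7629 volumes at 10/min is about 12.7 hours
```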