Direct restore (lost database) Will it re-download all this?

Just trying to understand… so it knows about these dblocks because they are present in the file listing, but during the recreate none of the dlists referenced those files? So Duplicati downloads them to see what they contain and adds the info to the database? How do the dlists get updated so they reference these “unreferenced” dblocks? (Just thinking about how to go about fixing the error…)

I have written about the file format and ideas, but you are asking a bit differently, so here is another explanation.

The main list of files in the backup is really a list of paths together with the final hash of each file.

The final hash is a checksum used when verifying a restored file. For each file there is also an ordered list of blocks required to do the restore. Because this "list of block hashes" carries significant overhead, it is itself stored as normal data blocks. This is where the problems occur.

To restore a file (and re-build the database structure), Duplicati needs the full expanded list of hashes. To get this list, it needs to locate the data blocks with the hashes. Data blocks usually reside in dblock files, but to avoid downloading these larger files, the “list of hashes” data blocks are replicated in the dindex files.

(The only other information found in dindex files is a list of hashes for the data blocks found in a dblock file, allowing Duplicati to figure out which dblock file contains a specific hash.)

You can set the index-file-policy option to choose what a dindex file contains, but generally it holds both the hash-to-dblock map and the replicated "list of hashes" blocks. This allows Duplicati to rebuild the database using only the dlist and dindex files.
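To make the layering a bit more concrete, here is a rough conceptual sketch of the three file types (illustrative Python structures with made-up hashes, not Duplicati's actual on-disk format):

```python
# Conceptual model only -- not Duplicati's actual on-disk format.

# dlist: for each path, the file's final (verification) hash, its expected size,
# and the hashes of the "list of block hashes" (blocklist) blocks it needs.
dlist = {
    "/data/report.docx": {
        "file_hash": "K9f3...",           # checksum used when verifying the restore
        "size": 1_245_184,
        "blocklist_hashes": ["Qm1a..."],  # the blocklist is stored as an ordinary block
    },
}

# dblock: a volume of data blocks keyed by block hash. One of them ("Qm1a...")
# is itself a blocklist: the ordered block hashes needed to rebuild the file.
# (In reality a blocklist block is a packed byte sequence of hashes.)
dblock_b001 = {
    "Qm1a...": ["aB3d...", "7Hx2..."],    # blocklist block
    "aB3d...": b"<raw file data, block 1>",
    "7Hx2...": b"<raw file data, block 2>",
}

# dindex: records which block hashes live in which dblock volume, and replicates
# the blocklist blocks so the larger dblock file does not have to be downloaded
# just to expand a file's hash list.
dindex_b001 = {
    "volume": "duplicati-b001.dblock.zip",
    "block_hashes": ["Qm1a...", "aB3d...", "7Hx2..."],
    "replicated_blocklists": {"Qm1a...": ["aB3d...", "7Hx2..."]},
}
```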

For the error, the recreate process downloads the dlist and dindex files, and then expands the "list of hashes" blocks. From the expected size of each file, it knows how many block hashes that file should have, and which "list of hashes" blocks it needs in order to obtain the full list.

If the database is not complete after downloading these (some files do not have all their hashes), Duplicati will download the dblock files that it knows (from the maps) contain the "list of hashes" blocks.

If there are still items missing, it will just keep downloading dblock files until it (hopefully) finds all it needs.
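As a toy sketch of that fallback order, reusing the shapes from the sketch above (illustrative only; the real recreate logic is considerably more involved):

```python
# Toy, self-contained sketch of the recreate fallback described above:
# dlist + dindex first, then targeted dblocks, then any remaining dblocks.

def recreate(dlists, dindexes, dblocks):
    """Return {path: [ordered block hashes]} needed to restore each file."""
    # Blocklist blocks each file still needs expanded, and the result so far.
    needed = {path: list(f["blocklist_hashes"])
              for d in dlists for path, f in d.items()}
    expanded = {path: [] for path in needed}

    def consume(blocks):
        # Expand any still-missing blocklists found in this set of blocks.
        for path, wanted in needed.items():
            for h in list(wanted):
                if h in blocks and isinstance(blocks[h], list):
                    expanded[path].extend(blocks[h])
                    wanted.remove(h)

    # Pass 1: blocklist blocks replicated in the dindex files (cheap).
    for idx in dindexes:
        consume(idx["replicated_blocklists"])

    # Pass 2: dblock volumes whose dindex map says they hold a missing blocklist.
    for idx in dindexes:
        missing = {h for wanted in needed.values() for h in wanted}
        if missing & set(idx["block_hashes"]):
            consume(dblocks.get(idx["volume"], {}))

    # Pass 3: last resort -- any remaining dblock volumes, until nothing is missing.
    for volume_blocks in dblocks.values():
        if all(not wanted for wanted in needed.values()):
            break  # database is complete; stop downloading
        consume(volume_blocks)

    return expanded

# e.g. recreate([dlist], [dindex_b001], {"duplicati-b001.dblock.zip": dblock_b001})
```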

The error seems to happen because the "list of hashes" blocks are missing from the dindex files for some reason.
My best guess is that something goes awry during compacting, and either some references to deleted data are left behind, or the data blocks are somehow not copied correctly into the newly generated dindex files.


Thank you - I appreciate you taking the time to explain so I can better understand the inner workings!


I've posted before about slow database rebuilds that have affected my use case.
I thought the zero-byte file bug was the cause (my understanding is that it's addressed in 2.0.4.21_experimental), but I have recently been able to verify that it's not the cause of my slow rebuilds.

I had an old data set from 2.0.4.5_beta (parked for 4-5 months due to the slow rebuild time) that needed 353 dblocks to be processed when rebuilding the database using 2.0.4.21_experimental (total rebuild time for the 700 GB data set: 5.2 days).

So I have again abandoned that old data set (new backups are quicker than a rebuild) and have created a new backup with only 4 versions (4 days of backups) using 2.0.4.21_experimental.

I have just tested a rebuild, and already it's reporting that 1 dblock is required for the rebuild, which adds a lot of time (it seems to be ~30+ minutes per dblock file).

I do have the following options enabled which may be the cause:

  • --auto-cleanup=true
  • --auto-vacuum=true

I'll see if I can figure out what steps I need to take to reproduce the problem where it goes from not needing any dblocks to reporting that dblocks are needed for a rebuild.

But I'm guessing auto-vacuum or auto-cleanup could be a cause, as you mention, @kenkendk.

auto-vacuum isn't the problem… that's purely a SQLite function that doesn't change any database records.

Not sure what auto-cleanup does, but I think the suspected cause is auto compaction, which happens by default but can be disabled with the no-auto-compact flag.
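A small standalone illustration of the auto-vacuum point, using plain SQLite from Python (nothing Duplicati-specific): VACUUM rewrites the database file to reclaim free space, but the stored rows come out unchanged.

```python
import sqlite3

# Throwaway database just to demonstrate that VACUUM does not alter records.
con = sqlite3.connect("vacuum_demo.sqlite")
con.execute("CREATE TABLE IF NOT EXISTS block (hash TEXT, size INTEGER)")
con.executemany("INSERT INTO block VALUES (?, ?)",
                [("aB3d...", 102400), ("7Hx2...", 51200)])
con.commit()

before = con.execute("SELECT * FROM block ORDER BY hash").fetchall()

# VACUUM rebuilds the database file, reclaiming space left by deleted rows,
# but it does not add, remove, or modify any records.
con.execute("VACUUM")

after = con.execute("SELECT * FROM block ORDER BY hash").fetchall()
assert before == after  # identical contents before and after
con.close()
```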

@drwtsn32 yes, it doesn't look like any of those options are the cause.

I have been pushing out backups using 2.0.4.21_experimental with different data set sizes and version counts.
I've been trimming versions and aborting backups (about 15 backups in on a 200 GB data set to which I'm adding/removing files and folders on most backups) and rebuilding on another host after each backup. I have not been able to fault the backup or identify how to reproduce the problem as yet.

I'll start adding some compacts in between backups and see if that helps reproduce it.
Any other thoughts?

Does anyone know what values the auto-compact uses for:

  • small-file-max-count
  • small-file-size
  • threshold

?
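For reference, my rough understanding of how these options feed into the compact decision is sketched below (illustrative only; units and defaults may differ from the real implementation, so corrections are welcome):

```python
# Rough sketch of how the three compact-related options are commonly understood
# to interact. Illustrative only -- not Duplicati's actual implementation.

def should_compact(volumes, threshold, small_volume_bytes, small_file_max_count):
    """volumes: list of (total_bytes, wasted_bytes) per remote dblock volume."""
    total = sum(t for t, _ in volumes)
    wasted = sum(w for _, w in volumes)

    # threshold: percentage of wasted (no longer referenced) space that triggers a compact.
    if total and 100 * wasted / total >= threshold:
        return True

    # small-file-size / small-file-max-count: too many undersized volumes also
    # triggers a compact so they can be merged into full-sized ones.
    # (The real small-file-size option may be expressed relative to the dblock size.)
    small = sum(1 for t, _ in volumes if t < small_volume_bytes)
    return small > small_file_max_count
```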

Also, it looks like the compact is run as a result of the purge; since I'm having it do max-version management (changing the allowed versions between backups), it seems to have already been triggered a few times without causing the problem.

I didn't get to test for much longer (I haven't run a purge or compact yet), as the next backup run (which was aborted part way through) ended with the error:
Found 1 file(s) with missing blocklist hashes
So I’m probably going to park my testing here.