Failed: Invalid header marker

on the three known broken files becomes:

hexdump -C duplicati-bd742daac0957402f81e85c35a5be6662.dblock.zip.aes | grep '00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00'

and you look to see whether there are several runs of those NULs. (Note the two spaces between the two groups of eight in the grep pattern; that's how hexdump -C lays out each 16-byte line.) Repeated lines of the same data show as a *; however, just searching for that might find repetitions of something other than NULs (which would be informative too).
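For example, a minimal sketch of that second check, using grep context lines to show what data each * run actually repeats (the file name is just one of your three broken files):

hexdump -C duplicati-bd742daac0957402f81e85c35a5be6662.dblock.zip.aes | grep -B1 -A1 '^\*$'

The -B1/-A1 options print the line before and after each *, so you can tell at a glance whether the repeated line is all NULs or something else.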

If you have man pages installed, just type man hexdump if that’s easier than following the link I provided.

This is a closer look at files you already identified.

The find command is the loop in the original test: the left side does the search, and the right side does the action. You could probably adapt it to scan for runs of NULs if you wanted, but my plan was a study, not a survey. A survey would be done using the verification file. If need be, we can invent another survey, but let's wait.
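If you did want to adapt it, a rough sketch might look like this (the destination path is a placeholder for wherever your backup files live; note that hexdump -C collapses repeated lines into *, so the count reflects displayed lines, not the true length of a run):

find /backup/destination -name '*.dblock.zip.aes' -exec sh -c '
  n=$(hexdump -C "$1" | grep -c "00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00")
  [ "$n" -gt 0 ] && echo "$1: $n all-NUL lines"
' sh {} \;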

What will take time is the duplicati-verification.json run, as it needs to read all of the files. The good part is that it's very thorough, and it will even find things like files that are unreadable due to low-level problems.
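For reference, that run looks something like the following; it assumes the backup was made with --upload-verification-file=true (so duplicati-verification.json exists at the destination) and uses the DuplicatiVerify.py helper from Duplicati's utility-scripts folder, with a placeholder path:

python DuplicatiVerify.py /path/to/backup/destination

It reads every volume at the destination and checks it against the hashes recorded in duplicati-verification.json.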

The affected command should be faster because it just looks at the database info. Duplicati has to keep track of which dblock files have pieces of which source files in order to do a restore, so this test just looks in the other direction: starting with a dblock file, then seeing which source files have blocks in it.
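A hedged example, reusing the broken file name from above (the storage URL is a placeholder for your actual destination; on Linux, use duplicati-cli instead of Duplicati.CommandLine.exe):

Duplicati.CommandLine.exe affected "file:///backup/destination" duplicati-bd742daac0957402f81e85c35a5be6662.dblock.zip.aes

It should list which source files (and backup versions) depend on blocks in that dblock, which tells you what losing that file would cost.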

Disaster Recovery has a lab example of recovering from an intentionally corrupted backup. Some of that may be applicable once you find out how much was affected and how important it is.

How the backup process works
How the restore process works
can give some background info.