Failed: Invalid header marker

Do you know of good examples of software handling damaged backups? We're not sure where the damage came from, but if it was data degradation during storage, the Data degradation article describes how some filesystems such as ZFS, Btrfs, and ReFS have protections against it. I see NAS vendors claiming bit rot protection, but I don't know exactly what that covers. WIP Add par2 parity files and auto repair to backends #3879 may someday add redundancy at the Duplicati level, but for now Duplicati depends on the storage being reliable. Errors are caught by downloading and verifying files, which isn't particularly quick, so the default sample size is small, but it can be raised as much as you can tolerate by using the following option (a usage example follows its help text):

  --backup-test-samples (Integer): The number of samples to test after a
    backup
    After a backup is completed, some (dblock, dindex, dlist) files from the
    remote backend are selected for verification. Use this option to change
    how many. If the backup-test-percentage option is also provided, the
    number of samples tested is the maximum implied by the two options. If
    this value is set to 0 or the option --no-backend-verification is set, no
    remote files are verified
    * default value: 1
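
For example, to raise the sample count to 5 on a command-line backup (the destination URL and source path below are placeholders; adjust them for your own setup):

  Duplicati.CommandLine.exe backup b2://mybucket/backup "C:\Users\me\Documents" --backup-test-samples=5

In the GUI, the same option can be added to a job's Advanced options.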

v2.0.4.11-2.0.4.11_canary_2019-01-16

Added test percentage option, thanks @warwickmm

  --backup-test-percentage (Integer): The percentage of samples to test after
    a backup
    After a backup is completed, some (dblock, dindex, dlist) files from the
    remote backend are selected for verification. Use this option to specify
    the percentage (between 0 and 100) of files to test. If the
    backup-test-samples option is also provided, the number of samples tested
    is the maximum implied by the two options. If the no-backend-verification
    option is provided, no remote files are verified.
    * default value: 0
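
To illustrate how the two options combine (hypothetical numbers, based on my reading of the help text above): with 400 files on the remote, --backup-test-percentage=5 implies about 20 samples, which exceeds --backup-test-samples=3, so roughly 20 would be verified, since the larger figure wins. On the command line that might look like:

  Duplicati.CommandLine.exe backup b2://mybucket/backup "C:\Users\me\Documents" --backup-test-samples=3 --backup-test-percentage=5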

Some cloud storage providers scan their own storage for bit rot, and some allow downloading an object's hash instead of the whole object, which may someday allow much faster verification.
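
As a rough sketch of that idea (not current Duplicati behavior; the function names and the provider call below are made up for illustration): if the hash of each uploaded volume is recorded locally, verification could fetch only the provider-reported hash and compare it, falling back to a full download and local hashing only when no remote hash is available.

  import hashlib

  def sha256_hex(data: bytes) -> str:
      return hashlib.sha256(data).hexdigest()

  def verify_volume(name, recorded_hash, fetch_remote_hash, download_volume):
      # fetch_remote_hash(name) -> str or None  (hypothetical provider API: returns the object's hash, if supported)
      # download_volume(name)  -> bytes         (fallback: download the whole object)
      remote_hash = fetch_remote_hash(name)
      if remote_hash is not None:
          return remote_hash.lower() == recorded_hash.lower()    # cheap: no download needed
      return sha256_hex(download_volume(name)) == recorded_hash  # expensive fallback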

But the question remains: how do you recover from a damaged backup? Fixing the --rebuild-missing-dblock-files option (or working out how to use it correctly, if its current use is wrong) would be a start, but redundancy is deliberately low by design: a given source file block (100 KB by default) is stored only once, and later occurrences of the same block are deduplicated against that single copy, so damage to that one stored copy can affect every backup version that references it.

Block-based storage engine describes the design's move away from the historical full/incremental model. There is no traditional full backup to run on demand; there are only blocks, which either already exist from a previous backup (in which case only the changed blocks are uploaded) or do not (in which case all blocks are uploaded).
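
A minimal sketch of that block model with deduplication (simplified; Duplicati's real implementation also does compression, encryption, and packing of blocks into dblock volumes, and the hash encoding differs):

  import hashlib

  BLOCK_SIZE = 100 * 1024  # the default 100 KB source block size mentioned above

  def backup_file(path, known_block_hashes, upload_block):
      # known_block_hashes: set of block hashes already on the destination (assumed tracked locally)
      # upload_block(digest, data): stand-in callable that stores a new block remotely
      # Returns the ordered list of block hashes that describes this version of the file.
      block_list = []
      with open(path, "rb") as f:
          while True:
              block = f.read(BLOCK_SIZE)
              if not block:
                  break
              digest = hashlib.sha256(block).hexdigest()
              block_list.append(digest)
              if digest not in known_block_hashes:  # new data: upload it
                  upload_block(digest, block)
                  known_block_hashes.add(digest)
              # else: deduplicated -- the single stored copy is just referenced again
      return block_list

The "stored only once" point above falls out of the membership check: a repeated block never creates a second remote copy, which keeps uploads small but also means a single damaged block can break every file version whose block list includes it.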

How the backup process works describes it in more detail, including how deduplication reuses blocks.

Creating a new manual full backup (if the concept of a full backup were introduced) would at least add more copies of the source data, but it would complicate the reference scheme, which is already slow and complex.

The general best practice for anyone who highly values their data is to keep multiple backups made in very different ways, so that program bugs, destination issues, and so on are mitigated by redundancy. Test backups thoroughly, including the scenario of a total loss of the source system. Relying on Beta software alone is risky, and Canary releases vary between worse and better than Beta.