Failed: Invalid header marker

Makes sense; I was losing track of the steps. So the header scan missed this, and it was caught in the full hash test, while it possibly could have been caught in a NUL-runs scan, except that such a scan wasn't run.

Upload throttle corrupts backup, especially OneDrive. Analyzed, with code proposed. #3787 may cause the runs of NULs for at least one person in this topic who was throttling, but you’re not, so it’s less likely.

Good question. Is anything else (even source files) changed monthly? Old dblock files are not rewritten, but the data they contain can gradually become obsolete as older backup versions are deleted by your retention rules. When enough data is obsolete, a compact occurs to reclaim the otherwise-wasted space. You can look around to see which other old files you still have from 2018, and see if there's any pattern.
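
If it helps to see where that reclamation fits, compact can also be run by hand from the command line. This is only a rough sketch using the Windows executable name and placeholders for the destination URL and local database path; --threshold is the option that controls how much wasted space (default 25 percent) is tolerated before volumes get rewritten:

  # Reclaim wasted space on demand instead of waiting for the automatic trigger
  Duplicati.CommandLine.exe compact <storage-url> --dbpath=<local-database-path> --threshold=25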

Then there's your retention change theory. BTW, there's no merging; retention just thins the backups out to less closely spaced versions, when you're willing to sacrifice the in-between views of your files that the removed versions provided.

If they are also unchanged since the 2018 backup, that would help explain why the dblocks are still around. This would be an ideal situation for the --rebuild-missing-dblock-files option, but it doesn't seem to work in the small test I ran, and I'm not seeing much encouragement from the other forum posts about it…
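
In case anyone wants to retry it, the option goes on a repair run, roughly like the sketch below (angle-bracket values are placeholders, and no promise it will do better than my test did):

  # Try to regenerate missing or damaged dblock files from source data that is
  # still available locally
  Duplicati.CommandLine.exe repair <storage-url> --dbpath=<local-database-path> --rebuild-missing-dblock-files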

Lacking the ideal fix, the fallback is probably to check the other bad dblock files, then follow the disaster recovery article to see if you can just get them out of the way so they stop being complained about, then back up again to re-protect the same source files (if those are still intact). They just won't be in old versions…
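
Roughly, the command-line version of those steps looks like the sketch below. Everything in angle brackets is a placeholder, and purge-broken-files permanently removes the affected entries from the backup versions, so read the article and the built-in help before running it:

  # See which source files and versions depend on a suspect remote volume
  Duplicati.CommandLine.exe affected <storage-url> <bad-dblock-filename> --dbpath=<local-database-path>

  # After moving or deleting the damaged dblock at the destination, list what
  # is now broken and purge those entries so the complaints stop
  Duplicati.CommandLine.exe list-broken-files <storage-url> --dbpath=<local-database-path>
  Duplicati.CommandLine.exe purge-broken-files <storage-url> --dbpath=<local-database-path>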

If you keep having trouble, and you're willing to take some risk, you might consider switching to Canary, which has numerous data integrity bug fixes that aren't in Beta yet. Try this on a less critical system that also sees the error often enough. If you land on a good Canary (recent ones seem good), you can change the update channel in Settings to Beta and wait for the next Beta. Or if you're going to stay on Canary, at least read the release notices.

I do have hundreds, perhaps thousands of files from 2018. These are mainly pictures automatically uploaded from mobile phones to be backed up, and not all users sort through the images regularly to delete them or move them somewhere else.

Anyhow, this all came up because the files were due to be compacted, I assume, and the system is unable to get rid of the “broken” ones. This is also the reason why my backup runs without an error as long as there is no retention policy enabled.

I think this is a major downside of the concept of one “full backup” followed by incremental backups only.

Right now I am splitting the one 2 TB backup into several smaller ones, but this is going to take some time until the uploads are finished.

After that, I am up for taking some risks :wink: and will let you know about the outcome.

As for the version, I have been using the Canary channel from the beginning and never gave it much thought. I started with Duplicati 2 at a time when you needed to search for it on the old website. Anyhow, I will keep Canary for now and switch to Beta later, for the sake of productivity :smiley:

But in the end, it seems we are stuck, and Duplicati is not capable of handling this situation easily, or at all. The latter we are going to see after I have the new backup sets in place, most probably after Christmas.

THANKS for all your help, and I hope this can get fixed somehow in the future. Perhaps the option of creating a new manual full backup could resolve at least some of the effects of this problem.

Do you know of good examples of software handling damaged backups? We’re not sure where the damage is from, but if it was from Data degradation during storage, that article describes how some filesystem types such as ZFS, Btrfs and ReFS have protections. I see NAS vendors claiming bit rot protection, but I don’t know what it is. WIP Add par2 parity files and auto repair to backends #3879 may possibly emerge someday to give more redundancy at the Duplicati level, but for now there is a dependency on the storage being reliable. Errors are checked for by downloading and verifying files, which isn’t particularly quick, so the default sample size is small, but it can be raised as high as you can tolerate by using

  --backup-test-samples (Integer): The number of samples to test after a
    backup
    After a backup is completed, some (dblock, dindex, dlist) files from the
    remote backend are selected for verification. Use this option to change
    how many. If the backup-test-percentage option is also provided, the
    number of samples tested is the maximum implied by the two options. If
    this value is set to 0 or the option --no-backend-verification is set, no
    remote files are verified
    * default value: 1

v2.0.4.11-2.0.4.11_canary_2019-01-16

Added test percentage option, thanks @warwickmm

  --backup-test-percentage (Integer): The percentage of samples to test after
    a backup
    After a backup is completed, some (dblock, dindex, dlist) files from the
    remote backend are selected for verification. Use this option to specify
    the percentage (between 0 and 100) of files to test. If the
    backup-test-samples option is also provided, the number of samples tested
    is the maximum implied by the two options. If the no-backend-verification
    option is provided, no remote files are verified.
    * default value: 0
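
As a sketch of how either option would be used (placeholders for the destination URL and source path; with roughly 1000 remote files, the two settings below imply about the same amount of verification):

  # Verify 10 randomly chosen sample sets after each backup...
  Duplicati.CommandLine.exe backup <storage-url> <source-path> --backup-test-samples=10

  # ...or verify roughly 1% of the remote files instead
  Duplicati.CommandLine.exe backup <storage-url> <source-path> --backup-test-percentage=1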

Some cloud storage providers scan their own files for bit-rot issues, and some allow downloading an object hash instead of the actual whole object, which may allow for much faster verification someday.

But the question is how to recover from a damaged backup? Fixing the --rebuild-missing-dblock-files option (or finding out how to use it, if current use is wrong) would be a start, but redundancy is low by design: a given (default 100 KB) source file block is stored only once, and later uses of the same block are deduplicated against it.

Block-based storage engine describes the design move away from the historical full/incremental idea. There isn’t a traditional full backup to run on demand. There are only blocks, which might already exist from a previous backup (in which case only the changes are uploaded), or might not (in which case all blocks are uploaded).

How the backup process works describes it in more detail, including how deduplication reuses blocks.
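
As a rough illustration only (plain shell commands on a made-up photo.jpg, not Duplicati code): a source file is cut into fixed-size blocks of roughly the default 100 KB, and each block is identified by its hash, so a block already present at the destination never needs to be uploaded again:

  # Identical pieces produce identical hashes, which is what lets deduplication
  # reuse blocks across files and across backup versions
  split -b 102400 photo.jpg block_
  sha256sum block_*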

Creating a new manual full backup (if the concept of a full backup were introduced) would at least add more copies of the source data, but would complicate the reference scheme, which is already slow and complex.

General best practice for those who highly value their data is multiple backups done very differently, so that program bugs, destination issues, etc. are mitigated by redundancy. Test backups well, including for a total loss of the source system. Relying on Beta software is risky, and Canary varies between worse and better.

No, I don’t. But in my experience with Duplicati the rate of backups that end up damaged is very high, and this is not normal for an end user.

You gave nearly ten approaches, but the only one that I can easily follow is to avoid the upload throttle option. Unfortunately, if I have understood correctly, once the damage is done, removing that option doesn’t solve the issue.

The other hints are out of my reach, mainly because of time. I have a normal life with limited (even cognitive) resources, and I don’t understand a lot about logs, hexadecimal editors, and how to identify a damaged file among thousands.

So the simplest thing a user like me can do is to delete the backup and start a new one, every time new damage occurs.

This is my sad truth.

I’d note that a new participant in this topic has answered. I would still like to hear from the previous poster.

There were two different throttling bugs that were fixed. One does damage on the upload, resulting in persistent damage until fixed, e.g. by rebuilding a dlist or dindex. The other is temporary and only affects the download; removing the throttling clears that issue fine. Don’t rely on the upload/download option names: remove all of them.
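
To be concrete (a sketch only; the GUI throttle fields map to these advanced options, and as far as I know 0 means no throttle, which is also the default), a backup with all throttling removed looks like:

  # No upload or download throttle; equivalent to deleting both options from the job
  Duplicati.CommandLine.exe backup <storage-url> <source-path> --throttle-upload=0 --throttle-download=0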

The sad truth is that the ability to produce fixes is also very time-limited. However, huge improvements have been made since the last true Beta, 2.0.4.5 in November 2018, though “end up damaged” is too vague to say whether or not a fix applies. If you want to continue with a specific past thread, please feel free…

Although (as expected) not everyone is able to help diagnose problems at a technical level, those who are able to provide good clues help not only themselves, but also those who are looking to fix the bugs.

I used to do this all the time, and I don’t recommend current Duplicati Beta to anybody who’s not willing to take the chance of this. Basically don’t let it be your only backup of history you’d really hate to lose…

Still, starting over gets tiring. I keep enough logs to help find what broke, and this has led to some fixes.

For some who are suffering from heavy damage due to a fixed bug, the best approach is to pick up the known fix in the form of Canary builds. Problem is that one also picks up unknown new bugs in Canary.

Duplicati very much needs to get a Beta out IMO. Beta users are still running on code over a year old…

Yes, because I have also encountered the error under discussion. This thread is the first result of my Google search.

Maybe it’s my fault, but I have not understood how to identify the dlist or dindex file. What I hoped was that the Duplicati log would cite that file, but it doesn’t (if I’m not wrong). This is very frustrating. Below my message, I attach the log I received by email.

I’ve always used Canary builds. My current version should be the latest: 2.0.4.37_canary_2019-12-12. Anyway, thank you for your reply.

Failed: Invalid header marker
Details: System.Security.Cryptography.CryptographicException: Invalid header marker ---> System.IO.InvalidDataException: Invalid header marker
   at SharpAESCrypt.SharpAESCrypt.ReadEncryptionHeader(String password, Boolean skipFileSizeCheck)
   at SharpAESCrypt.SharpAESCrypt..ctor(String password, Stream stream, OperationMode mode, Boolean skipFileSizeCheck)
   at Duplicati.Library.Encryption.AESEncryption.Decrypt(Stream input)
   at Duplicati.Library.Encryption.EncryptionBase.Decrypt(Stream input, Stream output)
   at Duplicati.Library.Main.BackendManager.<>c__DisplayClass36_0.<coreDoGetPiping>b__0()
   at System.Threading.Tasks.Task.Execute()
   --- End of inner exception stack trace ---
   at Duplicati.Library.Main.AsyncDownloader.AsyncDownloaderEnumerator.AsyncDownloadedFile.get_TempFile()
   at Duplicati.Library.Main.Operation.CompactHandler.DoCompact(LocalDeleteDatabase db, Boolean hasVerifiedBackend, IDbTransaction& transaction, BackendManager sharedBackend)
   at Duplicati.Library.Main.Operation.DeleteHandler.DoRun(LocalDeleteDatabase db, IDbTransaction& transaction, Boolean hasVerifiedBacked, Boolean forceCompact, BackendManager sharedManager)
   at Duplicati.Library.Main.Operation.BackupHandler.CompactIfRequired(BackendManager backend, Int64 lastVolumeSize)
   at Duplicati.Library.Main.Operation.BackupHandler.<RunAsync>d__20.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at CoCoL.ChannelExtensions.WaitForTaskOrThrow(Task task)
   at Duplicati.Library.Main.Controller.<>c__DisplayClass14_0.<Backup>b__0(BackupResults result)
   at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Action`1 method)

Log data:
2019-12-23 01:16:47 +01 - [Error-Duplicati.Library.Main.Operation.BackupHandler-FatalError]: Fatal error
System.Security.Cryptography.CryptographicException: Invalid header marker ---> System.IO.InvalidDataException: Invalid header marker
   at SharpAESCrypt.SharpAESCrypt.ReadEncryptionHeader(String password, Boolean skipFileSizeCheck)
   at SharpAESCrypt.SharpAESCrypt..ctor(String password, Stream stream, OperationMode mode, Boolean skipFileSizeCheck)
   at Duplicati.Library.Encryption.AESEncryption.Decrypt(Stream input)
   at Duplicati.Library.Encryption.EncryptionBase.Decrypt(Stream input, Stream output)
   at Duplicati.Library.Main.BackendManager.<>c__DisplayClass36_0.<coreDoGetPiping>b__0()
   at System.Threading.Tasks.Task.Execute()
   --- End of inner exception stack trace ---
   at Duplicati.Library.Main.AsyncDownloader.AsyncDownloaderEnumerator.AsyncDownloadedFile.get_TempFile()
   at Duplicati.Library.Main.Operation.CompactHandler.DoCompact(LocalDeleteDatabase db, Boolean hasVerifiedBackend, IDbTransaction& transaction, BackendManager sharedBackend)
   at Duplicati.Library.Main.Operation.DeleteHandler.DoRun(LocalDeleteDatabase db, IDbTransaction& transaction, Boolean hasVerifiedBacked, Boolean forceCompact, BackendManager sharedManager)
   at Duplicati.Library.Main.Operation.BackupHandler.CompactIfRequired(BackendManager backend, Int64 lastVolumeSize)
   at Duplicati.Library.Main.Operation.BackupHandler.<RunAsync>d__20.MoveNext()

Default logging often gives too-brief one-line summaries, dropping wanted details which are on the following lines. This really hurts the ability to debug one-time problems. Reproducible ones just require additional logging.

Viewing the Duplicati Server Logs gives one of the broad ideas, but there are many different logs kept.

If the error is reproducible, could you please see if About → Show log → Live → Retry can show details? Alternatively, for unattended logging, you could instead set the Advanced options --log-file and --log-file-log-level=retry. Your error is in a different spot, not the after-backup verification; it looks like compact is trying to pull down something old that has turned into wasted space. If you can get a name, can you also look at its date? Unfortunately compact downloads dblock files, which contain actual data from the source files, so permanent damage is hard to recover from. If the dblock was created before about Sep 2019, the throttling bug was still present. If this is too complex, you can perhaps look over your destination files by date to see how far back they go, and whether damage from throttling might have been done. I can’t say without evidence.
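
As a sketch of the unattended-logging route (the log path is a placeholder; in the GUI these go under the job’s Advanced options), something like this will write the remote file names involved into a persistent log:

  # Keep a log detailed enough to show each remote get/put, so the problem
  # volume appears by name
  --log-file=<path-to-writable-log-file>
  --log-file-log-level=Retry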

Asking people to post their backup files is risky; even if files are normally encrypted, a buggy situation might differ. The hexdump command is more of a Linux thing, but Windows has some nice GUI hex editors such as Frhed.
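
For example (Linux shell shown; Frhed can show the same bytes on Windows, and the file name below is just a placeholder), a healthy AES Crypt file should begin with the letters AES, so a quick look at the first bytes of a suspect volume tells a lot:

  # Expect the dump to start with 41 45 53 ("AES"); a run of 00 bytes or other
  # data there suggests the header was damaged or overwritten
  hexdump -C -n 16 <suspect-file>.dblock.zip.aes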

What Storage Type did you set on your Destination screen? I trust some (and some destinations) more. There could also be some unfixed bug that nobody has been able to track down enough to lead to a fix.

If you like, start over, but if “the rate of backups that end up damaged is very high”, what other backups do you have? “I have also encountered the error under discussion. This thread is the first result of my Google search.” isn’t clear on whether you keep hitting this one error (and only just now searched for it), or whether it’s a variety of them.

Thanks for that. Canary is the early warning system, so I hope you aren’t hitting anything newly buggy. Although it may vary unpredictably, recent Canary has been doing great for me in terms of backing up.

Gdrive (since, thanks to my edu organization, I get unlimited space - that, paradoxically, I can’t use)

What do you mean? If you mean “damages”, then the last two weird (and, to me, incomprehensible) errors are “Failed: Impossibile decriptare i dati (passphrase non valida?)” (that is, “Failed to decrypt the data (invalid passphrase?)”) and “Failed: Invalid header marker”, which is this one. Previously I had a lot of “Found x files that are missing from the remote storage, please run repair” errors that I was unable to correct by running repair.
If you mean “other programs”, I used EaseUS Todo Backup. Eventually I preferred Duplicati because it supports a larger collection of cloud destinations. But the cloud backup jobs are also the ones giving me the most trouble.

If that was short for Google Drive, then that means you’re not doing Local folder or drive and letting Google’s client sync to the cloud for you (as some people do, and it probably has pros and cons).

https://usage-reporter.duplicati.com/ shows Google Drive getting possibly the highest volume of cloud backups, so I’m not sure why yours seems unreliable at making files that can be read back. Are you using any Advanced options or a custom Remote Volume Size on the Options page that might be unusual?

Here I don’t understand. I use nothing but Duplicati, with Google Drive as the destination. I don’t have any Google synchronization software installed.

I know that site, and if I remember correctly you were the one who pointed me to it.

Nothing weird, apart from the upload-throttle option, which I was forced to use since otherwise Duplicati takes all the bandwidth of my wired (and not so broad) connection and prevents me even from browsing.

Anyway, buone feste (happy holidays)!

No problem. It just wasn’t 100% clear originally what you had. Now I know whatever’s happening is happening directly between you and Google Drive, and sync software can’t possibly be part of this.

Set a bandwidth limit describes it, I guess, and at the time of that advice nobody knew the damage potential. The exact nature of your file damage isn’t clear without somehow viewing the start of the file, where the header (which starts with the letters AES) “should” be, but that gives me room to speculate that later data was written over it, as the known (now fixed in Canary) bug can do. Long NUL runs are harder to explain.

Upload throttle corrupts backup, especially OneDrive. Analyzed, with code proposed. #3787 has an example of what your file might look like (if you look). Search for the comment below; note the missing AES:

Here’s what a suspected end-overwrote-start looks like

So the best I can do (unless you want to try to get into auditing and manual repair) is to suggest the current Canary, with throttling as you need, restarting the backup from scratch, and seeing if it holds.

Settings in Duplicati can change your Update channel back to Beta, to avoid unexpected Canary updates which may be worse than your initial one, or may be better. I hope for a Beta release soon.

Advanced options on Edit screen 5 Options will let you increase the verification sample after backup. You may want to raise that slowly if you want to check more intensively for new file damage. Weighing against this plan is that, with limited internet capacity, the verification downloads are just more stress on it:

To you as well.

I am back from my vacation :slight_smile: and still in the process of splitting up my big backup into smaller backup sets. After this I will get back to risking something :smiley:

The purge finished, the first backup finished, and therefore the compact of the backup set also went through. The whole process took a week, but with no errors :slight_smile: It seems I do have a consistent backup again. I still need to do some tests, which will take some time because the other backups were stalled for the last week and could not start or finish. I have to wait until they are fine as well.

But it seems that tracking down the broken dblock files, deleting them from the backup set, and running the purge command fixed the backup set itself.

If the files affected by the dblock I deleted are “still” or “again” in the backup set and can be restored, I am fine :slight_smile:

So far, many many many thanks for the help.

Release: 2.0.5.0 (experimental) 2020-01-03 does a much better job of keeping things consistent.
Throttling isn’t the only thing fixed, so I’d advise at least getting Beta update when it comes out…

Yes, I just disabled my Docker update process until a new Beta is released.
Because of the updated DB version I cannot go from my Canary version down to the Beta available right now.
So let's wait until a new Beta gets released.