Failed: Invalid header marker

Hello, I have a problem on one of my computers when running a backup with Duplicati v2.0.2.1. It happens about every 2 months; I recreate the job and it works again for a short while. Could anyone tell me what might be happening?

Thank you very much

Failed: Invalid header marker
Details: System.Security.Cryptography.CryptographicException: Invalid header marker ---> System.IO.InvalidDataException: Invalid header marker
   at SharpAESCrypt.SharpAESCrypt.ReadEncryptionHeader(String password, Boolean skipFileSizeCheck)
   at SharpAESCrypt.SharpAESCrypt..ctor(String password, Stream stream, OperationMode mode, Boolean skipFileSizeCheck)
   at Duplicati.Library.Encryption.AESEncryption.Decrypt(Stream input)
   at Duplicati.Library.Encryption.EncryptionBase.Decrypt(Stream input, Stream output)
   at Duplicati.Library.Main.BackendManager.<>c__DisplayClass34_0.<coreDoGetPiping>b__0()
   at System.Threading.Tasks.Task.Execute()
   --- End of inner exception stack trace ---
   at Duplicati.Library.Main.AsyncDownloader.AsyncDownloaderEnumerator.AsyncDownloadedFile.get_TempFile()
   at Duplicati.Library.Main.Operation.RecreateDatabaseHandler.DoRun(LocalDatabase dbparent, Boolean updating, IFilter filter, NumberedFilterFilelistDelegate filelistfilter, BlockVolumePostProcessor blockprocessor)
   at Duplicati.Library.Main.Operation.RecreateDatabaseHandler.Run(String path, IFilter filter, NumberedFilterFilelistDelegate filelistfilter, BlockVolumePostProcessor blockprocessor)
   at Duplicati.Library.Main.Operation.RepairHandler.RunRepairLocal(IFilter filter)
   at Duplicati.Library.Main.Operation.RepairHandler.Run(IFilter filter)
   at Duplicati.Library.Main.Controller.<>c__DisplayClass20_0.<Repair>b__0(RepairResults result)
   at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Action`1 method)
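For reference, the "header marker" being checked here is the start of the AES Crypt file format: a valid encrypted volume begins with the three ASCII bytes "AES" (0x41 0x45 0x53) followed by a format-version byte. A quick way to see what a suspect file actually starts with is a hex dump of its first bytes; the filename in the comment is a placeholder, and the demo file below is fabricated just so the command can be run anywhere:

```shell
# Inspect the first bytes of a manually downloaded remote volume, e.g.:
#   head -c 16 suspect.dblock.zip.aes | od -A x -t x1z
# Demo with a fabricated file that has the expected "AES" magic bytes:
printf 'AES\002\000demo-payload' > /tmp/demo.aes
head -c 16 /tmp/demo.aes | od -A x -t x1z
```

If the dump shows all zeros, or readable text such as an HTML error page from the storage provider, the file was damaged at or after upload rather than being a decryption problem.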

Welcome to the forum! I edited your post to improve the formatting. (I just added ~~~ before and after the output you pasted; please see here for details.)

Hello @luistalavera, sorry to hear you’re having this issue. I have a few questions to hopefully help narrow down the potential causes - feel free to answer as many or as few as you want/can (but the more answers we get, the more likely we can figure out what’s going on):

  1. When you recreate the backup job is it just the local job you recreate or are you deleting the backup and starting over from “nothing”?
  2. When you recreate the backup job is it all by hand or are you exporting the existing job as a file then importing it again?
  3. What OS are you running on?
  4. What destination are you going to?
  5. Are you using any non-default settings in your backup (speed throttles, volume size, block size, encryption type, etc.)?

I tried Repair, and then Delete and Repair on the database, but it did not work. I always have to recreate the job from scratch. I use Windows 2008 R2, the destination is Google Drive with AES-256 encryption, the backup volume is 65GB, and the block size is 50MB.

What was the error message shown when it didn’t work?

I ran into the same issue today and have no clue why and how I can fix it.

This is from the duplicati.log

2019-03-25 16:11:54 +01 - [Error-Duplicati.Library.Main.Operation.BackupHandler-FatalError]: Fatal error
System.Security.Cryptography.CryptographicException: Invalid header marker ---> System.IO.InvalidDataException: Invalid header marker
  at SharpAESCrypt.SharpAESCrypt.ReadEncryptionHeader (System.String password, System.Boolean skipFileSizeCheck) [0x00054] in <5e494c161b2e4b968957d9a34bd81877>:0
  at SharpAESCrypt.SharpAESCrypt..ctor (System.String password, System.IO.Stream stream, SharpAESCrypt.OperationMode mode, System.Boolean skipFileSizeCheck) [0x001af] in <5e494c161b2e4b968957d9a34bd81877>:0
  at (wrapper remoting-invoke-with-check) SharpAESCrypt.SharpAESCrypt..ctor(string,System.IO.Stream,SharpAESCrypt.OperationMode,bool)
  at Duplicati.Library.Encryption.AESEncryption.Decrypt (System.IO.Stream input) [0x00000] in <f3dfd7d9192a41f6af77ad64669f738a>:0
  at Duplicati.Library.Encryption.EncryptionBase.Decrypt (System.IO.Stream input, System.IO.Stream output) [0x00000] in <f3dfd7d9192a41f6af77ad64669f738a>:0
  at Duplicati.Library.Main.BackendManager+<>c__DisplayClass36_0.<coreDoGetPiping>b__0 () [0x00029] in <fbbeda0cad134e648d781c1357bdba9c>:0
  at System.Threading.Tasks.Task.InnerInvoke () [0x0000f] in <7686e0988c5144ca8abb303461e0b835>:0
  at System.Threading.Tasks.Task.Execute () [0x00000] in <7686e0988c5144ca8abb303461e0b835>:0
   --- End of inner exception stack trace ---

Any ideas? I am using Duplicati 2.0.4.15_canary_2019-02-06 and haven’t made any software changes so far.

The second backup job I have is working just fine; it just copies to Backblaze instead of using SCP to a remote host.

Thanks for any tips.

I changed to 100MB files.


Had the same issue again.
Restarting Duplicati fixed it in my case.

I migrated my installation to the canary Docker container because I did not trust the QNAP mono implementation.

But still the same :frowning: My 1.7TB backup is throwing the same error over and over again, and sometimes a restart of the container fixes it, sometimes not.
Fun fact: the second job, with 25GB, always runs through without an error.

Both have the same destination, use the same protocol to transfer (SSH), and the other settings are similar; only the size differs.

This is starting to bug me.
Running Duplicati - 2.0.4.17_canary_2019-04-11

System.Security.Cryptography.CryptographicException: Invalid header marker ---> System.IO.InvalidDataException: Invalid header marker
  at SharpAESCrypt.SharpAESCrypt.ReadEncryptionHeader (System.String password, System.Boolean skipFileSizeCheck) [0x00054] in <5e494c161b2e4b968957d9a34bd81877>:0
  at SharpAESCrypt.SharpAESCrypt..ctor (System.String password, System.IO.Stream stream, SharpAESCrypt.OperationMode mode, System.Boolean skipFileSizeCheck) [0x001af] in <5e494c161b2e4b968957d9a34bd81877>:0
  at (wrapper remoting-invoke-with-check) SharpAESCrypt.SharpAESCrypt..ctor(string,System.IO.Stream,SharpAESCrypt.OperationMode,bool)
  at Duplicati.Library.Encryption.AESEncryption.Decrypt (System.IO.Stream input) [0x00000] in <8a774389440c4ca192f23b405ad0f041>:0
  at Duplicati.Library.Encryption.EncryptionBase.Decrypt (System.IO.Stream input, System.IO.Stream output) [0x00000] in <8a774389440c4ca192f23b405ad0f041>:0
  at Duplicati.Library.Main.BackendManager+<>c__DisplayClass36_0.<coreDoGetPiping>b__0 () [0x00029] in <e737745a39a143f09a82fd4f2eaa262c>:0
  at System.Threading.Tasks.Task.InnerInvoke () [0x0000f] in <3833a6edf2074b959d3dab898627f0ac>:0
  at System.Threading.Tasks.Task.Execute () [0x00000] in <3833a6edf2074b959d3dab898627f0ac>:0
   --- End of inner exception stack trace ---
  at Duplicati.Library.Main.AsyncDownloader+AsyncDownloaderEnumerator+AsyncDownloadedFile.get_TempFile () [0x00008] in <e737745a39a143f09a82fd4f2eaa262c>:0
  at Duplicati.Library.Main.Operation.CompactHandler.DoCompact (Duplicati.Library.Main.Database.LocalDeleteDatabase db, System.Boolean hasVerifiedBackend, System.Data.IDbTransaction& transaction, Duplicati.Library.Main.BackendManager sharedBackend) [0x0026c] in <e737745a39a143f09a82fd4f2eaa262c>:0
  at Duplicati.Library.Main.Operation.DeleteHandler.DoRun (Duplicati.Library.Main.Database.LocalDeleteDatabase db, System.Data.IDbTransaction& transaction, System.Boolean hasVerifiedBacked, System.Boolean forceCompact, Duplicati.Library.Main.BackendManager sharedManager) [0x00399] in <e737745a39a143f09a82fd4f2eaa262c>:0
  at Duplicati.Library.Main.Operation.BackupHandler.CompactIfRequired (Duplicati.Library.Main.BackendManager backend, System.Int64 lastVolumeSize) [0x000a5] in <e737745a39a143f09a82fd4f2eaa262c>:0
  at Duplicati.Library.Main.Operation.BackupHandler.RunAsync (System.String[] sources, Duplicati.Library.Utility.IFilter filter) [0x01028] in <e737745a39a143f09a82fd4f2eaa262c>:0
  at CoCoL.ChannelExtensions.WaitForTaskOrThrow (System.Threading.Tasks.Task task) [0x00050] in <6973ce2780de4b28aaa2c5ffc59993b1>:0
  at Duplicati.Library.Main.Operation.BackupHandler.Run (System.String[] sources, Duplicati.Library.Utility.IFilter filter) [0x00008] in <e737745a39a143f09a82fd4f2eaa262c>:0
  at Duplicati.Library.Main.Controller+<>c__DisplayClass13_0.<Backup>b__0 (Duplicati.Library.Main.BackupResults result) [0x00035] in <e737745a39a143f09a82fd4f2eaa262c>:0
  at Duplicati.Library.Main.Controller.RunAction[T] (T result, System.String[]& paths, Duplicati.Library.Utility.IFilter& filter, System.Action`1[T] method) [0x00271] in <e737745a39a143f09a82fd4f2eaa262c>:0
  at Duplicati.Library.Main.Controller.Backup (System.String[] inputsources, Duplicati.Library.Utility.IFilter filter) [0x00068] in <e737745a39a143f09a82fd4f2eaa262c>:0
  at Duplicati.Server.Runner.Run (Duplicati.Server.Runner+IRunnerData data, System.Boolean fromQueue) [0x002f7] in <c71fcbcde8624d2e8a8aa9e50b627014>:0

I did some testing over the last days, and as soon as I set the backup retention to “keep all backups” the error is gone. Before that I had “smart backup retention”.

What does this mean? Is my backup now broken? To me it seems like Duplicati is unable to open older backup files to “delete” them based on the former retention setting.

This happened again today with another backup set I have, but luckily the next backup did not raise the same error again.
I am not sure why, or what triggers it, but I am slowly starting to doubt the reliability of my backups. Any ideas, or information I can provide, to help fix this?

Although it was asked lightly earlier, let me ask if anybody is using any sort of speed throttle on upload or download, e.g. from --throttle-upload, --throttle-download, or the GUI throttle button at the top of the page?

Upload throttle corrupts backup, especially OneDrive. Analyzed, with code proposed. #3787
(the OneDrive comment just means it was the worst of the several tested; there could easily be more)

Determining the nature of the corruption would help, but may involve looking in SQLite databases (not so difficult, but probably needs DB Browser for SQLite or similar installed). Some simpler tests are to check the size of the corrupted file, convert it to hexadecimal, and see if it looks too round; that might be a filesystem error.

--upload-verification-file will make a text file, duplicati-verification.json, that describes the expected remote files. Running DuplicatiVerify.* from the Duplicati utility-scripts folder can test that destination files look as expected, assuming you can either get that sort of destination access or are willing to make the files more accessible. However, if a complete download is required anyway, you might just as well run the test command over all files; that should do a hash check of all remote files. You can also do your own simple manual check of a particular file by looking it up in duplicati-verification.json, to at least see whether the actual size is as expected.
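To illustrate the manual size/hash check (a sketch, not an official procedure): Duplicati records each remote file's length and a Base64-encoded SHA-256 hash, and you can compute the same values yourself for a downloaded file and compare. The file created below is just a stand-in for a downloaded volume:

```shell
# Stand-in for a manually downloaded remote volume:
printf 'abc' > /tmp/demo.bin
# Base64-encoded SHA-256, the form Duplicati records for remote files:
openssl dgst -sha256 -binary /tmp/demo.bin | base64
# Exact size in bytes, to compare against the recorded size:
wc -c < /tmp/demo.bin
```

If either value differs from what duplicati-verification.json (or the job database) records, that file is the corrupt one.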

Stepping back to more ordinary steps: logs can be good for seeing what led up to the problem, however with problems in older files (how old is the bad file, based on remote info?), you can’t get details retroactively. Sometimes one might get lucky and find relevant info in the default job logs or About --> Show log --> Stored. Beyond that, use --log-file with --log-file-log-level=retry (a reasonable compromise, but other levels may do).

Thanks, lots of mights and coulds to look up and maybe find something :wink:
I do have a backup set which I have left in place but am not using any more, because it raised the above error every time I ran it.
I had the error once with another backup set too, so what I can offer is my time and my backup sets to find the error, but someone with the capability of fixing the issue in the end needs to guide me.
A step-by-step approach would be great; it can also be on GitHub or via mail or something like that, but I would appreciate it if this problem can actually be fixed with the time everyone spends. Otherwise it is a collection of data nobody is going to use anyway.

So, who is in to hopefully get this figured out with the backup sets I have? Perhaps someone else will jump in as well.

Asking again: if the answer is “yes”, the code fix may just need to be done. If “no”, then the big chase starts. Generally there can be no promise that a chase will result immediately in a fix, but data helps in any event.

The best way to preserve data is not in the forum (where findings are scattered), but in GitHub issues (now at 820).

Some of the ideas were for this, not specifically for chasing the issue. Rather than doubt, one can check… Alternatively (and easier), one can add the Advanced option --backup-test-samples to raise routine verification.

That cuts it down to a fraction of the small number of active developers (busy with 820 issues), but we’ll see…

Thanks, let’s work on some things using the forum; afterwards I will collect and summarize the findings on GitHub.

Concerning the throttling: I am not doing any throttling. The backup set which started to fail some time ago, and is still failing, has the following “switches” set; I just used the “export” functionality of the set to provide the information. The others are configured very similarly, and besides one error 2 days ago they work just fine.

~~~
mono /opt/duplicati/Duplicati.CommandLine.exe backup "ssh://x.x.x.:22/mnt/backup/qpkg/?auth-username=xxx&ssh-fingerprint=ssh-rsa 2048 16:43:4A:A0:B1:8C:81:76:6B:8B:FD:86:9F:AC:08:A0" /share/QPKG/ /share/Container/data/ --backup-name="Service Configs [remote]" --dbpath=/data/Duplicati/74776788828989867881.sqlite --encryption-module=aes --compression-module=zip --dblock-size=50MB --keep-time=6M --blocksize=500KB --backup-test-samples=1 --disable-file-scanner=true --send-http-url=https://www.duplicati-monitoring.com/log/xxx --exclude-files-attributes=temporary --disable-module=console-password-input
~~~

This still happens; I’m not sure why.
Some backups run just fine; on other days it fails with the invalid header marker error.

I have no clue how to collect more information without getting help from someone else :frowning:

This is an ongoing issue and it would be great if someone could just help a little.
I would even give access to the instance if this would speed things up, but in the last 7 days every backup has failed because of this.
This makes me nervous. Restore still seems to work, for what I have tested, but for how long?

THANKS

I gave nearly ten approaches earlier. Are none of them even remotely meaningful? I could elaborate; however, there aren’t nearly enough volunteers (of all sorts) to allow on-site debugging, even if it were technically feasible, which it frequently isn’t due to limited tools, the need for highly specialized expertise, etc.

How are you testing restore? It’s unfortunately not very meaningful if restoring to the same system with the original files still around, because the original files will be checked and used as restore material relevant to the requested restore; using local data blocks runs faster than downloading blocks.

--no-local-blocks, added and checkboxed in Advanced options on job screen 5 (Options), will disable this sort of optimization. Direct restore from backup files to another machine won’t have this issue.

We need to find out better how badly your backup might be damaged. Ideally you would be able to direct restore all of it to a different system, which simulates recovery from disaster loss of originals.
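From the command line, a disaster-recovery style test could look like the sketch below; the storage URL, passphrase, and restore path are placeholders for your own job settings, not a copy-paste recipe:

```shell
# Hypothetical direct-restore test: restore everything to a scratch directory,
# ignoring local source data so every block is actually downloaded.
mono Duplicati.CommandLine.exe restore \
  "ssh://backuphost:22/mnt/backup/qpkg/" "*" \
  --passphrase="..." \
  --restore-path=/tmp/restore-test \
  --no-local-blocks=true
```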

I’m not sure what constraints you have on disk space, bandwidth, or metered service that would interfere. What level of access do you have to the SFTP server? Can you directly access files? Run scripts? Do you know what SFTP server you have, and what access you have to examine the files closely?

At what point do they fail? There was a theory earlier that compact runs and hits some bad old files. Verification of backend files after backup seems more likely, because compact likely won’t run every time.
EDIT: Viewing the log files of a backup job will show CompactResults from a compact run, if one did run. A non-zero RetryAttempts would be expected in a failure case, because it retries before it fails.

In Advanced options, adding and checking --no-auto-compact can avoid its downloads. Adding --backup-test-samples and setting it to 0 can prevent the downloads after backup. These are only for tests, because they hide the issue; however, they do help confirm or refute that you have some bad files…
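As a concrete (hypothetical) example of the full check mentioned earlier, the test command can download and hash-verify remote volumes; "all" tests everything rather than a sample, and the URL and passphrase below are placeholders for your own job settings:

```shell
# Downloads and hash-checks all remote dlist/dindex/dblock files.
mono Duplicati.CommandLine.exe test \
  "ssh://backuphost:22/mnt/backup/qpkg/" all \
  --passphrase="..." \
  --full-remote-verification=true
```

Any file it reports as failing is a candidate for the hex-dump and size checks described above.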

If you have enough bad files, starting over anew is probably the best plan, if that would be an option.

I just wanted to add that I have had similar experiences. A backup started about a year ago worked fine until, three weeks ago, it stopped with the invalid header marker error (v2.0.4.5 under Windows 10, backing up to onedrivev2, usually not throttled, 50MB block size).

The error messages were not very helpful, as they did not contain any specific information regarding a file:

System.Security.Cryptography.CryptographicException: Invalid header marker ---> System.IO.InvalidDataException: Invalid header marker
   at SharpAESCrypt.SharpAESCrypt.ReadEncryptionHeader(String password, Boolean skipFileSizeCheck)
   at SharpAESCrypt.SharpAESCrypt..ctor(String password, Stream stream, OperationMode mode, Boolean skipFileSizeCheck)
   at Duplicati.Library.Encryption.AESEncryption.Decrypt(Stream input)
   at Duplicati.Library.Encryption.EncryptionBase.Decrypt(Stream input, Stream output)
   at Duplicati.Library.Main.BackendManager.<>c__DisplayClass36_0.<coreDoGetPiping>b__0()
   at System.Threading.Tasks.Task.Execute()
   --- End of inner exception stack trace ---
   at Duplicati.Library.Main.AsyncDownloader.AsyncDownloaderEnumerator.AsyncDownloadedFile.get_TempFile()
   at Duplicati.Library.Main.Operation.CompactHandler.DoCompact(LocalDeleteDatabase db, Boolean hasVerifiedBackend, IDbTransaction& transaction, BackendManager sharedBackend)
   at Duplicati.Library.Main.Operation.DeleteHandler.DoRun(LocalDeleteDatabase db, IDbTransaction& transaction, Boolean hasVerifiedBacked, Boolean forceCompact, BackendManager sharedManager)
   at Duplicati.Library.Main.Operation.BackupHandler.CompactIfRequired(BackendManager backend, Int64 lastVolumeSize)
   at Duplicati.Library.Main.Operation.BackupHandler.<RunAsync>d__20.MoveNext()
   --- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at CoCoL.ChannelExtensions.WaitForTaskOrThrow(Task task)
   at Duplicati.Library.Main.Controller.<>c__DisplayClass14_0.<Backup>b__0(BackupResults result)
   at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Action`1 method)
   at Duplicati.Library.Main.Controller.Backup(String[] inputsources, IFilter filter)
   at Duplicati.Server.Runner.Run(IRunnerData data, Boolean fromQueue)

I couldn’t determine the cause. I tried different things like rebuilding the database and switching between smart retention and other retention strategies, but nothing helped. I did find that, according to Duplicati, the backup had a size of ~148 GB while, according to OneDrive, the folder was ~249 GB, so it must have gone out of sync somehow. As I fiddled around a bit over the year, I have no idea what exactly might have contributed to it going out of sync. I started a new backup, as that was the least time-consuming way to continue backups.

But it would be great if Duplicati offered some remedy in the case of invalid header markers, like replacing the corrupt file with a new one if the source files still exist in the same version as in the corrupt file, or leading the user through a step-by-step solution of the problem, and/or producing error messages with better information about what went wrong.

1 Like

Welcome to the forum @llkbennprz

See Upload throttle corrupts backup, especially OneDrive. Analyzed, with code proposed. #3787

If you happen to still have your old backup (after starting the new one), you can look for its symptoms in multiple ways, as listed earlier. If you can identify the bad file, you can look at it with a hex viewer.

Commonly, it seemed like larger files, such as the default 50MB dblock, were more likely to get hit. Dblock file problems are worse than dlist and dindex problems, because the latter can be fixed by manually deleting the individual file and then running Repair; assuming the problem didn’t also get into the DB (e.g. via Recreate), the DB will regenerate the file.

Dblock files hold the backed-up file data, so, for example, if the damage is to a file that has since been deleted, its backup is gone.
The AFFECTED command can show what the loss of a given destination file means to a backup.
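A hypothetical invocation, with a made-up remote filename and placeholder URL/passphrase; it reports which source files and backup versions depend on that destination file:

```shell
# The filename is an example; use the corrupt volume's name from your logs.
mono Duplicati.CommandLine.exe affected \
  "ssh://backuphost:22/mnt/backup/qpkg/" \
  duplicati-b1234567890abcdef.dblock.zip.aes \
  --passphrase="..."
```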

Fix implementation of ThrottledStream.Write #3811 was the fix. Throttling even once could have damaged some files, and not throttling afterwards would mean that the damage might take a while to notice, although verification does try to balance its test sample, so I’d have expected it to notice sooner.

Regardless, that’s my guess for a OneDrive throttle issue that comes back in the form of bad files. Additional information might be able to refine or refute it. There’s no beta yet with that throttle fix.

This might be what you want (I’m not 100% certain), but in practice it didn’t work, and it’s now off by default. I’m also noticing that the Advanced Options list doesn’t show that option, but now it would need to be re-tested:

v2.0.3.10-2.0.3.10_canary_2018-08-30

Removed automatic attempts to rebuild dblock files as it is slow and rarely finds all the missing pieces (can be enabled with --rebuild-missing-dblock-files).