Failed: Invalid header marker

@ts678
I am looking forward to what is next :slight_smile:

*** Hash check failed for file:  /mnt/backup/nextcloud-data/duplicati-b3ab98cd26a2e469fa0c8262c7b96ff87.dblock.zip.aes
*** Hash check failed for file:  /mnt/backup/nextcloud-data/duplicati-bd2d9bf5f4fa54ee4a3d280718c7c8aed.dblock.zip.aes
*** Hash check failed for file:  /mnt/backup/nextcloud-data/duplicati-bd742daac0957402f81e85c35a5be6662.dblock.zip.aes
*** Hash check failed for file:  /mnt/backup/nextcloud-data/duplicati-bfcc04033b8f54b61a1c91d24fa04fb17.dblock.zip.aes

I hope I caught all of them; this whole thing took ages.
Besides continuing with this investigation, I decided to split this backup into smaller parts instead of keeping one big 2TB backup :slight_smile:

Is the timestamp of duplicati-bd2d9bf5f4fa54ee4a3d280718c7c8aed.dblock.zip.aes recent, meaning it didn’t exist yet during the previous search for files with long runs of NUL characters? Can you hexdump it, and if there isn’t such a run, what does the start of the file look like in the hexdump? The expected first 5 bytes are shown above.

What are the lengths of the 4 files? Sometimes a length that falls on a suspiciously even binary boundary points to the filesystem rather than Duplicati. Alternatively, the previous throttling might have done some damage, although the NUL runs look different…

Regardless, damaged dblock files mean some data is gone, and I’m glad it looks like there were only 4 files. Inventory of files that are going to be corrupted shows how to use the affected command linked earlier to figure out which source files those damaged dblock files would impact. The easiest way to run the command is probably in the web UI Commandline: change the Command dropdown at the top to affected, and replace the Commandline arguments box with the simple filename (no path prefix) of the dblock. You can try one first, or put all four in at once on separate lines. Then go to the bottom to run the command, and look over the results to see whether the files that used the damaged dblock were critical.
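
If you prefer a terminal over the web UI, roughly the same thing can be done with the affected command from the CLI. A minimal sketch, assuming a Linux install run under mono; the install path, backend URL, and --dbpath below are illustrative and will differ on your system:

mono /usr/lib/duplicati/Duplicati.CommandLine.exe affected \
  file:///mnt/backup/nextcloud-data \
  duplicati-bd2d9bf5f4fa54ee4a3d280718c7c8aed.dblock.zip.aes \
  --dbpath=/root/.config/Duplicati/XXXXXXXXXX.sqlite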

There are a lot of ways that things could go depending on what shows up, ranging from trying recovery methods to starting over again, which is often the easiest thing, but has some definite drawbacks to it…

Ideally we’d also work out what went wrong sometime to cause 4 possibly damaged files on destination.

The timestamp for duplicati-bd2d9bf5f4fa54ee4a3d280718c7c8aed.dblock.zip.aes is:

 ls -la duplicati-bd2d9bf5f4fa54ee4a3d280718c7c8aed.dblock.zip.aes
-rwxrwx--- 1 backup backup 262060045 Aug  5  2018 duplicati-bd2d9bf5f4fa54ee4a3d280718c7c8aed.dblock.zip.aes

The hexdump of the above-mentioned file:

hexdump -C duplicati-bd2d9bf5f4fa54ee4a3d280718c7c8aed.dblock.zip.aes | head
00000000  41 45 53 02 00 00 21 43  52 45 41 54 45 44 5f 42  |AES...!CREATED_B|
00000010  59 00 53 68 61 72 70 41  45 53 43 72 79 70 74 20  |Y.SharpAESCrypt |
00000020  76 31 2e 33 2e 31 2e 30  00 80 00 00 00 00 00 00  |v1.3.1.0........|
00000030  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
000000a0  00 00 00 00 00 00 00 00  00 00 00 00 35 74 05 c9  |............5t..|
000000b0  7d 96 98 60 e8 83 c6 73  c7 1c 67 0c eb 6c fa 18  |}..`...s..g..l..|
000000c0  6a fb 03 93 87 f5 1c 78  e5 b5 2a 95 a4 88 a0 4a  |j......x..*....J|
000000d0  23 c0 d6 d6 74 63 af a8  ac 11 4d 22 04 ca 0a 96  |#...tc....M"....|
000000e0  98 24 d5 1f 88 bc 61 9f  bf ca c5 53 8c 28 22 e6  |.$....a....S.(".|


hexdump -C duplicati-bd2d9bf5f4fa54ee4a3d280718c7c8aed.dblock.zip.aes | grep '00 00 00 00 00 00 00 00'
00000030  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
000000a0  00 00 00 00 00 00 00 00  00 00 00 00 35 74 05 c9  |............5t..|
04d00000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|

For me it would be interesting to figure out why this happened and fix the error, even if it means I lose some data, as long as the backup finishes successfully at the end and is consistent again :slight_smile: Which it is not right now.
Is there a way to remove those blocks to get a consistent backup set again? That would fix everything, and I assume not only for me :smiley:

I have already started to split this big backup set into multiple smaller ones to reduce the impact of this problem in the future. If it happens again, it will only affect a smaller part of this 2TB backup :slight_smile:
Those smaller backups have already gone through their first few rounds, so my actual data is safe.

So duplicati-bd2d9bf5f4fa54ee4a3d280718c7c8aed.dblock.zip.aes looks like a 2018 file that passed the first scan for NUL rows, but now has a few, and also a bad hash? The header at least looks right though.

If the above is true, maybe the file developed an error somehow. You can check timestamps further, e.g. both ls -l and ls -lc, for hints that a program changed it. If not, does your QNAP have the smartctl command to check how healthy your drives look, or other metrics? A NAS often has redundancy, but does it log when that redundancy actually has to be used? Linux systems may also log I/O problems to the system logs.

The affected command run as mentioned earlier should be able to tell you the losses of deleted dblocks, and Disaster Recovery as mentioned earlier gives steps for recovery from removal. I suggest you move the dblocks to a different folder just in case something goes wrong and we wind up wanting them again, although the three with the bad header are probably lost. If you like, try manual decrypt and unzip of the one with the good header but bad hash (and small run of NULs) to see whether there’s anything left of it. You can run SharpAESCrypt under mono from the Duplicati install folder. Help text for that is found here. AES Crypt is another option, but you might have to move the bad file to some other system to decrypt it.
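
For the manual decrypt idea, a rough sketch of what that might look like (assuming SharpAESCrypt’s usual e|d <password> <from> <to> argument order from its help text; the install path and output paths are illustrative):

cd /usr/lib/duplicati
mono SharpAESCrypt.exe d <your-passphrase> \
  /mnt/backup/nextcloud-data/duplicati-bd2d9bf5f4fa54ee4a3d280718c7c8aed.dblock.zip.aes \
  /tmp/test.zip
unzip -t /tmp/test.zip

The unzip -t at the end just tests the archive, to see how much of the zip content survived if decryption succeeds at all.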

Well, I ran the command from the previous post again, just to verify the Invalid header markers once more. Nothing changed, so the new file which popped up during the verification process (duplicati-bd2d9bf5f4fa54ee4a3d280718c7c8aed.dblock.zip.aes) does not have an invalid header, and was therefore not caught previously.

# ls -l duplicati-bd2d9bf5f4fa54ee4a3d280718c7c8aed.dblock.zip.aes
-rwxrwx--- 1 backup backup 262060045 Aug  5  2018 duplicati-bd2d9bf5f4fa54ee4a3d280718c7c8aed.dblock.zip.aes
# ls -lc duplicati-bd2d9bf5f4fa54ee4a3d280718c7c8aed.dblock.zip.aes
-rwxrwx--- 1 backup backup 262060045 Nov 22  2018 duplicati-bd2d9bf5f4fa54ee4a3d280718c7c8aed.dblock.zip.aes

smartctl is not an option on that setup; the Raspberry Pi hosting the USB disk does not expose S.M.A.R.T. information for that disk/USB controller. I will try to figure something out for this situation, because it is certainly not optimal. Anyhow, if the disk were starting to fail, why is the damaged data all from 2018, roughly one file per month? Disk failures would be more random, so I must conclude this is not the reason. And I am doing backups every day, not on a “monthly” basis.

# ls -l /mnt/backup/nextcloud-data/duplicati-bd2d9bf5f4fa54ee4a3d280718c7c8aed.dblock.zip.aes
-rwxrwx--- 1 backup backup 262060045 Aug  5  2018 /mnt/backup/nextcloud-data/duplicati-bd2d9bf5f4fa54ee4a3d280718c7c8aed.dblock.zip.aes
# ls -l /mnt/backup/nextcloud-data/duplicati-bfcc04033b8f54b61a1c91d24fa04fb17.dblock.zip.aes
-rwxrwx--- 1 backup backup 261810285 Sep 13  2018 /mnt/backup/nextcloud-data/duplicati-bfcc04033b8f54b61a1c91d24fa04fb17.dblock.zip.aes
# ls -l /mnt/backup/nextcloud-data/duplicati-b3ab98cd26a2e469fa0c8262c7b96ff87.dblock.zip.aes
-rwxrwx--- 1 backup backup 261920237 Okt  3  2018 /mnt/backup/nextcloud-data/duplicati-b3ab98cd26a2e469fa0c8262c7b96ff87.dblock.zip.aes
# ls -l /mnt/backup/nextcloud-data/duplicati-bd742daac0957402f81e85c35a5be6662.dblock.zip.aes
-rwxrwx--- 1 backup backup 261947757 Nov  7  2018 /mnt/backup/nextcloud-data/duplicati-bd742daac0957402f81e85c35a5be6662.dblock.zip.aes

The affected command for duplicati-bd2d9bf5f4fa54ee4a3d280718c7c8aed.dblock.zip.aes showed that three files are affected, two videos and a picture. All three files still exist in their source location and are not corrupt, and are therefore still usable.

Right now I am downloading the 4 files which we have identified during this process, to test whether they can still be decrypted manually.

Also, one thing that came to my mind: I used “Smart backup retention” during this time period. This would also explain why those files are roughly a month apart. I switched away from “Smart backup retention” because I had not fully understood how it merges data after some time, and a 6-month backup retention is actually fine for my use case.

Makes sense. I was losing track of the steps. So the header scan missed this one, and it was caught by the full hash test, while it possibly could have been caught by the NUL-runs scan, except that that scan wasn’t run on it.
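
If you ever want to re-run such a NUL-runs scan across the whole destination yourself, a rough (and, over 2TB, slow) sketch with standard Linux tools could look like this; the threshold of 100 rows is arbitrary:

for f in /mnt/backup/nextcloud-data/duplicati-*.dblock.zip.aes; do
  # -v stops hexdump from collapsing repeated lines into '*', so every 16-byte row is counted
  n=$(hexdump -v -C "$f" | grep -c '00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00')
  [ "$n" -gt 100 ] && echo "$f: $n all-zero rows"
done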

Upload throttle corrupts backup, especially OneDrive. Analyzed, with code proposed. #3787 may explain the runs of NULs for at least one person in this topic who was throttling, but you’re not, so it’s less likely.

Good question. Is anything else (even source files) changed monthly? Old dblock files are not rewritten, but the data they contain can gradually become obsolete as older backup versions are deleted by your retention rules. When enough data is obsolete, a compact occurs to reclaim the otherwise-wasted space. You can look around to see which other old files you still have from 2018, and see if there’s any pattern.
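
For example, a quick (locale-dependent, since it matches the year column of ls -l) way to see which dblock files still carry a 2018 date might be:

ls -l /mnt/backup/nextcloud-data/duplicati-*.dblock.zip.aes | grep ' 2018 '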

Then there’s your retention change theory. BTW, there’s no merging; it’s just a thinning out to less closely spaced versions, sacrificing the in-between views of the files that the removed backups provided.

If they are also unchanged since the 2018 backup, that would help explain why the dblocks are still around. This would be an ideal situation for the --rebuild-missing-dblock-files option, but it doesn’t seem to work in the small test I ran, and I’m not seeing much encouragement from other forum posts about it…

Lacking the ideal fix, the fallback is probably to check the other bad dblock files, then follow the disaster recovery article to see if you can just get them out of the way so they stop being complained about, then run the backup again to back up the same source files (if they are still intact). They just won’t be in old versions…
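
For reference, the disaster recovery path uses the list-broken-files and purge-broken-files commands, which (like affected) can also be run from the web UI Commandline. A sketch only, with illustrative URL and --dbpath, and your other usual options (e.g. passphrase) added as needed:

# preview which source file entries reference the missing/damaged volumes
mono /usr/lib/duplicati/Duplicati.CommandLine.exe list-broken-files \
  file:///mnt/backup/nextcloud-data --dbpath=/root/.config/Duplicati/XXXXXXXXXX.sqlite

# then remove those references so the backup can become consistent again
mono /usr/lib/duplicati/Duplicati.CommandLine.exe purge-broken-files \
  file:///mnt/backup/nextcloud-data --dbpath=/root/.config/Duplicati/XXXXXXXXXX.sqlite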

If you keep having trouble, and you’re willing to take some risk, you might consider switching to Canary, which has numerous data integrity bug fixes that aren’t in Beta yet. Try this on a less critical system that is hitting the error too often. If you land on a good Canary (recent ones seem good), you can change the Settings update channel to Beta and wait for it. Or if you’re going to stay on Canary, at least read the release notices.

I do have hundreds, perhaps thousands of files from 2018. These are mainly pictures automatically uploaded from mobile phones to be backed up, and not all users sort through the images regularly to delete them or move them somewhere else.

Anyhow, this all came up because files were due to be compacted, I assume, and the system is unable to get rid of the “broken” ones. This is also the reason why my backup runs without an error as long as no retention policy is enabled.

I think this is a major downside of the concept of doing a “full backup” once and only incremental backups afterwards.

Right now I am splitting the one 2TB backup into smaller ones, but it is going to take some time until the uploads are finished.

After that, I am up for taking some risks :wink: and will let you know about the outcome.

As for the version: I have been using the canary channel from the beginning and never gave it much thought. I started with Duplicati 2 at a time when you needed to search for it on the old website. Anyhow, I will keep the current canary for now and then switch to beta, for the sake of productivity :smiley:

But in the end, it seems we are stuck, and Duplicati is not capable of handling this situation easily, or at all. The latter we will see after I have the new backup sets in place, most probably after Christmas.

THANKS for all your help, and I hope this can get fixed somehow in the future. Perhaps the option of creating a new manual full backup could resolve at least some of the effects of this problem.

Do you know of good examples of software handling damaged backups? We’re not sure where the damage is from, but if it was from Data degradation during storage, the article describes how some filesystem types such as ZFS, Btrfs and ReFS have protections. I see NAS vendors claiming bit rot protection, but I don’t know what it is. WIP Add par2 parity files and auto repair to backends #3879 may emerge someday to give more redundancy at the Duplicati level, but for now Duplicati depends on the storage being reliable. Errors are audited by downloading and verifying files, which isn’t particularly quick, so the default sample size is small, but it can be raised as much as tolerable by using

  --backup-test-samples (Integer): The number of samples to test after a
    backup
    After a backup is completed, some (dblock, dindex, dlist) files from the
    remote backend are selected for verification. Use this option to change
    how many. If the backup-test-percentage option is also provided, the
    number of samples tested is the maximum implied by the two options. If
    this value is set to 0 or the option --no-backend-verification is set, no
    remote files are verified
    * default value: 1

v2.0.4.11-2.0.4.11_canary_2019-01-16

Added test percentage option, thanks @warwickmm

  --backup-test-percentage (Integer): The percentage of samples to test after
    a backup
    After a backup is completed, some (dblock, dindex, dlist) files from the
    remote backend are selected for verification. Use this option to specify
    the percentage (between 0 and 100) of files to test. If the
    backup-test-samples option is also provided, the number of samples tested
    is the maximum implied by the two options. If the no-backend-verification
    option is provided, no remote files are verified.
    * default value: 0
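
As a worked example of how the two options combine (numbers purely illustrative): with 1,000 remote volumes, --backup-test-percentage=10 implies 100 samples while the default --backup-test-samples=1 implies 1, so the larger of the two, 100 files, would be verified after the backup.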

Some cloud storage providers scan their own files for bit-rot issues, and some allow downloading an object hash instead of the actual whole object, which may allow for much faster verification someday.

But the question is how to recover from a damaged backup? Fixing the --rebuild-missing-dblock-files option (or finding out how to use it, if the current use is wrong) would be a start, but data redundancy is deliberately low by design: a given (default 100 KB) source file block is stored only once, and newer uses of it are deduplicated.

Block-based storage engine describes the design move away from the historical full/incremental idea. There isn’t a traditional concept of a full backup to do on demand. There are only blocks, which might already exist from a previous backup (in which case only changes are uploaded) or might not (in which case everything is uploaded).

How the backup process works describes it in more detail, including how deduplication reuses blocks.

Creating a new manual full backup (if the concept of a full backup were introduced) would at least add more copies of the source data, but it would complicate the reference scheme, which is already slow and complex.

General best practice for those who highly value their data is multiple backups done very differently, so that program bugs, destination issues, etc. are mitigated by redundancy. Test backups well, including for a total loss of the source system. Relying on Beta software is risky, and Canary varies between worse and better.

No, I don’t. But in my experience with Duplicati, the rate of backups that end up damaged is very high, and this is not normal for an end user.

You gave nearly ten approaches, but the only one that I can easily follow is to avoid the upload throttle option. Unfortunately, if I have understood correctly, once the damage is done, removing that option doesn’t solve the issue.

The other hints are out of my reach, mainly because of time. I have a normal life with limited (even cognitive) resources, and I don’t understand much about logs, hexadecimal editors, or how to identify a damaged file among thousands.

So, the simplest thing a user like me can do is to delete a backup and start a new one, every time new damage occurs.

This is my sad truth.

I’d note that a new participant in this topic has answered. I would still like to hear from the previous poster.

There were two different throttling bugs that were fixed. One does damage on the upload, resulting in persistent damage until it is repaired, e.g. by rebuilding a dlist or dindex. The other is temporary and affects only downloads; removing the throttling clears that issue fine. Don’t rely on the upload/download names; remove all throttling.
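
For completeness, the throttle settings to check (option names from Duplicati’s advanced options; your values will differ) are --throttle-upload=<speed> and --throttle-download=<speed>, whether set per backup, in global Settings, or via the throttle control in the web UI status bar; clear all of them.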

The sad truth is that the ability to produce fixes is also very time-limited. However, huge improvements have been made since the last true Beta, 2.0.4.5 in November 2018, though “end up damaged” is too vague to say whether or not a fix applies. If you want to continue with a specific past thread, please feel free…

Although (as expected) not everyone is able to help diagnose problems at a technical level, those who are able to provide good clues help not only themselves, but also those who are looking to fix the bugs.

I used to do this all the time, and I don’t recommend current Duplicati Beta to anybody who’s not willing to take the chance of this. Basically don’t let it be your only backup of history you’d really hate to lose…

Still, starting over gets tiring. I keep enough logs to help find what broke, and this has led to some fixes.

For some who are suffering from heavy damage due to a fixed bug, the best approach is to pick up the known fix in the form of Canary builds. Problem is that one also picks up unknown new bugs in Canary.

Duplicati very much needs to get a Beta out IMO. Beta users are still running on code over a year old…

Yes, because I have also encountered the error under discussion. This thread is the first result of my Google search.

Maybe it’s my fault, but I have not understood how to identify the dlist or dindex file. What I hoped is that the Duplicati log would cite that file, but it doesn’t (if I’m not wrong). This is very frustrating. Below my message, I attach the log I received by email.

I’ve always used Canary builds. My current version should be the latest: 2.0.4.37_canary_2019-12-12. Anyway, thank you for your reply.

Failed: Invalid header marker
Details: System.Security.Cryptography.CryptographicException: Invalid header marker ---> System.IO.InvalidDataException: Invalid header marker
   in SharpAESCrypt.SharpAESCrypt.ReadEncryptionHeader(String password, Boolean skipFileSizeCheck)
   in SharpAESCrypt.SharpAESCrypt..ctor(String password, Stream stream, OperationMode mode, Boolean skipFileSizeCheck)
   in Duplicati.Library.Encryption.AESEncryption.Decrypt(Stream input)
   in Duplicati.Library.Encryption.EncryptionBase.Decrypt(Stream input, Stream output)
   in Duplicati.Library.Main.BackendManager.<>c__DisplayClass36_0.<coreDoGetPiping>b__0()
   in System.Threading.Tasks.Task.Execute()
   --- Fine della traccia dello stack dell'eccezione interna ---
   in Duplicati.Library.Main.AsyncDownloader.AsyncDownloaderEnumerator.AsyncDownloadedFile.get_TempFile()
   in Duplicati.Library.Main.Operation.CompactHandler.DoCompact(LocalDeleteDatabase db, Boolean hasVerifiedBackend, IDbTransaction& transaction, BackendManager sharedBackend)
   in Duplicati.Library.Main.Operation.DeleteHandler.DoRun(LocalDeleteDatabase db, IDbTransaction& transaction, Boolean hasVerifiedBacked, Boolean forceCompact, BackendManager sharedManager)
   in Duplicati.Library.Main.Operation.BackupHandler.CompactIfRequired(BackendManager backend, Int64 lastVolumeSize)
   in Duplicati.Library.Main.Operation.BackupHandler.<RunAsync>d__20.MoveNext()
--- Fine traccia dello stack da posizione precedente dove è stata generata l'eccezione ---
   in System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   in CoCoL.ChannelExtensions.WaitForTaskOrThrow(Task task)
   in Duplicati.Library.Main.Controller.<>c__DisplayClass14_0.<Backup>b__0(BackupResults result)
   in Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Action`1 method)

Log data:
2019-12-23 01:16:47 +01 - [Error-Duplicati.Library.Main.Operation.BackupHandler-FatalError]: Fatal error
System.Security.Cryptography.CryptographicException: Invalid header marker ---> System.IO.InvalidDataException: Invalid header marker
   in SharpAESCrypt.SharpAESCrypt.ReadEncryptionHeader(String password, Boolean skipFileSizeCheck)
   in SharpAESCrypt.SharpAESCrypt..ctor(String password, Stream stream, OperationMode mode, Boolean skipFileSizeCheck)
   in Duplicati.Library.Encryption.AESEncryption.Decrypt(Stream input)
   in Duplicati.Library.Encryption.EncryptionBase.Decrypt(Stream input, Stream output)
   in Duplicati.Library.Main.BackendManager.<>c__DisplayClass36_0.<coreDoGetPiping>b__0()
   in System.Threading.Tasks.Task.Execute()
   --- Fine della traccia dello stack dell'eccezione interna ---
   in Duplicati.Library.Main.AsyncDownloader.AsyncDownloaderEnumerator.AsyncDownloadedFile.get_TempFile()
   in Duplicati.Library.Main.Operation.CompactHandler.DoCompact(LocalDeleteDatabase db, Boolean hasVerifiedBackend, IDbTransaction& transaction, BackendManager sharedBackend)
   in Duplicati.Library.Main.Operation.DeleteHandler.DoRun(LocalDeleteDatabase db, IDbTransaction& transaction, Boolean hasVerifiedBacked, Boolean forceCompact, BackendManager sharedManager)
   in Duplicati.Library.Main.Operation.BackupHandler.CompactIfRequired(BackendManager backend, Int64 lastVolumeSize)
   in Duplicati.Library.Main.Operation.BackupHandler.<RunAsync>d__20.MoveNext()

Default logging often gives too-brief one-line summaries, dropping wanted details that would be on the following lines. This really hurts the ability to debug one-time problems. Reproducible problems just require additional logging.

Viewing the Duplicati Server Logs gives one of the broad ideas, but there are many different logs kept.

If the error is reproducible, could you please see if About → Show log → Live → Retry can show details? Alternatively, for unattended logging, you could instead set the Advanced options –log-file and –log-file-log-level=retry. Yours is in a different spot, not the after-backup verification: it looks like compact is trying to pull down something old that has turned into wasted space. If you can get a name, can you also look at its date? Unfortunately compact downloads dblock files, which contain actual data from the source files, so permanent damage is hard to recover from. If the dblock was created before about Sep 2019, the throttling bug was still there. If this is too complex, you can perhaps look over your destination files by date to see how far back they go, and whether damage from throttling might have been done. I can’t say without evidence.
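
A sketch of that unattended-logging setup as it would appear in Advanced options (the log path is illustrative):

--log-file=/var/log/duplicati/backup.log
--log-file-log-level=Retry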

Asking people to post their backup files is risky: even if they are normally encrypted, a buggy situation might differ. The hexdump command is more of a Linux thing, but Windows has some nice GUI hex editors such as Frhed.

What Storage Type did you set on your Destination screen? I trust some storage types (and some destinations) more than others. There could also be some unfixed bug that nobody has been able to track down well enough to lead to a fix.

If you like, start over, but if “the rate of backups that end up damaged is very high”, what other backups do you have? “I have also encountered the error under discussion. This thread is the first result of my Google search.” isn’t clear on whether you keep hitting this one (and only just now looked it up), or whether it’s a variety of them.

Thanks for that. Canary is the early warning system, so I hope you aren’t hitting anything newly buggy. Although it may vary unpredictably, recent Canary has been doing great for me in terms of backing up.

Gdrive (since, thanks to my edu organization, I get unlimited space - that, paradoxically, I can’t use)

What do you mean? If you mean “damage”, then the last two weird (and, for me, incomprehensible) errors are “Failed: Impossibile decriptare i dati (passphrase non valida?)” (i.e. “Failed to decrypt the data (invalid passphrase?)”) and “Failed: Invalid header marker”, that is, this one. Previously I had a lot of “Found x files that are missing from the remote storage, please run repair” errors that I was unable to correct by running repair.
If you mean “other programs”, I used EaseUS Todo Backup. Eventually I preferred Duplicati for supporting a larger collection of cloud destinations. But the cloud backup jobs are also the ones giving me the most trouble.

If that was short for Google Drive, then that means you’re not doing Local folder or drive and letting Google’s client sync to the cloud for you (as some people do, and it probably has pros and cons).

https://usage-reporter.duplicati.com/ shows Google Drive gets possibly the highest volume of cloud backups, so I’m not sure why yours seems unreliable at making files that can be read back. Are you using Advanced options or custom Remote Volume Size on the options page that may be unusual?

Here I don’t understand: I use nothing but Duplicati, with Google Drive as the destination. I don’t have any Google synchronization software installed.

I know that site, and if I remember correctly you were the one who pointed me to it.

Nothing weird, apart from the upload-throttle option, which I was forced to use since otherwise Duplicati takes all the bandwidth of my wired-and-not-so-broad connection and prevents me even from browsing.

Anyway, buone feste (happy holidays)!

No problem. It just wasn’t 100% clear originally what you had. Now I know whatever’s happening is happening directly between you and Google Drive, and sync software can’t possibly be part of this.

Set a bandwidth limit describes it, I guess, and at the time of that advice, nobody knew the damage potential. The exact nature of your file damage isn’t clear without somehow viewing the start of the file, where the header (starting with the letters AES) “should” be, but that leaves me room to speculate that later data was written over it, as the known (now fixed in Canary) bug can do. Long NUL runs are harder to explain.
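
If you can get at the file from a Linux-like shell, a minimal header check (a sketch; the filename is a placeholder) only needs the first few bytes, which should begin 41 45 53 02 00, i.e. the letters AES, as in the good example earlier in this topic:

hexdump -C -n 16 duplicati-xxxx.dblock.zip.aes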

Upload throttle corrupts backup, especially OneDrive. Analyzed, with code proposed. #3787 has an example of what your file might look like (if you look). Search for the comment below; note the missing AES:

Here’s what a suspected end-overwrote-start looks like

So the best I can do (unless you want to try to get into auditing and manual repair) is to suggest the current Canary, with throttling as you need, restarting the backup from scratch, and seeing if it holds.

Settings in Duplicati can change your Update channel back to Beta, to avoid unexpected Canary updates which may be worse than your initial one, or may be better. I hope for a Beta release soon.

Advanced options on Edit screen 5 (Options) will let you increase the verification sample after each backup. You may want to slowly raise that if you want to check more intensively for new file damage. Weighing against this plan is that, with limited internet capacity, the downloads are just more stress on it.
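
As a small example of what that looks like in practice (value illustrative), adding --backup-test-samples=2 as an Advanced option doubles the default sample, and it can be raised further or combined with --backup-test-percentage as described above.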

To you as well.

I am back from my vacation :slight_smile: and still in the process of splitting up my big backup into smaller backup sets. After this I will get back to risking something :smiley:

The purge finished, the first backup finished, and therefore the compact of the backup set also went through. The whole process took a week, but there were no errors :slight_smile: It seems I have a consistent backup again. I still need to do some tests, which will take some time because the other backups were stalled for the last week and could neither start nor finish. I have to wait until they are fine as well.

But it seems that tracking down the broken dblock files, deleting them from the backup set, and running the purge command fixed the backup set itself.

If the files affected by the dblocks I deleted are “still” or “again” in the backup set and can be restored, I am fine :slight_smile:

So far, many many many thanks for the help.

Release: 2.0.5.0 (experimental) 2020-01-03 does a much better job of keeping things consistent.
Throttling isn’t the only thing fixed, so I’d advise at least getting Beta update when it comes out…

Yes, I just disabled my Docker update process until a new beta is released.
Because of the updated DB version, I cannot go from my canary version down to the beta that is available right now.
So let’s wait until a new beta gets released.