Hash Mismatch on dblock file. How to fix?

I use Duplicati (v2.0.3.3) on my Ubuntu server to back up my media files. The target backend is a disk directly attached to the server (via an external USB enclosure). So it’s a local, unencrypted backup.

For a while now, every run has been consistently throwing the following error:

Failed to process file duplicati-b7442f2d4ccb143a1a59115c84d2ee358.dblock.zip
Duplicati.Library.Main.BackendManager+HashMismatchException: Hash mismatch on file "/tmp/dup-c8067558-2b75-4abf-977a-18595d5697d2", recorded hash: OxfruDqmQFjLiYlr9q9Ro5CGQTDYpuqQFsEp9IdMnaY=, actual hash x3KhVo1HYvWuKByN1o7Ucye9TXHfFCWd4c4Y3FUtIJE=
  at Duplicati.Library.Main.BackendManager.GetForTesting (System.String remotename, System.Int64 size, System.String hash) [0x00065] in <ae134c5a9abb455eb7f06c134d211773>:0 
  at Duplicati.Library.Main.Operation.TestHandler.DoRun (System.Int64 samples, Duplicati.Library.Main.Database.LocalTestDatabase db, Duplicati.Library.Main.BackendManager backend) [0x003f7] in <ae134c5a9abb455eb7f06c134d211773>:0 

Before this message ends up in the logs, there are “retry” messages like this:

Operation Get with file duplicati-b7442f2d4ccb143a1a59115c84d2ee358.dblock.zip attempt 5 of 5 failed with message: Hash mismatch on file "/tmp/dup-c8067558-2b75-4abf-977a-18595d5697d2", recorded hash: OxfruDqmQFjLiYlr9q9Ro5CGQTDYpuqQFsEp9IdMnaY=, actual hash x3KhVo1HYvWuKByN1o7Ucye9TXHfFCWd4c4Y3FUtIJE=
Duplicati.Library.Main.BackendManager+HashMismatchException: Hash mismatch on file "/tmp/dup-c8067558-2b75-4abf-977a-18595d5697d2", recorded hash: OxfruDqmQFjLiYlr9q9Ro5CGQTDYpuqQFsEp9IdMnaY=, actual hash x3KhVo1HYvWuKByN1o7Ucye9TXHfFCWd4c4Y3FUtIJE=
  at Duplicati.Library.Main.BackendManager.GetForTesting (System.String remotename, System.Int64 size, System.String hash) [0x00065] in <ae134c5a9abb455eb7f06c134d211773>:0 
  at Duplicati.Library.Main.Operation.TestHandler.DoRun (System.Int64 samples, Duplicati.Library.Main.Database.LocalTestDatabase db, Duplicati.Library.Main.BackendManager backend) [0x003f7] in <ae134c5a9abb455eb7f06c134d211773>:0 

I’ve searched the forums, and I’ve struck out trying to find examples where the hash mismatch happens on a dblock file. Is there a way I can save this backup set? Or do I have to nuke it and start over fresh?

Upon further searching, I found the thread Help: Hash mismatch error, and it appears that I’m having the same problem. What is interesting is that, unlike the OP of that thread, this is happening to me on an unencrypted, local backup where I’m using regular *NIX paths for both my source folders and the backup destination.

Following the advice in the linked thread, I set the option --no-backend-verification=true to see if the backup would complete if the testing wasn’t done, and indeed, it did complete successfully with this option set. When I unset the option, the error came right back. It did seem to pick a different dblock file to verify, but with the same result: the hashes don’t match.

I also reset --no-backend-verification back to false and tried setting --skip-file-hash-checks=true, and the backup completed successfully. Resetting --skip-file-hash-checks back to false put me right back where I started: every run completes with the error that the hashes don’t match.
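
(For reference, on the command line those options would be passed roughly like this; the storage URL and source path below are just placeholders for my setup.)

# run the backup without the post-backup sample verification
duplicati-cli backup file:///mnt/usb-backup/duplicati /srv/media --no-backend-verification=true

# or keep verification but skip the file hash comparison
duplicati-cli backup file:///mnt/usb-backup/duplicati /srv/media --skip-file-hash-checks=true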

The very last post in the linked thread mentions renaming a file and running a repair on the DB. He doesn’t mention which file he renamed, but if I had to guess, it was the index file. So I will attempt to rename the dindex file that was reported as failed in the last test, and see if the repair helps.

So, I tried renaming the dindex file that was causing the last hash mismatch error.

I think I made things worse. I renamed it by just adding a .bak extension to it. The database repair operation seems to have added the .bak file to the index. So it continued to throw a bad hash exception.

Worse (or maybe nothing really changed), I ran the verify files operation, and it failed on yet another dindex/dblock set.

I’m kind of stumped about what to do next. I’m thinking my best bet is to blow away the DB and do another repair, after removing the .bak file I stupidly added to the index.
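
(If I do go that route, I believe the CLI equivalent is roughly the following, with placeholder paths; as I understand it, repair rebuilds the local database from the destination when the database file is missing.)

# move the job's local database out of the way (path and random name are placeholders)
mv /home/USER/.config/Duplicati/XXXXXXXXXX.sqlite /home/USER/.config/Duplicati/XXXXXXXXXX.sqlite.old

# then let repair recreate it from the backup destination
duplicati-cli repair file:///mnt/usb-backup/duplicati --dbpath=/home/USER/.config/Duplicati/XXXXXXXXXX.sqlite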

What to do next may depend on your use; e.g., if old versions are desired, it’s worth more effort to save them.

This began as a damaged dblock file. I’m not sure of current details. The more damage, the harder this gets.

If you have space and time, and care greatly about your current backup, copy it and its databases beforehand.

If you’re comfortable using the Linux command line, you can use “unzip -t” (maybe with “find” or “xargs”) to test the archives.
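
For example, assuming the destination is something like /mnt/usb-backup/duplicati (adjust the path to yours), one way with find and xargs:

find /mnt/usb-backup/duplicati -name '*.zip' -print0 | xargs -0 -n 1 unzip -tq

With -q, unzip -t prints a one-line verdict per archive, so anything other than “No errors detected” stands out.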

Maybe dates or sizes (especially in binary) can also be clues to finding what might have damaged the files.
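
For instance, something like this (GNU find; placeholder path again) lists modification time and size for each archive, sorted by time, which makes odd dates or suspiciously short files easier to spot:

find /mnt/usb-backup/duplicati -name 'duplicati-*.zip' -printf '%TY-%Tm-%Td %TH:%TM %s %p\n' | sort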

The AFFECTED command can tell you which source files would be affected if the dblock file was unavailable.
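
Roughly like this (placeholder storage URL; if the job was defined in the web UI, point --dbpath at that job’s database so the command uses the right information):

duplicati-cli affected file:///mnt/usb-backup/duplicati duplicati-b7442f2d4ccb143a1a59115c84d2ee358.dblock.zip --dbpath=/home/USER/.config/Duplicati/XXXXXXXXXX.sqlite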

If that turns out to be the only problem file, you might feel confident enough to delete it, or move it elsewhere.

The LIST-BROKEN-FILES command could be run before the removal. If it doesn’t report anything before, it might afterwards.

The PURGE-BROKEN-FILES command could then be run to make Duplicati adapt itself to the lost dblock file.
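
If it comes to that, the sequence would be roughly as follows (same placeholder URL and --dbpath caveat as above; I believe --dry-run is honored if you want a preview first):

# report what Duplicati considers broken (read-only)
duplicati-cli list-broken-files file:///mnt/usb-backup/duplicati

# then let it adapt to the lost dblock file
duplicati-cli purge-broken-files file:///mnt/usb-backup/duplicati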

Docs » Manual » Disaster Recovery walks through a lab exercise. In the real world (after other repairs), it might be messier.

Thanks for the reply @ts678.

I looked into those commands, but ultimately they didn’t bear much fruit.

I tried seeing what the results of list-broken-files were, and it reported that nothing was broken.

And a bunch of files seem to have bad hashes, so I don’t want to go individually spelunking through the dblock files.

I was really hoping there’d be some sort of way to get a new version of the files backed up in the dblock files with the mismatched hashes. But I don’t see any way to do that without just deleting the backup and starting fresh.

So I think I’m going to go down that route. The old versions don’t really mean much to me.

Here on Windows, moving a random dblock file out of the folder temporarily did get list-broken-files errors; however, in a multiple-dblock-damage situation, I can see how that would be a lot of manual work to set up.

Rebuilding a dblock file would be hard in general (because it might have data from files no longer around); however, dblock files also contain other block types, such as file attributes (metadata) or lists of block hashes.

Your request was a bit different, because “getting a new version of the files backed up” is basically just the regular backup. The difference is that it wouldn’t go back into old dblock files but would create new ones as needed.

So if you can repair/recreate/purge your way to a backup that runs, you might be able to continue forward; however, there would likely be omissions and other oddities in older versions, plus fixing backups is work…

Unfortunately, this means that people sometimes start over without looking much at the original problem details; therefore it might occur again on that particular installation, or remain a generic issue that never gets resolved.

This is why I was encouraging you to see if unzip’s test option finds anything, check for truncated files, check dates to see if there’s a pattern to the bad files, etc. You could even sha256sum a file and base64 the digest to compare against the recorded hash…
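
For one suspect file, that would be something like the following (using the dblock name from your earlier error), since the hashes Duplicati records look like base64-encoded SHA-256 digests:

openssl dgst -sha256 -binary duplicati-b7442f2d4ccb143a1a59115c84d2ee358.dblock.zip | base64

and the output can be compared against the “recorded” and “actual” values in the log message.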

Instead of individually spelunking, you could cd to the destination and run something like this to survey all:

find . -name '*.zip' -exec unzip -t -q {} \;

and then we’d try to interpret the errors and look for causes. The mystery gets deeper if the zip test finds no problems.

As a long shot, if these errors are always on /tmp files, is your /tmp area healthy, with enough space, etc.?
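
A couple of quick checks there, for example:

df -h /tmp          # free space on whatever holds /tmp
mount | grep /tmp   # whether /tmp is a separate mount (e.g. tmpfs) and its options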

It’s certainly your call which way you want to go with this, but it’s always useful to know how problems arise.

Yeah, I decided last night to just blow away the existing backup and rebuild.

Unfortunately, this backup is my biggest backup set (~4TB) and it’s my “production” backup, so I need to get it working again sooner rather than later.

I will be keeping a closer eye on things going forward though. If it happens again, I’ll reopen this thread.