Restore shows Hash mismatch on file but files restored are OK

I did a restore test from a backup stored on a Samba share. Duplicati runs in a Docker environment on Ubuntu 18.04.6.

I am new to Duplicati, so I am doing some testing to validate my setup.

Here is the error message:

  • 2022-04-27 12:55:07 -04 - [Error-Duplicati.Library.Main.Operation.RestoreHandler-PatchingFailed]: Failed to patch with remote file: “duplicati-b58aae526f2554bfeb774f7813996cfad.dblock.zip”, message: Hash mismatch on file “/tmp/dup-9d9386f6-ebc1-47e9-855b-21f37480c039”, recorded hash: K3x/yTjpl9qTxZ+zfsd46Wyfzo45e4lU1PVU+p3YY7o=, actual hash o23YMleI4CkuFkF4EfVL2loP+GMLU8FJZy0S3xk1cIo=

What could be the reason for this error? Note that I don’t get any errors while doing the backups.

Thanks
Lars

Welcome to the forum @joker35

Probably on the log line that didn’t show (because it was below the one-line summary that was shown).

Try setting up About → Show log → Live → Retry and see if that /tmp/dup* file is a downloaded file.
If you see something that looks like that error message, try clicking on it to see if it will show details.

During restore, files are downloaded. Sometimes they are bad at the destination, and sometimes they get damaged in the download.
You have an unencrypted .zip file. Can you go to the destination and see whether the .zip file opens in an archive tool?
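If you’d rather check from a script, here’s a minimal Python sketch (the file name is just the one from your error message; adjust the path to wherever your copy sits). It computes the same Base64-encoded SHA-256 that the error calls the “recorded hash” and runs the zip CRC test:

```python
# Quick integrity check for one dblock file: compute the Base64-encoded
# SHA-256 hash (the form Duplicati records) and run the zip CRC test.
import base64
import hashlib
import zipfile

path = "duplicati-b58aae526f2554bfeb774f7813996cfad.dblock.zip"  # example name from the error

# SHA-256 over the whole file, Base64-encoded like the "recorded hash" above
h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)
print("actual hash:", base64.b64encode(h.digest()).decode())

# A truncated or corrupted archive usually fails the zip CRC test as well
with zipfile.ZipFile(path) as z:
    bad = z.testzip()  # returns the first bad member name, or None if all pass
    print("zip test:", "OK" if bad is None else f"bad member: {bad}")
```

If the computed hash matches the recorded one (not the “actual” one from the error), the destination copy is fine and the corruption happened on the way down.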

Meaning the Samba share is the destination (not the source?). SMB has a reputation for corrupting files. What size is your bad one?
Sometimes they wind up truncated to binary-even sizes. I forget whether Duplicati checks the file size or the hash first.
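To eyeball that, a small sketch like this lists each remote volume with its size (the mount point is hypothetical, and the 1 MiB-multiple flag is just a heuristic for “binary-even” truncation):

```python
# List remote volume sizes so truncated files (often suspiciously round,
# binary-even sizes) stand out. The mount point is hypothetical.
from pathlib import Path

dest = Path("/mnt/nas-backup")  # hypothetical CIFS mount point
for p in sorted(dest.glob("duplicati-*.zip")):
    size = p.stat().st_size
    flag = "  <- exact multiple of 1 MiB" if size % (1 << 20) == 0 else ""
    print(f"{size:>12}  {p.name}{flag}")
```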

EDIT:

files restored are OK

Sounds a little suspect, BTW. “Failed to patch” should mean some part of some file didn’t get restored correctly.
You can get a better view of the action (including file names) at About → Show log → Live → Verbose.

@ts678,

thanks for your input. I will do some more testing, review the details in the log as you suggested, and check whether the zip file can be opened.

The Samba share is the destination (mounted in Docker with CIFS using version 1 and cache=none). This is an old NAS running OMV, hence version 1, and I disabled the cache to avoid errors I saw with earlier backups.

You mentioned SMB tends to corrupt files. What method would be preferred instead for placing backups on another server?

Anything else that the server can do is fair game (of course, avoid unencrypted FTP if the network is not secure).

Until then, it’s just an assumption that there was an issue with the file write over SMB, but sometimes there is one.

Thanks for the test. It sounds like maybe that helped a little? I’ve been trying to get somebody to test directio on their SMB mount, but it looks like cache=none is (maybe) the currently preferred method.

If you set upload-verification-file, the destination should get a duplicati-verification.json file after every backup, which describes the expected file contents. This is one place where you can get the expected size.

The expected hash is a little harder (it’s the Base64 encoding of the SHA-256 hash), but there are scripts that can verify it.

How to verify duplicati-verification.json?
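The linked topic has ready-made scripts (Duplicati also ships DuplicatiVerify.py in its utility-scripts folder). Just to show the idea, here is a minimal sketch; the field names (“Files”, “Name”, “Size”, “Hash”) are assumptions based on recent versions, so inspect your own file and prefer the official script for real use:

```python
# Minimal sketch of checking destination files against
# duplicati-verification.json. The "Files"/"Name"/"Size"/"Hash" layout is
# an assumption -- inspect your own file, and prefer DuplicatiVerify.py.
import base64
import hashlib
import json
from pathlib import Path

dest = Path("/mnt/nas-backup")  # hypothetical mount point of the Samba share

def b64_sha256(path: Path) -> str:
    """Base64-encoded SHA-256 of a file, matching Duplicati's recorded hashes."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return base64.b64encode(h.digest()).decode()

with (dest / "duplicati-verification.json").open() as f:
    entries = json.load(f)["Files"]  # assumed layout

for entry in entries:
    path = dest / entry["Name"]
    if not path.exists():
        print(f"MISSING  {entry['Name']}")
    elif path.stat().st_size != entry["Size"]:
        print(f"SIZE     {entry['Name']}: expected {entry['Size']}, got {path.stat().st_size}")
    elif b64_sha256(path) != entry["Hash"]:
        print(f"HASH     {entry['Name']}")
    else:
        print(f"OK       {entry['Name']}")
```

Run it on the machine that mounts the share, so it reads exactly the bytes the destination holds.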

If you’re not set up for such testing, Duplicati can download and test all files with The TEST command, which you could run from GUI Commandline if you like. It would be interesting if there’s a problem with the contents of a file but the length test passes. You might need to look at the verification file to test the size.

@ts678,

I set the upload-verification-file flag, and a new restore didn’t give the error anymore.

I didn’t notice the backup time being affected by this option, so is there any concern about just making it a standard option for my backups?

Lars

You can do that, but the file is only useful if you use it with a verification script or view it manually.

You might still want to verify all your files (which change with each backup) using some method…

In addition, unless you use the no-local-blocks option, a restore might not even fetch remote files, because Duplicati can rebuild restored files from blocks it already finds on the source machine.