Verify sha256 hash

Hello,

The past couple of days I’ve been getting the following backup warning:

* 2020-02-17 19:53:47 +00 - [Warning-Duplicati.Library.Main.Operation.FilelistProcessor-MissingRemoteHash]: remote file duplicati-bdedcaa78f933440c93db47148f89443e.dblock.zip.aes is listed as Uploaded with size 0 but should be 5447629, please verify the sha256 hash "Zo5BVHRu7KMVt7PQCRZY7qXPSxpzPQIY8taTYbDXjFM="
* 2020-02-17 19:53:57 +00 - [Warning-Duplicati.Library.Main.Operation.FilelistProcessor-MissingRemoteHash]: remote file duplicati-bdedcaa78f933440c93db47148f89443e.dblock.zip.aes is listed as Uploaded with size 0 but should be 5447629, please verify the sha256 hash "Zo5BVHRu7KMVt7PQCRZY7qXPSxpzPQIY8taTYbDXjFM="
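For reference, the quoted value is a base64-encoded SHA-256 digest of the remote file. A minimal Python sketch for checking a local copy of the file against it (the path and expected hash below are simply the ones from the warning above):

```python
import base64
import hashlib

# Compute the base64-encoded SHA-256 of a local copy of the dblock file
# and compare it with the hash quoted in the warning. The filename and
# expected value are taken from the log message above.
PATH = "duplicati-bdedcaa78f933440c93db47148f89443e.dblock.zip.aes"
EXPECTED = "Zo5BVHRu7KMVt7PQCRZY7qXPSxpzPQIY8taTYbDXjFM="

digest = hashlib.sha256()
with open(PATH, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)

actual = base64.b64encode(digest.digest()).decode("ascii")
print("match" if actual == EXPECTED else f"mismatch: {actual}")
```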

The file is in fact 0 bytes when I check the remote SMB backup share.

I’ve tried deleting the dblock file and running the purge broken files command, but I then got an error about orphaned files, so the purge couldn’t run.

Any suggestions? I’m stuck at the moment.

Thanks.

Update - I restored the 0 byte dblock I had deleted, backed up my database, then ran a “Recreate (Delete and repair)” on the database. Once that was done I was able to delete the 0 byte dblock and run purge broken files without the orphaned file message.
Ran the backup again and no errors.

Welcome to the forum @egtrev

Backing up to an SMB destination seems to be a common source of trouble. Some setups are very solid, while others are not (at least one person moved to NFS for reliability); in other cases the suspicion is that an intermittent connection (e.g. a laptop moving around) contributed. Your file was believed to be uploaded, but wasn’t.

Although zero is a special number, sometimes files are truncated to a non-zero length with a round size in hexadecimal. Although this has been observed, I’m not sure if a reproducible case has been available to be looked at (and even if that were the case, the conclusion might be that Duplicati did OK, and SMB dropped data).
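If it helps, here’s a minimal sketch for scanning a destination folder for such files; the share path and the size threshold are assumptions on my part, not Duplicati settings:

```python
import os

# Minimal sketch, assuming an SMB share mounted at DEST (the path and the
# 1 MiB threshold are my assumptions, not Duplicati settings): flag
# dblock files that are empty or much shorter than the usual volume size.
DEST = r"\\nas\backup\duplicati"
THRESHOLD = 1 << 20  # dblocks default to ~50 MiB, so far below is suspect

for name in sorted(os.listdir(DEST)):
    if ".dblock." in name:
        size = os.path.getsize(os.path.join(DEST, name))
        if size < THRESHOLD:
            # A round size in hex (e.g. 0x100000) can hint at truncation.
            print(f"{name}: {size} bytes ({size:#x})")
```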

Is Duplicati on Windows, Linux, or something else? There might be some client-side tweaks to help with this.
The thread Please verify the sha256 hash? helped a Linux setup, and Windows has some options. What’s the destination? Unfortunately I’m not personally familiar with all the tricks for helping SMB, but I can see that some exist…

It’s on Windows 10, backing up to an SMB share (on a local Linux NAS).
I believe the issue started when I had my laptop outside the local network but temporarily connected to my local VPN. The local Duplicati backup went ahead because the SMB destination was temporarily available over the VPN, and I’m thinking the connection got lost at some point, causing the backup to not complete correctly. When I ran the list broken files command, the broken files/dates matched when I was not at home. The backup had been solid for a year, and my B2/Google Cloud Duplicati backups were not affected.

The cloud backups are probably more reliable (in a way) because they don’t accept data from Duplicati for later delivery. If an upload fails (which it will if there’s no connectivity), it fails right then and there, subject to --number-of-retries, before any error is raised. If it finally errors, the next backup should pick up where it left off.
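To illustrate (a sketch only, not Duplicati’s actual code), the fail-fast-and-retry pattern looks roughly like this, where upload is a hypothetical stand-in for the backend call:

```python
import time

# Illustrative sketch of the fail-fast-with-retries pattern described
# above; 'upload' is a hypothetical callable, not a Duplicati API.
def upload_with_retries(upload, data, number_of_retries=5, retry_delay=10):
    for attempt in range(number_of_retries + 1):
        try:
            return upload(data)
        except ConnectionError:
            if attempt == number_of_retries:
                raise  # final error; the next backup run retries the work
            time.sleep(retry_delay)
```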

Anyway, I’m glad you’ve got several backups (good idea), and even SMB seems to be tolerable.

I hope it’s OK to continue an old thread.

I am now getting the same message for one of my backup jobs. Each time I run the job, I get six of these messages. When I run the “check files” command (or whatever it is called in English), I get three of these messages.

I can see in this thread the steps the OP took to rectify the situation, but at the moment I have trouble understanding why the database should be re-created (as the problem is elsewhere) and what exactly “run purge broken files” does. In other words, which command do I need to run, and what exactly is it needed for?

Most importantly, do I lose or have I already lost some of the backed-up data?

0 byte destination files, or some other unexpected sizes? Can you give examples of the sizes reported?

Do the files look that size to you, examined from the local system and (if possible) from the destination?

Are the file dates on the affected files recent, or variable, to the extent you know? This might be a new problem, which might be a good thing, because loss of older files could affect more files: block deduplication reuses old blocks for new files that happen to contain the same block. Can’t tell without some testing, though.
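If it helps, a minimal sketch for collecting those dates and sizes (the FILES list is a placeholder to fill in from your own warnings; the example path is just the filename from earlier in this thread):

```python
import datetime
import os

# Minimal sketch: print size and modification time of the files named in
# the warnings, to judge whether the damage is recent. FILES is a
# placeholder; fill it in with the paths from your own messages.
FILES = [
    "/mnt/backup/duplicati/duplicati-bdedcaa78f933440c93db47148f89443e.dblock.zip.aes",
]

for path in FILES:
    st = os.stat(path)
    when = datetime.datetime.fromtimestamp(st.st_mtime)
    print(f"{path}: {st.st_size} bytes, modified {when:%Y-%m-%d %H:%M}")
```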

What destination type?

Is the Advanced option backup-test-samples set to something more than 1? (1 sample means one set of three files.)
The TEST command can be set to all and run in Commandline if you want to download and test all files.
This can take a long time, of course. If the destination is directly accessible from an OS that can run scripts, upload-verification-file with utility-scripts\DuplicatiVerify.{py,ps1} is another way to verify the integrity of all files.
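For a rough idea of the kind of check those scripts perform, here is a sketch that reads the verification file and re-hashes each destination file; the Name/Size/Hash field names and the DEST path are my assumptions, so check the actual JSON:

```python
import base64
import hashlib
import json
import os

# Rough sketch of the kind of check the DuplicatiVerify scripts perform.
# I'm assuming duplicati-verification.json entries carry Name, Size and
# Hash fields; verify against the real file, as these are my assumption.
DEST = "/mnt/backup/duplicati"

with open(os.path.join(DEST, "duplicati-verification.json")) as f:
    entries = json.load(f)

for e in entries:
    path = os.path.join(DEST, e["Name"])
    digest = hashlib.sha256()
    with open(path, "rb") as blob:
        for chunk in iter(lambda: blob.read(1 << 20), b""):
            digest.update(chunk)
    ok = (os.path.getsize(path) == e["Size"]
          and base64.b64encode(digest.digest()).decode("ascii") == e["Hash"])
    print(("OK  " if ok else "FAIL"), e["Name"])
```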

The AFFECTED command can answer that. Tell it whichever backup files appear to have gotten corrupted.
Generally, if you’re getting complaints about some files, those files are kept for a reason, so there’s impact.

Disaster Recovery is a lab exercise which intentionally damages files, then shows how to trim off damage so that backup and restore can continue on undamaged portions. One thing that either changed or wasn’t documented correctly is that damaged files need to be removed manually, or moved to some other folder.

Thank you for the help.

Usually I would do the research and fix it, but right now I am in a precarious family situation and cannot invest the time to dig it all up. So I will just describe what I have and then create a new job to replace the current one, as I really don’t have time to go the long way.

I have Duplicati 2.0.6.1_beta_2021-05-03; the backup destination is an external disk connected via USB, and since the disk is encrypted, the job isn’t. The reason I am happy to create a new job and remove the old one is that this particular job has only a single file in it: an encrypted file that I open with the “cryptmount” tool. I will later replace it with multiple “tomb” files, but for now I have to concentrate on domestic problems.