Please verify the sha256 hash?

Yes, it simply requires that I (or someone else) provide the UI for “purge-broken-files”, which I think we should do anyway.

Glad to hear you got things working again!

Let me know if I got it wrong, but I went ahead and flagged the post to which I think you were referring when you said your problem was solved. :slight_smile:

Does Duplicati not support backing up to a NAS mounted via CIFS? I went back to the beta version, removed all the old backup files from the destination, recreated the database, and started backing up from scratch, and I’m still getting hundreds of these errors on one job. It appears every file is now failing with the “please verify the sha256 hash” error.

I’m thinking Duplicati doesn’t like something in my new configuration, but I don’t understand what to do about it. If the remote file is a different size than expected, can Duplicati reconcile the difference and just say “OK, the file is now X size, it’s fine”?

Edit - I just looked and 3 of my backup jobs are having this problem. The only one that is not is a small DAVFS job; the destination is the same, but the source is DAVFS instead of CIFS. The error messages seem to imply that the problem is on the storage/destination side, not the source side, so I don’t know whether it means anything that one job is working (maybe just because it’s much smaller than the others).

Edit 2 - check that - another job that is failing is smaller than the job that is working, so it doesn’t appear to be a backup size issue.

By the way @JonMikeIV, I really appreciate your input on all my recent threads as I try to get Duplicati working, but this issue is not solved - you were replying to Kenny in this thread, and he apparently solved his issue, but I still have not managed to get rid of this error.

To do what Kenny did, does that mean I have to delete the entire remote backup archive (the .aes files) and run purge-broken-files? Would that wipe out all the backups for this job and start over?

Deleting the .aes files would indeed be starting from scratch. You’d be better off deleting the job - or at least creating (export/import?) a new one pointing to a different folder on your destination.

I’m a little confused - are the sources DAVFS and CIFS (so they’re not local to the machine running Duplicati) or the destinations (or both)?

It is supposed to work. The error messages seem to indicate that the files are somehow truncated after being stored.

Duplicati basically does a file copy from the temporary folder to the destination, so I would assume that you could replicate a similar problem if you copied a file to the CIFS destination manually?
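
A rough sketch of that manual test, in case it’s useful (the paths below are placeholders, not anything Duplicati itself uses - point them at a large temp file, ideally ~50MB like a dblock volume, and at your CIFS mount):

    import hashlib
    import shutil
    from pathlib import Path

    # Placeholder paths - adjust to your own test file and CIFS mount point.
    src = Path("/tmp/testfile.bin")
    dst = Path("/mnt/nas-backup/testfile.bin")

    def sha256_of(path: Path) -> str:
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    shutil.copyfile(src, dst)

    # Note: re-reading immediately may be served from the client-side
    # cache, so re-mounting the share before checking makes this stronger.
    print("size:", src.stat().st_size, "->", dst.stat().st_size)
    print("match:", sha256_of(src) == sha256_of(dst))

If the size or hash differs, plain file copies are being truncated too - i.e. the problem is below Duplicati, in the CIFS layer.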

What I am doing is running Duplicati on an Ubuntu server. It mounts various shares via CIFS and the backup destination is a NAS on the same network.

What I’ve done is recreate the backup jobs from scratch (not restoring from the config files of the previous backup setup), with entirely new destinations, and so far I am not seeing the sha256 hash error. The old jobs are still running as well, and I am still getting the errors on those. I will run the new jobs concurrently for a few days and see if the errors return.

Edit: still broken on my machine, see later post.

I’m going to call this one solved, though I don’t know exactly how. Recreating all the backup jobs from scratch seems to have fixed it; it has now been several days and I haven’t seen any errors in the logs.

Well the error is back again, just took a few more days, so it’s unfortunately not solved, but I’ll update this thread if I find out anything more.

Is it the same file(s) reported each time or do they change?

I just re-ran the backup, and between yesterday and today the same 38 files appear to have the sha256 error.

“remote file duplicati-b19edfe0fbf7c4ca49e09c569e6824fac.dblock.zip.aes is listed as Verified with size 10934076 but should be 52406685, please verify the sha256 hash “sKqBc3FLBHZ1b0FDO0t0xuNG1jIyXZrON0zsKAC0aFQ=”,”

It appears that this file is in fact 10MB (so the “Verified” size matches the actual file size on the NAS), but it was modified 1/14 during a backup operation that reported no warnings or errors.
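
For anyone wanting to check such a file directly on the NAS: the hash in the message appears to be base64-encoded SHA-256, so a sketch like this can compare it (the mount point is hypothetical; the file name, size, and hash are taken from the error above):

    import base64
    import hashlib
    from pathlib import Path

    # Hypothetical mount point; name/size/hash copied from the error message.
    path = Path("/mnt/nas-backup/duplicati-b19edfe0fbf7c4ca49e09c569e6824fac.dblock.zip.aes")
    expected_hash = "sKqBc3FLBHZ1b0FDO0t0xuNG1jIyXZrON0zsKAC0aFQ="
    expected_size = 52406685

    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)

    print("size ok:", path.stat().st_size == expected_size)
    print("hash ok:", base64.b64encode(h.digest()).decode() == expected_hash)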

I guess I will try moving the destination back to the local machine again to see if I can isolate this to the remote storage.

In your backup job, is your “Volume size” set to 10MB or the default 50MB?

Are all the reported files (or is that ALL the files) the same 10MB size?

I’m wondering if something is capping / truncating your supposed-to-be-50MB files to 10MB for some reason…

The volume size is 50MB (the default). Most of the files are 50MB at a quick glance. I don’t think anything is capping them, since the other files are larger, but certainly something might be truncating certain files for some reason, and I’ll look into that. I am hoping to be able to spend some time on this over the weekend.

Thanks for checking that - and good luck with the weekend time (I know I certainly need it). :slight_smile:

If you do get time, can you check if the reported-too-small files are all about the same size (10MB)? I’m not sure yet what it would tell us if they are, but it would be good to know…
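
If it helps, here’s a quick sketch that would list the undersized volumes and show whether their sizes cluster (the destination path is a placeholder, and the 50MB default volume size is assumed):

    from collections import Counter
    from pathlib import Path

    dest = Path("/mnt/nas-backup")   # hypothetical destination mount
    volume_size = 50 * 1024 * 1024   # default "Volume size"

    sizes = Counter()
    for f in sorted(dest.glob("duplicati-*.dblock.zip.aes")):
        size = f.stat().st_size
        sizes[size] += 1
        # The final volume of a backup is legitimately smaller, so expect
        # a few small files even on a healthy destination.
        if size < volume_size * 0.9:
            print(f"{f.name}: {size}")

    # One dominant small size (e.g. all ~10MB) would suggest a fixed cap;
    # varied sizes would suggest random truncation.
    print("most common sizes:", sizes.most_common(5))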

I deleted the broken files from the destination, ran list-broken-files, then purge-broken-files (could the GUI just ignore filters for those operations, rather than telling users filters are unsupported and making us deselect them every time we try to run them?). It looks like it changed the dlist.zip.aes files to match the contents of the dblock.zip.aes files (and deleted 17 more of those dblock files, each ~49.9MB).
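
(For reference, those two operations can also be scripted outside the GUI, where the filter problem doesn’t get in the way. A sketch, assuming the Linux duplicati-cli wrapper - the storage URL and database path are placeholders, best copied from the job’s Export → As Command-line:)

    import subprocess

    url = "file:///mnt/nas-backup"                        # placeholder
    dbpath = "/root/.config/Duplicati/XXXXXXXXXX.sqlite"  # placeholder

    # Encrypted backups will also need --passphrase=... on both commands.
    for command in ("list-broken-files", "purge-broken-files"):
        subprocess.run(
            ["duplicati-cli", command, url, f"--dbpath={dbpath}"],
            check=True,
        )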

I re-ran the backup after that and got a rather cryptic result:

    removing file listed as Deleting: duplicati-bad0d455cdae045f1882db88a10bdc5dd.dblock.zip.aes,
    removing file listed as Deleting: duplicati-b7a21edefd1b94189a2a88b7fe0d16806.dblock.zip.aes,
    removing file listed as Deleting: duplicati-be7a9b15c09be49c0bcfd93f3bdc1e826.dblock.zip.aes,
    removing file listed as Deleting: duplicati-b01da0d24c86841859a74bab69da183c1.dblock.zip.aes,
    No remote filesets were deleted

That last line, “No remote filesets were deleted” seems contrary to the fact that it deleted 4 remote dblock.zip.aes files, but maybe I’m misunderstanding what it means by fileset vs file.

I’ve made a copy of this backup job on a local partition as well; I’ll run them side by side (at different times) for a few days and see if anything of interest happens.

@JonMikeIV - sorry I didn’t see your last message until I had already deleted the files. But I did go back to a recent log file and I see a handful of sizes:

10934076 but should be 52406685
377520 but should be 52403997
8912896 but should be 52344077
589824 but should be 52400365
2031616 but should be 52399085

So it seems the answer was that they were different sizes.

This destination was on a NAS with bit-rot protection and snapshots enabled. I turned off snapshots (not sure why that would make a difference, but it’s the only thing I can think of that might affect the files somehow). I should note, though, that when I previously had Duplicati running on the NAS itself, the (same) backup destination had the same settings, so I don’t think this is actually relevant - but at this point I’m willing to try pretty much anything to get to the bottom of this.

Hello everyone, thought I may as well join in.

First of all thanks for this nice tool, I find it great in principle.

Unfortunately I have had the same issue 3 times now, since early December 2017. Each time I tried to patch it by removing the broken files from the remote storage, then running repair, purge, and verify - and the errors were gone.
… Only to come back in a few days.
And that procedure is itself an issue, because each time you lose a lot of backed-up data.

For what it’s worth, I have actually verified that the files on the remote storage were really corrupted.

Another point which may be useful for you to know: the first time this happened was about 10 days after I moved the remote storage from a NAS over SFTP to a workstation with SMB/CIFS, because I had to change NAS.
Now I’ve got a new NAS and will move the storage back to NAS over SFTP in a few days… of course after removing the new broken files (22, a new peak) which showed up yesterday, repairing the db, purging files, and so on…

I’m starting to wonder whether that approach is really fixing anything at all; maybe it would be better to start from scratch.

Will let you know if I find out anything new, in the meanwhile any suggestion is really welcome.

BTW, I didn’t mention it before, but I’m on beta, not canary.

Thank you for sharing your experience as well. Let us know if things start working properly when you switch back to sftp. If this turns out to be a problem with CIFS/SMB I’ll do the same. I am still running parallel backups to local storage on my server, but I haven’t been running them long enough for the SHA256 hash problem to show up (as you say, it takes a few days though I don’t think I’ve had to wait ten days).

Ultimately I still want to store my backups on my NAS, since it has plenty of space as well as RAID.

As I said I had planned to do, I have reverted to the initial setup, moving the storage back to NAS over SFTP. I did this on the 25th, after removing the broken files, repairing the db, and purging.

So far I haven’t seen any misbehaviour, but it’s only been 1 week. Last time it took from the 4th of January to the 21st to experience the problem (or at least to be notified of the warning).

At the current stage I would say that a bug in SMB/CIFS handling is the most likely candidate, but I’ll wait for some feedback from the devs on that.

I’ll get back to update you if I have other news.

I am inclined to agree with you on this. I have been running 3 concurrent jobs backing up the same two sources (six backup jobs in total) to a CIFS share on the NAS, an FTPS share on the NAS, and local storage on the system running Duplicati. The only jobs that continue to show this error are the ones where the CIFS share is the destination.

I am just at day 10 for one of the FTPS destination jobs, so I will continue running it, but I am hopeful that the issue is resolved by not using CIFS.