Backup issues following NAS crash - Hash mismatch

After looking at other cases of this in the forum, I think this is probably the least disruptive way to fix a limited problem (although we’re not sure how limited this one is…). You probably meant The DELETE command specifying a version, which sometimes takes special techniques to determine, because the error message has version numbers that count up over time, whereas the rest of the UI numbers versions with 0 as the latest. Some people have reported success with a database Recreate, but that can run for a while. How big is the backup?
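For reference, a rough sketch of what a version-specific delete looks like from the command line. The storage URL, database path, and version number below are placeholders for your setup, so list the versions first and check the help output on your install before running anything:

    Duplicati.CommandLine.exe list <storage-URL> --dbpath=<path-to-job-database>
    Duplicati.CommandLine.exe delete <storage-URL> --version=<N> --dbpath=<path-to-job-database>

In that numbering, version 0 is the newest, so the number in the error message may not be the one to pass here.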

Although I wish there were a definitive guide to the capabilities and limitations of the tools, the list-broken-files and purge-broken-files commands are, I think, primarily based on database queries that check for source files lost due to things like missing or damaged dindex and dblock files. There’s a walkthrough of deliberately induced damage like that at Disaster Recovery.
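If you end up needing them, the usual order (with placeholder URL and database path, and worth confirming against the help output for your version) is to look before purging, since, as I understand it, purge-broken-files removes the affected source files from the backup versions:

    Duplicati.CommandLine.exe list-broken-files <storage-URL> --dbpath=<path-to-job-database>
    Duplicati.CommandLine.exe purge-broken-files <storage-URL> --dbpath=<path-to-job-database>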

And, yes, the above Unexpected difference in fileset issue may or may not relate to Hash mismatch, which is a complaint about a downloaded file not having the expected content. A question is when it went bad; an interrupted download seems less likely unless this is an old download somehow still getting complaints. Does that /tmp file’s date/time offer any clues? Is it known whether that exact file is somehow being downloaded repeatedly?
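If this is on Linux, something like the following would show the timestamps. The dup-* name pattern is just my recollection of how Duplicati names its temporary files, so adjust it to whatever file the error message actually points at:

    ls -l --time-style=full-iso /tmp/dup-*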

Another possible explanation is the sampled test described below. If a compact ran as part of the regular backup, there might be signs in the job log. Here’s part of my log from this morning, when the automatic compact ran:

CompactResults:
DeletedFileCount: 26
DownloadedFileCount: 13
UploadedFileCount: 4

Some people focus on current backups, while others focus on keeping old versions. Getting current backups going is easy if one doesn’t care about old ones. If there are some especially critical files, those can be backed up first, and having several smaller backups can also be good long-term, for scheduling and uptime.

Possible backend damage is very slow to detect, because the only way to find damage that escapes a quick look at file sizes is to download everything and inspect it. The TEST command can do that, and such a sampled test runs routinely after each backup, as configured by the --backup-test-samples option, but testing all files on the backend can only be requested indirectly by giving the test command a huge sample count.
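As a sketch of that indirect approach (URL, database path, and count are placeholders; confirm the syntax with the help output for your version):

    Duplicati.CommandLine.exe test <storage-URL> 99999 --dbpath=<path-to-job-database>

Any count larger than the number of files on the backend should end up verifying all of them.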

Difficult to say. None are ideal, but if you could describe “best” in terms of user experience, that may help guide the choice. Some options may also require quite a bit of work and technical skill. Depending on equipment, a hybrid approach is possible where a fresh backup begins, and we move the old one to another Duplicati install to see what can be recovered.