How to repair "MissingRemoteHash" warning

A few days ago, I began receiving the “MissingRemoteHash” warning on a backup that has been running without incident for a couple of years. I’ve searched the forums for similar questions. All the ones I found seem to involve backups to an SMB share, but that’s not the case in my system. The system is running Linux Mint 20.3, and Duplicati is backing up to an internal hard drive in a removable bay, mounted as local storage.

Some of the solutions offered include running Verify, which confirmed the warning. I then ran Repair, which reported: “Failed to perform verification for file: duplicati-b87e318a075f34cd3a08a73f8c25638b3.dblock.zip, please run verify; message: Object reference not set to an instance of an object.”

Instructions in other threads also suggest running ‘Purge broken files,’ but I can’t find that feature anywhere.

Not sure what to do next. Any advice is appreciated.

This ‘MissingRemoteHash’ warning is a bit misleading, since (as the full error message suggests) it’s really a matter of bad or missing remote data. What has probably happened here is that the backend ran into a problem after Duplicati had (from its point of view) finished uploading the data, and some files were corrupted and can no longer be used.

To clean up these files, never ever touch them directly on the backend; it must be done through Duplicati so it can keep its local state in sync with the remote changes.

So, select your backup in the Web UI, click Advanced > Commandline, and you’ll see the backup command selected. Change it to ‘purge-broken-files’, remove the files to back up and the filters (those parameters don’t make sense for purge-broken-files), and click the Run button (bottom right).
You can first run ‘list-broken-files’ to check what the purge will do. If it lists more than your 13 files, take some more time to assess the situation; otherwise go for it. Good luck.
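If you prefer a terminal, the standalone command line should do the same job. This is only a sketch: the destination URL, database path and passphrase below are placeholders to replace with your own values (the Web UI Commandline page fills them in for you).

```
# List what would be purged first (all paths here are placeholders)
duplicati-cli list-broken-files "file:///mnt/backupdrive/duplicati" \
  --dbpath=/home/user/.config/Duplicati/BACKUP.sqlite --passphrase="your-passphrase"

# Then remove the source entries that depend on the unusable dblock(s)
duplicati-cli purge-broken-files "file:///mnt/backupdrive/duplicati" \
  --dbpath=/home/user/.config/Duplicati/BACKUP.sqlite --passphrase="your-passphrase"
```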

Thanks for your response.

I believe I did that correctly, but when I run “purge-broken-files,” it just lists the broken file, but doesn’t do anything about it.

I have repeated ‘purge-broken-files’ several times, and it’s always the same.

Well, reading your post and my reply again, I think I got the ‘13 bad files’ from another post :frowning:
You have only one bad file.
I’d try to see whether it concerns real data by using the ‘affected’ command (replace the list of files to back up with the actual name of your remote file). If it returns nothing or an error, check whether the file is part of the last backup (from the file date), and if so, delete the last backup (provided you are sure that won’t delete important data you may have since removed from the source).
To delete the last backup, use the ‘delete’ command, add the additional parameter ‘version’, and set it to 0.
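From a terminal the ‘affected’ check would look something like the sketch below (placeholder paths again; the dblock name is the one from your error message):

```
# Show which source files and backup versions reference the suspect dblock
duplicati-cli affected "file:///mnt/backupdrive/duplicati" \
  duplicati-b87e318a075f34cd3a08a73f8c25638b3.dblock.zip \
  --dbpath=/home/user/.config/Duplicati/BACKUP.sqlite --passphrase="your-passphrase"
```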

I did wonder about that. :slight_smile:

The affected files all belonged to an application that was recently updated by Synaptic. I uninstalled the app (I don’t use it much), and am running the backup again.

Hmm, the backup still warns about that file. I’m not too concerned about losing the data in this backup, but I’d rather not lose the whole history. You suggested deleting the “last backup.” I don’t see how to do that. All I can see is to delete the entire backup with all its versions.

Possibly the one exception to not touching files is that list-broken-files seems to need to see a missing destination file before it considers source files broken. If you like, giving the file a new prefix or moving it out of the destination folder will do (see the sketch after this post).
Looking at the file’s timestamp (if it’s plausible) might give some clue about when the file was made or damaged.
Have you had any filesystem issues, or maybe the removable bay got removed at an inopportune time?
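A rough sketch of the “move it aside” approach, assuming the destination folder is the placeholder /mnt/backupdrive/duplicati (moving rather than deleting keeps the file around in case it is ever needed):

```
# Quarantine the damaged dblock so Duplicati treats it as missing
mkdir -p /mnt/backupdrive/quarantine
mv /mnt/backupdrive/duplicati/duplicati-b87e318a075f34cd3a08a73f8c25638b3.dblock.zip \
   /mnt/backupdrive/quarantine/
```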

When such size errors come up, I usually do a decimal-to-hex conversion, because an even value suggests a filesystem issue.
33554432 turns out to be 2000000 in hex. That might set a new record for a suspiciously even size.
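For anyone who wants to repeat the check:

```
# Decimal-to-hex check of the reported size
printf '%X\n' 33554432    # prints 2000000, i.e. 0x2000000 = 32 MiB exactly
```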

You should be able to list and purge the broken Synaptic source files. I’m glad they weren’t important files.

That’s an interesting question. There were a couple power outages last weekend, one overnight. The file timestamp is around 2:20 AM on Sunday, when the backup would have been running.

I renamed the reported file and ran purge-broken-files again. This time it at least did something:
[screenshot of the purge-broken-files output]

Will see if tonight’s backup runs without warnings.

Thanks to both of you for the advice.

Yet that’s quite doable. After selecting the ‘delete’ command and emptying the ‘Commandline arguments’ box, scroll all the way down, pick ‘Add advanced option’, and select ‘version’ from the core options. Enter 0 as the value, then click Run. The last backup version should be removed (marked as deleted, to be purged later).
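The terminal equivalent would be roughly the sketch below (placeholder paths; version 0 is always the most recent backup):

```
# Delete only the newest backup version
duplicati-cli delete "file:///mnt/backupdrive/duplicati" --version=0 \
  --dbpath=/home/user/.config/Duplicati/BACKUP.sqlite --passphrase="your-passphrase"
```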

Ah, thanks for that. It looks like last night’s run went off with no warnings, so it looks like I’m good to go.

Thanks again to both of you for the advice.