How to fix "please verify the sha256 hash"?

Oh… I think you’re right. It happened after I deleted a dblock.zip.aes file, because there was the message “Message has been altered, do not trust content —> SharpAESCrypt.SharpAESCrypt+HashMismatchException: Message has been altered, do not trust content”. I thought it was an error with the password, so I decided to remove this file and ran purge-broken-files.

Did I do something wrong? What if I just remove the dindex files too?

I see. But given my information, is it still the better solution?

It’s what I would have suggested, and I just tested it by intentionally corrupting a dblock file, getting the complaint, deleting the dblock, and doing a purge-broken-files. It didn’t touch the dindex, but it deleted the database record with the expected size, etc., so maybe something was different in what you did.

The original error message about the dindex file lengths says the database has a record of the files, so it will probably complain about them being missing. Adjusting this might mean a recreate eventually, but the current situation is not known in detail. If you like, you can post a link to a bug report, so I can try to determine more.

Or if you like, you could save the current database, delete the database (erasing its memory of the dindex files), rename the dindex files so they don’t start with duplicati- (or move them to a different folder), and see how a recreate goes. If it goes well, it gets no further than 70% on the progress bar, and a live log shows only dlist and dindex downloads. If it gets to 90%, then it’s searching hard and slowly for some data that is missing.
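
A rough sketch of the file-moving part of that test, assuming the destination is a local or mounted folder (the /mnt/backup path is just an example; on a remote destination you’d do the equivalent through its own interface):

```shell
# Move the dindex files aside so a recreate cannot see them.
# Renaming them to not start with "duplicati-" works the same way.
mkdir -p /mnt/backup/dindex-aside
mv /mnt/backup/duplicati-*.dindex.zip.aes /mnt/backup/dindex-aside/
```

Keep the moved files until you are sure the recreate went well, so you can put everything back.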

EDIT:

This lost some parts of your backups, but at least we can try not to lose any more while getting it back.

I made a bug report for this backup: bugreport.7z — Yandex Disk (I repacked the original zip to reduce size. 7z is an open format, and WinRAR, for example, can open it too, so I hope there is no problem with that).

I looked at my logs and see that I did a repair after that, but I saw the same warnings for these two files after repairing.

So I see only two solutions: maybe you see something interesting in the bug report, or I remove these dindex files and recreate the DB. BTW, might a recreate help without removing them? I think not, but I don’t see the whole picture.

Are you sure you got the right backup? This one has no sign of the two files that you had errors about:

* 2024-03-03 11:16:43 +03 - [Warning-Duplicati.Library.Main.Operation.FilelistProcessor-MissingRemoteHash]: remote file duplicati-i378ef0b0d2ed41c5af7748b8b47d525a.dindex.zip.aes is listed as Verified with size 541 but should be 324157, please verify the sha256 hash "iZbGIp/Jr8h6UXiFtAasqXBWLZ8SiOqSKvnCohblDmI="
* 2024-03-03 11:16:43 +03 - [Warning-Duplicati.Library.Main.Operation.FilelistProcessor-MissingRemoteHash]: remote file duplicati-i5dca542a475d40e5b45088bf3167bdc0.dindex.zip.aes is listed as Verified with size 541 but should be 430541, please verify the sha256 hash "VWNQOFL/o6tPOSErRfgfUvgXmDF4B11SN+YyOqKqzGE="
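
As a side note on the “please verify the sha256 hash” wording: the reported hash appears to be the Base64-encoded SHA-256 of the file, so on a local copy of the file you can compare it yourself. A sketch (the filename is the first one from the warning above):

```shell
# Compute the Base64-encoded SHA-256 of a destination file, to compare
# against the hash quoted in the warning message:
openssl dgst -sha256 -binary duplicati-i378ef0b0d2ed41c5af7748b8b47d525a.dindex.zip.aes | base64
```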

There is an unused 541 byte dindex file, duplicati-ief2ef04c2f2c4ce89a91e3f8e226414c.dindex.zip.aes

I suggest you don’t leave bug reports up long-term, and I have this one already, though it seems wrong.

OMG! Sorry. I sent the link to you in a private message.

duplicati-i378ef0b0d2ed41c5af7748b8b47d525a.dindex.zip.aes and duplicati-i5dca542a475d40e5b45088bf3167bdc0.dindex.zip.aes aren’t being used now.

This would have been a pretty good guess before, but now it’s more confirmed.
There are actually a whole lot of 541-byte dindex files in the same situation.

There are 141 dindex/dblock pairs with 141 verified dblock files, so that’s good.
There are 236 dindex files, which looks like 94 verified size 541 plus one other:
duplicati-icdf43deb183d42f3991f9b2e4feebd13.dindex.zip.aes also looks extra.

Extra files are fairly harmless, unless they confuse something. Missing files are bad.
I see some other extra 541-byte files coming from Repair. I haven’t looked at them all.
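
For anyone wanting to do the same count on their own backup, a sketch that lists the 541-byte dindex files, assuming the destination is a local or mounted folder (/mnt/backup is a placeholder path):

```shell
# List dindex files that are exactly 541 bytes; "c" makes -size count bytes.
find /mnt/backup -name 'duplicati-*.dindex.zip.aes' -size 541c
```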

Good history by default only goes back 30 days, and this database was recreated.
If time doesn’t delete the old history, a recreate will, but a bug report saves some of it.

Are the two dindex files from 2024-03-03 11:16:43 still getting a complaint? Repair
at that time seems to have downloaded them, and maybe adjusted the database…

How all this stuff happened is a good question. What destination type is in use?
Destinations that don’t reliably store data are going to keep bothering a backup.


You’re definitely right! After repairing I did another repair and all was OK. I made a backup, also with success.

Big thanks, and sorry for taking your time.

The destination type is WebDAV.

I’ll try a direct restore and compare files later.

I’m glad it’s back together, but the Operation table showed a whole lot of repairs and other things.
Sometimes destination transfer flakiness can be worked around with number-of-retries and the like.
Silent corruption of files by the destination means you probably should be using something else.


Yes, number-of-retries was set on each backup. And BTW, I looked at the logs again and see that at first there were 4 such files, after the first repair only 2, and later none.

If it would be useful for anyone, I could post my auto-repair algorithm later. I also want to add repairing after these warning messages. The PowerShell script is based on functions from my own library, so it won’t work without them.

I would like to know at least the approach. Ideally, autorepair isn’t required, but all is not quite perfect.
There is some autorepair in my abuse-testing scripts, so that test can continue despite known issues, however a byproduct is to come up with a nice test case and analysis, so that the issues can be fixed.

Repair is also not always the thing to do without some thought, even though the popup may suggest it.

My script runs only on a fatal result ($env:DUPLICATI__PARSED_RESULT -eq 'Fatal') and, after a backup, works on the content of $env:DUPLICATI__RESULTFILE:

if ($matches[1].EndsWith('Message has been altered, do not trust content')) {
  if ($ErrorRaw -match 'Log data:\n\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} \+\d{2} - \[(.+)\]: Failed to retrieve file (.+)') {
    #_Log $matches[2]
    _D2_RemoveFile $matches[2]  # delete the corrupted remote file
    _D2_RunRepair
  }
}
elseif ($matches[1].EndsWith('please run repair') -or $matches[1].StartsWith('Found inconsistency in the following files while validating database:')) {
  _D2_RunRepair
}
else {
  _Log -Text ("`tUnsupported Fail error: " + $matches[1]) -TextType e
}

I think all is clear. :)

After posting, I see that it removes only the first file. :)

So I want to add a repair after a backup with warnings, and repeat it while the file count keeps decreasing. :)
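
The repeat-while-the-count-decreases idea could look something like this sketch. run_repair and warning_count are placeholders for your own wrappers (e.g. around a repair call and counting warning lines in its output); they are not real commands:

```shell
run_repair()    { :; }       # placeholder: run the actual repair here
warning_count() { echo 0; }  # placeholder: count remaining warnings

prev=-1
while :; do
  run_repair
  cur=$(warning_count)
  echo "warnings after repair: $cur"
  [ "$cur" -eq 0 ] && break                             # all clean, done
  [ "$prev" -ge 0 ] && [ "$cur" -ge "$prev" ] && break  # no progress, stop
  prev=$cur
done
```

The no-progress check matters: without it, a warning that repair cannot fix would loop forever.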

Attempting to repair your way around data files whose data has been corrupted is a very risky plan.
Destination file content can only be obtained by downloading, and by default only 1 dblock is verified.
A database recreate (if done, as for this job) will find out whether the dlist and dindex files remain readable.

Verifying backend files
The TEST command
backup-test-samples
backup-test-percentage

I suggest you use a destination that doesn’t corrupt your files.

If D2 can’t decrypt a block file, how can I restore files that use such a block? And maybe I’m wrong, but I read this advice on this forum. :)
And it doesn’t recreate the DB, only repairs it.

I will test my destination, but maybe I made an error with the passphrase, or maybe the browser put an incorrect password into the password field. Or maybe this error is because of a D2 bug or an unhandled network problem.

But I’ll think about how to test the destination, and maybe test D2 with local storage.

That’s why I said it’s risky. If the backend loses data, you lose part of the backup.


I found this page when I started getting similar errors. I ran list-broken-files, and the single file listed in the warnings was output, as expected. I then ran purge-broken-files, since the file was a dindex file that can be rebuilt with a repair, and I got the same error about a bad hash:

# duplicati-cli purge-broken-files "URL"  --dbpath=DBPATH

Enter encryption passphrase: 
  Listing remote folder ...
remote file duplicati-ie8326ce152d24af8a62d56401d331e41.dindex.zip.aes is listed as Verified with size 0 but should be 19453, please verify the sha256 hash "DfdojwDZqaqlpsXdkSbqn3t51/F0fbq9G25tmzMitfE="

What am I missing in the commands here?

You’re missing some headers that add context to the output, but it might still be confusing.

If you add --console-log-level=information, it will look a little better, maybe more like below:

and if you follow the link, you will see the story behind how the headers are now less visible.

Beyond the above, you are maybe OK, but only you know your backup. You passed a usual tripping point by typing passphrase manually, and you didn’t trip over the need for a dbpath, suggesting that you’re trying to match a GUI job. It can be simpler to use GUI Commandline.

Using the Command line tools from within the Graphical User Interface

avoids both potential trips, but adds a few others. I don’t think this is the core problem though.

The problem is (I think; find me a developer if you want more) that you might not have an actual broken file, in the sense that a missing destination dblock file has lost data from any source file.

adds context to the missing-header analysis above. Note how it named some source files that broke, whereas your run didn’t. Your message looks like a side note from RemoteListAnalysis, whose main purpose is to return some information to its caller, but which comments as it goes.

is at Warning level, so you see it, as that’s the default console-log-level. You miss Information.

Bottom line is that this isn’t likely going to be a one-command fix. Got any other logs or history?

@ts678 thank you for that additional information. That’s all I have for logs.

I noted that the missing file is a “dindex” file and understood from some other posts that this file is created from the client database and therefore can be recreated. So I deleted the file from the destination location (it was size zero anyway) and then ran repair and found that the file was recreated with a reasonable size and then my backups stopped complaining.

For me, using the command line rather than the web UI was more satisfying, as the web UI gave very little feedback while it was running and actually hung for a long time; I’m still not sure why. The confusing part with the WebUI command line is knowing which arguments to remove. Running from the command line, however, worked well.

The one thing I’d like to figure out would be how to prompt for the remote login credentials rather than needing to specify them in the URL that is passed on the command line.

Yes, that can be the right approach if the database has the information, which depends on history. Some error cases wind up thinking a dindex is missing even though there’s no associated dblock, resulting in a small (but not 0-length) dindex that basically says nothing (as it has no dblock behind it).

2.0.7.100_canary_2023-12-27

Fix missing file error caused by interrupted compact, thanks @Jojo-1000 and @warwickmm

should help close that path off whenever the next Beta goes out. Basically, compact deleted a dblock and a dindex file, died, and forgot the dindex delete in a DB transaction rollback. The fix solves that.

Looking in the manual or CLI help is sometimes needed. The true CLI has the opposite “add” problem.

I’m not sure this ever happens. The easiest way to get a nice URL is from Export As Command-line to grab at least the URL. Or take the whole thing and edit its backup into another command.