Seems like nobody who has stopped by (not everybody follows the forum) is reporting current problems. Possibly the best thing to do is to try a restore yourself, either from your actual backup or from a similar test backup.
Testing recovery from a hard-disk-has-died scenario is a good idea. People are sometimes surprised at how slowly database recreation can run, and doing a direct restore to another computer helps ensure that path actually works. Database recreation did get a speed-up recently, but it's still canary-only, not in any beta release yet.
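One way to exercise that path from the command line is a restore to a scratch directory without the local database, which forces the temporary database rebuild a new machine would need. A minimal sketch, assuming a made-up storage URL, restore path, and passphrase (and that I'm remembering the `--no-local-db` option right):

```
# Sketch only: the B2 URL, restore path, and passphrase are placeholders.
# --no-local-db makes Duplicati rebuild a temporary database from the
# remote dlist/dindex files, like a restore on a brand-new computer would.
Duplicati.CommandLine.exe restore "b2://bucket/folder" "*" --restore-path="D:\restore-test" --passphrase="my-secret" --no-local-db=true
```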
There was once a problem somewhat like the one you referred to. From the 2.0.3.9-2.0.3.9_canary_2018-06-30 changelog:
Fixed an issue where restores from the GUI would not autodetect blocksize and other parameters
“Unable to restore backup: blocksize mismatch in manifest” #2323 was (I think) the issue that was fixed.
The manual article “Choosing sizes in Duplicati” is possibly worth reviewing. Your 800 MB sounds like --dblock-size, set as “Remote volume size” in the GUI and called “Upload volume size” in the manual. The actual volume sizes vary a lot, especially for the last volume of a backup, and Duplicati should handle whatever sizes the backup job produced.
--blocksize, which defaults to 100 KB, is the size that has to stay stable. Choose it per the article above, then leave it alone for the life of the backup.
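For illustration, this is roughly how those two sizes would be set on an initial command-line backup; the storage URL, source path, and passphrase are made-up placeholders, not values from this thread:

```
# Sketch only: URL, source path, and passphrase are placeholders.
# --dblock-size (Remote volume size) can be changed later if needed;
# --blocksize cannot, so pick it before the first backup runs.
Duplicati.CommandLine.exe backup "b2://bucket/folder" "C:\data" --passphrase="my-secret" --dblock-size=800MB --blocksize=500KB
```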
EDIT:
“Invalid manifest detected, the field Blocksize has value 512000 but the value 102400 was expected” is reporting a problem that remains in 2.0.4.15, which led to an update to the GitHub issue that claimed the fix. Testing 2.0.4.15, I think I'm seeing this on direct restore. The original fix was supposed to let the restore pick up those parameters from the manifest. Maybe I should test 2.0.3.9 to see whether its fix has come unfixed…
EDIT 2: Setting the Advanced option --blocksize=500KB on screen 2 of the direct restore sequence fixed it. Your --blocksize might be different, or maybe you're at the default, and a larger remote volume size shouldn't matter.
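If anyone hits this from the command line rather than the GUI, I'd guess the equivalent workaround is to pass the job's --blocksize on the restore. A sketch, with placeholder URL, path, and passphrase again:

```
# Hypothetical workaround sketch: adjust the URL, restore path, passphrase,
# and the 500KB value to whatever the original backup job actually used.
Duplicati.CommandLine.exe restore "b2://bucket/folder" "*" --restore-path="D:\restore-test" --passphrase="my-secret" --blocksize=500KB
```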