Backup valid, but still unrestorable?

One more warning: testing is still done wrong. It’s possible for a full test to pass while a full restore still fails. This is the ultimate trap I was talking about: it undermines confidence and gives users dangerously false trust until the very end. I’ve seen this situation myself, which is why I always do a full restore, not just a full test. The test is unfortunately dangerously misleading.
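A restore-only drill can be scripted instead of relying on the built-in test. This is a minimal sketch, assuming the `duplicati-cli` command-line client; the backend URL and paths are placeholders, not the actual configuration from this case:

```shell
#!/bin/sh
# Hypothetical disaster-recovery drill: restore the latest version into an
# empty scratch directory WITHOUT the local database, mimicking a real DR
# scenario. BACKEND_URL and SCRATCH are placeholder values.
BACKEND_URL="ftp://backup.example.com/duplicati"
SCRATCH="/tmp/dr-restore-test"

rm -rf "$SCRATCH" && mkdir -p "$SCRATCH"

# --no-local-db forces a database rebuild from the remote files, which is
# exactly the step that fails in the case described in this post.
duplicati-cli restore "$BACKEND_URL" "*" \
    --restore-path="$SCRATCH" \
    --no-local-db=true \
    --passphrase="$PASSPHRASE"

# A non-zero exit code (e.g. 100) here means the restore failed even
# though "test" reported OK.
echo "restore exit code: $?"
```

Running this on a schedule, against a throwaway directory, is the only check that actually exercises the disaster-recovery path.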

But back to the specific case I’m talking about. My guess is that the problem is a leftover index (dindex) file. When the local database is available, the file is ignored, but while rebuilding the database it apparently causes the badly written logic to fail.

Remote file referenced as duplicati-b745030a03fa640d29f2daa1849cf0f2e.dblock.zip.aes by duplicati-i59a736b5d13d47d0b91cdfdfa1ddf8cb.dindex.zip.aes, but not found in list, registering a missing remote file.

This file should get deleted, because it’s probably not even necessary, and the restore process should ignore it rather than fail because of it.
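The failure mode can be illustrated with a toy consistency check: given the list of files actually on the backend, find the dindex files that reference dblock volumes which are no longer there — the candidates to ignore or delete during a database rebuild. This is a hypothetical sketch of the logic, not Duplicati’s actual implementation; all filenames below are shortened placeholders:

```python
# Toy model: each dindex maps to the dblock volume(s) it references.
# In real Duplicati the mapping lives inside the dindex archive itself;
# here it is passed in directly for illustration.

def find_stale_indexes(remote_files, index_refs):
    """Return dindex files referencing dblock volumes missing remotely.

    remote_files: iterable of filenames actually present on the backend.
    index_refs:   dict mapping dindex filename -> set of dblock names
                  that the index claims exist.
    """
    present = set(remote_files)
    stale = {}
    for dindex, dblocks in index_refs.items():
        missing = {b for b in dblocks if b not in present}
        if missing:
            stale[dindex] = missing
    return stale

remote = {
    "duplicati-b111.dblock.zip.aes",
    "duplicati-i59a7.dindex.zip.aes",
    "duplicati-iaaaa.dindex.zip.aes",
}
refs = {
    # This index points at a dblock that was deleted or never uploaded,
    # matching the "but not found in list" log message above.
    "duplicati-i59a7.dindex.zip.aes": {"duplicati-b745.dblock.zip.aes"},
    "duplicati-iaaaa.dindex.zip.aes": {"duplicati-b111.dblock.zip.aes"},
}
print(find_stale_indexes(remote, refs))
# → {'duplicati-i59a7.dindex.zip.aes': {'duplicati-b745.dblock.zip.aes'}}
```

A check like this is cheap (it only needs the remote file listing plus the index contents), so there is no obvious reason a rebuild has to hard-fail on a stale index instead of flagging and skipping it.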

Also, about the testing process: testing a backup while the local database is present is pointless, because the database is not available when you’re doing disaster recovery. These are bad, inherent flaws in the current “solution”. The poor reliability seems to be the absolute worst part, but let’s get the problems fixed. These are obvious, logical problems that should be fixed without needing bug reports. If the backup worked transactionally and correctly, these problems should never occur, and if they do, automatic recovery should kick in. I think I wrote exactly the same thing around a year ago.

Luckily this backup set isn’t in the terabytes, so it’s relatively viable to run all the tests over a 10-gigabit network.

Found 1 missing volumes; attempting to replace blocks from existing volumes

Here are fresh, retested and confirmed results:
Full Restore: FAIL - Code 100, unable to recover
Full Test: OK - 0
List-Broken-Files: OK - 0
Purge-Broken-Files: OK - 0
Repair: OK - 0
Full Restore (again after all checks and tests): FAIL - Code 100, unable to recover

This is a perfect example of a ‘trap’ product: it wastes your time and resources and provides no value when it is actually needed. That’s why critical flaws like this shouldn’t exist by design.

About the version I’m restoring in this case: always the latest version, which makes sense for DR restoration. That’s also why, when such a problem is detected, the potentially missing blocks should be automatically and immediately replaced (during the next backup), because the required data is still available on the source system.

As mentioned, I’m just TESTING; I’ve got all the data I need and no actual need to restore, so this isn’t a major problem for me. But as a programmer, data administrator, IT manager, and someone who cares deeply about data integrity and system reliability, I find these problems absolutely devastating.

  • I remember a compression program that gave great results in all tests. It worked really well, it was fast, and it achieved incredible compression ratios. Under the covers it just referenced the source files. As soon as the user deleted the source files, extracting the archive failed, leading to data loss. - Funny stuff.

Edit and continued: the test was run with “all” and --full-remote-verification=true; even adding --no-local-db=true doesn’t change anything, it still passes. With --no-local-blocks=true it still passes too.
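For reference, the verification run described above would look roughly like this (assuming the `duplicati-cli` client; the backend URL is a placeholder):

```shell
# Hypothetical invocation of the test that keeps passing despite the
# unrestorable backup; "all" asks it to test every remote volume.
duplicati-cli test "ftp://backup.example.com/duplicati" all \
    --full-remote-verification=true \
    --no-local-db=true \
    --no-local-blocks=true \
    --passphrase="$PASSPHRASE"
```

If even this combination of options reports OK while the restore exits with code 100, the test is not covering the database-rebuild path at all.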

Which option removes excess files from the destination folder? I think I’ve seen discussion about this, but I couldn’t find the parameter quickly. I’m also strongly against software that goes around “pooping”, leaving junk everywhere. If something isn’t necessary, it should get deleted.
