This is likely related somehow to compaction and to this data set getting “broken”, whatever that means in the end.
The initial situation was caused by an interrupted compaction (generated by an earlier version, which still had the journal issue). I have updated since then, before running repair, compact, and the rest of the operations. My activity was triggered by the index error below:
Missing file: duplicati-i12764fec0f664e6185a416acf7b1850e.dindex.zip.aes
Found 1 files that are missing from the remote storage, please run repair
And repair uploaded a 541-byte file; as mentioned, I have earlier connected this to the restore failure.
Listing remote folder ...
Uploading file (541 bytes) ...
As we all know, this is basically an empty placeholder, which shouldn’t hurt.
After this, compaction finished without problems.
And then we are in the situation I described a bit earlier.
I also decrypted the “broken” b0366 … file; zip verification passes, and extracting it also completes without any problems.
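For reference, this is roughly how I checked it; a minimal sketch assuming the volume is decrypted with pyAesCrypt (any AES Crypt compatible tool would do), with the file name and passphrase as placeholders:

```python
# Minimal sketch, not the exact commands used; file name and passphrase are placeholders.
import zipfile
import pyAesCrypt

encrypted = "duplicati-b0366...zip.aes"   # placeholder for the actual volume name
decrypted = "b0366.zip"
passphrase = "backup passphrase here"

# Decrypt the AES Crypt container into a plain zip volume.
pyAesCrypt.decryptFile(encrypted, decrypted, passphrase, 64 * 1024)

# testzip() returns None when every entry's CRC is intact; extractall() mirrors
# the "extracting it passes" check.
with zipfile.ZipFile(decrypted) as zf:
    bad = zf.testzip()
    print("zip verify:", "OK" if bad is None else f"first bad entry: {bad}")
    zf.extractall("b0366-extracted")
```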
If necessary, I can privately share the dlist and the b0366 index (not the data), if anyone thinks that would be helpful. They must be kept private and deleted once no longer strictly needed for debugging / analysis, but technically they do not contain anything particularly sensitive.
… Then on to the testing and fixing …
I took a copy of the backup set and tried to restore it, just as before. → Fail
Then I ran repair to rebuild the database locally, and then I ran list-broken-files, purge-broken-files, and repair again.
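For completeness, the sequence was roughly the following. A minimal sketch driving the Duplicati CLI from Python; the executable name (duplicati-cli), storage URL, dbpath, and passphrase are placeholders/assumptions, not the exact invocation I used:

```python
# Rough sketch of the repair / purge sequence via the Duplicati command line.
# duplicati-cli, the URL, dbpath and passphrase below are placeholders.
import subprocess

url = "file:///path/to/backup-copy"
common = ["--dbpath=/tmp/rebuilt.sqlite", "--passphrase=backup passphrase here"]

for command in ("repair", "list-broken-files", "purge-broken-files", "repair"):
    print("===", command)
    subprocess.run(["duplicati-cli", command, url, *common], check=True)
```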
And then I decrypted the dlist files, so now I can diff them against the original ones from the “broken” set. There are some differences, because data got deleted in the process.
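To illustrate the kind of diff I mean, a small sketch that compares the paths referenced by two decrypted dlist volumes, assuming each dlist zip contains the usual filelist.json; the file names are placeholders:

```python
# Sketch of diffing two decrypted dlist volumes by the paths they reference.
# Assumes each dlist zip contains the usual filelist.json; names are placeholders.
import json
import zipfile

def paths(dlist_zip):
    with zipfile.ZipFile(dlist_zip) as zf:
        entries = json.loads(zf.read("filelist.json"))
    return {e["path"] for e in entries}

before = paths("dlist-broken.zip")   # from the "broken" set
after = paths("dlist-purged.zip")    # after purge-broken-files + repair

print("only in broken set:", sorted(before - after))
print("only in purged set:", sorted(after - before))
```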
Restore now “works”: run repair, remove some data from the backup, and then run restore. But now at least some data (i.e. one large file) is missing. That is still not a good result at all; restore should work without deleting data from the backup.