Backup valid, but still unrestorable?

Unfortunately I don’t have the original one anymore. Strange, ok, that’s not exactly what I expected that to be. It’s actually more interesting. Why would the index file have an invalid reference? Probably some other bug somewhere else, not directly related to the compaction abort I was blaming for so many issues.

But I might already have more similar cases. Unfortunately those are likely to involve much larger data sets, and therefore be harder to analyze. The one I provided you was so useful because it was a really small set.

But this is probably the secondary cause making some of the backups extremely slow to restore and unreliable, on top of the missing files after compaction.

Currently I don’t know of any problems other than these two, which doesn’t sound too bad after all. Just yesterday I ran 100% restore tests again for all the sets, and these were the two cases: missing blocks, which may or may not be restorable, and a database-locked error on the next backup run after a compaction abort (a clear bug!), which is trivial to remedy with repair.

Ref: FTP delete not atomic / verified / transactional (?) - #31 by Sami_Lehtinen

Edit continued:

Yes, it seems we’ve already got such a case: registering missing blocks, an insane amount of single-threaded CPU waiting, and then finally an error. Yet again, this is a data set that passes backup. I can actually run the test right now; I’ll do it.

Error on restore:

ErrorID: DatabaseIsBrokenConsiderPurge
Recreated database has missing blocks and 3 broken filelists. Consider using "list-broken-files" and "purge-broken-files" to purge broken data from the remote store and the database.

Checked:
Backup runs just fine, restore fails, list-broken-files comes back clean, purge-broken-files finds nothing to purge, repair didn’t fix a thing (no errors to fix), and test (I’ll update when done, this is going to take some time)... Yes, test passed without errors. Still logically broken, and deadly dangerously so.
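For reference, the check sequence above roughly corresponds to something like the following (a sketch only; the `duplicati-cli` invocation style, the target URL, and the passphrase are placeholders of mine, not the actual setup):

```shell
#!/bin/sh
# Sketch of the verification sequence described above.
# TARGET and PASSPHRASE are hypothetical placeholders.
TARGET="ftp://backup.example.com/set1"
PASSPHRASE="secret"

duplicati-cli list-broken-files "$TARGET" --passphrase="$PASSPHRASE"   # came back clean
duplicati-cli purge-broken-files "$TARGET" --passphrase="$PASSPHRASE"  # nothing to purge
duplicati-cli repair "$TARGET" --passphrase="$PASSPHRASE"              # no errors to fix
duplicati-cli test "$TARGET" all --passphrase="$PASSPHRASE"            # passed
duplicati-cli restore "$TARGET" "*" --restore-path=/tmp/restoretest \
    --passphrase="$PASSPHRASE"                                         # this is the step that fails
```

The point being: every consistency check reports the backup as healthy, and only the full restore exposes the breakage.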

@ts678, I can save this state temporarily, both the sqlitedb and the full backup storage, but this data set is around 5 GB. As mentioned, I can’t provide dblocks, but otherwise the db contains nothing especially sensitive; path / file / server names aren’t that sensitive.

I’ll temporarily save a snapshot of this state.