Backup valid, but still unrestorable?

I intentionally ran a backup job in a way that would have completely messed up older Duplicati versions, and this time it finally worked as expected. In this case, the remote storage was updated a few times from another client (the old server) before I started using an old copy of it on a new server, which of course creates a logical version / integrity conflict.

But the recovery worked as expected. I wonder whether the situation would have been worse if I had compacted the remote storage in between. Then again, if recovery works correctly, even that shouldn't make a major difference.

Version: 2.0.4.30 canary 2019-09-20

Backup
Checking remote backup …
Listing remote folder …
Extra unknown file: duplicati-20190923T153013Z.dlist.zip.aes
Extra unknown file: duplicati-b9a9cc69c87d84c4aa32a3c2e1cb3d38b.dblock.zip.aes
Extra unknown file: duplicati-baa03eaed547e43449d315084131277e3.dblock.zip.aes
Extra unknown file: duplicati-ia18977bcbb524e41a096df73f2c090ad.dindex.zip.aes
Extra unknown file: duplicati-iaa9629029c614d03a2107ff52defd276.dindex.zip.aes
Missing file: duplicati-20190821T153014Z.dlist.zip.aes
Found 5 remote files that are not recorded in local storage, please run repair
Backend verification failed, attempting automatic cleanup => Found 5 remote files that are not recorded in local storage, please run repair
This is something that shouldn't happen; there's still some kind of logic error in the code base.
Failed to read local db XXX.sqlite, error: database is locked
database is locked => database is locked
database is locked
Fatal error => The process cannot access the file because it is being used by another process.

Repair
Listing remote folder …
Deleting file duplicati-20190923T153013Z.dlist.zip.aes (39,20 KB) …
Deleting file duplicati-b9a9cc69c87d84c4aa32a3c2e1cb3d38b.dblock.zip.aes (31,91 MB) …
Deleting file duplicati-baa03eaed547e43449d315084131277e3.dblock.zip.aes (139,42 KB) …
Downloading file (unknown) …
Failed to accept new index file: duplicati-ia18977bcbb524e41a096df73f2c090ad.dindex.zip.aes, message: Volume duplicati-b9a9cc69c87d84c4aa32a3c2e1cb3d38b.dblock.zip.aes has local state Deleting => Volume duplicati-b9a9cc69c87d84c4aa32a3c2e1cb3d38b.dblock.zip.aes has local state Deleting
Deleting file duplicati-ia18977bcbb524e41a096df73f2c090ad.dindex.zip.aes (1,35 MB) …
Downloading file (unknown) …
Failed to accept new index file: duplicati-iaa9629029c614d03a2107ff52defd276.dindex.zip.aes, message: Volume duplicati-baa03eaed547e43449d315084131277e3.dblock.zip.aes has local state Deleting => Volume duplicati-baa03eaed547e43449d315084131277e3.dblock.zip.aes has local state Deleting
Deleting file duplicati-iaa9629029c614d03a2107ff52defd276.dindex.zip.aes (52,37 KB) …
Uploading file (541 bytes) …

But after running the repair manually, the backup worked. Let's restore the backup and see if it works as well; usually in this kind of situation the backup got messed up and became unrestorable.
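For reference, the manual repair I mean is roughly this kind of invocation; the storage URL and database path below are placeholders, not the actual job settings:

Duplicati.CommandLine.exe repair "<storage-url>" --dbpath="<path-to-local-db>.sqlite"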

Test
Listing remote folder …
Examined 87 files and found no errors

So far so good… Let's restore the backup and see the final verdict; restores often used to fail in exactly this kind of situation.

Restore
Restored 785 (4,54 GB) files to XXX
Duration of restore: 00:07:09

Phew… Honestly, this is finally starting to look really good. When the next beta comes out, I'll need to push the update to all clients. Hopefully this will also fix the unrestorable-database cases in the future.

TempDir - Once again
Anyway, --tempdir is once again totally broken and doesn't work, at least when restoring: neither via the TMPDIR environment variable nor as a command-line parameter. Kind of legendary. I just seriously dislike inconsistent code. It's a really minor thing, but it would vastly improve restore performance in this case. At the same time, it's such a simple thing that I wonder how it can be broken all the time. It must say something about bad development processes and habits?
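For what it's worth, this is the kind of override I'd expect to work during a restore; the storage URL and paths below are placeholders, not the actual job settings:

Duplicati.CommandLine.exe restore "<storage-url>" "*" --tempdir="<fast-temp-dir>" --restore-path="<restore-target>"

or, alternatively, setting the environment variable before starting the restore:

TMPDIR=<fast-temp-dir>

In this test, neither form appeared to have any effect on where the temporary files were written.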