Complete fool that I am, I tried to get out of an issue by hitting the “delete and recreate database” button. That was about 24 hours ago. The latest update from the log is that I’m at Pass 3 of 3, processing blocklist volume 170 of 2019. If this proceeds at a linear rate, it will finish in about two weeks… if my VPN connection doesn’t get interrupted first (spoiler alert: it inevitably will).
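The “about two weeks” is only a back-of-the-envelope extrapolation; a minimal sketch of the arithmetic, assuming the full 24 hours went into pass 3 and the per-volume rate stays constant (both are assumptions, not measurements):

```python
# Rough ETA for the remaining blocklist volumes in pass 3.
# Assumes the whole 24 hours were spent on the 170 volumes processed so far
# and that the per-volume rate stays constant (both are assumptions).
elapsed_hours = 24
done, total = 170, 2019

rate = done / elapsed_hours            # volumes per hour
remaining_hours = (total - done) / rate

print(f"~{remaining_hours:.0f} hours left, i.e. ~{remaining_hours / 24:.1f} days")
# -> ~261 hours left, i.e. ~10.9 days
```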
PS: what had happened: a backup was interrupted by an operating system shutdown (a normal one, so Duplicati should have received a clean shutdown request from systemd, and all filesystems were unmounted properly). But when the next backup ran, I got the “unexpected difference in fileset” error. So I deleted the version mentioned in the error. And again, and again. After deleting 3 versions and seeing no progress, I wondered: if the issue is that the local database contains different information than the remote storage, maybe I should just recreate the local db from what is on the remote storage, rather than keep deleting backups entirely (note: these were not the latest versions from the past days, but versions from months ago or even last year).
When searching the forum I see several posts over the past few years of people asking the same question: why so slow?
To answer a recurring question:
Yes, this backup set is already > 2 years old, so despite running 2.0.6.3_beta_2021-06-17 now, it has been upgraded several times by now; at least two earlier versions have also been used on this system.
Other quantitative info:
- source data is about 350 GB, remote backup storage around 550 GB
- the sqlite file is around 3 GB
- on the remote storage: 14 dlist files, 2063 dblock files and 2076 dindex files; dblock files are ~250 MB each, dlist files 65–80 MB, dindex files mostly <200 KB, but some up to 1.x MB or even 30 MB
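For anyone who wants to check similar numbers on their own backend, a minimal sketch of how such a tally could be produced, assuming a local mirror or mount of the remote folder is available (the path below is a placeholder, not my actual setup):

```python
# Tally Duplicati remote volumes by type from a local mirror of the
# remote storage folder. The path is a placeholder for illustration.
from pathlib import Path
from collections import defaultdict

remote = Path("/mnt/remote-backup")    # placeholder path

counts = defaultdict(int)
sizes = defaultdict(list)

for f in remote.iterdir():
    for kind in ("dlist", "dblock", "dindex"):
        if kind in f.name:
            counts[kind] += 1
            sizes[kind].append(f.stat().st_size)

for kind, n in sorted(counts.items()):
    mb = [s / 1e6 for s in sizes[kind]]
    print(f"{kind}: {n} files, {min(mb):.1f}-{max(mb):.1f} MB")
```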
Why I’m posting a new question:
- As stated before, at some point I will not be there to reconnect my VPN in time, a download from the remote storage will fail, and the recovery process will most likely fail with it.
- I prefer not to have to wait several weeks before I can run a backup again; at this stage I don’t even know whether this backup set is recoverable at all (a backup got interrupted, so I didn’t try the delete-and-recreate just for fun).
- In the thread Very slow database recreation there’s mention of different behavior in the then-current canary release, so maybe there’s an interesting change I need to know about. Specifically, older versions of Duplicati could have introduced some error, or at least inefficiency, in the stored data; are there tools to clean up this situation without losing the backup versions (and without having a local database)?
- I’d also like to discuss other options:
  - I have 2 backups of this machine, so for the database that is being recreated now, I have a week-old copy in an offline backup; a couple of new backups have run against this remote storage since then. Does restoring this outdated version of the sqlite db help at all, or will it just throw some other inconsistency error in my face and fail anyway? (See the sketch after this list.)
  - There’s the RecoveryTool, which can recreate indexes, but would that help at all?
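On the week-old database idea, this is roughly how I imagine testing it: run the repair command against a copy of the old database and see whether it reconciles with the remote storage or fails quickly with a clear error. Whether repair can actually fix up a database that is a few backups behind is exactly my question, so treat this as a sketch, not a known-good procedure; all paths, the storage URL and the passphrase are placeholders.

```python
# Sketch: point Duplicati's "repair" command at a COPY of the week-old
# database (never the original) and see whether it can be brought back
# in sync with the remote storage. Whether repair handles this case is
# an assumption on my part; paths, URL and passphrase are placeholders.
import shutil
import subprocess

old_db = "/restore/duplicati/backup-week-old.sqlite"   # placeholder
test_db = "/tmp/duplicati-test.sqlite"                 # work on a copy
storage_url = "webdavs://example.org/backup"           # placeholder

shutil.copyfile(old_db, test_db)

result = subprocess.run(
    [
        "duplicati-cli", "repair", storage_url,
        f"--dbpath={test_db}",
        "--passphrase=PLACEHOLDER",                    # placeholder
    ],
    capture_output=True,
    text=True,
)
print(result.stdout)
print(result.stderr)
```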