Restoring/recreating database taking very long

Hi,

my version: linuxserver.io Docker container, v2.0.6.3_beta_2021-06-17
Backup size: ~250 GB in over 10k files
OS: Unraid
Stuff to restore: a couple of container configuration files and one .img file
Backup target: Mega

I had been running weekly (encrypted) backups for the last couple of months; now I have lost all my Docker containers and need to restore a VM.
About 24 hours ago I created a new Duplicati container and started a restore (Restore → Direct restore from backup files → …).
The progress bar now sits at about 90%, and it's downloading the dblock files (I think):

Oct 6, 2021 11:26 AM: Backend event: Get - Completed: duplicati-bfdb2aacbf8864e479e278ff93e0ad26e.dblock.zip.aes (49.92 MB)
Oct 6, 2021 11:26 AM: Pass 3 of 3, processing blocklist volume 199 of 5221
Oct 6, 2021 11:26 AM: Backend event: Get - Started: duplicati-bfdb2aacbf8864e479e278ff93e0ad26e.dblock.zip.aes (49.92 MB)
Oct 6, 2021 11:21 AM: Backend event: Get - Completed: duplicati-bc554d667c2b9460ba8b73af58093de1a.dblock.zip.aes (49.94 MB)
Oct 6, 2021 11:21 AM: Pass 3 of 3, processing blocklist volume 198 of 5221
Oct 6, 2021 11:21 AM: Backend event: Get - Started: duplicati-bc554d667c2b9460ba8b73af58093de1a.dblock.zip.aes (49.94 MB)

Each file takes about 5 minutes to complete. So if it really is processing all ~5,000 remaining files at 5 min/file, I'll be waiting a while…
I read about a bug concerning the restore, but that should be fixed in my version, right?
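
For a rough sense of what that wait would mean, here's my back-of-the-envelope math (the numbers come from the log excerpt above; 5 min/volume is just my observed average):

```python
# Rough ETA for the remaining "pass 3 of 3" volumes, based on the log excerpt
# above. The 5 min/volume figure is my own observed average, not a Duplicati value.
total_volumes = 5221
done_volumes = 199
minutes_per_volume = 5

remaining = total_volumes - done_volumes
eta_days = remaining * minutes_per_volume / 60 / 24
print(f"{remaining} volumes left, roughly {eta_days:.1f} days at {minutes_per_volume} min each")
# -> 5022 volumes left, roughly 17.4 days at 5 min each
```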

I doubt that Duplicati is limited by CPU, RAM, I/O, or network speed here.
It’s downloading to an SSD and restoring to an HDD.

Is there anything I can do about the restore speed? Or am I just misinterpreting the log files, and it doesn't actually need all 5,221 files?
I would appreciate any feedback.

Best regards

How long have you been using Duplicati? Do you recall what version you had when you first started?

Duplicati should only need to download dlist and dindex files to rebuild the database, and as such it should be quite fast. My current understanding is that if it is downloading dblocks, it means some of the dindex files were not written correctly. I believe some past versions of Duplicati had a bug that sometimes caused it to write dindex files wrong. In my experience it was fixed somewhere in the late 2.0.4.x versions, but if you started using Duplicati before that then you may be hit by this issue. That’s why I’m curious what version you had when you first started using Duplicati. (By the way, if you had a functioning database there is a way to rewrite the dindex files correctly with the current version of Duplicati.)
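
If you want to see what your backend actually contains, counting the volume types in a local copy or listing of the remote files gives a quick hint. A minimal sketch, assuming the default duplicati-*.dlist/dindex/dblock naming scheme and a hypothetical local path:

```python
# Count Duplicati volume types in a folder holding (a copy of) the remote files.
# A healthy recreate should only need the dlist and dindex files; thousands of
# dblocks being fetched points at the bad-dindex problem described above.
from collections import Counter
from pathlib import Path

backup_dir = Path("/path/to/remote/files")  # hypothetical location of your copy

counts = Counter()
for f in backup_dir.iterdir():
    for kind in ("dlist", "dindex", "dblock"):
        if f".{kind}." in f.name:
            counts[kind] += 1

print(dict(counts))  # e.g. {'dlist': 40, 'dindex': 5221, 'dblock': 5221}
```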

In any case, there isn’t much choice but to let this finish. And there isn’t any way to speed it up. It is largely single-threaded.

I just checked my backup files, and it looks like I started backing up in early November 2020, so that should have been version 2.0.5.1.

The database was fine, I think - I never had any problems until the data loss.

If your database is ok, then you shouldn’t use this method for doing a restore. Just do a restore from your backup job directly. (Either click the backup job in the web UI and then click “Restore files”, or click Restore and then click the backup job.)

The “direct restore from files” option only creates a temporary database, which is discarded once the restore finishes. So it's only intended for situations where you need to do a one-off restore on a different machine from the one where Duplicati normally runs its backups.

Well, I lost the database - it was fine before that.
I stopped the restore because I didn't want to wait weeks or months for it to finish.
Then I did the following:

  1. With the new container I was able to restore my old container files (including the database)
  2. I then copied the restored files into the current working directory of the duplicati container - folder structure looks like this:
name                                         size     last modified
.cache                                                2021-10-08 21:56
.config                                               2021-10-08 21:57
.mono                                                 2021-10-08 21:57
control_dir_v2                                        2021-10-08 21:55
custom-cont-init.d                                    2021-10-08 21:55
custom-services.d                                     2021-10-08 21:55
backup ZVAGIDJZFJ 20210507020002.sqlite      2.11 GB  2021-10-08 21:55
Duplicati-server.sqlite                      299 KB   2021-10-08 22:03
Sicherung KLUJDGTBQX 20210722082631.sqlite   870 MB   2021-10-08 21:55
ZVAGIDJZFJ.sqlite                            2.74 GB  2021-10-08 22:03

This didn't look too bad to me, and Duplicati also showed my old backup job.

  3. I tried a restore (via the backup job) and got this error (I tried different restore points):

System.Exception: Unexpected number of remote volumes detected: 0!
at Duplicati.Library.Main.Database.LocalDatabase.UpdateRemoteVolume (System.String name, Duplicati.Library.Main.RemoteVolumeState state, System.Int64 size, System.String hash, System.Boolean suppressCleanup, System.TimeSpan deleteGraceTime, System.Data.IDbTransaction transaction) [0x00080] in :0
at Duplicati.Library.Main.Database.LocalDatabase.UpdateRemoteVolume (System.String name, Duplicati.Library.Main.RemoteVolumeState state, System.Int64 size, System.String hash, System.Boolean suppressCleanup, System.Data.IDbTransaction transaction) [0x0000f] in :0
at Duplicati.Library.Main.Database.LocalDatabase.UpdateRemoteVolume (System.String name, Duplicati.Library.Main.RemoteVolumeState state, System.Int64 size, System.String hash, System.Data.IDbTransaction transaction) [0x00000] in :0
at Duplicati.Library.Main.Operation.FilelistProcessor.RemoteListAnalysis (Duplicati.Library.Main.BackendManager backend, Duplicati.Library.Main.Options options, Duplicati.Library.Main.Database.LocalDatabase database, Duplicati.Library.Main.IBackendWriter log, System.Collections.Generic.IEnumerable`1[T] protectedFiles) [0x009f3] in <e60bc008dd1b454d861cfacbdd3760b9>:0
at Duplicati.Library.Main.Operation.FilelistProcessor.VerifyRemoteList (Duplicati.Library.Main.BackendManager backend, Duplicati.Library.Main.Options options, Duplicati.Library.Main.Database.LocalDatabase database, Duplicati.Library.Main.IBackendWriter log, System.Collections.Generic.IEnumerable`1[T] protectedFiles) [0x00000] in :0
at Duplicati.Library.Main.Operation.FilelistProcessor.VerifyRemoteList (Duplicati.Library.Main.BackendManager backend, Duplicati.Library.Main.Options options, Duplicati.Library.Main.Database.LocalDatabase database, Duplicati.Library.Main.IBackendWriter backendWriter, System.Boolean latestVolumesOnly, System.Data.IDbTransaction transaction) [0x00019] in :0
at Duplicati.Library.Main.Operation.RestoreHandler.DoRun (Duplicati.Library.Main.Database.LocalDatabase dbparent, Duplicati.Library.Utility.IFilter filter, Duplicati.Library.Main.RestoreResults result) [0x00136] in :0
at Duplicati.Library.Main.Operation.RestoreHandler.Run (System.String paths, Duplicati.Library.Utility.IFilter filter) [0x00062] in :0
at Duplicati.Library.Main.Controller+<>c__DisplayClass15_0.b__0 (Duplicati.Library.Main.RestoreResults result) [0x0001c] in :0
at Duplicati.Library.Main.Controller.RunAction[T] (T result, System.String& paths, Duplicati.Library.Utility.IFilter& filter, System.Action`1[T] method) [0x0026f] in :0
at Duplicati.Library.Main.Controller.Restore (System.String paths, Duplicati.Library.Utility.IFilter filter) [0x00021] in :0
at Duplicati.Server.Runner.Run (Duplicati.Server.Runner+IRunnerData data, System.Boolean fromQueue) [0x0040a] in <156011ea63b34859b4073abdbf0b1573>:0

  4. I tried to verify the files and got the following error:

System.IO.InvalidDataException: Found inconsistency in the following files while validating database:
/source/domains/Sardonyx/sardonyx.img, actual size 65498251264, dbsize 62951972864, blocksetid: 154910
/source/domains/Odin/vdisk1.img, actual size 26843545600, dbsize 26642739200, blocksetid: 154912
/source/system/libvirt/libvirt.img, actual size 5368709120, dbsize 5366251520, blocksetid: 154914
/source/system/docker/docker.img, actual size 21474836480, dbsize 17361018880, blocksetid: 154916
/source/appdata/sonarr/logs.db, actual size 5320704, dbsize 0, blocksetid: 154919
… and 3 more. Run repair to fix it.
at Duplicati.Library.Main.Database.LocalDatabase.VerifyConsistency (System.Int64 blocksize, System.Int64 hashsize, System.Boolean verifyfilelists, System.Data.IDbTransaction transaction) [0x000e9] in :0
at Duplicati.Library.Main.Operation.TestHandler.Run (System.Int64 samples) [0x0009f] in :0
at Duplicati.Library.Main.Controller+<>c__DisplayClass30_0.b__0 (Duplicati.Library.Main.TestResults result) [0x0001c] in :0
at Duplicati.Library.Main.Controller.RunAction[T] (T result, System.String& paths, Duplicati.Library.Utility.IFilter& filter, System.Action`1[T] method) [0x0026f] in <e60bc008dd1b454d861cfacbdd3760b9>:0
at Duplicati.Library.Main.Controller.RunAction[T] (T result, System.Action`1[T] method) [0x00009] in :0
at Duplicati.Library.Main.Controller.Test (System.Int64 samples) [0x0004b] in :0
at Duplicati.Server.Runner.Run (Duplicati.Server.Runner+IRunnerData data, System.Boolean fromQueue) [0x00423] in <156011ea63b34859b4073abdbf0b1573>:0

  5. I tried to run a repair and got the same error as when I tried the restore: Unexpected number of remote volumes detected: 0!

  6. It looked like I had no choice but to delete and recreate the database, which I did. It has now been running for a day or two and is again “stuck” at 90%.

I also copied all the backup files to a local disk in hopes that the restore/rebuild would speed up a bit, but it doesn’t look like it makes much of a difference.

If you have a way of monitoring task resource use, I suspect you’ll find that it is CPU-bound… the thing that seems to take so long is rebuilding the SQLite database from the remote files, and it’s apparently a single-threaded process.
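
If you want to confirm that yourself, sampling the process's CPU usage is enough. A minimal sketch using the third-party psutil package; matching on "mono" is an assumption on my part, since the linuxserver.io image runs Duplicati under the Mono runtime:

```python
# Sample the Duplicati process's CPU usage to see whether the database
# recreate is pinned to a single core. Requires: pip install psutil
import psutil

# Matching on "mono" is an assumption: the linuxserver.io image runs
# Duplicati under Mono. Adjust the match for your own setup.
procs = [p for p in psutil.process_iter(["name"])
         if "mono" in (p.info["name"] or "").lower()]

for _ in range(10):
    for p in procs:
        # cpu_percent() can exceed 100% when several cores are busy; a value
        # hovering around 100% suggests a single-threaded bottleneck.
        print(p.pid, p.cpu_percent(interval=1.0))
```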

I happened to test a restore not long ago, after having had Duplicati running for several years, and discovered this unfortunate limitation. There's something in the way that database is (re)built that's slow as molasses once you have a lot of files and versions; I don't know anything about the internals, so I have no idea what it is or whether there is any hope that it could one day be improved.

I don't know of anything that can do what Duplicati does and do it quickly when you need to restore from only the remote files. By "quickly" I mean that the rate-limiting step is the speed at which the remote files can be downloaded, which is what I assumed (and I think most people would assume) until I discovered otherwise.

Oh, one immediate thing… be very careful about running a “database repair.” I find the terminology confusing: under some circumstances this operation can delete remote files (attempting to sync them with a broken local database), making recovery impossible. Get advice (preferably from someone who knows more than I do) before doing anything like that, and/or make a copy of your remote files, either locally or on the remote server, if you can.
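
Since you mentioned you already copied the backup files to a local disk, even a trivial snapshot script is worth running before any repair attempt. A minimal sketch; both paths are hypothetical:

```python
# Snapshot the backup volumes before attempting a repair, so an operation
# that deletes remote files can never destroy the only remaining copy.
import shutil
from pathlib import Path

src = Path("/mnt/user/backups/duplicati")    # hypothetical: your local copy
dst = Path("/mnt/disk1/duplicati-snapshot")  # hypothetical: snapshot target
dst.mkdir(parents=True, exist_ok=True)

for f in src.glob("duplicati-*"):
    target = dst / f.name
    if not target.exists():      # makes the copy resumable if interrupted
        shutil.copy2(f, target)  # copy2 preserves timestamps

print("snapshot complete:", sum(1 for _ in dst.iterdir()), "files")
```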
