Database Restore Takes Over 5 Days/Looking for Alternatives

Hi everyone,

I’m having a serious issue with restoring my Duplicati database. I’ve been trying, in vain, to restore the database since mid-February, which is also when my last backup ran. The restore process is taking well over 5 days to complete each time I try.

If a database restore is interrupted, there’s no way to resume it; the whole thing has to be deleted and started again from scratch each time.

I’ve been pausing the job whenever possible (and not shutting down) to take the laptop to my workplace, but something inevitably goes wrong. Twice I’ve accidentally triggered a restart myself. Other times, it restarts due to scheduled late-night Windows updates, the router drops out and the restore fails, or any number of unexpected interruptions happen. I’ve done my best to mitigate these risks, but realistically, it’s extremely difficult to keep a week-long restore process running on a laptop without interruption—especially when that portable machine is still needed for day-to-day work.

I’ve attempted using the old RecoveryTool method to rebuild the database from my remote backup files, but it turns out that tool doesn’t work if I don’t have the right file types present, so it’s been of no help in this case.

I’m hoping for advice on how to speed up the restore process, or get a minimally restored database and slowly add backups to it, or otherwise work around the inability to resume a restore job. My most recent backup is now approaching two months old, which is getting pretty dangerous, and despite many attempts, I haven’t made any real progress.

It’s just not practical to rely on a restore process that takes a week but can’t be resumed.

Also, if anyone has any solutions, or recommendations for decent alternatives to Duplicati, I’d love to hear them.

Any insights, workarounds, or recommendations for the current situation would be greatly appreciated.

Thanks in advance for your help.

I assume you mean Recreate (or Repair with no DB). Restore is from a backup.

Are there any computers available that can stay in one location? If so, use one.

Databases can move. Two active systems are bad, but you’re not near that yet.

RecoveryTool is for restore, not for regaining the database, unless the database itself is backed up. Occasionally people will back up the database, but more usually one recreates it.

Can you clarify that, or post errors? I “think” it uses dlist and dblock files.
Ordinary Recreate uses dlist and dindex, going for dblock if necessary.
A dindex is supposed to index one dblock, thus avoiding dblock fetching.
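
If the Destination is reachable as a local folder, a quick count of each file type can show whether that pairing roughly holds. A minimal sketch in Python (the path is a placeholder):

```python
# Count each Duplicati file type on the Destination to check the
# expected one-dindex-per-dblock pairing. The path is a placeholder.
from collections import Counter
from pathlib import Path

dest = Path("/mnt/backup")  # adjust to your Destination folder
counts = Counter()
for f in dest.iterdir():
    for kind in ("dblock", "dindex", "dlist"):
        if f".{kind}." in f.name:
            counts[kind] += 1
print(dict(counts))  # far more dindex than dblock hints at duplicate index files
```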

The last 30% on the progress bar is downloading dblock files. 90% to 100% becomes very slow because it’s an exhaustive search for missing information.

If you like to see what it’s doing, watch About → Show log → Live → Verbose.

What’s the Destination storage type and speed? Maybe you can copy it to faster storage, similar to how a stays-in-place computer can help. Got a fast desktop handy?

Something with an SSD (if you don’t have one) is another way to speed things up.
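
For example, if the Destination can be mounted, a one-time mirror onto local SSD storage might look like this sketch (placeholder paths; you’d then point a copy of the job at the local folder for the Recreate):

```python
# One-time mirror of the Destination onto a fast local disk, so the
# Recreate reads from the SSD instead of the network. Placeholder paths.
import shutil

shutil.copytree("/mnt/remote-backup", "D:/backup-copy", dirs_exist_ok=True)
```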

How big a backup is this?

Windows has performance tools such as Task Manager to show what got busy. Without any performance measurements, it’s hard to say what may be slowing things down.

You can (if Destination space is available for another folder) start a new backup based on the current one. Export it, Import as a new job, change the name and folder.

Duplicati can only run one operation at a time, but if you move the Recreate to a different system, you can do a new fresh backup (maybe a big upload?) for a while. Merging them if the old backup comes back can’t be done, so you’d choose one.

I take it there has never been an error message from Recreate that wasn’t purely environmental (e.g. unexpected interruptions)? There’s very little info to work with.

Hi ts678,

Thanks for the detailed feedback. I ran another generic rebuild from the web GUI, and here’s an excerpt from the verbose log:

5 Apr 2025 11:05 AM: Backend event: Get - Completed: duplicati-if2cd360cc4c14027abf56e08b25f3f77.dindex.zip.aes (845 bytes)
5 Apr 2025 11:05 AM: Backend event: Get - Started: duplicati-if2cd360cc4c14027abf56e08b25f3f77.dindex.zip.aes (845 bytes)
5 Apr 2025 11:05 AM: Processing indexlist volume 12874 of 42987
5 Apr 2025 11:05 AM: Backend event: Get - Completed: duplicati-i6aba8a89e87d4aeba68b05edfe16f799.dindex.zip.aes (18.278 KB)
5 Apr 2025 11:05 AM: Failed to process index file: duplicati-if021d78862ef46e5a42add193839d3e1.dindex.zip.aes
5 Apr 2025 11:05 AM: Backend event: Get - Started: duplicati-i6aba8a89e87d4aeba68b05edfe16f799.dindex.zip.aes (18.278 KB)
5 Apr 2025 11:05 AM: Processing indexlist volume 12873 of 42987
5 Apr 2025 11:05 AM: Backend event: Get - Completed: duplicati-if021d78862ef46e5a42add193839d3e1.dindex.zip.aes (845 bytes)
5 Apr 2025 11:05 AM: Failed to process index file: duplicati-iebb13c167017434db389193e7fabdb71.dindex.zip.aes
5 Apr 2025 11:05 AM: Backend event: Get - Started: duplicati-if021d78862ef46e5a42add193839d3e1.dindex.zip.aes (845 bytes)
5 Apr 2025 11:05 AM: Processing indexlist volume 12872 of 42987
5 Apr 2025 11:05 AM: Backend event: Get - Completed: duplicati-iebb13c167017434db389193e7fabdb71.dindex.zip.aes (845 bytes)
5 Apr 2025 11:05 AM: Backend event: Get - Started: duplicati-iebb13c167017434db389193e7fabdb71.dindex.zip.aes (845 bytes)
5 Apr 2025 11:05 AM: Processing indexlist volume 12871 of 42987
5 Apr 2025 11:05 AM: Backend event: Get - Completed: duplicati-ic39bb3c5d4b64c3db5508b11be373f78.dindex.zip.aes (17.966 KB)
5 Apr 2025 11:05 AM: Backend event: Get - Started: duplicati-ic39bb3c5d4b64c3db5508b11be373f78.dindex.zip.aes (17.966 KB)
5 Apr 2025 11:05 AM: Processing indexlist volume 12870 of 42987
5 Apr 2025 11:05 AM: Backend event: Get - Completed: duplicati-ieff656abe426471aaf0a601484685d25.dindex.zip.aes (37.653 KB)
5 Apr 2025 11:05 AM: Failed to process index file: duplicati-i8f73921cf2d34f78b5a7721cce4664a5.dindex.zip.aes
Duplicati.Library.Main.Volumes.InvalidManifestException: Invalid manifest detected, the field Blocksize has value 1048576 but the value 102400 was expected
at Duplicati.Library.Main.Volumes.VolumeBase.ManifestData.VerifyManifest(String manifest, Int64 blocksize, String blockhash, String filehash)
at Duplicati.Library.Main.Volumes.VolumeReaderBase.ReadManifests(Options options)
at Duplicati.Library.Main.Volumes.IndexVolumeReader..ctor(String compressor, String file, Options options, Int64 hashsize)
at Duplicati.Library.Main.Operation.RecreateDatabaseHandler.DoRun(LocalDatabase dbparent, Boolean updating, IFilter filter, NumberedFilterFilelistDelegate filelistfilter, BlockVolumePostProcessor blockprocessor)
5 Apr 2025 11:05 AM: Backend event: Get - Started: duplicati-ieff656abe426471aaf0a601484685d25.dindex.zip.aes (37.653 KB)
5 Apr 2025 11:05 AM: Processing indexlist volume 12869 of 42987
5 Apr 2025 11:05 AM: Backend event: Get - Completed: duplicati-i8f73921cf2d34f78b5a7721cce4664a5.dindex.zip.aes (845 bytes)
5 Apr 2025 11:05 AM: Backend event: Get - Started: duplicati-i8f73921cf2d34f78b5a7721cce4664a5.dindex.zip.aes (845 bytes)
5 Apr 2025 11:05 AM: Processing indexlist volume 12868 of 42987
5 Apr 2025 11:05 AM: Backend event: Get - Completed: duplicati-i4426dc44c89740e18a58cedcbc2ffd55.dindex.zip.aes (18.278 KB)
5 Apr 2025 11:05 AM: Failed to process index file: duplicati-i66d18145c80546058bda28d760867a02.dindex.zip.aes
Duplicati.Library.Main.Volumes.InvalidManifestException: Invalid manifest detected, the field Blocksize has value 1048576 but the value 102400 was expected
at Duplicati.Library.Main.Volumes.VolumeBase.ManifestData.VerifyManifest(String manifest, Int64 blocksize, String blockhash, String filehash)
at Duplicati.Library.Main.Volumes.VolumeReaderBase.ReadManifests(Options options)
at Duplicati.Library.Main.Volumes.IndexVolumeReader..ctor(String compressor, String file, Options options, Int64 hashsize)
at Duplicati.Library.Main.Operation.RecreateDatabaseHandler.DoRun(LocalDatabase dbparent, Boolean updating, IFilter filter, NumberedFilterFilelistDelegate filelistfilter, BlockVolumePostProcessor blockprocessor)
5 Apr 2025 11:05 AM: Backend event: Get - Started: duplicati-i4426dc44c89740e18a58cedcbc2ffd55.dindex.zip.aes (18.278 KB)
5 Apr 2025 11:05 AM: Processing indexlist volume 12867 of 42987
5 Apr 2025 11:05 AM: Backend event: Get - Completed: duplicati-i66d18145c80546058bda28d760867a02.dindex.zip.aes (845 bytes)
5 Apr 2025 11:05 AM: Failed to process index file: duplicati-i2741af7a45ae4a1c94a65aab21ba20bc.dindex.zip.aes
Duplicati.Library.Main.Volumes.InvalidManifestException: Invalid manifest detected, the field Blocksize has value 1048576 but the value 102400 was expected
at Duplicati.Library.Main.Volumes.VolumeBase.ManifestData.VerifyManifest(String manifest, Int64 blocksize, String blockhash, String filehash)
at Duplicati.Library.Main.Volumes.VolumeReaderBase.ReadManifests(Options options)
at Duplicati.Library.Main.Volumes.IndexVolumeReader..ctor(String compressor, String file, Options options, Int64 hashsize)
at Duplicati.Library.Main.Operation.RecreateDatabaseHandler.DoRun(LocalDatabase dbparent, Boolean updating, IFilter filter, NumberedFilterFilelistDelegate filelistfilter, BlockVolumePostProcessor blockprocessor)
5 Apr 2025 11:05 AM: Backend event: Get - Started: duplicati-i66d18145c80546058bda28d760867a02.dindex.zip.aes (845 bytes)
5 Apr 2025 11:05 AM: Processing indexlist volume 12866 of 42987
5 Apr 2025 11:05 AM: Backend event: Get - Completed: duplicati-i2741af7a45ae4a1c94a65aab21ba20bc.dindex.zip.aes (845 bytes)
5 Apr 2025 11:05 AM: Backend event: Get - Started: duplicati-i2741af7a45ae4a1c94a65aab21ba20bc.dindex.zip.aes (845 bytes)
5 Apr 2025 11:05 AM: Processing indexlist volume 12865 of 42987
5 Apr 2025 11:05 AM: Backend event: Get - Completed: duplicati-i0a67f259c5c6488e8367159899e4b375.dindex.zip.aes (36.481 KB)
5 Apr 2025 11:05 AM: Failed to process index file: duplicati-ib3c54c7281f34c7a84de0c669713976a.dindex.zip.aes
Duplicati.Library.Main.Volumes.InvalidManifestException: Invalid manifest detected, the field Blocksize has value 1048576 but the value 102400 was expected
at Duplicati.Library.Main.Volumes.VolumeBase.ManifestData.VerifyManifest(String manifest, Int64 blocksize, String blockhash, String filehash)
at Duplicati.Library.Main.Volumes.VolumeReaderBase.ReadManifests(Options options)
at Duplicati.Library.Main.Volumes.IndexVolumeReader..ctor(String compressor, String file, Options options, Int64 hashsize)
at Duplicati.Library.Main.Operation.RecreateDatabaseHandler.DoRun(LocalDatabase dbparent, Boolean updating, IFilter filter, NumberedFilterFilelistDelegate filelistfilter, BlockVolumePostProcessor blockprocessor)
5 Apr 2025 11:05 AM: Backend event: Get - Started: duplicati-i0a67f259c5c6488e8367159899e4b375.dindex.zip.aes (36.481 KB)
5 Apr 2025 11:05 AM: Processing indexlist volume 12864 of 42987

It looks like it’s skipping dindex files (which is expected) but then encounters several index files that fail with an InvalidManifestException—specifically, the manifest’s Blocksize is 1048576 while 102400 was expected.

  • The current database being rebuilt is based on 100 KiB blocks (102400 bytes).
  • But some .dindex.zip.aes files were created using 1 MiB blocks (1048576 bytes), which breaks consistency.

Do you have any insight into whether this blocksize discrepancy might be due to changes in the backup file format (perhaps a result of an update or downgrade)? I’m wondering if restoring from an older backup version (e.g. version “3”) might bypass this mismatch, as was suggested in the thread I saw earlier.

For additional context, my local machine (the one performing the rebuild) runs off an SSD. I also attempted a rebuild on my Pi5, the second, stationary computer, but it’s not a Windows machine, and I don’t have another Windows PC, so it’s my only alternative to the portable laptop. The backups are stored on a home server: a Raspberry Pi 5 running Debian with a 4TB USB3 hard drive attached, wired into my router via Ethernet. I also connect my laptop via Ethernet during rebuilds to maximise throughput. The backup set contains 21,549 dblock files, 42,987 dindex files, and 455 dlist files.

I also stand corrected on the terminology. I’ve been attempting a rebuild (not a restore) and now understand that the restore tool may not be the one I need for regenerating the database.

About 3 weeks ago I attempted a rebuild directly on the Pi5 using the same exported configuration from my PC, adjusted for Linux paths. I did manage to rebuild the database in the Docker container, but the resulting SQLite file is only around 16 GB, compared to 30+ GB on my Windows machine, so I’m unsure whether it captured everything correctly. I’ve since begun a repair there as well and will keep an eye on the verbose logs. I’ll also watch for any performance slowdown towards the latter stages of the rebuild, since you mentioned that the last 30% of the progress bar, which involves downloading dblock files, tends to slow dramatically.
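
To check that, I’m considering comparing per-table row counts between the two databases. A rough sketch of what I have in mind (the paths are placeholders, and the table names are read at runtime rather than assumed):

```python
# Compare two Duplicati SQLite databases by per-table row counts.
# Paths are placeholders; table names come from sqlite_master, so
# nothing about the schema is assumed.
import sqlite3

def table_counts(path):
    con = sqlite3.connect(path)
    tables = [r[0] for r in con.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    counts = {t: con.execute(f'SELECT COUNT(*) FROM "{t}"').fetchone()[0]
              for t in tables}
    con.close()
    return counts

win = table_counts("windows.sqlite")  # placeholder: Windows database
pi = table_counts("pi5.sqlite")       # placeholder: Pi5 database
for t in sorted(set(win) | set(pi)):
    print(f"{t}: windows={win.get(t, '-')} pi5={pi.get(t, '-')}")
```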

I can accept that a full rebuild might take a week given the size of the backup; however, the inability to save progress makes Duplicati 2 feel fundamentally unsafe for large backups. Every interruption—whether it’s a reboot, a power cut, or a network issue—forces a complete restart from scratch. It’s not just inconvenient; it turns every rebuild into a high-stakes gamble, and without checkpointing or resume support, the odds aren’t in the user’s favour.

Regarding the age of the backup, I recognise that my most recent backup is now approaching two months old, which is risky. Creating a brand new backup might have to be the way to go, even though it would add roughly another terabyte to the 1.5 terabytes already used for the main backup.

Any further recommendations for workarounds to handle these manifest errors would be greatly appreciated. Thanks again for your help—I’m looking forward to any additional insights you might have!

A summary of each issue raised and my response/plan:

  • Blocksize discrepancy / InvalidManifestException: Some index files use 1 MiB blocks (1048576 bytes) while the rebuild expects 100 KiB blocks (102400 bytes). Is this due to a format change (update/downgrade), or could restoring from an older version resolve it?
  • Hardware/environment: I’m rebuilding on an SSD-equipped local machine and have also attempted a rebuild on my stationary Pi5 (non-Windows), as I don’t have another Windows PC.
  • Terminology and tool usage: I now understand I’m performing a rebuild (not a restore), and that the restore tool isn’t the correct option for regenerating the database.
  • Performance slowdown: I’ll monitor for any slowdown in the final 30% of the rebuild (during dblock fetching) and report back any observations.
  • Backup age and new backup consideration: With my latest backup nearing two months old, the risk is high. A new backup might be needed even if it adds roughly another terabyte to the storage.
  • Safety and checkpointing concerns: A full rebuild can take over a week, yet any interruption forces a complete restart, which is an absurd gamble for enterprise-level backups. Could the tech team offer any solutions to introduce checkpointing or resume support?

Currently three days into this round… I do need to use the laptop, but thankfully (and unusually) I don’t have to take it anywhere for work for another seven days. Fingers crossed the rebuild completes successfully and nothing goes wrong in the meantime.

Best,

Adam

That’s what I thought. Thanks for confirming. Basically, rebuild from Destination.

Expected based on seeing the live log do that, which unfortunately I think it can do (drop lines)?

What it should do is read all dindex files. A log-file at verbose level will show.

There should be a dindex file for each dblock, with its name and a content index.

Looks way out of whack with one-to-one. I wonder if you got a second set of files. Checking dates might be interesting. Sorting by date, there’s usually a dblock and paired dindex shortly later. There shouldn’t be two dindex right then. Was there a burst of them later on without any dblock files? I’d like to know where they’re from.
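
If the Destination is mountable, listing the files in timestamp order makes the pairing (or a burst of unpaired dindex) easy to spot. A rough sketch (placeholder path):

```python
# List Destination files oldest-first; paired dblock/dindex uploads
# normally appear adjacent. A burst of dindex with no dblock nearby
# is the suspicious pattern. The path is a placeholder.
from datetime import datetime
from pathlib import Path

dest = Path("/mnt/backup")  # adjust to your Destination folder
for f in sorted(dest.iterdir(), key=lambda p: p.stat().st_mtime):
    kind = next((k for k in ("dblock", "dindex", "dlist") if f".{k}." in f.name), "?")
    when = datetime.fromtimestamp(f.stat().st_mtime).isoformat(timespec="seconds")
    print(when, kind.ljust(6), f.name)
```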

You could always set your own blocksize option, but the default for 2.0 was 100 KiB, while the 2.1 default is 1 MiB. There’s no Duplicati 3, so I don’t know what “3” refers to.

There’s a manifest file in any dlist, dindex, or dblock file. You can get clues about any mystery files by decrypting them with AES Crypt or Duplicati.CommandLine.SharpAESCrypt.exe (Linux has a different name) to read:

{"Version":2,"Created":"20250404T135043Z","Encoding":"utf8","Blocksize":1048576,"BlockHash":"SHA256","FileHash":"SHA256","AppVersion":"2.1.0.5"}

The blocksize for a backup is not supposed to ever change in any ordinary use, because it’s used to compute things like offsets in files. I found one bug where the change in default breaks things. Possibly you found another, but what would it be?

This interesting question isn’t related to whether you’ll wind up reading dblock files.

Not sure what this part means. The recovery tool builds a crude map of the remote contents.
It specifically does not build a database, in an attempt to keep the process as simple as possible, with the expectation that less code == fewer places things can break.

If your goal is to restore files, the recovery tool is designed to be able to do so without the local database.

If your goal is to continue running the backup, you need the database rebuilt.

Fair point. I think we should allow the process to continue.

I am certainly not objective in that, but it depends on what you need. There is a big comparison here. And some more in the Comparison category.

Do you have any idea what might have created this? We changed the default from 100 KiB to 1 MiB for 2.0.8.1, but any operation will first verify that the size is correct.

Not really, since you have a mix of sizes in there. Any version of Duplicati will complain if this is not correct.

We are currently investigating the recreate process and have identified some slowdowns related to the SQLite database once the tables are sufficiently large. I am hoping that we will soon (read: a few weeks) have a more performant recreate process.

I have noted your request to also be able to continue the recreate process.

Any updates? :crossed_fingers: