Long Long .... Long time to restore a backup


After 30 hours I'm still stuck in this situation:

Restoring files …

Scanning for local blocks …

Some information:

  • Restoring about 2.1 TB from an external NAS over a 1 Gb LAN
  • A lot of files… about 800,000
  • Restoring from a plain folder (the original server has crashed)

One of the biggest problems is that there is no way to tell whether the process is stuck or not.
The task is using 1 CPU out of 4.



You should be able to open another browser tab, and from there switch to About / Show log / Live.

You might need to adjust log levels. Profiling will show SQL queries – potentially lots of them though.

If it’s in an SQL query, that’s largely single-threaded (hence the 25% load on 4 cores), but also look at I/O.
Resource Monitor possibly gives a nice view. Exotic tools like Process Monitor are also available, but
the disk activity column in Task Manager’s Processes tab might be the easiest way. I think it’s trying to avoid downloads:


Duplicati will attempt to use data from source files to minimize the amount of downloaded data. Use this option to skip this optimization and only use remote data.

That description suggests you could skip the download optimization entirely, but it’s too late for that in the middle of a restore.
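For reference, the option quoted above is `--no-local-blocks`. A hypothetical sketch of what a restore invocation using it might look like — the backend URL, restore path, and launcher name are placeholders, not taken from this thread:

```python
# Hypothetical example only: a restore command that skips the local-block
# optimization. The storage URL and paths below are made-up placeholders.
restore_cmd = [
    "duplicati-cli", "restore",
    "file:///mnt/nas/backup",           # assumed backend URL
    "--restore-path=/restore/target",   # assumed target folder
    "--no-local-blocks=true",           # only use remote data
]
print(" ".join(restore_cmd))
```

As noted above, this only helps if set before the restore starts, not mid-run.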

After it figures out which blocks it needs from the destination, it should download dblock files and distribute their blocks into the restored files.

I’m hoping you’re not at the default 100 KB blocksize, because that’s too small for 2.1 TB (but too late to change now).
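The arithmetic behind that concern can be sketched quickly (decimal terabytes and the 100 KB default are assumptions matching the numbers in this thread):

```python
# Rough block-count estimate for a 2.1 TB backup.
backup_bytes = 2.1e12                 # ~2.1 TB, decimal units
default_blocksize = 100 * 1024        # Duplicati's default 100 KB
larger_blocksize = 1024 * 1024        # e.g. 1 MB instead

blocks_default = backup_bytes / default_blocksize
blocks_larger = backup_bytes / larger_blocksize
print(f"{blocks_default / 1e6:.1f} million blocks at 100 KB")
print(f"{blocks_larger / 1e6:.1f} million blocks at 1 MB")
```

That is around 20 million rows for the local database to track at the default, versus about 2 million with a 1 MB blocksize.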

Now it isn’t stuck anymore.

But it takes so much time to restore.

I need to understand the right optimization here.
But let me tell you that, optimization or not, it takes too much time for a restore.

I guess we have to pair Duplicati with some other “speedy” backup system.

All I’ve heard is that there’s a fast LAN. What speed/variety of drives are at destination and source?

Mechanical drives will be slower, and seek time when spreading blocks around is probably part of it.
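A minimal stdlib sketch of that access-pattern difference — file size and ordering here are arbitrary, and this is an illustration, not Duplicati code:

```python
import os
import tempfile

# Read the same data sequentially and in a scattered order; on mechanical
# drives the scattered case pays a head seek per block.
block = 100 * 1024   # the 100 KB default blocksize
nblocks = 200

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(block * nblocks))
    path = f.name

def read_blocks(order):
    total = 0
    with open(path, "rb") as fh:
        for i in order:
            fh.seek(i * block)
            total += len(fh.read(block))
    return total

sequential = read_blocks(range(nblocks))
scattered = read_blocks(reversed(range(nblocks)))  # simple non-sequential order
os.unlink(path)
```

Timing the two calls on a spinning disk (with caches cold) would show the seek cost directly.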

As a side note, Duplicati parallelizes backups more than restores, the theory being that backups are run more often.

The entire design also aims at space efficiency (especially for many versions), and that slows things.

Given the above, any questions?

The NAS is an old NETGEAR with 4 spinning disks in RAID 5 (so surely not the best!).
Anyway, if I run some tests with a 50 MB file, the read speed is more than 70-80 MB/s.
I hope Duplicati could have the option to read from disk and do all the I/O work in RAM.
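The 50 MB read test above can be reproduced with a short stdlib script (sizes are arbitrary; note the OS page cache will inflate results for a warm local file):

```python
import os
import tempfile
import time

# Quick sequential-read throughput check, similar to testing a 50 MB file.
size = 50 * 1024 * 1024
chunk = 1024 * 1024

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(size))
    path = f.name

start = time.perf_counter()
read = 0
with open(path, "rb") as fh:
    while data := fh.read(chunk):
        read += len(data)
elapsed = time.perf_counter() - start
os.unlink(path)
print(f"read {read / 1e6:.0f} MB in {elapsed:.3f}s "
      f"({read / 1e6 / elapsed:.0f} MB/s)")
```

Pointing `path` at a file on the NAS share instead would measure the network path rather than the local disk.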

Yep, I’ve read that.
I have to study it first; then I can formulate my questions properly.

It does a lot in the tempdir folder (likely the usual Temp). Using a lot of RAM won’t fit on a small system.
You can certainly make a RAM disk if you have the space. A mechanical drive is probably the worst option.
If the files are encrypted, add some time for that. Inside each is a .zip file with one file per block of the backup.
If you’re still at default, you have about 20 million blocks and the database has to track all of them…
That’s why for larger backups a larger blocksize is suggested, otherwise performance can drop a lot.
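If you do try the RAM-disk idea mentioned above, Duplicati’s --tempdir option (or the TMPDIR environment variable) can be pointed at a RAM-backed mount. A small Python sketch of the same redirection — the /dev/shm path is a Linux-specific assumption:

```python
import os
import tempfile

# Point temp files at a RAM-backed directory when one is available.
# /dev/shm is usually a tmpfs on Linux; fall back to the normal temp dir.
ram_dir = "/dev/shm" if os.path.isdir("/dev/shm") else tempfile.gettempdir()
tempfile.tempdir = ram_dir   # redirect this process's temp files

with tempfile.NamedTemporaryFile() as f:
    print("temp files now land in:", os.path.dirname(f.name))
```

The same idea applies to Duplicati: mount a tmpfs of adequate size and point --tempdir at it, keeping the block shuffling off the mechanical drive.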

How the backup process works
How the restore process works