You might need to adjust log levels. Profiling output will show the SQL queries, though potentially a lot of them.
If the time is going into an SQL query, that work is largely single-threaded (hence ~25% load on 4 cores), but also look at I/O.
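If it helps, here's a rough sketch of turning on profiling output to a file; the option names are what I'd expect from Duplicati's advanced options, so verify them against your version's documentation:

```shell
# Sketch: capture profiling-level logging (includes SQL statement timings).
# Option names assumed from Duplicati's advanced options; check your version.
Duplicati.CommandLine.exe restore <storage-url> "<files>" \
  --log-file=restore-profile.log \
  --log-file-log-level=Profiling
```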
Resource Monitor possibly gives a nice view. Exotic tools like Process Monitor are also available, but
the disk activity columns on Task Manager's Processes tab might be the easiest way. By default, I think it tries to avoid downloads:
Duplicati will attempt to use data from source files to minimize the amount of downloaded data. Use this option to skip this optimization and only use remote data.
which says, in effect, that maybe you don't need the download optimization, but it's too late to change that in the middle of a restore.
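For reference, I believe the option quoted above is no-local-blocks; a restore invocation forcing remote-only data might look roughly like this (treat the exact flag names as assumptions to verify):

```shell
# Sketch: restore using only remote data, skipping local-block reuse.
# --no-local-blocks is assumed to be the option described above.
Duplicati.CommandLine.exe restore <storage-url> "*" \
  --no-local-blocks=true \
  --restore-path="D:\Restored"
```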
After it figures out which blocks it needs from the destination, it should download dblock files and distribute their blocks into the restored files.
I'm hoping you're not at the default 100 KB blocksize, because that's too small for 2.1 TB (but it's too late to change now).
The NAS is an old NETGEAR with 4 rotating disks in RAID 5 (so surely not the best!)
Anyway, if I run some tests with 50 MB files, the read speed is 70-80 MB/s or more.
I hope Duplicati could have the option to read from a disk and do all the I/O work in RAM.
Yep, I've read that.
I have to study first, then I can formulate questions in the right manner
It does a lot of work in the tempdir folder (likely the usual Temp). Holding everything in RAM won't fit on a small system.
You can certainly make a RAM disk if you have the space to spare; a mechanical drive is probably the worst choice.
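As a sketch, if your version supports a tempdir advanced option (an assumption on my part; verify it exists), pointing it at an already-created RAM disk could look like:

```shell
# Sketch: redirect Duplicati's temporary files to a RAM disk mounted as R:.
# Assumes a RAM-disk tool already created R: and that --tempdir is available.
Duplicati.CommandLine.exe restore <storage-url> "*" --tempdir="R:\Temp"
```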
If the files are encrypted, add some time for decryption. Inside each dblock is a .zip file with one file per block of the backup.
If you're still at the default, you have about 20 million blocks, and the database has to track all of them…
That's why a larger blocksize is suggested for larger backups; otherwise performance can drop a lot.
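Here's the back-of-the-envelope math behind that 20 million figure (my own arithmetic, assuming 2.1 TB means decimal terabytes and Duplicati's "100 KB" default means 100 KiB):

```python
def block_count(backup_bytes: int, blocksize_bytes: int) -> int:
    """Rough number of blocks the local database must track."""
    return backup_bytes // blocksize_bytes

TB = 1000 ** 4   # decimal terabyte, as drive capacities are usually quoted
KiB = 1024       # Duplicati's "100 KB" default is 100 KiB = 102,400 bytes

backup = int(2.1 * TB)

print(block_count(backup, 100 * KiB))    # default 100 KB: ~20.5 million blocks
print(block_count(backup, 1024 * KiB))   # 1 MB blocksize: ~2 million blocks
```

So bumping blocksize from 100 KB to 1 MB cuts the tracked-block count by roughly a factor of ten, which is why the suggestion matters for multi-TB backups.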