I’ve just started using Duplicati, but I have a few questions.
I use it with Icedrive, to which I connect via WebDAV. Backup speeds when uploading to Icedrive are very good, but restoring (downloading) is a completely different story. The speed shown in the web interface is terribly low: for example, 200 files (11.80 MB) are being restored to an external SSD drive at 1.2 to 1.4 MB/s, yet the process goes on and on.
I went to About > Log data from the server > Live tab > Verbose, and I can see files being restored, but at a very slow rate. The number of files to restore and the total file size are not updated in the web interface, although the current restore speed changes continuously.
I haven’t changed anything in the Advanced settings.
EDIT: Here’s some updated data after the restore finished.
Start 2023-06-12 11:04:16
End 2023-06-12 11:30:45
2023-06-12 11:30:23 +03 - [Warning-Duplicati.Library.Main.Operation.RestoreHandler-MetadataWriteFailed]: Failed to apply metadata (this is followed by a file name and 11 other similar messages)
Well, yes, that's very slow. I just tried with a test WebDAV backup and restored 180 MB in 8 seconds. However, there is no information in your post pointing to a possible reason. Maybe something on your backend? I don't know Icedrive, but the name does suggest 'glacial'.
Are you saying you entered your first post into the darn artificial stuff thing and it answered that the download speed was 189.29 MB per minute? If yes, I can just ask it the answer to your question. Or did you actually provide more information to this thing than you posted here?
I only entered the download size and the time taken, and asked Bing how many MB/minute that was. Bing did the arithmetic. I don't think it would have the answer to my actual question about Duplicati's restore speed.
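For what it's worth, that arithmetic is just size divided by elapsed time. A minimal sketch using the start/end timestamps from the edit above (the 11.80 MB size is taken from the first post as a placeholder; the numbers actually given to Bing may have been different):

```python
from datetime import datetime

# Timestamps from the restore log in the edit above
start = datetime(2023, 6, 12, 11, 4, 16)
end = datetime(2023, 6, 12, 11, 30, 45)
minutes = (end - start).total_seconds() / 60

size_mb = 11.80  # size from the first post; swap in your own figure
print(round(minutes, 2))            # 26.48 minutes elapsed
print(round(size_mb / minutes, 2))  # 0.45 MB/minute
```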
Test conditions matter a lot: by default a restore will try to obtain local blocks rather than fetch remote ones. There's also no need to restore a file that's already sitting there, perfect as it is; this confuses people when Duplicati warns that it didn't actually need to restore anything. What conditions were your tests run under?
How large is this one? For backups over 100 GB, it's good to scale the blocksize up from the 100 KB default.
As is typical, Internet opinions on their speed vary, but it can never go faster than your connection, and the "speed" an Internet speed test reports typically comes from multiple connections run in parallel.
You should be able to see in your Verbose log (or even an Information log) how fast your files download.
Thank you very much for your comments. Before I answer them I should say that I’m quite impressed by Duplicati and a donation is in the pipeline!
My nominal Internet speeds are 1 Gb/s download and 100 Mb/s upload. As we know, these are theoretical speeds, and in practice I achieve lower ones.
For the purpose of testing I chose to restore data to a different location than the original (source) location of the backup. The destination was an external 256 GB SSD connected to my laptop through a USB hub. Question: does Duplicati know where the local files were, and did it try to restore from my local files instead of the remote location (Icedrive)?
A number of my backups are media files, amounting in total to over 100 GB.
I noticed that the directory structure and a lot of the content were created almost immediately after the restore began. Then, reading the log, I could see individual files being restored. Question: is this normal?
My internal drive is a 256 GB SSD.
Question: during the second restore, the web interface showed that 0 bytes would be restored, yet the restore speed kept changing slightly up and down. Is this normal?
Duplicati will attempt to use data from source files to minimize the amount of downloaded data. Use this option to skip this optimization and only use remote data.
I always tell people to set that option when they test their backups for integrity, but it also makes speed tests realistic, assuming you're simulating disaster recovery. For lesser damage, source bits may remain.
SQL gets slow (and the database large) when tracking more than a few million blocks, hence the suggestion to scale up.
It looks like the whole directory structure is created all at once, early on. You can judge for yourself below:
IIRC the last step of rebuilding file content is downloading dblocks and patching blocks into the files that need each particular block. You might notice files with surprisingly short lengths while they're being rebuilt.
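That patching step can be pictured as writing each downloaded block at its offset in the target file; a file stays short until its later blocks arrive. A minimal sketch with a hypothetical 3-block file (not Duplicati's actual code):

```python
import os, tempfile

def patch_block(path, offset, block):
    # Write one restored block at its offset; the file may still be
    # shorter than its final size if later blocks haven't arrived yet.
    with open(path, "r+b") as f:
        f.seek(offset)
        f.write(block)

path = os.path.join(tempfile.mkdtemp(), "restored.bin")
open(path, "wb").close()          # empty placeholder created up front

patch_block(path, 0, b"AAAA")     # first block arrives
print(os.path.getsize(path))      # 4 -- "surprisingly short" mid-restore
patch_block(path, 8, b"CCCC")     # a later block lands past a gap
patch_block(path, 4, b"BBBB")     # middle block fills the hole
print(os.path.getsize(path))      # 12 -- full length once all blocks are in
```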
Be careful how you read: "Patching metadata" refers to attributes. For file data, it says "Patching file".
show progress when restoring files #4713 is in its first Beta, so it has not had much exposure. On the backup side, I think the number is an average upload speed over intermittent uploads, so it grows while uploads happen and sags while more changes are being found. There may be a similar effect here.
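If the displayed number really is total bytes over total elapsed time, the sag is easy to reproduce (hypothetical figures, just to illustrate the averaging effect):

```python
def average_speed_mb_s(total_bytes, elapsed_seconds):
    # Cumulative average: bytes transferred so far over time so far
    return total_bytes / elapsed_seconds / 1e6

# Hypothetical run: 50 MB uploaded in the first 10 seconds...
print(average_speed_mb_s(50e6, 10))            # 5.0 MB/s while uploading
# ...then 20 more seconds spent scanning for changes, nothing new uploaded
print(round(average_speed_mb_s(50e6, 30), 2))  # 1.67 MB/s -- the display "sags"
```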