How can I speed up local backups?

I am creating a backup from a WD Black to a WD Green hard disk, i.e. both on SATA, and the backup feels no faster than uploading to Google Drive. 120 GB have been created already, but it has taken hours.

I have now resumed it for the n-th time (the computer is not always on). It is probably comparing right now, since no new files appear at the target, but the comparing step alone seems to take at least an hour.

It is saying "[Local] Backupname: 28367 files left (87.23 GB)" and the number of files decreases by about 1-5 per second, which seems pretty slow. In the backup set I have enabled "check-filetime-only".

Any way I can increase local backup speeds? What can I expect?

%temp%, %userprofile%, and Windows are all on an SSD, in case it creates temp files there.

We are looking at a few performance optimizations. The 2.0.2.8 canary build has a faster hashing method, which improves performance a bit.

You can also set --zip-compression-level=1 (or 0 if you really don't care about compression) to reduce the amount of time spent on compression.
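As a rough command-line sketch (the destination URL, source folder, and backup name below are just placeholders):

```
Duplicati.CommandLine.exe backup "file://D:\Backups\MyBackup" "C:\Data" --zip-compression-level=1
```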

For a file-based target (i.e. a locally mounted disk), you can set the --tempdir= option to point to a temporary folder on the destination disk. Then also set --use-move-for-put=true and --disable-streaming-transfers, which disables all the throttle/progress handling and just uses a simple "move" operation to place the zip archive in the destination folder.
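Put together, a local-disk job could look roughly like this (the paths are placeholders; the same options can also be added to a job as advanced options in the web UI):

```
Duplicati.CommandLine.exe backup "file://D:\Backups\MyBackup" "C:\Data" ^
  --tempdir="D:\Backups\tmp" ^
  --use-move-for-put=true ^
  --disable-streaming-transfers=true ^
  --zip-compression-level=1
```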


I assume “local” means DAS (direct attached storage); unless you have a very fast LAN, this is NOT a good idea for UNC, NAS, mapped-drive, etc. destinations.


Correct. And it’s probably not a great idea for USB drives either, unless you know they’ll have a fixed mount point / drive letter every time they are connected, OR you script the settings to use the appropriate mount point at run time.
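As a hypothetical sketch of that last idea (nothing built into Duplicati, just a wrapper batch file; the drive letters, folder names, and source path are made up), you could probe for the drive letter the USB disk received this time and pass it through to the relevant options:

```
@echo off
rem Hypothetical wrapper: find which drive letter the USB backup disk
rem received this time by probing for its backup folder (assumes the
rem folder already exists on the disk).
set TARGET=
for %%d in (D E F G H) do if exist "%%d:\Backups\MyBackup\" set TARGET=%%d:
if "%TARGET%"=="" (
  echo Backup disk not connected, skipping backup.
  exit /b 1
)
Duplicati.CommandLine.exe backup "file://%TARGET%\Backups\MyBackup" "C:\Data" ^
  --tempdir="%TARGET%\Backups\tmp" ^
  --use-move-for-put=true ^
  --disable-streaming-transfers=true
```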

It’s bad to use Duplicati 2 without progress information. But couldn’t Duplicati 2 do this automatically while still showing progress (compression progress, or just file counts and total sizes)? Shouldn’t it be smarter about this?

And what does use-move-for-put do?

To allow monitoring how much of a file has been transferred and to limit the speed, Duplicati uses an internal stream (read from the file into a buffer, write from the buffer into the destination file), which normally works fine.

If you use the two options mentioned, --disable-streaming-transfers bypasses this buffer, allowing Duplicati to simply copy the file (like a copy/paste in Explorer). This can, in some cases, be faster and has less impact on the CPU (depending on the system, a copy can be made without moving any data into memory). The downside is that Duplicati can no longer see how fast the copy is going (for the progress bar), nor can it slow the copy down.

Duplicati always creates temporary files and then copies them into the destination. This approach ensures that half-finished files never end up at the destination. However, if the temporary file is already on the target file system (i.e. --tempdir is set to a folder on the destination disk), you can avoid the copy by simply moving (i.e. renaming) the file. This has the benefit that no matter how large the file is, the move is nearly instant. If you want this, you can enable --use-move-for-put.
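The difference is the same one you see with plain shell commands; the volume file name below is only an example:

```
rem Copying reads and writes every byte, so time grows with file size:
copy "D:\Backups\tmp\duplicati-b12cd.dblock.zip" "D:\Backups\MyBackup\"

rem Moving within the same volume only updates directory metadata,
rem so it finishes almost instantly regardless of file size:
move "D:\Backups\tmp\duplicati-b12cd.dblock.zip" "D:\Backups\MyBackup\"
```

This is also why placing --tempdir on the destination disk matters: a move is only a cheap rename when source and destination are on the same volume.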
