Upload speed very slow, CPU near idle both ends, any ideas?

I’m running a backup of large files from my NAS to an SFTP backend.

Running Duplicati in a Docker container.

I’m seeing uploads max out at approx 8Mbps on a connection capable of 20Mbps, with near-idle CPU on both ends.

Does anyone have any ideas on how to identify the bottleneck?

Thanks

Unfortunately this is just how Duplicati 2.x is at the moment. I have the same “issue”, which is very visible on my Threadripper CPU: very low CPU/thread usage and not much happening on the upload side either. Hopefully we will get some options in the future to make it use more resources and speed things up.


What speeds do you get if you use plain scp or sftp to copy files?

SSH has some bandwidth issues when latency is high.
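
One way to separate raw TCP throughput from SSH overhead is iperf3. A rough sketch, assuming iperf3 is installed on both ends and “slowhost” stands in for your remote:

# On the remote end: start an iperf3 server
iperf3 -s

# On the local end: measure raw TCP throughput to the remote
iperf3 -c slowhost

# SSH-level throughput, by contrast, is roughly capped by
#   channel_window_size / round_trip_time
# e.g. a 128 KB window over a 50 ms RTT allows at most
#   131072 B / 0.050 s ≈ 2.6 MB/s, regardless of link speed.

If iperf3 gets close to line rate but scp does not, the SSH layer (window sizes, cipher overhead) is the likely culprit.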

Using scp directly maxes out the 20Mbps upload of my connection.

I’m running Duplicati in a Docker container, so I tested this in:

  • a fresh, blank Ubuntu Docker container
  • a Linux VM (output below)
  • the underlying Unraid host that runs my VMs and Docker containers

with identical results in all three.

It’s going from the UK to Croatia, so I thought TCP window size could be a factor. I used a file the same size as Duplicati’s upload volumes, filled with pseudo-random data, so that compression wouldn’t skew the result.

[deasmi@desktop:~] $ dd if=/dev/urandom of=./fiftymeg count=50 bs=1024k
50+0 records in
50+0 records out
52428800 bytes (52 MB) copied, 3.16905 s, 16.5 MB/s
[deasmi@desktop:~] $ scp fiftymeg root@slowhost:/dev/null
fiftymeg 100% 50MB 2.4MB/s 00:21
[deasmi@desktop:~] $
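
For reference, 2.4 MB/s works out to about 19.2 Mbps (2.4 × 8), so plain scp is effectively saturating the 20 Mbps uplink even on this high-latency path.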

Duplicati uses this library for SSH:

It is possible that there are some SSH features that it does not support, which is why you see the slower speeds.

Could well be related to this

In that case, if I get time, I might try to build with this version of the library and investigate.

It seems that there is a build with the speed fixes included:

I can make a canary build using that version.

That would be really helpful. I can easily fork the Dockerfile to pull in the build for testing, and I’ll try to set up a manual install on a real host as well when I’m back home later in the week.

Great. I merged the update into master and hope to do a build later today.

I’ve made a new canary build with the updated SSH library: Releases · duplicati/duplicati · GitHub


Thanks, will test when I get back home tomorrow.

BTW: A good way of showing your appreciation for a post is to like it: just press the :heart: button under the post.

If you asked the original question, you can also mark the reply that solved your problem as the accepted answer, using the tick-box button under each reply.

All of this also helps the forum software distinguish interesting from less interesting posts when compiling summary emails.

Sadly initial testing doesn’t show any improvement.

I am going to do some more digging, though, and try the SSH library’s own test case to confirm this is the issue.

If I confirm the library is at fault I’ll try FTPS instead; since I manage the remote end, that’s not too big a deal.

I’m running OpenVPN to a NAS with an SMB share, using Cryptomator for the encryption instead of Duplicati’s own.
Any idea what is going on here? I have a 50 Mbit upload. Why is Duplicati not utilising it? Rclone, for example, doesn’t have any issue maxing out the upload.


If you are using file-based storage, you can set the options --disable-streaming-transfer and --use-move-for-put. The first disables the internal stream handling (so no upload/download progress reports), and the second uses the OS “move” instead of “copy”, so the file contents are never read or written by Duplicati itself.
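
On the command line that would look something like this (a sketch only: the destination path is made up, and the duplicati-cli wrapper name can differ between installs):

# Only meaningful for file-based destinations such as a mounted share
duplicati-cli backup "file:///mnt/backupshare/duplicati" /source/data \
  --disable-streaming-transfer=true \
  --use-move-for-put=true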

Sorry for my ignorance I fail to understand how --use-move-for-put helps if the upload is done via ftp to a remote machine?
Maybe I just did not get the meaning of --use-move-for-put correct, but wonder if --use-move-for-put could help also in my scenario. Both machines in local network, but transfer is done via ftp.
Thanks!
Mike

No, the option --use-move-for-put is only for file-based destinations (i.e. SMB/CIFS/Samba shares).
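
To make the distinction concrete (hypothetical destination URLs):

# --use-move-for-put can apply: the destination is a locally mounted folder
file:///mnt/nas-share/duplicati

# It cannot apply: Duplicati talks to the server through a protocol backend
ftp://nas.local/backups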

Also experiencing the same issue: desperately slow performance to an SMB share on the local network that can otherwise handle very fast transfers.

:confused:

Is there any progress on this?

If possible, can you run the backup (or a smaller version of it) to a local drive and see how long it takes?

If it’s about the same duration for local vs. SMB, then the issue is probably in the Duplicati overhead associated with blocking, hashing, and compressing the files, and we can focus on those.
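
For example (a sketch: the paths are placeholders and the CLI wrapper name may differ per install):

# Back up a sample of the data to a local scratch folder and time it;
# --no-encryption avoids a passphrase prompt for this throwaway test
mkdir -p /tmp/dup-local-test
time duplicati-cli backup "file:///tmp/dup-local-test" /path/to/sample-data --no-encryption=true

If the local run is about as slow as the SMB run, the processing pipeline rather than the network is the bottleneck.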

Also, what version of Duplicati are you using?

I’m using the linuxserver.io Docker image.

From within the UI:

You are currently running Duplicati - 2.0.2.1_beta_2017-08-01

The ‘Check for updates now’ option doesn’t work (is that typical for Docker installs?).
Does this mean that the Docker image is not up to date?