Upload speed very slow, CPU near idle both ends, any ideas?

That would be really helpful. I can easily fork the Dockerfile to pull in the build for testing, and I’ll try to set up a manual install on a real host as well when I’m back home later in the week.

Great. I merged the update into master and hope to do a build later today.

I’ve made a new canary build with the updated SSH library: Releases · duplicati/duplicati · GitHub


Thanks, will test when I get back home tomorrow.

BTW: A good way of showing your appreciation for a post is to like it: just press the :heart: button under the post.

If you asked the original question, you can also mark an answer as the accepted answer which solved your problem using the tick-box button you see under each reply.

All of this also helps the forum software distinguish interesting from less interesting posts when compiling summary emails.

Sadly initial testing doesn’t show any improvement.

I am going to do some more digging and try the SSH library’s own test case to confirm whether this really is the issue.

If I confirm the library is at fault I’ll try FTPS instead; since I manage the remote end, it’s not too big a deal.

I’m running OpenVPN to a NAS with an SMB share, using Cryptomator for the encryption instead of Duplicati’s own.
Any idea what is going on here? I have a 50 Mbit upload. Why is Duplicati not utilising it? Rclone, for example, has no issue maxing out the upload.


If you are using file-based storage, you can set the options --disable-streaming-transfers and --use-move-for-put. The first disables the internal stream handling (no upload/download progress reports) and the second uses the OS “move” instead of “copy”, so Duplicati never reads or writes the file itself.
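
As a rough sketch of what that could look like with the command-line client (the paths here are placeholders, not from this thread; on Linux installs the binary is usually exposed as duplicati-cli instead):

rem hypothetical example: back up C:\Data to a UNC share with both options enabled
Duplicati.CommandLine.exe backup "file://\\server\path\backups" "C:\Data" ^
  --disable-streaming-transfers=true --use-move-for-put=true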

Sorry for my ignorance, but I fail to understand how --use-move-for-put helps if the upload is done via FTP to a remote machine?
Maybe I just did not get the meaning of --use-move-for-put right, but I wonder whether it could also help in my scenario: both machines are on the local network, but the transfer is done via FTP.
Thanks!
Mike

No, the option --use-move-for-put is only for file-based destinations (i.e. SMB/CIFS/SAMBA shares).

Also experiencing the same issue. Desperately slow performance to an SMB share on the local network that can otherwise handle very fast transfers.

:confused:

Is there any progress on this?

If possible, can you run the backup (or a smaller version of it) to a local drive and see how long it takes?

If it’s about the same duration for local vs. SMB then the issue is probably in the Duplicati overhead associated with blocking, hashing, and compressing the files, and we can focus on those.
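
For example, something along these lines (the paths are placeholders and --no-encryption is only there to keep the test simple): back up the same source to a local drive and then to the share, and compare the durations each run reports at the end:

rem hypothetical A/B test: same source backed up to a local drive, then to the SMB share
Duplicati.CommandLine.exe backup "file://D:\BackupTest" "C:\Data" --no-encryption=true
Duplicati.CommandLine.exe backup "file://\\server\path\BackupTest" "C:\Data" --no-encryption=true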

Also, what version of Duplicati are you using?

I’m using the linuxserver.io Docker image.

From within the UI:

You are currently running Duplicati - 2.0.2.1_beta_2017-08-01

The ‘Check for updates now’ option doesn’t work (is that typical for Docker containers?).
Does this mean that the docker is not up to date?

Not necessarily. The updates that have been put out since 2.0.2.1 beta have been in the canary “path”, so if your settings are for the beta “path” there are no updates yet.


Same issue here w/ beta. Very slow (3 MB/s) uploads using a \\server\path destination. I’d expect 15 MB/s or more, as I get that with plain file copies.

Keep in mind that Duplicati is doing a bunch of file processing & SQLite commands and potentially chopping, hashing, compression, & temp file creation before any actual file transfer takes place.

To compare Duplicati’s speed to a normal file copy you’d have to dig through the logs so you could subtract any time spent doing stuff that’s NOT the actual file copy.

You could get closer to what Duplicati does by writing a batch file that compresses what you want copied into a temp file, copies that temp file to the destination, then deletes the temp file - though it still wouldn’t simulate time spent on SQL lookups or hashing…
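
A rough sketch of such a batch file (assuming 7-Zip is installed in its default location; the source folder, share, and temp file name are placeholders):

@echo off
rem crude stand-in for Duplicati's per-volume work: compress, copy to the share, clean up
set SOURCE=C:\Data
set DEST=\\server\path
set TMPFILE=%TEMP%\upload-test.zip

"C:\Program Files\7-Zip\7z.exe" a -tzip "%TMPFILE%" "%SOURCE%\*"
copy "%TMPFILE%" "%DEST%"
del "%TMPFILE%"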

I understand. I only referenced 15 MB/s as that’s realistically the network bottleneck. I figured I would see better than 3MB/s, esp. given very low CPU usage. Are there any benchmark numbers I can sanity check against? I’d like to make sure I’m getting what I can from tuning, etc.

Got it. Knowing the top end of your scale is often a good thing. :slight_smile:

Unfortunately, there are no official “expect this speed” numbers I’m aware of, but I think there are a few posts individuals have made about their own numbers…

When using a file:// destination, Duplicati is essentially doing a filesystem copy, so you should be able to compare it with the speed you get from a command like:

copy source.bin \\server\path

If you get slower numbers, try setting --disable-streaming-transfers as I suggested above, as that bypasses the speed monitor and throttle code, meaning that the transfer is handled outside Duplicati (i.e. by the OS, just like copy does).
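
If you want to time the raw copy itself for comparison (the file and share names here are placeholders), PowerShell’s Measure-Command gives you a duration directly:

rem times just the raw copy, for comparison with Duplicati's reported duration
powershell -Command "Measure-Command { Copy-Item 'C:\Temp\source.bin' '\\server\path\' }"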