I have been using Duplicati for over a year now, and it "just works". Initial uploads, however, have always been a pain point, and recently I decided to take a closer look.
I am on AT&T UVerse internet service, advertised as 25 Mbps down, 5 Mbps up. I consistently get around 90 KB/s reported by the Duplicati web UI, which is far lower than what should be possible. So I contacted Wasabi support, and they had me run a bunch of tests. One of them was a speed test against the Wasabi servers, which looks like this:
I get somewhat better results from an ordinary speed test against nearby servers, but let's go with the reported 2.7 Mbps.
I then created a 63 MB test file and uploaded it to a separate test bucket on Wasabi using awscli. My router, running Tomato, shows this (the red line is the upload):
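For reference, the awscli test was roughly equivalent to the following (the bucket name is a placeholder, and this assumes awscli is already configured with the Wasabi access keys):

```shell
# Create a 63 MB file of random data, so it cannot be compressed in transit
dd if=/dev/urandom of=testfile.bin bs=1M count=63

# Upload it to a test bucket, timing the transfer;
# --endpoint-url points awscli at Wasabi instead of AWS S3
time aws s3 cp testfile.bin s3://my-test-bucket/ --endpoint-url https://s3.wasabisys.com
```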
It shows the difference in upload speed between awscli and Duplicati. awscli reports around 600 KB/s, which is striking because it is close to the full advertised upstream bandwidth and well above what the Wasabi speed test reported.
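To put the two observed rates on the same scale as the advertised 5 Mbps uplink, a quick conversion (treating 1 KB as 1000 bytes for a rough comparison):

```shell
# Convert observed throughput from kilobytes/s to kilobits/s (1 byte = 8 bits)
duplicati_kbs=90
awscli_kbs=600
echo "Duplicati: $(( duplicati_kbs * 8 )) kbit/s"   # 720 kbit/s  ~= 0.72 Mbps
echo "awscli:    $(( awscli_kbs * 8 )) kbit/s"      # 4800 kbit/s ~= 4.8 Mbps
```

So Duplicati is using well under a fifth of the bandwidth that awscli achieves over the same link.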
What could explain the difference, and how can I get Duplicati to upload faster? The machine doing the upload is a Debian box with an 8-core AMD CPU and 16 GB RAM that is otherwise idle, so there is no bottleneck there. There is no throttling configured in Duplicati or in the router. The Duplicati job is:
mono /usr/lib/duplicati/Duplicati.CommandLine.exe backup "s3://redacted/?s3-server-name=s3.wasabisys.com&s3-location-constraint=&s3-storage-class=&auth-username=REDACTED&auth-password=REDACTED" /home/ /root/ --upload-verification-file=true --backup-name=redacted --dbpath=/root/.config/Duplicati/LCCSMDEODP.sqlite --encryption-module=aes --compression-module=zip --dblock-size=50mb --passphrase=redacted --retention-policy="90D:1D,13W:1W,36M:1M" --disable-module=console-password-input
Please let me know what other information I can provide. I think a greater than 3x upload speedup is achievable, and it would help a LOT.