Duplicati upload performance to Wasabi vs other clients


Can anyone confirm whether the Duplicati performance I’m seeing is at expected levels, or am I doing something that is severely affecting the upload speed?

Installed current version Duplicati -
Windows Server 2016, up-to-date patches.
128 GB RAM, 24 CPU cores (Xeon Gold 6126). 10 TB RAID 1 SAS 10K disk volume, NTFS formatted with 4K clusters, NTFS deduplication enabled. 10 GbE.

The upload files are Veeam backups, 40 GB to 250 GB in size, dedupe-friendly and not encrypted, saved on the above deduplicated Windows NTFS volume, all in a single folder - total folder size about 1 TB across 10 files.

Using an S3 client - e.g. TntDrive / Cyberduck - I can upload these files to the Wasabi bucket at 7-10 MB/sec. This is the plain vbk file, and no encryption is being added on or before upload.

Internet is 100 Mbps 1:1 fibre, and 10 MB/sec uploads are not unusual for me.

Using Duplicati I only get a maximum of about 400 KB/sec on the upload to Wasabi, with no throttling set.

I’m trying to understand why it’s 25X slower, since as it stands this makes daily uploads infeasible, and aside from this it looks to be quite a promising product.
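To put that 25X in context, a quick back-of-the-envelope calculation (plain Python, using only the figures quoted in this post) shows what it means for a ~1 TB initial upload:

```python
def upload_hours(total_bytes: float, rate_bytes_per_sec: float) -> float:
    """Hours needed to move total_bytes at a sustained rate."""
    return total_bytes / rate_bytes_per_sec / 3600

TB = 10**12
duplicati_h = upload_hours(1 * TB, 400 * 10**3)  # ~400 KB/sec via Duplicati
s3client_h = upload_hours(1 * TB, 10 * 10**6)    # ~10 MB/sec via TntDrive/Cyberduck

print(f"Duplicati: {duplicati_h:.0f} h (~{duplicati_h / 24:.0f} days)")
print(f"S3 client: {s3client_h:.0f} h")
```

At 400 KB/sec the 1 TB folder needs roughly 29 days, versus just over a day at 10 MB/sec, so the 25X gap really does decide feasibility.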

Is what I’m seeing pretty much the performance I can expect, or is this an aberration and I should be getting closer to line speed? I realise it’s the internet, but I have tested at various times of day for a couple of days now, also on different ISPs, and what I’m getting is pretty consistent.

The uploaded files, when I look at the bucket, are all 50 MB AES-encrypted dblock files plus a number of smaller dindex files.

Is my experience a result of having enabled encryption, or is it affected by (inline?) deduplication algorithms, or something else? This is the initial upload.

I tried the same upload with the default remote volume size of 50 MB and no SSL or encryption, and the upload speed was only marginally better at 450-500 KB/sec.

Is this expected, or am I missing something here? Is an initial upload always slow, then speeding up over time as dedupe efficiencies kick in once daily backup deltas are added to the bucket? Thanks for any insights.
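For what it’s worth on the dedupe question: Duplicati deduplicates at the block level, so the initial backup has to upload essentially every block, and only later backups get to skip blocks whose hashes were already uploaded. A toy sketch of that idea (my own illustrative code with a hypothetical `new_blocks` helper, not Duplicati’s implementation):

```python
import hashlib

BLOCK_SIZE = 100 * 1024  # Duplicati's default --blocksize is 100 KB

def new_blocks(data: bytes, seen: set) -> list:
    """Split data into fixed-size blocks, returning only blocks not seen before."""
    out = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in seen:
            seen.add(digest)
            out.append(block)
    return out

backup_v1 = b"\x00" * BLOCK_SIZE + b"\x01" * BLOCK_SIZE  # two distinct blocks
backup_v2 = backup_v1 + b"\x02" * BLOCK_SIZE             # same file plus one new block

seen: set = set()
print(len(new_blocks(backup_v1, seen)))  # initial backup: 2 blocks to upload
print(len(new_blocks(backup_v2, seen)))  # next backup: only the 1 new block
```

So yes, a daily delta should upload far less than the initial run, but the initial run itself gains nothing from dedupe against an empty bucket.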


Before getting into the rest: are you possibly affected by Wasabi issues that other clients cover better thanks to parallel uploads, which Duplicati added in v2.0.4.16- but which are in no beta yet?

Statement on Wasabi Service Degradation Over September / October 2019


Hi, I am unclear on the Duplicati version and why the older one would work better than the current/latest beta, but you’re right, there looks to be something in this.

I am aware of the Wasabi service degradation notice but haven’t experienced any issues with them myself. I am creating buckets in their newer eu-central-1 region.

I just installed duplicati/releases/tag/v2.0.4.16-
on my Win 10 desktop (initial testing was from the backup storage server). With this version I see s3.eu-central-1.wasabisys.com available as a drop-down selection (in the later beta_2019-07-14 it is not available, so in my server upload (post 1) I entered it as a custom URL). I then selected the region eu-central-1, which both versions describe as Frankfurt, although I think Wasabi’s is Amsterdam; in any case the eu-central-1 part remains the same, hence it works.

With encryption + SSL I now get 5 MB/sec upload speed, which is 10X better than I had before (however, I am uploading different vbk files).

I will uninstall the 2019-07-14 beta on the server tomorrow, replace it with canary_2019-03-28, do some further testing, and report back.

For now it appears that the older canary flies 10X better than the 2019-07-14 beta.

Thanks for the on target feedback.

You can test with it, but don’t stay on it, because later canaries like v2.0.4.34- have fixed some serious bugs.
Canary in general has minimal prior testing and is a bit bleeding-edge; it is trying to get to Beta but is hung up on Stop bugs that got in.

v2.0.4.23- is pretty much v2.0.4.5-, so the canary with the initial (somewhat flawed) cut at parallel uploads represents code that’s about 4 months newer.

Multipart Uploads is possibly how Cyberduck is getting faster total throughput. Duplicati’s transfer will not split a file AFAIK, but it can upload multiple separate files (e.g. several dblock files) in parallel, so it gets a similar boost in a different way (you can tune the advanced option --asynchronous-concurrent-upload-limit up from its default of 4 if you want to see what happens), attempting to keep a big pipe full despite the inherent throughput limits of a single TCP connection, which are reached due to latency (ultimately speed-of-light) limitations…
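The several-files-in-parallel approach can be sketched roughly like this (a toy simulation with a stubbed-out upload, not Duplicati’s code; `max_workers` plays the role of asynchronous-concurrent-upload-limit):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def upload(volume: str) -> str:
    """Stub for uploading one ~50 MB dblock volume; sleep stands in for network time."""
    time.sleep(0.1)
    return volume

volumes = [f"dblock-{i:02d}.zip.aes" for i in range(8)]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:  # default concurrency limit is 4
    done = list(pool.map(upload, volumes))
elapsed = time.perf_counter() - start

print(f"uploaded {len(done)} volumes in {elapsed:.2f}s")  # ~0.2s vs ~0.8s serially
```

Each single connection is rate-limited, but four of them together can come much closer to filling the pipe.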

TntDrive News

Added support for new Amazon S3 feature - Multipart Uploads.

--asynchronous-upload-folder should show a bunch of roughly remote-volume-size (default 50 MB) files flowing through if the preparation runs faster than the upload. If the upload is faster, there will be fewer.

Duplicati.CommandLine.BackendTool.exe can do single-threaded uploads of individual files without doing deduplication, compression, encryption, etc., if you want to see what sort of upload rate it gets. Finding the target URL you should use can be tricky, but Export As Command-line is a good starting point.
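If you go that route, timing the raw transfer yourself gives a number you can compare against the other clients. A minimal helper (plain Python; the transfer below is a stand-in lambda, swap in whatever actually uploads your test file):

```python
import time

def measure_rate(nbytes: int, transfer) -> float:
    """Run transfer() once and return the achieved throughput in MB/sec."""
    start = time.perf_counter()
    transfer()
    elapsed = time.perf_counter() - start
    return nbytes / elapsed / 1e6

# stand-in: pretend one 50 MB dblock-sized file took half a second to upload
rate = measure_rate(50 * 10**6, lambda: time.sleep(0.5))
print(f"{rate:.1f} MB/sec")
```

If the raw rate is near line speed but the backup job is not, the bottleneck is in the backup pipeline rather than the backend.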

Hi TS,

Thanks for comprehensive feedback. Here’s what I’ve figured thus far.

Installed duplicati- on the server today. Same settings as previously, with SSL and AES checked.

Fed it a 30 GB Veeam backup - it uploaded to Wasabi EU at 1.5 MB/sec. It went as high as 2 MB/sec in the beginning.

Uninstalled, went back to [], and am redoing the same upload now - getting 1.4 MB/sec.

So performance appears similar. It must have been luck to get the 5 MB/sec upload from my desktop last night.

So yes, it’s clear the build from the Duplicati website isn’t as performant as the above canaries from GitHub, with Wasabi EU at least, from my end.

Will concede that all the version numbers are making my head spin. Usually I just expect to install the latest stable and leave it alone, so it would be great if Duplicati could work that way as well.

Will leave it on for now and get on with life. So long as I don’t encounter any showstoppers, this will work fine.

Thanks, Cheers

There are some severe issues. That’s one reason why Canary is as far along as it is. On balance, even with possible regressions from new changes, I think progress has been made. For a next-release fix, look at:

v2.0.4.17-, where the parallel-uploads author fixed this bug in backend files:

Fixed an issue where index files were not generated, thanks @seantempleton

and there has been at least one more index-file bug, which happened whenever an upload retry occurred.


Fixed a retry error where uploaded dindex files would reference non-existing dblock files, thanks @warwickmm

and there are some more. You can read through the release notes if you want to see what’s happened.

It will always be the case that the latest stable is missing features, such as this one, that are still in development. How else could it possibly be done? Well, one path would be for a test team to get internal Canary builds to beat on for a while, but there is no test team (volunteers?), so instead, after automated tests, a Canary is released for anyone who wants to try it. In this way, Canary is somewhat similar to a Windows Insider build.

On the misleading version numbers, an improvement is expected. Duplicati version numbering has the general idea; with it, one can at least guess from the version number which release has the newer functionality.

Settings in Duplicati will let you change channels if you get tired of the surprises Canary may give. Updating Duplicati is somewhat automated; you’ll hear about newer versions for the channel you’re on.

Hi TS,

Thank you for all the wisdom; I have now updated to Duplicati - as you suggested, to make for a more sensible “insider slow track” build. In the Win$ world I have learnt that the fast track isn’t where I want to be; it’s not a great start to the week when your desktop greets you with a BSOD first thing Monday. So, trusting your advice, a slow-track Duplicati canary is where I’m at now.

Have let canary_2019-10-19 loose for the weekend, uploading 28 Veeam vbr and vib files of 20-300 GB each to Wasabi EU, so let’s see how it goes.