Backblaze B2 Storage painfully slow uploading from Windows Server

Hey there all, I’ve been fighting with Duplicati for the past couple of days, but I finally got it set up and running as a service, and I also figured out how to keep the GUI tray icon from starting its own local instance. Yay!

Now that everything is up and running, I set up a pretty default backup job, nothing special at all. I exported my configuration as a command line; here is everything:

"C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe" backup "b2://*****OfficeBackups/********DandE?auth-username=**************&auth-password=***********************************" "D:\UserData\\" "E:\CompanyArchive\\" --backup-name=**********FileServer --dbpath="C:\Program Files\Duplicati 2\data\**********.sqlite" --encryption-module=aes --compression-module=zip --dblock-size=50mb --passphrase="***********" --retention-policy="1W:1D,4W:1W,12M:1M" --exclude-files-attributes="system,temporary" --disable-module=console-password-input

The problem is, my upload is capping out at about 2 MB/s. A Backblaze speed test goes as high as about 60 MB/s up, and we have a symmetrical 1 Gb/s line at this location. This is a pretty beefy file server with lots of cores and a 10 Gb/s NIC, and I can’t see any bottleneck on my side at all. I even checked to make sure I hadn’t accidentally turned on throttling in Duplicati.
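For anyone wanting to double-check the raw numbers outside Duplicati’s own status bar, here is a minimal PowerShell sketch using the standard Windows network counter (instance names are per-adapter, so list yours first):

# List adapter instance names, then watch bytes sent per second (Ctrl+C to stop)
(Get-Counter -ListSet 'Network Interface').PathsWithInstances
Get-Counter -Counter '\Network Interface(*)\Bytes Sent/sec' -Continuous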

Are there any flags or options I’m missing that would pep this up? Any recommended troubleshooting steps to find where the constraint is?

This particular machine is Windows Server 2012 R2 running in Hyper-V.

Thanks in advance everyone.

I have found the speed quoted in the status bar is highly inaccurate. I back up large amounts of data containing a lot of duplicates (backup storage from a different backup program); the counter usually shows a high speed at the beginning, while data is actually being uploaded to my offsite backend, then slows down as it starts working through duplicate data that never leaves the machine.

Check Resource Monitor to watch the actual work being done by Duplicati.Server.exe and System. In my case, most of the data reads are done by System, while the Duplicati process does the writing of the dblock files (Disk tab) and the uploading (Network tab).

I have also disabled encryption, which has sped up my backups somewhat. You should be able to monitor the process to see whether encryption is slowing you down significantly (I doubt it, given your core count), but sometimes virtualization forces encryption into an all-software path instead of a hardware-accelerated one.
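If you prefer a console view over Resource Monitor, the same per-process numbers are exposed as performance counters; a minimal sketch, assuming the service process is Duplicati.Server (counter instance names drop the .exe extension):

# Watch disk I/O attributed to the Duplicati service process (Ctrl+C to stop)
Get-Counter -Counter '\Process(Duplicati.Server)\IO Read Bytes/sec','\Process(Duplicati.Server)\IO Write Bytes/sec' -Continuous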

I virtualize on KVM, and even with CPU passthrough for the vCPUs, encryption is definitely slower than on bare metal. Of course, my CPU is also ancient by computing standards (an AMD FX-8350). I have also set --thread-priority to abovenormal, and if I am in a hurry during the initial backup, I use Task Manager (or Sysinternals’ Process Explorer) to bump the priority of the actual Duplicati.Server.exe process doing the work to Above Normal or higher.
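The Task Manager step can also be scripted; a small PowerShell sketch, assuming the process name is Duplicati.Server:

# Bump the service process to Above Normal priority (run elevated)
Get-Process Duplicati.Server | ForEach-Object { $_.PriorityClass = 'AboveNormal' }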

I also set the concurrency options to 4 for hashing, block file creation, and uploading (my VM has 4 cores). Duplicati consumes 75%+ of the CPU early in the initial backup run, then drops to around 50% once it is working through duplicated data. All the options I have set are listed below, with a sketch of the resulting command after the list. I would run more concurrent uploads, but the inbound bandwidth at my destination storage is 100 Mbps, and I pretty much peg that with 4 uploads.

--concurrency-block-hashers=4
--concurrency-compressors=4
--concurrency-max-threads=0
--asynchronous-upload-limit=4
--asynchronous-concurrent-upload-limit=4
--dblock-size=100MB
--blocksize=1MB
--usn-policy=On
--thread-priority=abovenormal
--hardlink-policy=All
--snapshot-policy=On
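For completeness, here is how those options look assembled into an actual run, sketched against the command from the first post (destination URL and credentials are placeholders, paths are examples, and note that --snapshot-policy=On and --usn-policy=On need an elevated session):

& 'C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe' backup `
    'b2://<bucket>/<prefix>?auth-username=...&auth-password=...' `
    'D:\UserData\\' `
    --concurrency-block-hashers=4 `
    --concurrency-compressors=4 `
    --concurrency-max-threads=0 `
    --asynchronous-upload-limit=4 `
    --asynchronous-concurrent-upload-limit=4 `
    --dblock-size=100MB `
    --blocksize=1MB `
    --usn-policy=On `
    --thread-priority=abovenormal `
    --hardlink-policy=All `
    --snapshot-policy=On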
