Backblaze B2 slow upload

I’m getting very slow upload speeds to B2 (about 200 kB/s). I know there is a bounty for a fix here: Support parallel uploads for a backend [$231] · Issue #2026 · duplicati/duplicati · GitHub

But is everyone really getting speeds that slow when uploading to B2? With about 60GB to upload this is unworkable. I would really like to use Duplicati, but at that speed it’s almost unusable… any tips/tricks?

I didn’t have this issue during my initial backups to B2. Duplicati would easily max out my internet service’s upload speed, which is 10 Mbps or about 1 MB/sec.

What are your settings (block size, etc.)?

I didn’t use a custom block size. Just the default (50MB if I recall).

What does the Backblaze speed test report? They seem to be saying they don’t throttle at their end.

https://www.backblaze.com/speedtest/

There are plenty of other speed tests around too, but I’m assuming the one above fits B2 the best…

Thanks for that, I didn’t know they had a speed test specifically for them. I got 192.8 Mbit/s from my laptop on wifi. Duplicati is running on my wired workstation. Will do another test from that to see what it says.

I was wondering if the Backblaze speed test somehow used multiple threads. Watching my browser’s TCP connections with Process Explorer, I only saw two, so my guess is it’s not trying to max things with threads.

While I’m used to an IP address not reverse-resolving to a familiar name, when I looked up the IP’s owner it was Cloudflare, rather than the Backblaze I see during an actual backup. So you might look over this article:

Still, it’s a test and it checks at least some things. For a better test, maybe log in to B2 and upload a big file?

I’ll mention that TCP uses a slow start algorithm (on purpose) so small files might never get up to full speed.
When I watch “Profiling” in the live server log, my small files never get to the bytes/second my line can do…
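
To make that concrete, here is a rough back-of-the-envelope sketch (not a real TCP simulation) of why short transfers average well under line rate. The initial window, segment size, round-trip time, and 10 Mbit uplink are all assumptions, so treat the numbers as illustrative only.

```python
# Rough slow-start illustration (not a protocol simulation): assumes an
# initial congestion window of 10 segments of ~1460 bytes, doubling each
# round trip until the uplink is saturated.
MSS = 1460                   # bytes per segment (assumed)
INIT_CWND = 10               # initial congestion window in segments (assumed)
RTT = 0.05                   # assumed 50 ms round trip
LINK_BPS = 10_000_000 / 8    # assumed 10 Mbit/s uplink, in bytes/second

def time_to_send(total_bytes):
    """Estimate seconds to push total_bytes through slow start."""
    cwnd = INIT_CWND * MSS
    sent, elapsed = 0, 0.0
    while sent < total_bytes:
        burst = min(cwnd, LINK_BPS * RTT)   # can't exceed the link per RTT
        sent += burst
        elapsed += RTT
        cwnd *= 2                           # slow start doubles each RTT
    return elapsed

for size in (100_000, 1_000_000, 50_000_000):
    t = time_to_send(size)
    print(f"{size/1e6:6.1f} MB -> ~{t:5.2f} s, averaging {size/t/1e3:7.1f} kB/s")
```

The small transfers finish before the window ever reaches line rate, which is why their average throughput looks disappointing even on a healthy connection.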

You’ve cited the request for parallel uploads. Larger block size might help, but I can’t give specific guidance. There might be some around, and there are certainly others who’ve commented on performance questions.

By any chance do you know if the issue is specifically Backblaze B2, or is that just the only destination you’ve run? There are plenty of possible limiting factors on speed, and it’s hard to know which ones fit a given situation.

An experimental backup of a large ISO is running at about 720KB/s here, and Task Manager is showing Wi-Fi sending at about 6.2Mb/s which is about my uplink capacity (actually a little more than some speed tests say).
Watching the Task Manager graph is also useful to see if it’s bursty, i.e. are there idle spots between uploads.
https://forum.duplicati.com/t/merging-in-concurrent-processing/3308
sounds like it might reduce upload burstiness, however it’s still in the relatively rough canary branch I’m using.
One large file is possibly an easier case. There are reports of too many small files causing SQLite slowdowns.
To make sure we agree on units, did your original report really mean bits per second? The UI reports in bytes.
If it’s in bits, that sure is slow, which makes me even more curious about your burstiness and peak send rates.
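
To spell out the factor of eight (treating the 200 from the original post as the number in question), a quick sketch:

```python
# The same "200" is a very different speed depending on which unit is meant.
reported = 200                                    # figure from the original post
print(f"{reported} kB/s   = {reported * 8 / 1000:.1f} Mbit/s")   # if it was kilobytes
print(f"{reported} kbit/s = {reported / 8:.1f} kB/s")            # if it was kilobits
```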

Hey @Beatwolf, I like @ts678’s suggestion of testing a larger block size. If you have the space / time, I’d suggest making a second test backup to a new folder on B2, but with a larger block size, just to see if it makes a difference in your upload speeds.

Thanks for your replies. I made an error in my OP: it was actually kB/s, not bits per second.
Not sure about the burstiness; there don’t seem to be any idle periods. I will keep an eye on it though. I tried creating a new backup job (to the same bucket though) but without encryption, and this increased my upload speed to a steady 370-375 kB/s, which is an improvement. The next thing I will test is a larger block size. What would you recommend? 75MB?

Short answer

Sure, give it a try and see if it makes a difference. (My guess is it won’t.) Most people seem to stay in the 50-250MB range, though I’ve seen some go as high as 1GB for local storage.

You can change dblock size all you want and Duplicati will adjust (though only newly uploaded dblocks will use the new size; old ones will stay the size they were created with).

Long answer

Assuming when you say “block size” you mean dblock (Upload volume) size then that’s a tougher question than it appears because it depends on a few other things such as your bandwidth speed and potential limits.

By default Duplicati will download 1 fileset for testing after each backup is completed. This means that if you choose a 75MB dblock (“Upload volume”) size then for each backup run ~75MB of your B2 download allocation will be used for testing files.

At 75MB that’s likely not going to be a problem, but I know some users who have used dblock sizes of 500MB and then suddenly run out of bandwidth. So it all depends on how often you run backups, how many filesets you test per run (default is 1), and how much bandwidth allocation you have with your destination provider.

Also consider that when restoring files, the SMALLEST download chunk will be your dblock size. So even if you only want to restore a 1MB file, at least one 75MB dblock will need to be downloaded (possibly more if the blocks of your 1MB file are spread across multiple dblocks).

Oh - if you’re using a retention policy to thin your backups over time, you might also need to account for multiple dblock sized downloads during that process as well.
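
If you want to put rough numbers on those tradeoffs, here’s a small back-of-the-envelope calculator. The once-a-day schedule and one test sample per run are assumptions that match the defaults described above; plug in your own figures.

```python
# Back-of-the-envelope dblock-size tradeoffs, using the defaults described
# above: one verification sample downloaded after each backup run.
def monthly_test_download_mb(dblock_mb, backups_per_day=1, samples_per_run=1):
    """Rough MB of B2 download used per month just for post-backup testing."""
    return dblock_mb * samples_per_run * backups_per_day * 30

def minimum_restore_download_mb(dblock_mb, dblocks_touched=1):
    """Smallest possible download to restore a file whose blocks live in
    `dblocks_touched` remote volumes (at least one full dblock each)."""
    return dblock_mb * dblocks_touched

for size in (50, 75, 250, 500):
    print(f"{size:>4} MB dblock: ~{monthly_test_download_mb(size):>6} MB/month of test "
          f"downloads, >= {minimum_restore_download_mb(size)} MB to restore even a tiny file")
```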


This page might help explain things a bit better than I did:


Old thread, but @seantempleton just submitted a pull request to implement parallel uploads.

I just did the network performance monitor on my ongoing Duplicati upload to B2 and found it was maxing out around 10-12 Mbps (my connection is at least 50Mbit). So I loaded my B2 account in a tab in Chrome and directly uploaded a new ~200MB file and it blasted away at ~40Mbps and finished within a minute. Meanwhile my ~250MB duplicati dblock files are taking up to 10 - 15 minutes apiece.
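
For anyone who wants to repeat that comparison outside a browser, timing a direct upload with Backblaze’s Python SDK (b2sdk) gives a Duplicati-free baseline. This is just a sketch, assuming b2sdk v2 is installed; the key, bucket, and file names are placeholders.

```python
# Rough timing of a direct B2 upload with Backblaze's b2sdk (pip install b2sdk),
# to compare raw account/network throughput against what Duplicati achieves.
# The key ID, key, bucket, and file names below are placeholders.
import os
import time
from b2sdk.v2 import InMemoryAccountInfo, B2Api

api = B2Api(InMemoryAccountInfo())
api.authorize_account("production", "YOUR_KEY_ID", "YOUR_APPLICATION_KEY")
bucket = api.get_bucket_by_name("your-test-bucket")

path = "testfile-200mb.bin"          # any large local file
size = os.path.getsize(path)
start = time.time()
bucket.upload_local_file(local_file=path, file_name="speedtest/testfile.bin")
elapsed = time.time() - start
print(f"{size / elapsed / 1e6:.1f} MB/s ({size * 8 / elapsed / 1e6:.1f} Mbit/s)")
```

If this runs much faster than Duplicati’s dblock uploads to the same bucket, the bottleneck is more likely in the client side of the pipeline than in B2 or the line itself.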

Same experience here. 320 kb/s is the max upload speed regardless of what I have tried. Uploading via Arq’s client, however, is quite fast…


I’m in the middle of a 200 GB re-upload to B2 in a new backup job. I have a 500MB dblock size selected, and the backup has been running since yesterday afternoon, averaging maybe 5 minutes per chunk in upload time based on the logs (reasonable but not great). Suddenly, this morning at about 8:30, Duplicati’s pipe seems to have tanked, and now each chunk is taking upwards of an hour and a half to upload (reported speed in Duplicati is hovering around 90 KB/s).

Using the Backblaze bandwidth tester, I should be getting at least 20 megabit up… so around 20x the upload speed I’m currently getting. I don’t want to interrupt this new backup job, but it’s taking approximately forever. I wish there were a way to see some more detail as to what might be causing the throughput choke I’m seeing.

Although it probably won’t immediately reveal any cause, I think it might be useful to look at throughput over time, preferably as a graph. On Windows, Task Manager shows this nicely for me provided I don’t have any other network-intensive things running. I just look at Wi-Fi and assume the sends are pretty much Duplicati.

Looking at CPU can be useful to see if it looks plausible that Duplicati’s other activities are impeding sends.

Looking at disk can be useful to see if possibly it’s disk-affected. Task Manager only goes to 100% busy, but Resource Monitor can be started from Task Manager for a closer look, such as how overbooked the disk is.
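
If Task Manager isn’t convenient (or you’re not on Windows), a rough psutil-based sampler can produce the same picture. Note that it measures the whole machine, not Duplicati alone, and it assumes psutil is installed.

```python
# Print system-wide upload throughput, CPU, and disk activity once per second
# (pip install psutil). This measures the whole machine, not just Duplicati,
# so keep other network-heavy programs quiet while sampling. Stop with Ctrl+C.
import time
import psutil

prev_net = psutil.net_io_counters()
prev_dsk = psutil.disk_io_counters()
while True:
    time.sleep(1)
    net = psutil.net_io_counters()
    dsk = psutil.disk_io_counters()
    sent_kBps = (net.bytes_sent - prev_net.bytes_sent) / 1024
    disk_MBps = (dsk.read_bytes - prev_dsk.read_bytes +
                 dsk.write_bytes - prev_dsk.write_bytes) / 1e6
    print(f"send {sent_kBps:8.1f} kB/s   cpu {psutil.cpu_percent():5.1f}%   "
          f"disk {disk_MBps:6.1f} MB/s")
    prev_net, prev_dsk = net, dsk
```

Bursty output (big numbers followed by idle seconds) points at the processing pipeline; a steady but low number points more at the network path.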

If you weren’t right in the middle of a backup, I’d have suggested Duplicati.CommandLine.BackendTool.exe to try a put of a file without all the other Duplicati processing, just to see if the basic upload method is slow. Because you’re going SO slowly, you possibly could try that, at little risk of slowing down the backup more.

For technical analysis at a network level, netstat can sort of help, and Wireshark definitely can (but is hard).

Getting back to rough estimation, --asynchronous-upload-limit defaulting to 4 says that if you’re truly limited by the network (which sounds quite plausible), you might have 4 files of your 500MB size in your temporary file area (e.g. what echo %TEMP% at a command prompt says). They should be new, and flowing through.
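
A quick way to check that queue is to list the temporary volumes waiting in that folder. The sketch below assumes the files are named dup-* (how Duplicati temp files appear on my machines), so adjust the pattern if yours look different.

```python
# List recent Duplicati temporary volumes to see how deep the upload queue is.
# Assumes the temp files are named dup-* (an assumption; adjust if needed).
import glob
import os
import tempfile
import time

tmp = os.environ.get("TEMP", tempfile.gettempdir())
files = sorted(glob.glob(os.path.join(tmp, "dup-*")),
               key=os.path.getmtime, reverse=True)
now = time.time()
for f in files[:10]:
    st = os.stat(f)
    print(f"{os.path.basename(f):40s} {st.st_size/1e6:8.1f} MB  "
          f"{(now - st.st_mtime)/60:6.1f} min old")
print(f"{len(files)} dup-* files total")
```

Roughly four new, full-size files sitting there and slowly turning over would fit the “network-limited with a full queue” picture.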

If it helps, I’ve had task manager open to the processes and performance tab for about a day - for the most part Duplicati is the only thing using any significant amount of CPU (not all that much) or network bandwidth. I get 40 - 50 megabits upload and duplicati is currently using maybe 1 megabit.

How do I enable asynchronous upload? As far as I can tell from the log files, only one dblock is being uploaded at a time. I don’t have a setting added for asynchronous-upload-limit so I assume it’s using the default of 4. Edit: I guess I was remembering incorrectly in thinking “asynchronous upload” meant multi-threaded simultaneous uploads (as opposed to the ability to prepare new dblock files while the previous one is in transit). So I guess we wait on that one a bit longer.

For the past two weeks I’ve been experiencing slow uploads to B2 as well.
I have a 512MB dblock size, and one file will upload at the usual speed (~50 Mbit/s), then the next at 0.7 Mbit/s. This behaviour has been consistent for several days.

I think it’s the default, and you have to use --synchronous-upload to turn it off. Note that this isn’t parallel simultaneous uploads. I’m just asking you to check if your queue waiting for one-at-a-time is actually 4. Seems like uploading is not CPU limited. Please look at the disk utilization and upload graph sometime.

Some other things you can look into are whether you seem to be getting retries at some level. Duplicati retries at the dblock level are suggested if you go to Job → Show log → Remote and find dblock files being put repeatedly, probably with no dindex file in between. The file name will change, but clicking the file’s line will show its Size and Hash; if they’re the same, it’s probably the same file with its put being retried.

Getting into slightly exotic network things, running netstat -s in a Command Prompt will produce output like

TCP Statistics for IPv4

  Active Opens                        = 605279
  Passive Opens                       = 25160
  Failed Connection Attempts          = 87880
  Reset Connections                   = 27373
  Current Connections                 = 100
  Segments Received                   = 150439003
  Segments Sent                       = 147518709
  Segments Retransmitted              = 375289

and if you look at the change in the last two lines over some portion of the upload, you can see how well your system is able to send. If something is going wrong, e.g. in network, expect a higher percentage of retransmissions. You might also be able to see this sort of stall in the network graph I mentioned earlier.
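
If you’d rather not eyeball the counters, a small sketch can take the two snapshots and do the division for you. The parsing assumes the English Windows labels shown above.

```python
# Compute TCP retransmission percentage between two `netstat -s` snapshots.
# The regexes assume the English Windows label text shown above.
import re
import subprocess
import time

def tcp_counters():
    out = subprocess.run(["netstat", "-s"], capture_output=True, text=True).stdout
    sent = int(re.search(r"Segments Sent\s*=\s*(\d+)", out).group(1))
    retrans = int(re.search(r"Segments Retransmitted\s*=\s*(\d+)", out).group(1))
    return sent, retrans

s1, r1 = tcp_counters()
time.sleep(120)                      # sample a couple of minutes of the upload
s2, r2 = tcp_counters()
sent, retrans = s2 - s1, r2 - r1
print(f"{retrans} of {sent} segments retransmitted "
      f"({100 * retrans / max(sent, 1):.2f}%)")
```

A retransmission rate of a few percent or more over an interval when Duplicati is the main sender would point at trouble on the network path rather than in Duplicati.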

@simsim what OS are you on? Most non-Windows ones have a very nice way to see network queues. Beyond that, help yourself to the troubleshooting and investigative steps being suggested in this topic…

It’s running inside a FreeNAS jail.

However, I just wanted to add my 2 cents in case more users are seeing this behaviour and it turns out to be a Backblaze issue, not a Duplicati one.

I gave up on Backblaze and moved to jottacloud - cheaper + faster.