Spectacularly slow upload to B2 on good spec PC

Hi all

I’ve installed Duplicati 2.0.4.5_beta_2018-11-28 onto a Windows 10 box (nice spec: i7-3770K 3.5 GHz, 32 GB RAM, Virgin Media 100) in the UK, and I’m doing a test backup of about 15,000 files totalling around 50 GB, from a local mechanical HDD to a Backblaze B2 bucket. The upload speed is SPECTACULARLY slow: it tops out at about 100 KB/s and averages about half that. The speed test from Backblaze suggests 2.3 Mbit/s and Ookla (https://www.speedtest.net/) suggests 4.2 Mbit/s. Now I know there’s a bunch of number crunching/processing (AES encryption, compression, database creation, yada yada), but this is three hours in now and it’s barely started. It can’t stay THIS slow, surely…??

I read that multithreading/parallel uploading will be happening at some point soon, if it hasn’t already (and if it has, I can’t find where to set/configure it). Is there any news on that? Is there a newer (stable!) version that supports either or both of these?

But right now, is there ANYTHING I can do to speed this up…? I get that there’s a level of processing involved, as above, but seriously it’s a tiny fraction of my available upload bandwidth, and my CPU, RAM and disk IOPS usage are all very low, so there’s no bottleneck there either. There must be a way to make this more efficient.

Apologies for posting about something that’s been covered previously but those threads are very old now.

Any advice gratefully received!

Not yet. The 2.0.4.5 beta already added some more threading. Parallel uploads came in 2.0.4.16 canary, but if you’re considering testing with that, use 2.0.4.17 canary instead, which includes a fix for a bug in 2.0.4.16.
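
If you do try the canary, I believe the number of concurrent uploads is tunable via an advanced option. The sketch below is illustrative only: the option name is from memory (treat it as an assumption and check the Advanced options list in your version), and the bucket, folder, keys and source path are placeholders. The real destination URL is best copied from Export → As Command-line rather than typed by hand.

```
REM Illustrative sketch only: a command-line backup on the canary with the
REM (assumed) parallel-upload option. All names, keys and paths are placeholders.
Duplicati.CommandLine.exe backup ^
  "b2://mybucket/myfolder?auth-username=B2_ACCOUNT_ID&auth-password=B2_APP_KEY" ^
  "D:\Data" ^
  --asynchronous-concurrent-upload-limit=4
```

The same option should also be settable from the GUI job’s advanced options rather than the command line.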

The first thing to figure out is where the bottleneck actually is, given that you’ve already measured CPU, RAM and disk as low-use.

Does the GUI status summary for your backup stay at “Waiting for upload to finish” for a long time near the end? That would suggest you have queued most or all of your --asynchronous-upload-limit (default 4) dblock files (default 50 MB each) and are waiting on uploads. You can also look in your --tempdir folder (no need to set it, but you might need to find where yours is) to see those largish files rolling through, along with other smaller ones, all with semi-random names.
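
If you want to watch that queue directly, something like this from a command prompt should list the temporary files as they come and go. The dup-* name pattern is my assumption from memory, so adjust it to whatever you actually see in the folder:

```
REM List the in-flight temporary files, newest first. By default they live in
REM the user's temp folder; if you set --tempdir, look there instead.
REM (Assumption: the temp files are named dup-<random>.)
dir /o-d "%TEMP%\dup-*"
```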

You can also get a more accurate view of upload performance by looking at a profiling-level log, either with --log-file and --log-file-log-level or (easier for a one-off look, less convenient for ongoing use) the server’s About → Show log → Live.
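
As a sketch, these two options (added to the command line, or as advanced options on the GUI job) should write such a log; the path is just an example:

```
REM Write a detailed log, including timings around each remote operation.
--log-file=C:\Temp\duplicati-profiling.log
--log-file-log-level=Profiling
```

Profiling output is very verbose, so point it at a drive with some free space and turn it off once you’ve seen what you need.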

If watching the uploads shows the 50 MB dblocks going slowly, you could use Duplicati.CommandLine.BackendTool.exe to upload a 50 MB test file with the put command, using a destination URL taken from Commandline or Export.
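
A rough sketch of that test is below. The destination URL is a placeholder (copy your real one from Export → As Command-line), and fsutil just makes a zero-filled file of about 50 MB, which is fine here because the tool uploads it as-is with no compression:

```
REM Create a ~50 MB test file (50 * 1024 * 1024 bytes).
fsutil file createnew C:\Temp\upload-test.bin 52428800

REM Time how long one raw upload to the destination takes.
Duplicati.CommandLine.BackendTool.exe put ^
  "b2://mybucket/myfolder?auth-username=B2_ACCOUNT_ID&auth-password=B2_APP_KEY" ^
  C:\Temp\upload-test.bin
```

Afterwards you can remove the test file from the bucket with the tool’s delete command or via the B2 web UI.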

If you see an oddity like varying speeds for the same file sizes, that may be because a different B2 server is used each time. Their servers can fill up and get overloaded independently, in which case they tell Duplicati to use a different one. This and other network issues that cause entire file uploads to fail show up as retries in the logs.

How familiar are you with networking? If this is a network problem, there could be some low-level chasing ahead. At a high level, distance is bad for network speeds (though your case seems extreme), so if you can hang on a while and really want B2 (currently US West Coast), their European data center might transfer faster. Additionally, the Duplicati parallel upload features will presumably be in the next beta, whenever that’s done.
