Not yet. The 2.0.4.5 beta already added some more threading. Parallel uploads arrived in 2.0.4.16 canary, but if you're considering testing with that, use 2.0.4.17 canary instead, which fixes a bug in 2.0.4.16.
The first thing to figure out is where the bottleneck is, since you've seemingly already measured some resources as lightly used.
Does the GUI status summary for your backup stay at “Waiting for upload to finish” for a long time near the end? That would suggest you've queued up most or all of your --asynchronous-upload-limit (default 4) dblock files (default 50 MB each). You can also look in your --tempdir folder (no need to set it, but you might need to find where yours is) to see those largish files rolling through, along with other smaller ones, all with semi-random names.
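For reference, these are the relevant advanced options as they would appear on the Commandline screen or in an Export As Command-line. The values shown are the defaults, and the temp path is just a placeholder (it defaults to the system temp folder):

```
--asynchronous-upload-limit=4   # how many finished volumes may wait in the upload queue
--dblock-size=50mb              # remote volume size, i.e. the "50 MB" dblock files
--tempdir=/tmp                  # where the volumes are built before upload
```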
You can also get a more accurate view of upload performance by looking at a profiling-level log, either with --log-file and --log-file-log-level or (easier for a quick look, less convenient for ongoing monitoring) the server's About → Show log → Live.
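For example, adding something like this to the job's advanced options (the path is just a placeholder) writes a profiling-level log you can read in any editor. Profiling output is verbose, so point it at a drive with some room:

```
--log-file=/path/to/duplicati-profiling.log --log-file-log-level=Profiling
```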
If watching uploads shows the 50 MB dblocks going up slowly, you could use Duplicati.CommandLine.BackendTool.exe to upload a 50 MB file yourself with the put command, using a destination URL taken from the job's Commandline screen or from Export As Command-line.
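A rough sketch of that test is below. The bucket, folder, and keys are placeholders; substitute the exact target URL your Commandline screen shows, and run the tool with `help` if the argument order differs on your version:

```
# make a ~50 MB test file (any method works; this one is for Linux/macOS)
dd if=/dev/urandom of=testfile.bin bs=1M count=50

# time one upload through Duplicati's backend code
Duplicati.CommandLine.BackendTool.exe put "b2://mybucket/myfolder?auth-username=KEYID&auth-password=APPKEY" testfile.bin
```

Remember to delete the test file from the destination afterwards (the tool also has a delete command), otherwise the next backup may complain about an unexpected extra file in its folder.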
If you see an oddity such as varying speeds for the same file size, you may be hitting a different B2 server each time. Their servers can fill up and get overloaded independently, in which case they tell Duplicati to use a different one. This and other network issues that cause entire files to fail will show up as retries in the logs.
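A quick way to spot those, assuming you set up the log file suggested above (the exact message wording varies by version, so this is only a coarse filter):

```
grep -i retry /path/to/duplicati-profiling.log         # Linux/macOS
findstr /i retry C:\path\to\duplicati-profiling.log    # Windows
```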
How familiar are you with networking? If this is a network problem, there could be some low-level chasing to do. From a high level, distance is bad for network speeds (though yours seems extreme), so if you can hang on a while and really want B2 (currently U.S. West Coast), their European data center might transfer faster. Additionally, the Duplicati parallel upload feature will presumably be in the next beta, whenever that's released.
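If you want a rough feel for whether distance and latency are part of it, something like this gives ballpark round-trip and connect times. Note that api.backblazeb2.com is only the API endpoint, and actual uploads go to a separate upload URL that B2 hands out, so treat the numbers as an indicator rather than a measurement of your real upload path:

```
ping -c 5 api.backblazeb2.com    # on Windows: ping -n 5 api.backblazeb2.com
curl -o /dev/null -s -w "connect: %{time_connect}s  total: %{time_total}s\n" https://api.backblazeb2.com
```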