Why did it fail because of disk space?

I would refer you to the forum discussion as a good place for ideas, or to continue the conversation there. There is a comment from @mikaelmello that sounds like some testing was done. I don’t know of any official trial.

I think these are decrypted into semi-randomly-named tmp files on their way to regenerating the restored file. --tempdir and other means exist to specify where temporary files are created, if that helps any. I suggest testing it.
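As a rough sketch only (the backend URL, restore path, and temp directory below are placeholders, not taken from your setup), pointing the restore’s temporary files at a roomier disk would look something like:

```
# Sketch: send Duplicati's temporary files to a larger volume during a restore.
# All paths and the backend URL here are placeholders.
duplicati-cli restore "file:///mnt/backup-destination" "*" \
  --restore-path="/home/user/restored" \
  --tempdir="/mnt/bigdisk/duplicati-tmp"
```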

I can’t comment on what your Chrome is doing; it may depend on JavaScript code from the site. Watching with netstat should show whether you have parallel TCP connections. Regardless, I know of no go-faster controls.
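For example (the port filter is an assumption; adjust it for the actual upload host), counting established connections while the upload runs would look roughly like:

```
# Count established TCP connections to HTTPS endpoints during the upload.
# Linux:
netstat -tn | grep ':443' | grep ESTABLISHED | wc -l
# Windows (PowerShell):
netstat -ano | findstr ":443" | findstr "ESTABLISHED"
```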

Can’t get LAN speeds explains some of the other tasks besides just transfer that Duplicati must juggle, though it’s apparently able to crank out tmp files faster than they can be uploaded. It’s still doing a lot of multitasking…

10 GB was seemingly a previous maximum file size for Google Drive, so perhaps Duplicati added a silent cap that hasn’t been updated since, as Syncdocs did. This may be a moving target; Google Drive now permits files up to 5 TB. With the background knowledge you now have, you could experiment. If it’s capped, you can file an issue asking someone to find the code and raise the limit, but you might get pushback about whether that’s reasonable.
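If you want to probe that yourself, one possible experiment (the folder name, AuthID, and source path below are placeholders) is a throwaway backup with a deliberately large remote volume size, watching whether the big dblock file actually uploads:

```
# Hypothetical cap test: use an oversized remote volume and see whether the
# resulting dblock uploads to Google Drive or fails at some size limit.
duplicati-cli backup "googledrive://Duplicati-cap-test?authid=PLACEHOLDER" \
  /data/large-test-folder \
  --dblock-size=20GB
```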

You can change --dblock-size whenever you like (but it only affects newly created remote volumes). --blocksize can’t easily be changed.
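A hedged illustration of the difference (the destination and values are examples only):

```
# --dblock-size can be raised on an existing job; only volumes created from
# now on use the new size, while already-uploaded dblocks keep the old one.
duplicati-cli backup "file:///mnt/backup-destination" /data/source \
  --dblock-size=1GB

# --blocksize is different: it is fixed when the backup is first created, and
# changing it afterwards effectively means starting a brand-new backup.
```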