In my view this is how it ought to work. Timeouts should never be too big, or they’re not effective timeouts.
I did look on my own earlier, and didn’t spot a specific timeout setting in the old code (and we can’t test it).
Asking for historical research is awkward because the code is by different authors, written many years apart.
The v1 author is backlogged with more pressing needs, and the v2 author (also a volunteer) seems to be inactive. There's also Microsoft, but this is a completely different API, we have no inside information, and their old API is gone.
So what to do? As before, I suggest setting the timeout to suit your needs, based on your actual transfer time, which is another reason I'm reluctant to drag people into research. Can't you just raise the timeout for now?
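To sketch the arithmetic I mean (the volume size and upload speed below are placeholders; substitute your own measurements):

```python
# Back-of-envelope timeout sizing; both numbers here are placeholders.
volume_size_gb = 9          # remote volume (dblock) size being uploaded
upload_speed_mbps = 100     # your measured sustained upload speed, in megabits/s

transfer_seconds = volume_size_gb * 8 * 1000 / upload_speed_mbps
timeout_seconds = 2 * transfer_seconds   # headroom for slowdowns and retries

print(f"expected transfer ~{transfer_seconds / 60:.0f} min, "
      f"so a timeout around {timeout_seconds / 60:.0f} min")
```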
Although you have a fast connection, 9 GB is a huge step up from the 50 MB default. Note that a single file's later updates might be spread across a series of remote volumes, so you could see a tiny file require hundreds of gigabytes of download to restore. This may be a far larger remote volume size than is wise, though your fast connection does compensate somewhat. What's the gain, besides fewer remote volumes?
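To put rough numbers on that restore concern, here's a sketch; the count of volumes a file's blocks land in is invented for illustration:

```python
# Worst-case restore download when one file's blocks are scattered
# across many remote volumes. Both numbers are hypothetical.
volume_size_gb = 9
volumes_touched = 30   # volumes holding pieces of a long-edited file

print(f"restore may download up to {volume_size_gb * volumes_touched} GB")
# With the 50 MB default, those same 30 volumes total only 1.5 GB.
```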
Choosing sizes in Duplicati covers the size options, and How the backup process works adds more context for that.
EDIT: A larger --blocksize (such as 1 MB instead of the 100 KB default) can be good for large backups, since it reduces the block-tracking overhead in the database, but I'm not sure what a huge --dblock-size offers that's helpful.
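As a rough illustration of why block count matters (the 2 TB source size is a made-up example):

```python
# Estimated block counts the database must track, for a hypothetical
# 2 TB source, comparing the 100 KB default against a 1 MB blocksize.
source_kb = 2 * 1024**3   # 2 TB expressed in KB

for blocksize_kb in (100, 1024):
    blocks = source_kb / blocksize_kb
    print(f"{blocksize_kb:>4} KB blocksize -> ~{blocks / 1e6:.0f} million blocks")
```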