A rule of thumb, which the advice below aims to stay within, is to limit a backup to a few million blocks.
For a backup of that size, this means a 1 MB --blocksize, up from the 100 KB default. More on that value here.
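As a rough sketch of the arithmetic (the 2 TB source size below is only an example for illustration, not anything Duplicati-specific):

```python
# Rough block-count arithmetic for choosing --blocksize.
# Goal (rule of thumb above): keep the backup to a few million blocks
# so the local database stays a manageable size.

KB, MB, TB = 1024, 1024 ** 2, 1024 ** 4

def block_count(source_bytes: int, blocksize_bytes: int) -> int:
    """Approximate number of blocks the source data splits into."""
    return -(-source_bytes // blocksize_bytes)  # ceiling division

for blocksize in (100 * KB, 1 * MB):
    blocks = block_count(2 * TB, blocksize)  # example: a 2 TB source
    print(f"2 TB at {blocksize // KB} KB blocks -> {blocks:,} blocks")

# 100 KB default: ~21 million blocks; 1 MB: ~2 million blocks,
# which lands within the "few million blocks" rule of thumb.
```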
Choosing sizes in Duplicati offers some other guidance. If your link is fast, you can afford to increase what the Options screen calls “Remote volume size”, which is just dblock-size under another name.
The slowdown that can arise is that restoring a few files may require downloading many volumes, if the needed blocks are scattered across them. However, if this backup is purely for disaster-recovery restores (though why not get some other use out of it meanwhile?), you’re going to download everything anyway, so again larger dblock volumes may have few downsides.
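To put a rough number on that trade-off, here is a worst-case sketch (an illustration only; it assumes every needed block landed in a different dblock volume, which real restores usually beat):

```python
# Worst-case download to restore a small amount of data, assuming
# every needed block sits in a different remote dblock volume.

MB, GB = 1024 ** 2, 1024 ** 3

def worst_case_download(restore_bytes: int, blocksize: int, volume_bytes: int) -> int:
    blocks_needed = -(-restore_bytes // blocksize)  # ceiling division
    volumes_touched = blocks_needed                 # worst case: one volume per block
    return volumes_touched * volume_bytes

for volume in (50 * MB, 1 * GB):
    dl = worst_case_download(100 * MB, 1 * MB, volume)
    print(f"restore 100 MB with {volume // MB} MB volumes -> up to {dl / GB:.1f} GB downloaded")

# 50 MB volumes: roughly 5 GB downloaded; 1 GB volumes: roughly 100 GB,
# which is why very large volumes mainly pay off when you expect full restores.
```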
Probably nobody knows. There has been no systematic benchmarking of all backup configurations on all destinations, so there’s definitely a job opening if you’re interested in exploring your own question about “best”.
Potentially, a large number of destination files would slow down file listing, but that generally happens only at backup start and end (e.g. to sanity-check that the destination files look as expected). Even at the rather small 50 MB default remote volume size, a 10 TB backup comes to a bit over 400,000 destination files (half of them dindex files), so the default would likely suffice.
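The file-count arithmetic behind that, as a quick sketch (assuming the usual pattern of roughly one dindex file per dblock volume):

```python
# Destination file count for a given backup size and remote volume size.
# Assumes roughly one small dindex file per dblock volume.

MB, TB = 1024 ** 2, 1024 ** 4

def destination_files(backup_bytes: int, volume_bytes: int) -> int:
    dblocks = -(-backup_bytes // volume_bytes)  # ceiling division
    return dblocks * 2                          # dblock + matching dindex

files = destination_files(10 * TB, 50 * MB)
print(f"10 TB at 50 MB volumes -> about {files:,} destination files")
# ~419,000 files; listing that many is slower, but it only happens
# around backup start and end, so the 50 MB default can still work.
```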
Those would probably be fine too. This setting (unlike blocksize) can be changed later, but it affects only newly created volumes. Increasing it can trigger a compact, which may work awhile to give you what you asked for; lowering it is less effective, so you may be holding large files for a long time.
If you have time and interest, the effect of settings changes on the various kinds of performance is an area much in need of exploration and documentation, or even of following forum topics to assist people with that.
As a community effort, Duplicati hopes that everybody can contribute something to its collective progress.