Identified another slow query during backup

After doing some reading, it seems the blocksize parameter and query performance are closely related. (Why the heck CAN'T we change the blocksize? - #45 by ts678)
The number of files being backed up also seems to have a major effect. I will probably test this with two backups of similar total size: one with a few large files and one with many small files (see the rough arithmetic below).
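To put rough numbers on why I expect both factors to matter: as I understand the wiki page linked below, every file is split into blocks of at most `blocksize` bytes, and each block ends up as at least one row in the local database's Block table. So total data size, blocksize, and file count together drive row counts. Here's my back-of-the-envelope sketch; the one-row-per-block mapping and the extra per-file metadata block are my assumptions, not verified against the code:

```python
# Rough block-count estimate: assumes each file contributes
# ceil(file_size / blocksize) data blocks plus one metadata block,
# and that each block is one row in the Block table (my assumption).
import math

def estimated_blocks(total_bytes: int, file_count: int, blocksize: int) -> int:
    avg_file = total_bytes / file_count
    per_file = math.ceil(avg_file / blocksize) + 1  # +1 for metadata (assumption)
    return per_file * file_count

one_tib = 1024**4
for blocksize in (100 * 1024, 1024**2, 5 * 1024**2):  # 100 KB (default, I believe), 1 MB, 5 MB
    few = estimated_blocks(one_tib, 100, blocksize)         # 100 large files
    many = estimated_blocks(one_tib, 1_000_000, blocksize)  # 1M small files
    print(f"blocksize {blocksize // 1024} KB: "
          f"~{few:,} rows (few files), ~{many:,} rows (many files)")
```

If that's roughly right, the many-files case adds rows per file regardless of blocksize, which is exactly why I want to test both shapes and not just blocksize alone.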

Can anyone point me to an in-depth explainer on the architecture and database design, specifically how blocksize affects the data in the database, if at all? I am reading through the material in the manual and wiki, but anything else would help. (How the Backup Process Works - Duplicati 2 User's Manual) (Local database format · duplicati/duplicati Wiki · GitHub)
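In the meantime, here's a minimal sketch of how I plan to peek at the row counts directly. It assumes the local database is plain SQLite and uses table names from the "Local database format" wiki page above; those names may differ across Duplicati versions, and the path is a hypothetical placeholder:

```python
# Quick row-count check on the local Duplicati database.
# Table names come from the "Local database format" wiki page and
# may not match every schema version.
import sqlite3
from pathlib import Path

DB_PATH = Path(r"C:\Duplicati\data\XXXXXXXXXX.sqlite")  # hypothetical; use your backup's DB file

# Open read-only so we can't disturb a database mid-backup.
con = sqlite3.connect(f"{DB_PATH.as_uri()}?mode=ro", uri=True)
for table in ("Block", "BlocksetEntry", "File", "Fileset"):
    try:
        (count,) = con.execute(f"SELECT COUNT(*) FROM {table}").fetchone()
        print(f"{table}: {count:,} rows")
    except sqlite3.OperationalError as e:
        print(f"{table}: skipped ({e})")  # table may not exist in this schema version
con.close()
```

Comparing those counts before and after a test run against the estimate above should show whether my mental model of the schema is even close.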

I think there is enough evidence to confidently say we should edit this page in the manual (Choosing Sizes in Duplicati - Duplicati 2 User's Manual) to state that a small, or even the default, blocksize will cause poor backup performance on large backups. It should probably also be mentioned under the general options, with some rules of thumb (which we don't really have yet).

If there are any other tests people would want me to try, let me know. I have a pretty robust environment with 10GbE to my destination (a local Windows PC), so we can nail down some performance rules of thumb if I know what tests to run.
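For the file-count test above, I'm thinking of generating the two datasets with something like this (just a sketch; the counts and sizes are arbitrary placeholders to be scaled to the test machine):

```python
# Generate two test sets of equal total size:
# a few large files vs. many small files.
import os
from pathlib import Path

def make_files(root: Path, count: int, size: int, chunk: int = 1 << 20) -> None:
    root.mkdir(parents=True, exist_ok=True)
    for i in range(count):
        with open(root / f"file_{i:06d}.bin", "wb") as f:
            remaining = size
            while remaining > 0:
                n = min(chunk, remaining)
                f.write(os.urandom(n))  # random data, written 1 MiB at a time
                remaining -= n

GiB = 1024**3
make_files(Path("testdata/few_large"), count=10, size=10 * GiB // 10)        # 10 x 1 GiB
make_files(Path("testdata/many_small"), count=100_000, size=10 * GiB // 100_000)  # ~105 KB each
```

Random data keeps the blocks incompressible and non-deduplicable, so both runs should stress the database roughly the way unique real-world data would.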