50MB is just a safe default dblock (remote volume) size that reduces the likelihood of running into bandwidth caps.
A “quick” summary is that Duplicati chops files up into smaller blocks, and those blocks get compressed into dblock (archive) files. So file size vs. number of files doesn’t matter much: 1,000 x 1MB files and 1 x 1,000MB file will both end up looking about the same at the destination (assuming no de-duplication occurs).
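If it helps to picture it, here’s a rough Python sketch of the idea. This is NOT Duplicati’s actual code - real dblocks are also compressed (and usually encrypted), and the sizes here are scaled way down so the demo runs instantly:

```python
import hashlib, os

BLOCK_SIZE = 1024        # scaled down for the demo; Duplicati's default is 100KB
DBLOCK_SIZE = 16 * 1024  # scaled down for the demo; Duplicati's default is 50MB

def file_blocks(data: bytes):
    """Chop a file's contents into fixed-size blocks."""
    for i in range(0, len(data), BLOCK_SIZE):
        yield data[i:i + BLOCK_SIZE]

def pack_into_dblocks(files):
    """Count how many ~DBLOCK_SIZE volumes the unique blocks of all files fill."""
    seen = set()            # hashes of blocks already stored (de-duplication)
    volumes, used = 1, 0
    for data in files:
        for block in file_blocks(data):
            digest = hashlib.sha256(block).digest()
            if digest in seen:
                continue    # a duplicate block is stored only once
            seen.add(digest)
            if used + len(block) > DBLOCK_SIZE:
                volumes += 1   # current dblock is full; start the next one
                used = 0
            used += len(block)
    return volumes

# 64 small "files" vs 1 big "file" of the same total size -> same dblock count
# (random bytes, so nothing de-duplicates)
small = [os.urandom(1024) for _ in range(64)]
big = [os.urandom(64 * 1024)]
print(pack_into_dblocks(small), pack_into_dblocks(big))  # both print 4
```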
Since the dblock file is what gets built and transferred around, you’ll find you need local temp space big enough to handle a few dblock files while they’re being built, sent to the destination, and downloaded / verified. Downloads also happen during version cleanup - so if you say you only want to keep the 5 most recent versions of files, then when a 6th version is backed up, the dblocks holding version #1 get downloaded and re-compressed (though maybe not right away - see below for more detail).
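As a rough rule of thumb (the counts below are my assumptions, not documented Duplicati internals - actual usage depends on your settings):

```python
# Illustrative arithmetic only - the exact counts depend on your configuration.
dblock_mb = 50   # the default remote volume size
building = 1     # the dblock currently being assembled
queued = 4       # finished dblocks waiting to upload (see the upload option below)
verifying = 1    # a dblock downloaded for post-backup verification
print(f"~{dblock_mb * (building + queued + verifying)} MB temp space")  # ~300 MB
```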
One or more dblock files are also downloaded and verified with each backup, so if your destination (or pipe) has bandwidth limits, very large dblock sizes could run you into bandwidth quota issues.
A low quality (frequent drops) connection between source and destination is another reason to stick with a smaller dblock size (less data to re-send when a drop occurs). However, high quality and/or fast connections (such as LAN or even local disks) can benefit from larger dblock sizes. I’ve heard of people going as large as 2GB, though that’s an extreme case - mostly the larger ones seem to cluster around 500MB or 1GB.
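Here’s some back-of-envelope math on why big dblocks hurt on flaky connections. The drop model is completely made up - it just shows how the re-send cost scales with dblock size:

```python
# Illustrative only: expected re-sent data if a connection drop forces the
# current dblock upload to restart from the beginning. Assumes (hypothetically)
# a fixed chance of a drop per MB transferred and ~half a dblock lost per drop.
drop_rate_per_mb = 0.001  # e.g. one drop per ~1,000 MB transferred
backup_mb = 10_000        # 10 GB of new data to upload

for dblock_mb in (50, 500, 2000):
    expected_drops = backup_mb * drop_rate_per_mb
    wasted_mb = expected_drops * (dblock_mb / 2)  # restart mid-upload on average
    print(f"{dblock_mb:>5} MB dblocks -> ~{wasted_mb:,.0f} MB re-sent")
# 50 MB dblocks -> ~250 MB re-sent; 2,000 MB dblocks -> ~10,000 MB re-sent
```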
Pretty much all of the above can be enabled, disabled, or adjusted with advanced options. For example, you can turn off the download/testing of dblock files, or adjust how many “pending” dblock files get created while waiting for the current one to finish uploading.
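For example, something like this - option names are from memory, so double-check them against the “Advanced options” list in your version before relying on this:

```python
import subprocess

# Sketch of a backup run with the options discussed above (to my recollection):
#   --dblock-size                remote volume (dblock) size
#   --no-backend-verification    skip the post-backup download/test of files
#   --asynchronous-upload-limit  how many finished dblocks may queue for upload
cmd = [
    "duplicati-cli", "backup",   # "Duplicati.CommandLine.exe" on Windows
    "ssh://example.com/backups", "/home/me/data",  # hypothetical paths
    "--dblock-size=200MB",
    "--no-backend-verification=true",
    "--asynchronous-upload-limit=2",
]
subprocess.run(cmd, check=True)  # requires duplicati-cli on your PATH
```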
Note that the dblock size CAN be changed after backups have been created, but the new size is only applied to newly created dblock files. Dblocks using the older size won’t be touched until they are deemed “sparsely” filled (such as when deleting older historical versions), at which point they’ll be downloaded, combined with other sparse dblocks, and re-compressed at the newer dblock size.
However, you can NOT change the block (file chunk) size once backups have been created. For that setting you might want to review your content and consider having multiple backup jobs with varying block sizes.
For example, a backup of JUST video files that don’t change often might work well / run faster with a larger block size. The same would apply to infrequently changed music files, though a smaller block size than for video might be more appropriate. Keep in mind that the block is the smallest unit of processing - so if you have a 500MB block size (crazy, don’t do it) and change a single character (maybe fix a typo in some metadata), then the entire 500MB block gets reprocessed.
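Here’s a toy demonstration of that last point (again, not Duplicati’s code - just fixed-size blocks being hashed before and after a 1-byte edit):

```python
import hashlib, os

def changed_blocks(before: bytes, after: bytes, block_size: int) -> int:
    """Count how many fixed-size blocks differ between two file versions."""
    count = 0
    for i in range(0, max(len(before), len(after)), block_size):
        if (hashlib.sha256(before[i:i + block_size]).digest()
                != hashlib.sha256(after[i:i + block_size]).digest()):
            count += 1
    return count

data = bytes(os.urandom(10 * 1024 * 1024))  # a 10MB "file"
edited = bytearray(data)
edited[5000] ^= 0xFF                        # flip one byte (the "typo fix")

for block_kb in (100, 1024, 10 * 1024):
    n = changed_blocks(data, bytes(edited), block_kb * 1024)
    print(f"{block_kb:>6} KB blocks: {n} block(s) reprocessed "
          f"({n * block_kb} KB re-uploaded)")
# A single-byte edit always dirties exactly one block - but the bigger the
# block, the more data that one block drags through hashing and upload.
```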
For the official summary, check out this page:
Edit: Whoops - missed hitting “post” by THAT much… 