What is a good remote file size for large media?

I would suggest higher values. If your source contains only large video files, I would set both block size and remote volume size as high as possible.

If your source location contains only compressed video files, in-file deduplication is unlikely to happen, regardless of the block size. Whole-file deduplication, on the other hand, always works (for example, when you move a file to another folder), regardless of the block size. A smaller block size will result in a larger local database and will increase database recreation time dramatically.

For the remote volume size, I would also choose a high value, because all of your source files are multiple GB in size. This results in fewer files at the backend and a somewhat smaller local database. Smaller remote volumes only pay off when most source files are smaller than the remote volume: if you have to restore a small file (3 MB) and have chosen a remote volume size of 1 GB, you must download 1 GB to restore your 3 MB file. But that doesn't apply to your scenario: to restore a 38 GB file, roughly 80 remote volumes of 500 MB each have to be downloaded, versus roughly 800 files with a 50 MB remote volume size. With a 500 MB remote volume size, the worst case is downloading about 500 MB more than needed, if the last few bytes of the video file land at the start of a new remote volume.
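To make the arithmetic above concrete, here is a back-of-the-envelope sketch (Python). It uses a simplified model in which the blocks of one file sit in consecutive remote volumes, with at most one extra volume spanned at the boundary; real layouts can differ, so treat this as an estimate, not Duplicati's exact behavior:

```python
from math import ceil

def volumes_to_download(file_size_mb, volume_size_mb):
    """Estimate remote volumes fetched to restore one file (worst case).

    Simplified model: the file's blocks are contiguous, and the file may
    start mid-volume, spanning one extra volume at the boundary.
    """
    return ceil(file_size_mb / volume_size_mb) + 1

# Restoring a 38 GB video file:
print(volumes_to_download(38 * 1024, 500))  # about 80 volumes of 500 MB
print(volumes_to_download(38 * 1024, 50))   # about 780 volumes of 50 MB
```

Either way the total download is about 38 GB; the difference is the per-file overhead (up to one wasted volume) and the number of backend requests.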
I don’t expect much difference in the total amount of uploaded/downloaded data during compacting, because I suppose that most video files are never modified after they are backed up.

So my suggestion is to use a very large block size, say 50-200 MB and a remote volume size of 500-1000 MB.
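For reference, in Duplicati these two settings correspond to the `--blocksize` and `--dblock-size` options. A hypothetical command line applying the values suggested above (the backend URL and paths are placeholders for your own):

```shell
# --blocksize:   deduplication block size (set once; cannot be changed later)
# --dblock-size: remote volume size (can be changed between backups)
duplicati-cli backup "b2://mybucket/videos" /data/videos \
  --blocksize=100MB \
  --dblock-size=500MB
```

Note that the block size is fixed for the lifetime of a backup job, so it is worth deciding on it before the first run; the remote volume size can still be adjusted afterwards.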

To get an indication of the consequences of different block and volume sizes, you can use this Google sheet:
