Since I don’t really know much about SQLite: does the size of each backup’s SQLite database file have an impact on the backup/analysis times? Given how often the program queries the database, I would think it does.
So, assuming it does, what would be the best way to minimize the size of these files? From a quick look at the structure with a SQLite browser, the size seems to be driven by 1) the number of files being backed up, 2) the number of blocksize chunks those files are split into, and 3) the individual *.zip.aes volumes that are uploaded to the backup destination. I can’t control the number of files, but would using a large --blocksize and a large --dblock-size keep the database smaller? I understand the tradeoffs that come with larger values, but most of my backups are relatively cold storage for large media files, so I can live with them.
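To give a sense of scale behind my question, here is a back-of-envelope sketch (my assumptions, not verified Duplicati internals): if the local database tracks roughly one row per deduplication block, then the row count is about total data divided by --blocksize, and I'm assuming the 100 KB default I've seen mentioned:

```python
def estimated_blocks(total_bytes: int, blocksize_bytes: int) -> int:
    """Approximate number of block rows the local database must track
    (assumes roughly one row per block; ceiling division)."""
    return -(-total_bytes // blocksize_bytes)

KB, MB, TB = 1024, 1024 ** 2, 1024 ** 4

# Hypothetical 1 TB media backup:
default = estimated_blocks(1 * TB, 100 * KB)  # assumed default 100 KB blocksize
larger = estimated_blocks(1 * TB, 1 * MB)     # hypothetical 1 MB blocksize

print(f"100 KB blocks: {default:,}")  # 10,737,419
print(f"  1 MB blocks: {larger:,}")   # 1,048,576
```

If that per-block-row assumption is even roughly right, a 10x larger blocksize would cut the block-tracking rows by about 10x, which is why I suspect it matters for database size.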