Question about sqlite file sizes

Since I don’t really know much about sqlite, does the size of each backup’s sqlite file have an impact on backup/analysis times? Given how often the program queries the database, I would think it matters.

So, assuming it does, what would be the best way to minimize the size of these files? From a quick look at the structure in a sqlite browser, the size seems to be related to 1) the number of files to be backed up, 2) the blocksize chunks, and 3) the individual *.zip.aes files that are uploaded to the backup destination. I can’t control the number of files, but would using a large --blocksize and a large --dblock-size keep the database smaller? I understand the tradeoffs that come with larger values, but most of my backups are relatively cold storage for large media files, so I can live with them.
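For a rough sense of scale, here is the back-of-envelope I have in mind. It is only a sketch: it assumes the local database keeps roughly one row per data block and one per remote dblock volume (a simplification of the real schema), and the 1 TB source size and 500 MB remote volume size are just example numbers.

```python
# Back-of-envelope: how --blocksize affects the number of rows the local
# database has to track. Assumes roughly one row per data block plus one
# row per remote dblock volume -- a simplification, not the real schema.

def estimate_rows(total_bytes, blocksize, dblock_size):
    blocks = total_bytes // blocksize      # one entry per data block
    volumes = total_bytes // dblock_size   # one entry per remote dblock file
    return blocks, volumes

TB = 1024 ** 4
DBLOCK = 500 * 1024 ** 2                   # example 500 MB remote volume size
for bs_kib in (100, 1024, 10240):          # 100 KiB, 1 MiB, 10 MiB block sizes
    blocks, volumes = estimate_rows(1 * TB, bs_kib * 1024, DBLOCK)
    print(f"blocksize {bs_kib:>6} KiB -> ~{blocks:,} block rows, ~{volumes:,} dblock rows")
```

If that assumption is even roughly right, a 10x larger --blocksize should cut the block-tracking rows by about 10x, at the cost of coarser deduplication.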

Hello, yes, you can control the blocksize and dblock size, but the default values have advantages.
The simplest way is to split a large backup job into multiple smaller jobs…

My largest sqlite DB is 7.5 GB … and it works :slight_smile:
(730 GB / 500,000 files / 730 versions)

I have a fix that reduces the size of the database by compacting the path table so that long path strings are not stored multiple times (a rough sketch of the idea is below).

I will merge it in once I fix the speed slowdown in the current canary build.
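To illustrate the general idea (this is a generic sketch, not the actual Duplicati schema or the code in the fix): store each directory prefix once in its own table and have file rows reference it by id, so a long common prefix is not repeated for every file and every version.

```python
# Generic sketch of path-table compaction: keep directory prefixes in a
# separate table and reference them by id instead of repeating the full
# path string in every row. Not the real Duplicati schema.
import os
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE PathPrefix (
    ID     INTEGER PRIMARY KEY,
    Prefix TEXT NOT NULL UNIQUE
);
CREATE TABLE FileEntry (
    ID       INTEGER PRIMARY KEY,
    PrefixID INTEGER NOT NULL REFERENCES PathPrefix(ID),
    Name     TEXT NOT NULL
);
""")

def add_path(path):
    prefix, name = os.path.split(path)
    con.execute("INSERT OR IGNORE INTO PathPrefix (Prefix) VALUES (?)", (prefix,))
    (prefix_id,) = con.execute(
        "SELECT ID FROM PathPrefix WHERE Prefix = ?", (prefix,)).fetchone()
    con.execute("INSERT INTO FileEntry (PrefixID, Name) VALUES (?, ?)", (prefix_id, name))

# Many files under the same deep directory share one prefix row.
for i in range(3):
    add_path(f"/media/photos/2019/summer/vacation/IMG_{i:04}.jpg")

# Full paths can still be reconstructed with a join.
rows = con.execute("""
    SELECT p.Prefix || '/' || f.Name
    FROM FileEntry f JOIN PathPrefix p ON p.ID = f.PrefixID
""").fetchall()
print(rows)
```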
