I tried to find a simple answer (at least one simple enough for me), but had no luck…
I do understand that restore points are simply snapshots of the backed-up data at a given point in time (@kenkendk pointed this out in another topic).
So when retention kicks in, snapshots that do not match the retention scheme will be deleted.
This happens very fast. But what happens to the actual backed-up data?
Are the files still there (in the .zip.aes files)? Do those files get cleaned up too, and if so, when?
Does this happen when Duplicati decides it's time to compact?
It happens very fast because the retention policy relies on auto-compacting to deal with the actual files.
In essence, the retention policy just deletes one dlist file for each restore point it needs to remove. These are usually only 5-50 MB in size.
The dlist files reference blocks scattered across multiple dblock files, so you can't simply delete entire dblock files, as that would impact other snapshots.
So the retention policy just leaves all the blocks in place. Later, Duplicati will see that some volumes consist entirely of "deleted" blocks and will delete those dblock files outright. Or it will see that several dblock files are mostly deleted blocks, so it downloads them, strips out the deleted blocks, and combines the remainder into new, full dblocks.
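To make the two compacting cases concrete, here is a minimal sketch of that decision logic. This is illustrative only, not Duplicati's actual implementation: the function name and the 25% waste cutoff are my own (Duplicati exposes a configurable threshold for this via its advanced options).

```python
# Illustrative sketch of compact planning, NOT Duplicati's real code.
# volumes: {dblock_name: set of block ids stored in that volume}
# live_blocks: block ids still referenced by at least one remaining dlist
def plan_compacting(volumes, live_blocks, waste_threshold=0.25):
    delete, repack = [], []
    for name, blocks in volumes.items():
        live = blocks & live_blocks
        if not live:
            # Every block in this volume is dead: drop the whole file.
            delete.append(name)
        elif 1 - len(live) / len(blocks) >= waste_threshold:
            # Mostly dead: download, strip dead blocks, merge survivors
            # into new full dblocks.
            repack.append(name)
    return delete, repack

volumes = {
    "dblock-1": {1, 2, 3, 4},   # only block 1 still referenced
    "dblock-2": {5, 6, 7, 8},   # nothing referenced anymore
    "dblock-3": {9, 10},        # fully live
}
live = {1, 9, 10}
print(plan_compacting(volumes, live))
```

Here `dblock-2` can be deleted without a download, while `dblock-1` (75% waste) is a repack candidate and `dblock-3` is left alone.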
Thank you very much for that answer!
So the complete retention process consists of two steps: first, clean up the database/snapshots; second, clean up the dblock/backup files.
One more question: I have a SQLite DB for one job that is over 2 GB in size. That job includes many files, which I imagine explains the size.
Does the number of versions kept for that job also have an impact on the size of the DB?
I had over 200 versions in this job and now, after adding retention, I'm down to 46. But I did not see any relevant change in the DB size…
The number of versions matters, but by far the greatest impact on the size is how many files and how many dblocks you have. So if the compacting didn't remove anything, the DB may not shrink much.
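There is also a SQLite-specific reason the file may not shrink even when rows are deleted: SQLite keeps freed pages for reuse and only returns space to the filesystem when the database is vacuumed. The throwaway demo below shows this with a scratch database (it does not touch any Duplicati files); if I remember correctly, Duplicati also has an advanced option to run a vacuum after operations.

```python
import os
import sqlite3
import tempfile

# Scratch database to show that DELETE alone does not shrink a SQLite file.
path = os.path.join(tempfile.mkdtemp(), "demo.sqlite")
con = sqlite3.connect(path)
con.execute("CREATE TABLE t (data BLOB)")
con.executemany("INSERT INTO t VALUES (?)", [(b"x" * 4096,)] * 1000)
con.commit()
before = os.path.getsize(path)

con.execute("DELETE FROM t")
con.commit()
after_delete = os.path.getsize(path)   # file size stays roughly the same

con.execute("VACUUM")                  # rewrites the file, reclaiming space
after_vacuum = os.path.getsize(path)
con.close()

print(before, after_delete, after_vacuum)
```

So even after retention removes versions, you may only see the 2 GB file shrink once a vacuum is run.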
Now I'm satisfied.
Thanks a lot.