If you still recall after some time has passed, could you please describe the test method that indicated this? Testing the 22.214.171.124 canary here, only dblock files whose own waste exceeded the threshold setting (default 25%) were compacted; a file with only about 9% waste was left unchanged.

The test used zip files I had lying around: I backed up 1 MB and 10 MB files, then 2 MB and 20 MB, then 3 MB and 15 MB. I then deleted the 1, 20, and 15 MB files, set the retention to 1 version, and backed up again. This produced waste of 1/11, 20/22, and 15/18 MB for the three dblocks, yet only the two above the default 25% threshold were compacted. Duplicati messages said:
2018-11-06 21:43:12 -05 - [Verbose-Duplicati.Library.Main.Database.LocalDeleteDatabase-FullyDeletableCount]: Found 0 fully deletable volume(s)
2018-11-06 21:43:12 -05 - [Verbose-Duplicati.Library.Main.Database.LocalDeleteDatabase-SmallVolumeCount]: Found 1 small volumes(s) with a total size of 511 bytes
2018-11-06 21:43:12 -05 - [Verbose-Duplicati.Library.Main.Database.LocalDeleteDatabase-WastedSpaceVolumes]: Found 2 volume(s) with a total of 68.01% wasted space (34.91 MB of 51.33 MB)
2018-11-06 21:43:12 -05 - [Information-Duplicati.Library.Main.Database.LocalDeleteDatabase-CompactReason]: Compacting because there is 68.01% wasted space and the limit is 25%
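The arithmetic behind those log lines can be checked with the nominal file sizes from the test. This is a back-of-envelope sketch, not Duplicati code; the actual dblock sizes differ slightly from these round MB figures (compression, headers), which is why the logged 68.01% doesn't match exactly:

```python
THRESHOLD = 25  # Duplicati's default --threshold, in percent

# (wasted MB, total MB) per dblock after deleting the 1, 20, and 15 MB files
volumes = [(1, 11), (20, 22), (15, 18)]

for wasted, total in volumes:
    pct = 100.0 * wasted / total
    print(f"{wasted}/{total} MB -> {pct:.1f}% wasted, "
          f"{'compacted' if pct > THRESHOLD else 'left alone'}")

# Aggregate waste across the volumes flagged for compaction,
# taken relative to the total size of all scanned volumes:
flagged_waste = sum(w for w, t in volumes if 100.0 * w / t > THRESHOLD)
total_size = sum(t for _, t in volumes)
print(f"aggregate: {100.0 * flagged_waste / total_size:.1f}% wasted")
```

This gives roughly 9%, 91%, and 83% per volume, and an aggregate of about 68.6% (35 of 51 MB), in line with the logged "68.01% wasted space (34.91 MB of 51.33 MB)".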
A quick peek at the source suggests that a list of dblock files individually exceeding the waste threshold is built first, then the waste is computed as the sum of their waste (the amount a compact would reclaim), and finally that sum divided by the total volume size is compared against the same threshold. This seems supported by the rough numbers above, and by the --threshold documentation stating dual use.
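In pseudocode form, that reading of the source looks roughly like the sketch below. This is my paraphrase of the logic, not Duplicati's actual C# code, and the names are illustrative:

```python
def should_compact(volumes, threshold_pct=25):
    """Decide whether a compact run fires.

    volumes: list of (wasted_bytes, total_bytes), one entry per dblock.
    threshold_pct: the --threshold setting, used twice ("dual use").
    """
    if not volumes:
        return False
    # Step 1: collect only the volumes whose own waste exceeds the threshold
    wasted_volumes = [(w, t) for w, t in volumes
                      if 100.0 * w / t > threshold_pct]
    # Step 2: sum the waste a compact of those volumes would reclaim
    reclaimable = sum(w for w, _ in wasted_volumes)
    # Step 3: compare aggregate reclaimable waste against the total size
    # of all volumes, reusing the same threshold a second time
    total = sum(t for _, t in volumes)
    return 100.0 * reclaimable / total > threshold_pct
```

Plugging in the test's rough numbers, `should_compact([(1, 11), (20, 22), (15, 18)])` returns True (35/51 ≈ 68.6% > 25%), while the 9%-waste volume alone would never trigger it, matching the observed behavior.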
Having a small threshold means frequent compaction, but each run may not gain back much from any given file. Having a large threshold increases the chance of good per-file gains, but the compact run gets huge when it finally fires…
Or so I speculate. I haven’t worked with this much. This post is mainly to question the compact-all-files claim.