In terms of compacting, the files that get read would not be the latest ones (the latest would be the newly written files).
There has to be enough churn (e.g. deletion of old versions) to create wasted space and trigger a compact.
Maybe the really old files come from more stable source data, so there's no wasted space to compact.
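
As a rough illustration of that idea (a minimal sketch, not Duplicati's actual code; the volume names, sizes, and the 25% threshold are assumptions for the example), a volume only becomes a compact candidate once deleted versions have left enough unused space in it:

```python
WASTE_THRESHOLD = 0.25  # assumed fraction of wasted space that triggers a compact

def compact_candidates(volumes):
    """Return volumes whose wasted (no longer referenced) space exceeds the threshold."""
    return [
        v for v in volumes
        if v["wasted_bytes"] / v["total_bytes"] > WASTE_THRESHOLD
    ]

volumes = [
    # Old volume from stable source files: nothing was deleted, so no waste.
    {"name": "dblock-old-stable", "total_bytes": 50_000_000, "wasted_bytes": 0},
    # Volume where deleted versions released many blocks: eligible for compact.
    {"name": "dblock-churned",    "total_bytes": 50_000_000, "wasted_bytes": 20_000_000},
]

print([v["name"] for v in compact_candidates(volumes)])  # ['dblock-churned']
```

So under that assumption, the stable old volume never gets rewritten, while the churned one does.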
Backup Test block selection logic describes that. I'd have thought the post-backup test would pick up new files, but maybe those happen to be good. It sounds pretty odd, but maybe a pattern will emerge later…
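
Purely as a hypothetical sketch of why I'd expect new files to show up early (this is not the actual selection logic from that topic; the `pick_test_samples` helper and `last_verified` field are made up for illustration), a strategy that prefers never-verified files would test new uploads first:

```python
def pick_test_samples(files, samples=1):
    """Prefer never-verified files, then the least recently verified ones."""
    ordered = sorted(
        files,
        key=lambda f: (f["last_verified"] is not None, f["last_verified"] or ""),
    )
    return ordered[:samples]

files = [
    {"name": "dblock-new",   "last_verified": None},          # just uploaded
    {"name": "dblock-jan",   "last_verified": "2024-01-05"},
    {"name": "dblock-march", "last_verified": "2024-03-20"},
]

print([f["name"] for f in pick_test_samples(files, samples=2)])
# ['dblock-new', 'dblock-jan']
```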
Developers sometimes work there too. Duplicati has many technologies, so it’s a lot of ground to cover.
Being willing to give something a shot is great though.
Sure. That would make any pattern easier to see without having to look at the logs (which do show this).