Well, the next run should clean up the temp files. Of course there’s no way to deal with temp files if the process is killed, but the next run should take care of leftovers if the program is working correctly.
But back to the compaction process. I just tested it, and it seems that auto-compact also compacts ALL files that could be compacted, even a little bit. I would actually prefer a model where only files above a per-file waste limit are compacted. This would make the compaction process more efficient, because compacting files with very little to reclaim is highly inefficient. As far as I can see, there’s little to gain from doing “perfect” compaction compared to compacting only the worst offenders.
What if there were two separate parameters: compact when overall expired data exceeds X% of the total, and only compact files with more than Y% wasted space. That would even allow tuning the thresholds independently. This is also one way of limiting the time compaction takes, because as stated, compacting files with very little to reclaim is the most time- and bandwidth-intensive and most wasteful step of the compaction process.
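To illustrate, here’s a minimal sketch of the two-threshold policy I mean. The function name, the threshold values, and the data shape are all made up for illustration; they’re not the tool’s actual parameters:

```python
# Hypothetical sketch of a two-threshold compaction policy.
# files: list of (total_size, wasted_bytes) tuples, one per file.

def select_files_for_compaction(files, overall_threshold=0.10, per_file_threshold=0.25):
    total = sum(size for size, _ in files)
    wasted = sum(w for _, w in files)
    # Gate 1: only start compaction at all when the overall wasted
    # ratio exceeds the repository-wide limit.
    if total == 0 or wasted / total < overall_threshold:
        return []
    # Gate 2: only touch the worst offenders; rewriting a file with
    # little waste costs nearly as much I/O as it reclaims.
    return [i for i, (size, w) in enumerate(files)
            if size > 0 and w / size >= per_file_threshold]
```

With both gates, a file that is only 5% garbage is simply left alone until it decays further, which is exactly the saving in time and bandwidth argued for above.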
Anyone have any thoughts about this?