Since pretty much all destinations have finite storage, would it make sense to have a “keep until” setting for maximum destination size?
Once reached (approached?), Duplicati would automatically start pruning the oldest revisions until back within the keep size limit.
Things to consider include:
Is it a hard limit (if the next upload would hit the limit, stop the backup and prune first?) or a soft limit (finish the backup even if it goes over, then prune until back under)?
Should there be a warning notification if the source size grows past x% of the destination limit? (Tough to estimate given compression variance.) Maybe a minimum-revisions limit is better (alert if we drop below X revisions after pruning).
status emails (and the UI?) should include a note of the largest number of revisions stored
conflicts with other parameters (e.g. auto-cleanup)
side effects could include pruning deleted files out of the archive, plus excessive bandwidth usage while doing iterative pruning
Of course, there doesn’t have to be any automatic action; it could just be a notification / email saying “hey, your destination is bigger than the preset limit - you should manually prune”.
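To make the pruning idea concrete, here’s a rough sketch of how the soft-limit behavior could work (finish the backup, then delete the oldest backup versions until the destination fits, warning if we hit a minimum-revisions floor). This is purely illustrative Python, not anything Duplicati actually has today; the option names, sizes, and version list are all made up.

```python
# Purely hypothetical sketch of a "max destination size" retention rule.
# None of these names correspond to real Duplicati options or APIs.

MAX_DEST_BYTES = 500 * 1024**3   # imagined user-configured destination cap
MIN_VERSIONS = 3                 # imagined "alert if we drop below X revisions" floor

def prune_to_fit(versions, dest_bytes):
    """Soft-limit behavior: runs after the backup finishes.

    versions   -- list of (version_id, reclaimable_bytes), oldest first
    dest_bytes -- current size of the destination
    Returns the new size, the surviving versions, and any warnings.
    """
    warnings = []
    remaining = list(versions)

    # Delete the oldest versions until we are back under the cap,
    # but never drop below the minimum-revisions floor.
    while dest_bytes > MAX_DEST_BYTES and len(remaining) > MIN_VERSIONS:
        version_id, reclaimable = remaining.pop(0)
        dest_bytes -= reclaimable
        print(f"pruned version {version_id}; destination now {dest_bytes} bytes")

    if dest_bytes > MAX_DEST_BYTES:
        warnings.append("hit the minimum-revisions floor but still over the size limit")
    if len(remaining) <= MIN_VERSIONS:
        warnings.append(f"only {len(remaining)} backup versions remain after pruning")
    return dest_bytes, remaining, warnings
```

A hard limit would run the same loop *before* uploading, using an estimate of the new version’s size instead of the measured post-backup size, which is exactly where the compression-variance guessing game comes in.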
Will the revision count be level? Or could I end up with 12 versions of source code files (small) and only 2 versions of .PSTs?
Hard/soft limits should be selectable. Different cloud storage services show all kinds of behavior at their limits, ranging from failed operations to an extra $1.29 on your bill.
How does Duplicati fail now, if a destination – say a local HDD – runs out of space?
I imagined it being backup based, so all revisions in the oldest backup get purged (unless a currently non-existent “keep deleted files” setting is in play)
selectable hard/soft limits make sense, but I’m assuming the default would be the safest option, a hard limit
if a destination runs out of space, Duplicati throws its tiny little hands up in the air and says “I quit”