Making sure Duplicati only adds files to remote storage, no updates

There may be some confusion here, along with actual challenges. Some are easier to bypass than others.

The dindex files are small. Each is associated with a large (default 50 MB) dblock file that would cost more.
What you see should only happen at a backup when your source data turns over enough to trigger a compact. Compact records its statistics in the backup log, so you can match the log up to what you are seeing; the Complete log section gives detailed destination statistics. Duplicati almost never updates a destination file, but reducing wasted space requires deletes (unless you never delete, which also raises cost).
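
As a rough sketch of when compact kicks in: to my understanding it runs when wasted space at the destination crosses the `--threshold` option (25% by default, and there are other triggers too, such as too many small files). The numbers below are illustrative only, not taken from any real backup:

```python
# Rough sketch of Duplicati's wasted-space trigger for compact.
# Assumption: compact runs when wasted space exceeds the --threshold
# option (25% by default); numbers here are illustrative only.

def compact_would_run(total_bytes: int, wasted_bytes: int,
                      threshold_percent: float = 25.0) -> bool:
    """Return True if wasted space crosses the compact threshold."""
    if total_bytes == 0:
        return False
    return 100.0 * wasted_bytes / total_bytes >= threshold_percent

# Example: 10 GB at the destination, 3 GB of it wasted by deleted versions.
total = 10 * 1024**3
wasted = 3 * 1024**3
print(compact_would_run(total, wasted))  # True: 30% >= 25%
```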

The main problem with cold storage, as opposed to hot storage that merely has price incentives, is the need to tell the storage which of the Duplicati files you want restored before Duplicati can access them, and you don't know which files those will be. Disaster recovery of everything could bring everything out of cold storage, I suppose (and then wait until the restore is finished).
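
For the bring-everything-back case, something like the sketch below could issue restore requests for every object. It uses boto3 against an S3-compatible archive tier (for example Glacier); the bucket name, restore window, and tier are made-up, and Oracle's native API differs, though the mechanics are similar:

```python
# Hypothetical disaster-recovery sketch: ask the archive tier to restore
# every object so Duplicati can read them. Uses boto3 against an
# S3-compatible endpoint; bucket name, days, and tier are assumptions.
import boto3

s3 = boto3.client("s3")
bucket = "my-duplicati-backup"  # hypothetical bucket name

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        s3.restore_object(
            Bucket=bucket,
            Key=obj["Key"],
            RestoreRequest={
                "Days": 7,  # keep the restored copies around for a week
                "GlacierJobParameters": {"Tier": "Bulk"},
            },
        )
# Restores are asynchronous: poll head_object until the objects are
# readable before pointing Duplicati at the bucket.
```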

I don’t use cold storage myself, because I like to do occasional restores short of full disaster recovery, but preferences may vary.

https://www.oracle.com/cloud/storage/archive-storage/faq/

It’s described as a minimum retention period, but it could also be viewed as a prorated early deletion penalty.
Cost optimisation on Wasabi has some thoughts on at least getting some value out of minimum retention periods. However, if source turnover is rapid, you’ll be paying for 90 days either way: either 90 days of storage, or an early deletion fee covering whatever remains of the 90 days. The only way to never delete is to let storage grow unbounded, which is bad.
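
As a back-of-envelope illustration of why early deletion doesn't save anything under a 90-day minimum (the per-GB price below is a made-up placeholder, not Wasabi's or Oracle's actual rate):

```python
# Back-of-envelope minimum-retention math. The price per GB-month is a
# made-up placeholder; the point is that deleting before the 90-day
# minimum doesn't save money, it just relabels the charge.
PRICE_PER_GB_MONTH = 0.0059   # hypothetical rate, USD
MIN_RETENTION_DAYS = 90

def cost_gb(days_stored: int) -> float:
    """Charge for 1 GB: actual storage plus any early-deletion remainder."""
    billed_days = max(days_stored, MIN_RETENTION_DAYS)
    return PRICE_PER_GB_MONTH * billed_days / 30

print(cost_gb(30))   # deleted at day 30: still billed for 90 days
print(cost_gb(90))   # kept the full 90 days: same total
print(cost_gb(120))  # kept longer: billed for actual time stored
```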
