Troubleshooting - increasing amount of data in Backblaze

Hi,
I use Duplicati 2.0.7.1_beta_2023-05-25 on Win10 with Backblaze storage (B2 Cloud Storage Buckets). It replicates my data every day.
The source amount of data is slightly above 300 GB and the Backblaze bucket is below 300 GB. I’d like to decrease the amount of $$$ I spend each month on Backblaze. First, in the Duplicati settings, I decreased the number of versions of files it keeps on Backblaze, but it didn’t really help,

so now I’m wondering about the Lifecycle Settings of the Backblaze bucket. Duplicati saves my data in Backblaze in 100 GB chunks, and I can see there are files there even from 2020. What if I put, let’s say, a year for “Keep prior versions for this number of days”? How would that influence my backup data on Backblaze?
Some files could have been stored there since 2020 and, if they were not modified, they may not have been replicated for the last two years?

please advise,
thanks

Sure, but there are limits to how little space a backup can take, despite all the processing.

The backup is already smaller than its source. How much smaller are you trying to get it?

The topic title raises a different question: size growth is normal as versions accumulate.
Deleting versions does not by itself lower usage much, but the wasted space it creates can
trigger Compacting files at the backend, if one of the factors that starts a compact is present.
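
For reference, how many versions get deleted is controlled by the backup’s retention options. A quick sketch of the relevant advanced options (the option names are real Duplicati options; the values are just examples, not recommendations):

```
--keep-versions=10                     # keep only the newest 10 backup versions
--keep-time=1Y                         # delete versions older than one year
--retention-policy=1W:1D,4W:1W,12M:1M  # keep fewer versions as they age
```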

The 100 GB remote volume size might interfere. That’s 2000 times the default 50 MB. Why? See Choosing sizes in Duplicati.

Features

Incremental backups
Duplicati performs a full backup initially. Afterwards, Duplicati updates the initial backup by adding the changed data only. That means, if only tiny parts of a huge file have changed, only those tiny parts are added to the backup. This saves time and space and the backup size usually grows slowly.

and it also means that your new backups rely on parts of older backups that are still relevant to them, so letting a bucket lifecycle rule delete old files by age would break the backup, because Duplicati still needs them.

“Replicated” doesn’t seem to fit, but “kept” may, for example if this was the original backup with lots of files that are still kept. Duplicati won’t re-upload the older blocks, but will just reference them in new backups.

Deduplication
Duplicati analyzes the content of the files and stores data blocks. Due to that, Duplicati will find duplicate files and similar content and store them only once in the backup. As Duplicati analyzes the content of files it can handle situations very well if files and folders are moved or renamed. As the content does not change, the next backup will be tiny.
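
To make both quoted features concrete, here is a minimal, hypothetical Python sketch of block-based deduplication (not Duplicati’s actual code, just the idea): files are split into fixed-size blocks, each block is stored under its hash, and a backup version is just a list of hashes, so unchanged or duplicated blocks cost nothing extra.

```python
import hashlib

BLOCK_SIZE = 100 * 1024  # Duplicati's default block size is 100 KB

block_store = {}  # hash -> block bytes; stands in for the dblock files on B2

def backup(data: bytes) -> list:
    """Split data into fixed-size blocks, store only unseen blocks,
    and return the list of block hashes that reconstructs this version."""
    version = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in block_store:  # deduplication: identical blocks stored once
            block_store[digest] = block
        version.append(digest)
    return version

v1 = backup(b"x" * 300_000)   # first backup uploads everything
count = len(block_store)
v2 = backup(b"x" * 300_000)   # unchanged source: nothing new is uploaded
assert len(block_store) == count
```

Note that deleting v1 frees nothing while v2 still references the same blocks, which is why dblock files from 2020 can legitimately still sit in your bucket.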

If you want to explore increasing compact aggressiveness, you can adjust the settings. However, B2 charges for downloads (it does give a higher free allowance after the recent pricing changes), so there’s a tradeoff between letting usage build up and paying download fees to try to reduce storage fees…
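
As far as I know, these are the advanced options that control when a compact starts (the values shown only illustrate making it trigger sooner):

```
--threshold=10             # compact when 10% of the backup is wasted space (default 25)
--small-file-max-count=10  # compact when this many undersized volumes exist (default 20)
--no-auto-compact=true     # the opposite direction: never compact automatically
```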

The large remote volume size will slow down restores and rack up download fees, as the article explains. Adjusting it is probably possible, but a sudden large adjustment may produce a major burst of compacting.
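
For the record, the setting behind that is the Remote volume size, i.e. the --dblock-size advanced option; as far as I know, a change only applies to newly written volumes, and the existing 100 GB ones get repackaged when a compact rewrites them:

```
--dblock-size=50MB   # back to the default, down from the current 100GB
```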

Keeping current settings, you can open About → Show log → Live → Verbose and push the Compact button, which won’t actually compact unless it’s needed; meanwhile, that log will show statistics on the situation.