Cost optimisation on wasabi

Hi,

I’m about to start using Duplicati with Wasabi. I was wondering if there is an optimal configuration to reduce costs, particularly “Timed Deleted Storage” fees. The files need to exist for 90 days in order to avoid fees.

The plan is to use this setup as a daily backup of about 300 GB.

thanks

Hi Din,

I am using Wasabi with the “Smart backup retention” preset in Duplicati and pay a little over $8/month for ~1.6 TB.

People with Wasabi can correct me, but it sounds like it’s not exactly a fee, but a 90-day minimum charge that applies whether or not the file is deleted sooner. It does sound like an early deletion bills the remaining days at deletion time, because the amount is known then, instead of spreading it out over the following months, but the total is the same: any deletion within 90 days costs the same overall. The question is whether there’s any waste in early deletes.
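
The “total amount is the same” arithmetic can be sketched numerically. This is a minimal model, assuming the $5.99/TB/month rate mentioned later in the thread and 30-day billing months; Wasabi’s actual proration may differ:

```python
# Illustrative model of Wasabi's 90-day minimum storage charge.
# RATE is an assumption based on the $5.99/TB/month figure in this thread.
RATE_PER_GB_MONTH = 5.99 / 1024  # $/GB/month
MIN_DAYS = 90

def storage_cost(gb, days):
    """Active storage cost for `gb` over `days`, assuming 30-day months."""
    return gb * RATE_PER_GB_MONTH * days / 30

def cost_with_early_delete(gb, kept_days):
    """Active charge for the days kept, plus a Timed Deleted Storage
    charge covering the remaining days of the 90-day minimum."""
    active = storage_cost(gb, kept_days)
    deleted = storage_cost(gb, max(0, MIN_DAYS - kept_days))
    return active + deleted

# Deleting 100 GB after 30 days costs the same total as keeping it 90 days,
# just billed as a lump sum at deletion time instead of spread over months.
print(round(cost_with_early_delete(100, 30), 2))  # 1.75
print(round(storage_cost(100, MIN_DAYS), 2))      # 1.75
```

Under this model the only difference is *when* the charge lands on the invoice, not how large it is.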

I suppose one might say one wastes potential restores. If this were a simplistic backup system where each backup version was a full set of files, there would be no point in deleting a set early, because the amount due is the same.

Duplicati doesn’t work like that. It uploads the changes relative to the previous version, but over time any of that data can become obsolete.
Compacting files at the backend runs when obsolete data (wasted space) accumulates, and it has some controls. This is not time-based, but Backup retention on Options screen 5 can be, if you choose that.

Those (plus backup frequency which you’ve already decided on) are the controls you have to tune things.

Probably your goal should be to minimize total spend, but it’s complicated and can be counter-intuitive. However, the no-reason-to-delete-versions-early principle possibly holds true for Duplicati, just as it normally would.

You also don’t want hyper-active compacting: it causes a lot of download and upload (which you aren’t charged for), but as files get compacted, the old ones get deleted, possibly invoking early-deletion charges.

If you never compact, total space use will grow forever, which will also wind up getting expensive.
Possibly setting no-auto-compact and manually using the Compact now button may cost less than automatic compacting.

Because you are likely below the 1 TB minimum for Timed Active Storage, that variety is basically free, so you might seek benefit by using more of it, unlike someone who is past 1 TB and so billed for additional use…

I suppose in the below-1-TB case the deleted-storage charge does look like more of an extra cost beyond the $5.99 active 1 TB fee.


Thank you both for your help.

Sorry, I’m still a little confused.

I too was recently hit with an extra fee from Wasabi for files that were deleted before 90-days. Based on @ts678’s reply, I’m not sure exactly what to do (if anything).

It would be nice if Duplicati could follow Smart backup retention but with a “keep files for a minimum number of days” option, i.e. Smart retention plus a minimum retention period.

Thanks for any assistance.

./s4z

Welcome to the forum @s4zando

Let’s get some specifics about your backup size and rate of change from your backup log files in the Complete log version. What are usual values for “BytesUploaded” and “KnownFileSize” (can vary)?
What Backup retention are you using, and how often are backups run?

This would say whether you’re in the “doesn’t matter” range that I’m theorizing exists when past 1 TB. There’s also a low-use range where the Wasabi 1 TB minimum timed active storage charge will hurt.
There are other storage vendors without a minimum. Backblaze has none, but charges for download.

How does Wasabi’s monthly minimum storage charge work? shows why I think 1 TB is a special size.
How does Wasabi’s minimum storage duration policy work? shows why past 1 TB you pay either way.
Anybody who actually uses Wasabi (or is willing to ask them), feel free to correct, if I misunderstand…


Smart backup retention is identical to a Custom backup retention of 1W:1D,4W:1W,12M:1M, which progressively thins out backups as versions age by deleting versions that are too close to each other. However, the earlier deletions, e.g. down to one per 7 days between the end of the first week and the end of the fourth, probably don’t benefit you because you are stuck with a 90-day minimum retention. You could try using 90D:U,12M:1M to say unlimited versions for 90 days, but lots of versions can also slow down Duplicati.
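
The retention string grammar can be illustrated with a small helper (hypothetical, not part of Duplicati) that expands a policy string into readable rules:

```python
# Hypothetical helper to expand a Duplicati retention-policy string such as
# "1W:1D,4W:1W,12M:1M" into plain English. Not part of Duplicati itself.
UNITS = {"D": "day", "W": "week", "M": "month", "Y": "year"}

def describe(policy):
    rules = []
    for rule in policy.split(","):
        timeframe, interval = rule.split(":")
        frame = f"{timeframe[:-1]} {UNITS[timeframe[-1]]}(s)"
        if interval == "U":  # "U" means keep every version in the timeframe
            rules.append(f"for {frame}: keep every version")
        else:
            gap = f"{interval[:-1]} {UNITS[interval[-1]]}(s)"
            rules.append(f"for {frame}: keep one version per {gap}")
    return rules

for line in describe("90D:U,12M:1M"):
    print(line)
```

For the 90D:U,12M:1M suggestion above, this prints “keep every version” for the first 90 days, then one per month out to 12 months.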

The problem won’t be how long Duplicati hangs onto its backups; it’s the files that Duplicati creates on the S3 endpoint that are the issue. The moment the objects are deleted, Wasabi starts its deletion-clock timer.

Duplicati creates its dblock, dindex, and other AES files on the S3 endpoint every time a backup runs. Now, even if you set compaction (I’m not even sure you can schedule it) to occur every 91 days, that still won’t get you out of the hole with Wasabi’s 90-day minimum storage charge, because yesterday’s backup created files on the endpoint that are “1 day old”. Following me so far?

Good. So a few days go by and Duplicati feels like compacting. It downloads everything it needs from the endpoint, compacts it on the client computer, and tidies up the file count. It then uploads that new group of files back to the S3 endpoint and deletes the old files that were already there. (Maybe I have the order wrong, but that’s what seems to happen from watching the folders.) This deletion of files is where the 90-day issue comes into play. You can’t delete files Duplicati made yesterday on the S3 endpoint, or Wasabi is gonna say: hey, whoa, no, those files have to stay put for a minimum of 90 days before deletion if you don’t want to incur a fee. (91 days to be safe with DST if backups run around 2:00 am.)

The only thing I can think of is to let Wasabi hang onto those files until 91 days have passed, then have Wasabi remove them. Duplicati will have uploaded a new dataset by then(?) and technically performs a deletion on the S3 endpoint of old files it doesn’t need any longer.

I’ll have to dig around in Wasabi and see if the immutable setting is applicable. This is highly dependent on the order in which Duplicati deletes its files: deletion needs to be last in this chain, and Duplicati must not have or want anything to do with files it has supposedly already deleted.

What would that get you? If Wasabi charges a minimum of 90 day storage, deleting them early vs waiting until 90 days makes no difference. Disallowing deletion on the remote side when Duplicati wants to delete will just cause Duplicati to throw errors.

Wasabi charges a fee based on the number of GB deleted per month before the 90 days are up. Their wording is as confusing as can be, but it’ll cost you more than $5.99 + sales tax per month if you delete files earlier than 90 days.

Here’s an easier explanation: Wasabi Storage Review: How Good is this Object Storage Provider?

Still, based on this, they don’t have a real-time calculator to see at the end of each day (with projections over the next 90 days) what the current running cost, or an approximation of it, will be. You just get a surprise bill at the end of the month. They tell you how to calculate it, but are you really going to sit there and manually calculate it for every single file you deleted?

I think it’s pretty clear:

… a Timed Deleted Storage charge equal to the storage charge for the remaining days will apply …

But yeah you may be right that you get billed all at once instead of it being spread out.

Have you considered B2? It’s hot storage like Wasabi at very close to the same cost, but there is no minimum object storage time. Unlike Wasabi, though, you pay some amount for egress.

I think the issue is that an early deletion adds actual new cost, while keeping the data may add no extra cost.
I will detail that more below, and maybe I’ll hear whether I understood it correctly after the references have been seen.

Backblaze B2 might also be lower cost because there’s no minimum. Are you well below the 1 TB point used in that example? That example confirms my theory that small backups are hit harder by this, because the minimum 1 TB active storage charge makes some active usage “no extra charge”.

TL;DR Setting backup retention to match Wasabi 90-day retention can help keep storage charge “active”.

If for some reason you want to stay on Wasabi but seek to pay $5.99 per month for a below-1-TB backup, you can encourage backup data to not be declared wasted space by not deleting versions before 90 days.


1W:1D,4W:1W,12M:1M is the “Smart backup retention” setting. What do you use now on the Options screen?
You could adapt the above to 90D:U,12M:1M to try to hang onto more backups. This will increase Wasabi size (hopefully not past 1 TB, or the costs go up that way instead) and may make the database bigger and slower.

By volume, most of the files are probably dblock files which contain the changes found in the source files. Backup will typically upload some number of full dblock files, then a partially filled dblock, then a dlist file.

Deleting a version deletes its dlist and makes some of its dblock file contents wasted space. Compact looks at that, but it triggers on either too much wasted space (by percentage) or too many small dblocks. The small-dblock case takes more work to subdue, but the files are small, so any new ones that get deleted won’t hurt much.

If you really want to study hard on this, look at The COMPACT command, no-auto-compact for doing manual compacts, and auto-compact-interval. Setting the interval to 90 days won’t be sufficient, though, because compact may still touch newer data whenever the 90-day interval lets it run. Adding 90-day retention will help minimize data deletions.
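
As a rough mental model of the wasted-space trigger, here is a simplification in Python. The 25% figure reflects my understanding of the threshold option’s default; the real compact logic also considers the count of small dblock volumes, which is omitted here:

```python
# Simplified model of Duplicati's wasted-space compact trigger.
# 25 reflects the --threshold option's default as I understand it; the
# real compact also triggers on too many small dblock files (omitted).
THRESHOLD_PCT = 25

def should_compact(total_gb, wasted_gb, threshold_pct=THRESHOLD_PCT):
    """True when obsolete (wasted) data reaches the threshold share of storage."""
    if total_gb <= 0:
        return False
    return 100 * wasted_gb / total_gb >= threshold_pct

print(should_compact(300, 90))  # 30% waste: compact would run
print(should_compact(300, 60))  # 20% waste: not yet
```

The point for Wasabi costs: each compact run deletes old volumes, and any volume younger than 90 days that gets deleted incurs the Timed Deleted Storage charge.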

Interesting, I hadn’t considered the minimum monthly charge, and maybe this early-deletion penalty might push someone over it.

The cited article’s section on “How Wasabi’s pricing works” seems to say that it’s a separate charge; however, the example was a less-than-1-TB case. My theory is that above 1 TB, early deletion makes no difference in monthly expense: you pay for either deleted storage or active storage.

Below 1 TB, keeping the storage active just eats further into the 1 TB minimum (we hope), producing no extra charges, while an early deletion causes an additional charge for deleted storage.
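
The theory can be put in numbers. A sketch assuming the $5.99 flat 1 TB minimum and linear pricing above it; these are illustrative assumptions, not Wasabi’s exact invoice math:

```python
# Sketch of the below-1-TB billing theory: active storage under 1 TB is
# covered by the flat minimum, while Timed Deleted Storage bills on top.
# All figures are illustrative assumptions, not Wasabi's actual invoicing.
MIN_MONTHLY = 5.99               # $ for the 1 TB active-storage minimum
RATE_PER_GB_MONTH = 5.99 / 1024  # $/GB/month

def monthly_bill(active_gb, deleted_gb_months=0.0):
    """Active charge (never below the 1 TB minimum) plus any Timed
    Deleted Storage, given in GB-months of early deletion."""
    active = max(MIN_MONTHLY, active_gb * RATE_PER_GB_MONTH)
    return round(active + deleted_gb_months * RATE_PER_GB_MONTH, 2)

print(monthly_bill(300))                         # 5.99: under the minimum
print(monthly_bill(300, deleted_gb_months=200))  # 7.16: penalty is extra
```

In this model, a 300 GB backup pays the same $5.99 whether it stores 300 GB or 600 GB, so holding versions longer is free, while every early-deleted GB-month shows up as an added line item.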

Or so the theory has been. Basically, the 1 TB active minimum can freely soak up some active use,
but it won’t absorb deleted storage charges, which arrive as an additional charge. See the pricing section in:

@s4zando does your backup fall in the sub-1-TB range where I theorize the billing quirk may occur?
OP @din had a smaller backup, and I’m guessing @patg84 does, but I’m hoping to get the details.