Making sure Duplicati only adds files to remote storage, no updates

Hi,

I am planning to use Oracle OCI Object Storage (S3 compatible) as a remote file location for my Duplicati backup files.

The cheapest pricing tier (Archive Object Storage, equivalent to AWS Glacier) charges only about $0.0026 per GB per month (roughly $2.60 per month for 1 TB stored), which is about 10% of the normal standard tier.

However, there is a penalty when deleting or modifying files within 90 days of their creation/storage.

Is there a way to configure Duplicati to only create new files at each backup, as opposed to modifying existing files? From what I can see, Duplicati is creating and deleting a few index files at each backup, possibly making the Glacier/Archive S3 remote location more expensive.
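
For reference, here is the rough arithmetic I'm working from (the standard-tier rate below is just derived from the 10% figure above, so it may not be exact):

```python
# Rough monthly cost comparison, based on the rates quoted above.
# Assumption: standard tier is ~10x the archive rate, per the "10%" figure.
ARCHIVE_PER_GB = 0.0026                 # USD per GB-month (Archive Object Storage)
STANDARD_PER_GB = ARCHIVE_PER_GB * 10   # ~0.026 USD per GB-month (derived, not quoted)

def monthly_cost(size_gb, rate_per_gb):
    """Return the monthly storage cost in USD for size_gb gigabytes."""
    return size_gb * rate_per_gb

backup_gb = 1000  # 1 TB, counted in decimal GB as the pricing page does
print(f"Archive:  ${monthly_cost(backup_gb, ARCHIVE_PER_GB):.2f}/month")   # ~$2.60
print(f"Standard: ${monthly_cost(backup_gb, STANDARD_PER_GB):.2f}/month")  # ~$26.00
```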

Thanks

Good day @TedTomato,

You may as well just write a batch file to copy the files each day; Duplicati isn’t going to provide you with any benefits over a script/direct copy in this situation. I suppose if you only wanted Duplicati to run a backup once every 91 days or so, you could just leave things at default, but I doubt that’s often enough.

Duplicati is not made for one-time backups; it’s meant for daily use. It works incrementally, only uploading what changed since the last run, which doesn’t help if you only run it once. If you forced it to make an entirely new set of files each time (which I do not think is possible), it would treat each run as a new backup and have to re-upload all the data over again, surely taking longer to complete than a straight copy. In that situation you would be creating a new backup each time, keeping track of all sorts of file data during the backup process only to never use it again.

If you search around the forums you’ll find others who have tried using Glacier or other penalty-facing cold storage, and from what I recall it’s generally not worth it. Maybe you could back up to another location daily, then once every three months script/manually copy the backup files into OCI to archive them.
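
If you go the script route, something like this could handle that periodic copy. This is only a minimal sketch using boto3 against OCI's S3-compatible API; the endpoint format, bucket name, credentials, and local folder are placeholders to adjust, and it assumes you've already set up an S3-style customer secret key in OCI.

```python
import os
import boto3

# Minimal sketch: copy Duplicati's local backup files into an OCI bucket
# via the S3-compatible API. Endpoint/bucket/paths below are placeholders.
OCI_ENDPOINT = "https://<namespace>.compat.objectstorage.<region>.oraclecloud.com"
BUCKET = "duplicati-archive"             # hypothetical bucket name
LOCAL_BACKUP_DIR = "/backups/duplicati"  # wherever the primary backup lives

s3 = boto3.client(
    "s3",
    endpoint_url=OCI_ENDPOINT,
    aws_access_key_id=os.environ["OCI_ACCESS_KEY"],   # customer secret key pair
    aws_secret_access_key=os.environ["OCI_SECRET_KEY"],
)

# List what's already in the bucket so we only upload new files
# (archive-tier objects shouldn't be rewritten inside the 90-day window).
existing = set()
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        existing.add(obj["Key"])

for name in sorted(os.listdir(LOCAL_BACKUP_DIR)):
    if name in existing:
        continue  # already archived; leave it alone to avoid the early-delete penalty
    path = os.path.join(LOCAL_BACKUP_DIR, name)
    if os.path.isfile(path):
        print(f"uploading {name}")
        s3.upload_file(path, BUCKET, name)
```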

Have you run a test backup of your data to see how big it will be after deduplication? Some of my users see massive space savings from the deduplication engine; maybe you don’t need to be at the Archive Object Storage level to stay on budget.

There may be some confusion here, along with actual challenges. Some are easier to bypass than others.

The dindex files are small. Each is associated with a large (default 50 MB) dblock file that would cost more.
What you see should only happen at a backup if your source data is turning over enough to trigger a compact, which puts its statistics in the backup log; you can look there to match the log up to what you are seeing. The Complete log section gives detailed destination statistics. Duplicati almost never updates a destination file, however reducing wasted space requires deletes (unless you want to never delete, which also raises cost).
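
If the goal is simply to stop Duplicati from ever deleting at the destination, the knobs to look at are turning off automatic compacting and not setting any retention limit. Here is a rough sketch of driving the CLI that way from a script; the storage URL, paths, and passphrase are placeholders, and the option names should be verified against your Duplicati version.

```python
import subprocess

# Sketch: run a Duplicati CLI backup with automatic compacting disabled, so
# existing destination files are never repacked or deleted by compact.
# The storage URL, source path, and passphrase are placeholders, not real values;
# check Duplicati's S3 URL format and option names for your version.
cmd = [
    "duplicati-cli", "backup",
    "s3://my-bucket/duplicati?s3-server-name=<oci-s3-compat-endpoint>",  # placeholder
    "/home/me/data",                 # source to back up
    "--passphrase=CHANGE-ME",
    "--no-auto-compact=true",        # don't repack/delete dblock+dindex files
    # No --keep-versions / --keep-time / --retention-policy: keep every version,
    # which is the only way to guarantee no deletes (storage grows unbounded).
]
subprocess.run(cmd, check=True)
```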

The main problem with cold storage, as opposed to hot storage that just has price incentives, is the need to tell the storage which of the Duplicati files you want it to restore before Duplicati can access them. You don’t know which files. Disaster recovery of everything could bring everything out of cold storage, I suppose (and then wait until the restore finishes).
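
For what that restore step looks like in practice, here is a rough sketch using boto3 to request a temporary restore of every object before a disaster-recovery run. Whether OCI's S3 compatibility layer accepts RestoreObject exactly like AWS does is an assumption to verify, and the endpoint and bucket names are placeholders.

```python
import boto3

# Sketch: ask the archive tier to temporarily restore every Duplicati file in
# the bucket so a restore/disaster-recovery run can read them. Assumes the
# provider honours the S3 RestoreObject call; endpoint and bucket are placeholders.
s3 = boto3.client("s3", endpoint_url="https://<s3-compatible-endpoint>")
BUCKET = "duplicati-archive"

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        s3.restore_object(
            Bucket=BUCKET,
            Key=obj["Key"],
            RestoreRequest={"Days": 7},  # keep the restored copies readable for a week
        )
        print("restore requested:", obj["Key"])

# The objects only become readable after the provider finishes the restore,
# which can take hours; Duplicati can't be pointed at them until then.
```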

I don’t use it because I like to restore occasionally, short of disaster recovery, but preferences might vary.

https://www.oracle.com/cloud/storage/archive-storage/faq/

It’s described as a minimum retention period, but could also be viewed as a prorated early deletion penalty.
Cost optimisation on wasabi has some thoughts on at least getting some value out of minimum retentions; however, if source turnover is rapid, you’ll just be paying for 90 days of storage instead of an early deletion charge covering the remainder of those 90 days. The only way to never delete is to let storage grow unbounded, which is bad.
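
To make that concrete, here is the back-of-the-envelope arithmetic with a prorated early deletion charge (rates taken from earlier in the thread; the exact proration model is an assumption, so check the provider's FAQ):

```python
# If deletion charges are prorated to the 90-day minimum, deleting early costs
# about the same as simply keeping the data for the full 90 days.
RATE = 0.0026          # USD per GB-month (archive tier, from above)
MIN_DAYS = 90
size_gb = 1000         # 1 TB of Duplicati dblock/dindex files
kept_days = 30         # say a compact deletes them after one month

storage_paid = size_gb * RATE * (kept_days / 30)                   # ~$2.60
early_delete_fee = size_gb * RATE * ((MIN_DAYS - kept_days) / 30)  # ~$5.20
full_period = size_gb * RATE * (MIN_DAYS / 30)                     # ~$7.80

print(f"storage for {kept_days} days: ${storage_paid:.2f}")
print(f"early deletion charge:       ${early_delete_fee:.2f}")
print(f"total: ${storage_paid + early_delete_fee:.2f} "
      f"(same as {MIN_DAYS} days of storage: ${full_period:.2f})")
```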

Thank you both @JimboJones and @ts678 for your comprehensive replies.

Sounds like Duplicati and cold storage options like Oracle's Archive Object Storage or other Glacier offerings are not going to be a good fit, unless the files are stored elsewhere first (a primary backup destination, like Backblaze B2) and then copied periodically to cold storage (as a secondary backup destination).

Hey @TedTomato, check the links in this post. I was trying to do exactly what you’re trying to do now with Oracle, although I used Wasabi. They have the same thing in place: $5.99/month per TB, with a 90-day minimum retention policy for normal accounts. If you prepay for larger quantities upfront they’ll knock this down to 30 days. There are no egress fees; however, you can only download as much data in a month as you store with them.

i.e. you store 400 GB with them, so that means you can only have up to 400 GB worth of egress data in a month before they take notice and say, “whoa bud, your egress was 900 GB every month for 3 months” lol.
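
A quick way to picture that limit, using the numbers from the example above:

```python
# Wasabi's "free egress" is informally capped at roughly your stored volume per month.
stored_gb = 400
egress_gb_per_month = [900, 900, 900]  # the "whoa bud" scenario above

for month, egress in enumerate(egress_gb_per_month, start=1):
    over = max(0, egress - stored_gb)
    status = "over the informal limit" if over else "fine"
    print(f"month {month}: egress {egress} GB vs stored {stored_gb} GB -> {status} (+{over} GB)")
```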

Check out #8 here: https://wasabi.com/paygo-pricing-faq/

Here are a few posts that are worth reading (all of them created before I fully understood exactly how Duplicati runs under the hood):

https://forum.duplicati.com/t/denied-delete-requests-for-good-reason-pls-read-2nd-3rd-posts/13427/5
https://forum.duplicati.com/t/deleting-unwanted-files-how-and-which-files-based-on-date/13395
https://forum.duplicati.com/t/cost-optimisation-on-wasabi/11803

…and yes I still use Duplicati and Wasabi.