I’ve just started using Duplicati in a linuxserver.io Docker container, with the web GUI.
What are the best practices for backing up to Amazon Glacier? I found this page, but it’s from 2013.
As far as I can tell:
- Set the destination to back up to an S3 bucket
- Don’t set the storage class to “GLACIER”; it does nothing.
- In the AWS management console, set up a lifecycle rule for the bucket to move objects with names starting with “duplicati-b” (but not others) to Glacier after a day or two.
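For reference, a lifecycle rule like that could be expressed as something like the following (a sketch only: the rule ID and two-day delay are placeholders, and the “duplicati-b” prefix filter assumes Duplicati’s default remote file naming, so only the dblock data files get transitioned while the dlist/dindex files stay in standard storage):

```json
{
  "Rules": [
    {
      "ID": "duplicati-dblocks-to-glacier",
      "Status": "Enabled",
      "Filter": { "Prefix": "duplicati-b" },
      "Transitions": [
        { "Days": 2, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
```

You could apply it with `aws s3api put-bucket-lifecycle-configuration --bucket <your-bucket> --lifecycle-configuration file://rule.json`, or just build the equivalent rule in the console.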
Is this correct? Does it work well? Are there any other special options I need to add?
To the best of my knowledge, Glacier isn’t ideal because Duplicati needs to read the remote files for verification, compaction, etc., and Glacier retrieval is too slow to work well with that. There are probably configurations within Duplicati that would allow you to use Glacier, but that is a significantly more involved process.
Well, that’s what I’m asking about. What are those configurations?
I just did this the other day using that old page you found. It worked ok. See Duplicati and S3 Glacier - Features - Duplicati
Thanks. Are there any problems with Duplicati trying to read the glaciered S3 objects?
Glacier doesn’t allow immediate read access to the objects. As such, Duplicati fails when it tries testing the files. I personally advise against using archive tier storage. (You can get hot storage at about the same cost as Glacier from other cloud providers such as Backblaze B2 or Wasabi.) Google does have an archive tier that allows for near-immediate access, so that’s another option.
If you really, really want to use Glacier, then you should set these options:
--no-auto-compact=true
--no-backend-verification=true
These should get your backups to run without error.
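In the web GUI those go under the job’s Advanced options; if you were running from the command line instead, the invocation might look roughly like this (a sketch only — the destination URL, credentials, and source path are placeholders, not a tested configuration):

```shell
# Sketch: disable compaction and backend verification so Duplicati
# never needs to read back the glaciered dblock files.
duplicati-cli backup \
  "s3://my-bucket/backups?aws-access-key-id=KEY&aws-secret-access-key=SECRET" \
  /data \
  --no-auto-compact=true \
  --no-backend-verification=true
```

The trade-off is real: with verification off, you won’t find out about a corrupted or missing remote file until you try to restore.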
But note that I don’t know how restores even work, since object availability on archive tier storage is measured in hours. I imagine Duplicati will fail on restores unless you move the objects to a higher tier. You should definitely test this thoroughly.
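For what it’s worth, you can thaw glaciered objects ahead of a restore with the AWS CLI. A sketch (bucket name and object key are placeholders; in practice you’d loop over every duplicati-b file the restore needs):

```shell
# Request temporary retrieval of one glaciered object for 7 days.
aws s3api restore-object \
  --bucket my-bucket \
  --key duplicati-b1234.dblock.zip.aes \
  --restore-request '{"Days": 7, "GlacierJobParameters": {"Tier": "Standard"}}'

# Check progress; the Restore field shows ongoing-request status.
# Standard-tier retrieval typically takes a few hours.
aws s3api head-object --bucket my-bucket --key duplicati-b1234.dblock.zip.aes
```

Once the objects show as restored, Duplicati should be able to read them like normal S3 objects for the retrieval window.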
That’s unfortunate. The 2013 article says “The new storage engine for Duplicati 2.0 was designed with support for Amazon Glacier in mind,” but I guess it turned out to be harder than they thought?
Dunno… I guess it works if you use the two options I mentioned, but I don’t use it myself. I am curious how restores work when object availability can take up to 12 hours. Maybe someone who has done it can share their experience.