The PR mentioned will essentially allow you to test files that are in hot S3 storage while avoiding reads of files that are in Glacier.
With that PR it is possible to continue with some testing, but any modification will fail, so --no-auto-compact and infinite retention are still needed.
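For anyone curious what "avoid reading files that are in Glacier" amounts to in practice, here is a minimal boto3 sketch of that kind of check; the bucket and prefix names are hypothetical, and this is only an illustration, not the tool's actual code:

```python
import boto3

# Hypothetical bucket/prefix, for illustration only.
BUCKET = "my-backup-bucket"
PREFIX = "snapshots/"

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

readable, frozen = [], []
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        # list_objects_v2 reports each object's StorageClass. Hot-tier
        # objects can be read immediately; GLACIER/DEEP_ARCHIVE ones
        # would fail any read until they are restored.
        if obj.get("StorageClass", "STANDARD") in ("GLACIER", "DEEP_ARCHIVE"):
            frozen.append(obj["Key"])
        else:
            readable.append(obj["Key"])

print(f"{len(readable)} objects testable, {len(frozen)} skipped (in Glacier)")
```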
Thanks. I guess we'd want to set expectations appropriately (in the manual) if this gets publicity.
Restore, or anything else that needs Glacier reads, still has no way of knowing which files are required.
Maybe it's too much trouble to work with individual files, so in an emergency, just unfreeze them all?
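If it comes to that, a blanket restore is not hard to script against a real bucket; a sketch, assuming a hypothetical bucket name and a 7-day restore window on the cheapest (Bulk) tier:

```python
import boto3

BUCKET = "my-backup-bucket"  # hypothetical

s3 = boto3.client("s3")
for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        if obj.get("StorageClass") not in ("GLACIER", "DEEP_ARCHIVE"):
            continue  # already in hot storage, nothing to unfreeze
        # Request a temporary restored copy: kept 7 days, Bulk tier.
        s3.restore_object(
            Bucket=BUCKET,
            Key=obj["Key"],
            RestoreRequest={"Days": 7, "GlacierJobParameters": {"Tier": "Bulk"}},
        )
```

Note that Bulk restores can take hours to complete, and you pay for both the restore requests and the temporary hot copies, so "unfreeze everything" is very much an emergency measure.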
So some small help, at least. I hope it also stops the delete attempt after a file upload error occurs; immutability attempts get caught in that. I don't know whether Glacier could leave partial-file clutter.
Perhaps the schemes that use S3 lifecycle rules can dodge those problems better than a direct Glacier store?
It is no problem producing a list of files to unfreeze, but we need a good way to supply this list.
Having a user click on thousands of files is borderline useless, so we need a good way to either request the restore automatically or provide the list in a format that works with Glacier (or AWS tools).
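A plain newline-delimited key list would probably be enough, since anything downstream can consume it. A sketch using the same restore call as above, assuming a hypothetical keys.txt with one object key per line:

```python
import boto3
from botocore.exceptions import ClientError

BUCKET = "my-backup-bucket"  # hypothetical
s3 = boto3.client("s3")

# One object key per line, as the backup tool could emit it (hypothetical file).
with open("keys.txt") as f:
    keys = [line.strip() for line in f if line.strip()]

for key in keys:
    try:
        s3.restore_object(
            Bucket=BUCKET,
            Key=key,
            RestoreRequest={"Days": 7, "GlacierJobParameters": {"Tier": "Bulk"}},
        )
    except ClientError as e:
        # RestoreAlreadyInProgress just means a request is already pending.
        if e.response["Error"]["Code"] != "RestoreAlreadyInProgress":
            raise
```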
This is not directly supported. The idea is to have a small buffer (say 7 days for a once-a-day backup) before files are moved to Glacier. This will enable such edge cases to be handled in hot storage before the lifecycle rules kick in.
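For reference, a buffer like that is a one-time lifecycle rule on the bucket; a boto3 sketch, again with a hypothetical bucket name:

```python
import boto3

s3 = boto3.client("s3")
# Keep new objects in hot storage for 7 days, then transition to Glacier.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-backup-bucket",  # hypothetical
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "freeze-after-7-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                "Transitions": [{"Days": 7, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```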
The next canary build will include a synchronization tool that supports copying files between storages, so you could have a local disk destination and then sync that to Glacier. This allows all the operations (compact, retention deletion, etc.) to run locally while all data is also stored in Glacier, but it requires two copies (so you get 3-2-1 backup storage).
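Until that tool lands, even a plain upload pass can push the second copy into Glacier directly. A minimal sketch (hypothetical paths, not the upcoming sync tool) that mirrors a local backup directory into S3 with the GLACIER storage class:

```python
import os
import boto3

BUCKET = "my-backup-bucket"   # hypothetical
LOCAL_ROOT = "/backups/repo"  # hypothetical local destination

s3 = boto3.client("s3")
for dirpath, _dirs, files in os.walk(LOCAL_ROOT):
    for name in files:
        path = os.path.join(dirpath, name)
        key = os.path.relpath(path, LOCAL_ROOT).replace(os.sep, "/")
        # Write straight into Glacier; the local copy stays the hot,
        # mutable one where compaction and retention deletion happen.
        s3.upload_file(path, BUCKET, key, ExtraArgs={"StorageClass": "GLACIER"})
```

A real sync would also need to skip already-uploaded files and propagate deletions, which is presumably what the canary tool will handle.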