Scaleway Glacier backup

Hi,
Has anyone had success performing backups to Scaleway Glacier?
Thanks

I have tested backup and restore; both finish with errors, but the restored files seem to be OK.

Errors: 2

2023-04-26 15:38:16 +02 - [Error-Duplicati.Library.Main.Operation.RestoreHandler-PatchingFailed]: Failed to patch with remote file: "duplicati-b539c46726f974231b5e8f25fa10d9a40.dblock.zip", message: Access Denied.
2023-04-26 15:38:59 +02 - [Error-Duplicati.Library.Main.Operation.RestoreHandler-PatchingFailed]: Failed to patch with remote file: "duplicati-b25ef225ff48f4defa4da1c59bff70f05.dblock.zip", message: Access Denied.

Cold/archive storage tiers don’t work very well with Duplicati. You can potentially get it to work if you set a couple of options as mentioned here:

But I wouldn’t recommend it. Restores will be problematic: you’ll need to change the storage tier of the objects before you can restore successfully.
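If you do attempt it, the thawing step can be scripted against Scaleway’s S3-compatible API before you kick off the restore in Duplicati. A minimal sketch with boto3 (the bucket name and endpoint are placeholders, and the GLACIER storage-class check is an assumption about how Scaleway labels archived objects in listings):

```python
import boto3

# A minimal sketch, assuming Scaleway's S3-compatible API.
# Bucket name, endpoint, and region are hypothetical; credentials
# come from your environment/config as usual.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.fr-par.scw.cloud",
    region_name="fr-par",
)

bucket = "my-duplicati-backups"  # hypothetical

# Ask the provider to thaw every archived object before attempting
# a restore in Duplicati.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        if obj.get("StorageClass") == "GLACIER":
            s3.restore_object(
                Bucket=bucket,
                Key=obj["Key"],
                RestoreRequest={"Days": 1},
            )
            print("restore requested:", obj["Key"])
```

Note that thawing is asynchronous, so you still have to wait until the objects are readable again before Duplicati can download them.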

The Google Archive storage tier seems to be unique in that it allows near-instant access to objects. That should eliminate the main functional issues Duplicati usually has with archive tiers. Note that you should still think about the costs associated with downloads or early deletion when considering archive-tier storage.
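As a rough illustration of those two cost factors (all rates below are made-up placeholders, not Google’s actual prices; check the current price sheet):

```python
# Back-of-envelope archive-tier costs. All rates are illustrative
# placeholders, not actual provider pricing.
stored_gb = 500

retrieval_rate = 0.05   # $/GB retrieval fee (assumed)
egress_rate = 0.12      # $/GB network egress (assumed)
full_restore = stored_gb * (retrieval_rate + egress_rate)
print(f"Full restore of {stored_gb} GB ~= ${full_restore:.2f}")

# Archive tiers typically bill a minimum storage duration, so deleting
# (or compacting) objects early still pays for the full period.
storage_rate = 0.0012   # $/GB/month (assumed)
min_months = 12         # e.g. a 365-day minimum (assumed)
early_delete_floor = stored_gb * storage_rate * min_months
print(f"Minimum-duration charge ~= ${early_delete_floor:.2f}")
```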

I looked into it, and it seems you are right: Google Archive is a better solution than Glacier.
What I am unable to determine is the Data Retrieval Size, and how it is calculated in the Google Archive price.

Meaning what exactly? The amount of egress (which they appear to charge as network usage) for some particular restore or maintenance action? That can range from the entire backup down to a fairly unknowable amount.

To restore files, Duplicati has to figure out which dblock files contain the needed blocks and then go fetch them. That calculation is necessary, but it currently reports only the count of volumes, not their names, e.g. for cold storage. This makes it hard to hand-pick files to thaw in Glacier, and also hard to know the total size of those files:
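If you want to work it out yourself, the information is in the job’s local SQLite database. A rough sketch, assuming the classic schema (Remotevolume, Block, BlocksetEntry, and the File view); table and column names can differ between Duplicati versions, and the database path and path filter are hypothetical:

```python
import sqlite3

# Rough sketch: list the dblock volumes (and their sizes) needed to
# restore files matching a path pattern. Schema names follow the
# classic Duplicati layout and may differ in your version.
db = sqlite3.connect("/path/to/duplicati-job.sqlite")  # hypothetical

query = """
SELECT DISTINCT rv.Name, rv.Size
FROM File f
JOIN BlocksetEntry be ON be.BlocksetID = f.BlocksetID
JOIN Block b          ON b.ID = be.BlockID
JOIN Remotevolume rv  ON rv.ID = b.VolumeID
WHERE f.Path LIKE ? AND rv.Type = 'Blocks'
"""

total = 0
for name, size in db.execute(query, ("%/Documents/%",)):
    print(f"{name}  ({size} bytes)")
    total += size
print(f"Estimated download for this restore: {total / 2**20:.1f} MiB")
```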

EDIT:

References

How the backup process works
How the restore process works
Choosing sizes in Duplicati

The downside of using larger volumes is seen when restoring files. As Duplicati cannot read data from inside the volumes, it needs to download the entire remote volume before it can extract the desired data. If a file is split across many remote volumes, e.g. due to updates, this requires a large number of downloads to extract the chunks.
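To make that concrete, a quick back-of-envelope with made-up numbers:

```python
# Illustrative only: restoring one file whose blocks ended up spread
# across several remote volumes forces whole-volume downloads.
volumes_touched = 5  # volumes holding the file's blocks (assumed)

for volume_size_mb in (50, 1024):  # small vs. large dblock sizes
    print(f"{volume_size_mb} MB volumes -> "
          f"~{volume_size_mb * volumes_touched} MB downloaded")
```

The download cost scales with the volume size, not the file size, which is why large volumes hurt restores from slow or pay-per-retrieval storage.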