Hi folks, new here and testing my backups by attempting to restore.
I understand that there is limited support for S3-IA, Glacier, and Glacier DA, but I am committed to testing this use case on AWS. Also, this is not my primary backup; it is intended to sit for long periods and is only there for restoring after a significant disaster.
I also understand that objects in those AWS storage classes are not immediately available for restore the way they are with S3 Standard and other cloud targets. However, as mentioned, I still plan to test this out.
Thus far, my backups have worked as expected and the catalog appears accurate for all of my versions. As expected, there are errors (e.g. “not valid for the object’s storage class” and “Could not find file”) when attempting a restore via the Duplicati interface, because AWS requires a separate, time-limited restore request before objects in the Glacier DA storage class can be read.
While testing, I requested an AWS restore of the four Duplicati files needed to restore my target file. Hours later (when the AWS restore of the archived objects had completed), I was able to restore my backup files and directories as expected.
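For reference, the manual restore request I issued was roughly equivalent to the boto3 sketch below; the bucket name and object keys are placeholders for my test backup, and the 7 days / Bulk tier values are just what I happened to pick:

```python
# Rough sketch of the manual restore request issued for the archived objects.
# Bucket and key names below are placeholders, not my real backup files.
import boto3

s3 = boto3.client("s3")
BUCKET = "my-backup-bucket"
KEYS = [
    "duplicati-XXXXXXXXTXXXXXXZ.dlist.zip.aes",   # placeholder file names
    "duplicati-iXXXXXXXX.dindex.zip.aes",
    "duplicati-bXXXXXXXX.dblock.zip.aes",
]

for key in KEYS:
    # Ask S3 to temporarily restore the Glacier DA object for 7 days.
    # Bulk is the cheapest tier for Deep Archive; Standard is faster but costs more.
    s3.restore_object(
        Bucket=BUCKET,
        Key=key,
        RestoreRequest={"Days": 7, "GlacierJobParameters": {"Tier": "Bulk"}},
    )

# Later, check progress: the Restore header flips from
# ongoing-request="true" to ongoing-request="false" when the copy is ready.
for key in KEYS:
    head = s3.head_object(Bucket=BUCKET, Key=key)
    print(key, head.get("Restore", "no restore info"))
```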
So, since restores at this level will be infrequent at most, this may be a viable option for me. There is still one issue, significant for large archives, which leaves me with two questions:
- If I need to restore “file.xyz”, is there a way to determine which dlist, dindex, and associated dblock(s) need to be manually restored from AWS to complete the restore, without pulling down ALL of the backup files? (A rough guess at how that lookup might work is sketched after these questions.)
- The AWS CLI or API can be hit with a command to request the restore of the needed backup files. If I can determine which Duplicati files are needed to complete my restore, it would be easy to script this manually (along the lines of the sketch below), which leads me to believe it could be reasonably easy to integrate these restore requests into each backup's restore process. Any hope for something like this in future releases?
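To illustrate the kind of script I have in mind, here is a rough combined sketch for both questions: look up which remote dblock volumes hold a given file's data blocks in the local job database, then issue the S3 restore requests for them. The table and column names (File, BlocksetEntry, Block, Remotevolume) are only my guess from poking at the local .sqlite file and may differ between Duplicati versions, and this ignores metadata blocks as well as the dlist/dindex files for the chosen version, so it is a starting point rather than a working tool:

```python
# Sketch only: find the remote dblock volumes that (I think) hold the data
# blocks for one file, then request an S3 restore for each of them.
# Table/column names are guesses from inspecting my local job database
# and may not match current Duplicati versions.
import sqlite3
import boto3

DB_PATH = "/path/to/duplicati-job-database.sqlite"   # placeholder
BUCKET = "my-backup-bucket"                           # placeholder
PREFIX = ""                                           # placeholder remote folder/prefix
TARGET = "%file.xyz"                                  # file to restore (SQL LIKE pattern)

con = sqlite3.connect(DB_PATH)
rows = con.execute(
    """
    SELECT DISTINCT rv.Name
    FROM File f
    JOIN BlocksetEntry be ON be.BlocksetID = f.BlocksetID
    JOIN Block b          ON b.ID = be.BlockID
    JOIN Remotevolume rv  ON rv.ID = b.VolumeID
    WHERE f.Path LIKE ?
    """,
    (TARGET,),
).fetchall()
volumes = [name for (name,) in rows]
print("dblock volumes needed:", volumes)

# Issue the restore requests for those volumes (same call as the manual test
# above); the dlist/dindex files for the chosen version would need to be
# added to this list as well.
s3 = boto3.client("s3")
for name in volumes:
    s3.restore_object(
        Bucket=BUCKET,
        Key=PREFIX + name,
        RestoreRequest={"Days": 7, "GlacierJobParameters": {"Tier": "Bulk"}},
    )
```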