Hello and welcome!
-
I think automatic tiering between the frequent and infrequent access tiers would be fine, but in my opinion the archive tiers should be avoided. Some people here have experimented with Glacier and gotten backups to work, but I don't know how their restores go when object availability is measured in hours.
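If you go the lifecycle-rule route, the sketch below shows the kind of rule I mean: transition objects to Standard-IA after 30 days and simply never define a transition to Glacier or Deep Archive. The bucket name, rule ID, and the 30-day threshold are placeholders, not a recommendation.

```
# Hypothetical lifecycle rule: move objects to Standard-IA after 30 days,
# with no archive-tier transitions defined. Bucket and rule ID are placeholders.
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-duplicati-bucket \
  --lifecycle-configuration '{
    "Rules": [
      {
        "ID": "duplicati-to-infrequent-access",
        "Status": "Enabled",
        "Filter": { "Prefix": "" },
        "Transitions": [
          { "Days": 30, "StorageClass": "STANDARD_IA" }
        ]
      }
    ]
  }'
```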
-
Duplicati accesses (reads) the files in the bucket when you do a restore, run verifications, run compactions, recreate the database, and so on. Some of these functions can be disabled. Also, unless you're using unlimited retention, data blocks will eventually be pruned, so even older files in your bucket can end up holding 'wasted' space, which may trigger a compaction. You would still want to use --no-auto-compact if your goal is to avoid accessing lower tiers of storage.
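On the command line that looks roughly like the sketch below (bucket, prefix, and source path are placeholders, and credentials are omitted); the same option can be set in the web UI under the job's advanced options.

```
# Sketch only: bucket, prefix, and source path are placeholders; credentials omitted.
# --no-auto-compact stops Duplicati from downloading and rewriting dblock volumes
# to reclaim wasted space.
duplicati-cli backup "s3://my-duplicati-bucket/backups" /home/me/data \
  --no-auto-compact=true
```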
-
If you turn off verification and compaction, then yes, the files in your bucket will not be accessed after upload and will be moved to a lower tier per your lifecycle settings. But as mentioned above, archive tiers could be a serious issue: if you need to restore or recreate the database, it may not work correctly when the objects aren't immediately readable.
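As far as I know, these are the relevant options for a fully hands-off job; treat this as a sketch (same placeholders as above), and keep the archive-tier caveat in mind.

```
# Sketch of a "hands-off" job so remote volumes aren't read back after upload.
#   --no-auto-compact=true          never download/rewrite dblock volumes to reclaim space
#   --backup-test-samples=0         skip downloading random sample files after each backup
#   --no-backend-verification=true  skip comparing the local database to the remote file list
duplicati-cli backup "s3://my-duplicati-bucket/backups" /home/me/data \
  --no-auto-compact=true \
  --backup-test-samples=0 \
  --no-backend-verification=true
```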
-
The most recent files aren't accessed any more often than older files if you disable compaction and verification. With those disabled, once a file is placed in the bucket it won't be read again unless you do a restore, a database recreation, etc. If you are not using unlimited retention, Duplicati will delete files as data ages off. With compaction disabled, it can't delete a dblock file until all of the data referenced in that file has aged off.
-
Specific custom retention settings don’t really affect this with compacting disabled.
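They still control which backup versions get deleted, they just won't cause dblock files to be rewritten. For reference, the retention options look like this (the timeframes here are only an example):

```
# Example retention settings (values are arbitrary):
#   for 1 week keep one backup per day, for 4 weeks one per week, for 12 months one per month
--retention-policy="1W:1D,4W:1W,12M:1M"

# or the simpler variants:
--keep-time=6M        # delete backup versions older than 6 months
--keep-versions=10    # keep only the 10 most recent versions
```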
-
I don't think there's a way to have Duplicati test just the files that were last uploaded, but I could be wrong. The automatic verification chooses random files to test.
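The knob for that random testing is --backup-test-samples, which sets how many random sample sets get downloaded and checked after each backup; a quick sketch (bucket and prefix are placeholders again):

```
# Test 3 random sample sets after each backup instead of the default 1
--backup-test-samples=3

# or run a verification on demand against the remote files
duplicati-cli test "s3://my-duplicati-bucket/backups" 3
```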
If your goal is to reduce costs, I recommend checking out Backblaze B2 or Wasabi. They are both hot storage, are priced about the same as the AWS S3 Glacier tier, and you'll avoid all of the potential issues above.