It’s all part of the deduplication, compression, and encryption processes. Files will NOT be in native format on the back end (S3). But what you get from this is efficient storage (dedupe, compression), versioning, and retention.
For example, on one of my PCs I protect 35GB of data. I now have 152 backup versions, yet it takes only 69GB on the back end.
If you want direct replication of your data in native format, you can look at tools like rclone.
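For example (a sketch only - the remote name `s3remote` and the bucket are placeholders you would set up first with `rclone config`):

```bash
# One-way, native-format replication of a local tree to S3.
# Note: `sync` also deletes remote files that were removed locally;
# use `copy` instead if you never want deletions.
rclone sync /volume1/videos s3remote:my-backup-bucket/videos --progress
```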
Sorry for reviving an old thread - but I am missing the option to choose the storage class “Deep Archive”. With the latest Duplicati docker container (2.0.4.23_beta_2019-07-14), I cannot choose it.
I also tried on my Mac - no luck. What was also interesting to see is that not all regions are showing up (in my case, I am missing Stockholm).
Thank you for providing the link - I’ll read through it.
Ignoring the early unstable canary releases, a big .dll update is in v2.0.4.28-2.0.4.28_canary_2019-09-05, however it’s only been out three days, so it’s a bit of a risk. I’d also consider Duplicati + cold storage risky… Duplicati is intended for backup, not archiving, and it likes to interact with the storage to ensure things are OK.
If you go this route, please be sure you have a plan for restoring or for verifying the integrity of the backup.
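One way to do that (a sketch only - the CLI name varies by install, e.g. `duplicati-cli` on Linux, and the URL and passphrase are placeholders) is Duplicati’s `test` command, which downloads sample volumes and checks them against recorded hashes. Against cold storage, keep in mind that each download may first require a slow, possibly billable retrieval:

```bash
# Verify every remote volume of a backup ("all") against its recorded hash.
duplicati-cli test "s3://my-backup-bucket/duplicati" all --passphrase="my-secret"
```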
Thank you for the suggestion. My idea was to have disaster recovery in the cloud - files that only need to be accessed when the house is burning down or flooding… On the other hand, I would need access to some metadata to ensure that “new” files are added to the archive without having to upload the whole 10 TB again…
If you feel that Duplicati isn’t the right piece of software for that - what would you recommend?
You can dig through Glacier and other cold storage posts here and on the Internet. Opinions from here:
I don’t know if any backup programs give you any help in figuring out which files you need to get back from cold storage, or even make the request for you, or whether it’s always a manual process of analysis and retyping.
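For the manual route, at least the analysis part can be scripted (a sketch - the bucket name is a placeholder). The S3 API reports each object’s storage class, so you can list everything that would need a restore request first:

```bash
# List key and size of every object currently in the Deep Archive class.
aws s3api list-objects-v2 \
  --bucket my-backup-bucket \
  --query 'Contents[?StorageClass==`DEEP_ARCHIVE`].[Key,Size]' \
  --output table
```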
If it’s up to you, the fewer files the better, i.e. for disaster recovery you may prefer to do something like an image with Macrium Reflect Free to a USB hard drive, then upload and track the large images. Between image backups you can use Duplicati or something similar to back up to inexpensive hot storage such as Backblaze B2 or Wasabi, whose cost might not be unbearable for smaller amounts of data. Some people even try a hybrid solution, with older files backed up in a more economical way and newer files backed up on hot storage. Duplicati does not yet have built-in support for that, but there are scripting solutions in the forum if you’re interested.
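As a rough sketch of the hot-storage half (every name, credential, and option value below is a placeholder - check the Duplicati manual for your backend’s exact URL format):

```bash
# Back up the frequently changing files to Backblaze B2 (hot storage),
# keeping one version per day for a week, then one per month for a year.
duplicati-cli backup \
  "b2://my-bucket/duplicati?auth-username=ACCOUNT_ID&auth-password=APPLICATION_KEY" \
  /home/user/documents \
  --passphrase="my-secret" \
  --retention-policy="1W:1D,1Y:1M"
```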
One drawback of big images in cold storage is that downloading can take a long time. Keeping some local backup can help with that, provided it’s not where the same disaster is likely to take out the local backup…
EDIT: Is this lots of data or multiple computers? Do you prefer simple backups or more configurable ones?
It is simply a bunch of data (roughly 10TB) that I want to get back in case of emergency. These are my home videos, which are currently sitting on my NAS. And while my photos are duplicated to OneDrive and Adobe Cloud, I haven’t found a reasonable solution for the videos. If I place another NAS in my basement, I gain nothing if something happens to my house, and I am unable to put a secondary NAS somewhere else. Economically, roughly 10 USD/month (assuming I never touch the files) sounded better to me than an investment of roughly EUR 1,300 for a NAS and two HDDs…
Add support for “Glacier Deep Archive” storage class (Manu)
Have you looked at rclone to just clone whatever video file tree your NAS has (and never rearrange it…)?
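rclone can write straight into that storage class, for example (a sketch - the remote name and bucket are placeholders, and the flag assumes a reasonably current rclone with Deep Archive support):

```bash
# Clone the NAS video tree to S3 in native format, storing objects in Deep Archive.
# `copy` never deletes on the destination, which suits a write-once archive.
rclone copy /volume1/videos s3remote:my-backup-bucket/videos \
  --s3-storage-class DEEP_ARCHIVE --progress
```

Only new or changed files get uploaded on later runs, so the 10 TB goes up once.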
The typical backup program (or at least Duplicati) tries to do better than just copying files, with things like block-level deduplication, uploading only changes, compression, encryption, etc. Most of those are a poor fit for videos, which are typically already compressed and hugely different from each other - unless an exact copy exists, or a file tree is rearranged, in which case the new file would reuse all the blocks of the old one.
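You can see why with a quick experiment (illustrative only; 100 KiB roughly matches Duplicati’s default block size). Split two videos into fixed-size blocks and look for identical hashes - duplicated hashes are the only blocks deduplication could save, and two compressed videos will almost never share any:

```bash
# Hash 100 KiB blocks of two videos; any duplicated hash = a dedupe-able block.
split -b 102400 -a 4 video1.mp4 /tmp/v1_
split -b 102400 -a 4 video2.mp4 /tmp/v2_
md5sum /tmp/v1_* /tmp/v2_* | sort | uniq -D -w 32
```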
Assuming videos are usually written once and not changed, Duplicati plus cold storage fits somewhat better than it would a highly changing file environment, where cold storage would get in the way of recycling space. The disaster-recovery angle might also ease the pain of Duplicati’s not-human-friendly filenames, if asking for “all” of them is possible. Still, a simple clone might be all you need, and the simpler the software, the less that might go wrong.
For retrieval, if the time ever comes for that, you’ll probably still need to use the Amazon S3 console, since rclone seemingly doesn’t know how to ask for a restore from Glacier. I don’t keep track of software well enough to suggest something that might.
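The AWS CLI does know how to ask, for what it’s worth (placeholder names again; Bulk is the cheapest and slowest Deep Archive retrieval tier). A restore request creates a temporary readable copy for the number of days you specify:

```bash
# Request a 7-day Bulk restore of one object from Deep Archive,
# then check the Restore header to see when it becomes readable.
aws s3api restore-object \
  --bucket my-backup-bucket \
  --key videos/video1.mp4 \
  --restore-request '{"Days":7,"GlacierJobParameters":{"Tier":"Bulk"}}'
aws s3api head-object --bucket my-backup-bucket --key videos/video1.mp4
```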
Yes and no. Duplicati pulls the list of storage classes (though not the regions/hosts) from the S3 client library, not from the S3 server, so the dropdown only updates when the bundled library is updated (and maybe the latest one is recent enough).
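Since the storage class is just a string the client sends with each upload, you can confirm the bucket side works independently of any dropdown (bucket and key are placeholders):

```bash
# Upload one file straight into Deep Archive, bypassing any client-side class list.
aws s3 cp video.mp4 s3://my-backup-bucket/videos/video.mp4 --storage-class DEEP_ARCHIVE
```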