Hi @theFlash, welcome to the forum!
Yes and no. You are correct that for some users, when a problem arises it appears to be catastrophic or unrecoverable. However, different people have different ideas of what those terms mean.
For example, if your local database becomes corrupted it can be rebuilt from the remote data, but yes - that can take some time. While this is being improved in newer versions, there are no hard numbers to say by how much.
If your remote files become corrupted (say your destination was a USB drive and it got dropped) then you would likely run into a situation where you couldn’t do any more backups. This is by design, due to how Duplicati does backups.
As @drwtsn32 described, only changed parts of files are uploaded with each backup run. If you choose to restore a particular version of a file Duplicati will know which versions of which blocks to restore to rebuild that version of the file.
But if a very old block of a file is corrupted and that block was never changed, then EVERY version of that file will include that corrupted block. Adding another backup of that file on top of versions that depend on a corrupted block just results in another bad backup.
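To make the block-sharing idea concrete, here's a deliberately simplified sketch (tiny fixed-size blocks, a plain dict as the "destination" - not Duplicati's actual on-disk format). It shows how two versions of a file can share an unchanged block, so corrupting that one block damages every version that references it:

```python
import hashlib

BLOCK_SIZE = 4          # unrealistically small, just for illustration
block_store = {}        # hash -> block bytes (stand-in for the destination)

def backup(data: bytes) -> list:
    """Split data into blocks, 'upload' only unseen blocks,
    and return the list of block hashes for this version."""
    hashes = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        h = hashlib.sha256(block).hexdigest()
        block_store.setdefault(h, block)  # already stored? skip the upload
        hashes.append(h)
    return hashes

def restore(hashes: list) -> bytes:
    return b"".join(block_store[h] for h in hashes)

v1 = backup(b"AAAABBBB")   # stores blocks AAAA and BBBB
v2 = backup(b"AAAACCCC")   # AAAA is unchanged, so only CCCC is stored

# Corrupt the one block both versions share:
shared = (set(v1) & set(v2)).pop()
block_store[shared] = b"????"

print(restore(v1))  # b'????BBBB' - every version of the file is now damaged
print(restore(v2))  # b'????CCCC'
```

Only three blocks ever get stored for the two versions - that's the upload savings - but it's also exactly why one bad old block poisons the whole history of the file.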
To avoid that happening, Duplicati chose to disallow backups when corrupted files are found on the destination. Recovery from this issue is possible (by deleting the bad versions of the files) but not as smooth as we’d like. For some people that is enough of a reason to not use Duplicati.
But even when a backup is in a corrupted state and Duplicati won’t let it be written to, it usually CAN be restored from. Granted, the corrupted blocks will still be corrupted in the restored files - but Duplicati will restore everything it can, filling in the bad blocks with zeros.
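A rough sketch of that restore behavior (again simplified, not Duplicati's exact byte-for-byte behavior): any block whose content no longer matches its recorded hash gets replaced with zeros, and everything else is recovered as-is.

```python
import hashlib

BLOCK_SIZE = 4  # toy size, matching nothing real

def restore(block_hashes, block_store):
    """Rebuild a file from its block hashes, zero-filling any block
    that is missing or fails its hash check."""
    out = bytearray()
    for h in block_hashes:
        block = block_store.get(h)
        if block is None or hashlib.sha256(block).hexdigest() != h:
            out += b"\x00" * BLOCK_SIZE  # bad/missing block -> zeros
        else:
            out += block
    return bytes(out)

good, bad = b"GOOD", b"EVIL"
store = {
    hashlib.sha256(good).hexdigest(): good,
    hashlib.sha256(bad).hexdigest(): b"????",  # corrupted on the destination
}
hashes = [hashlib.sha256(good).hexdigest(), hashlib.sha256(bad).hexdigest()]

print(restore(hashes, store))  # b'GOOD\x00\x00\x00\x00'
```

So you get the intact parts of the file back, with holes where the corruption was - usually far better than nothing.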
As far as backing up to Amazon S3 - yep, Duplicati can do that. But backing up to Amazon Glacier is more difficult. Because of how Glacier works, Duplicati will start thinking files have gone missing and complain. To get around this you have to disable many of the features that Duplicati uses to verify your backups and clean up old versions.
Personally, I think you’d be better off using your FTPS destination for all your backups, then setting up something to mirror that to Glacier. Of course, doing that means if you ever need to restore from Glacier you’ll have to copy ALL the Glacier files to somewhere Duplicati can see (such as back to your FTPS server) - but it’s doable.
Hey, @Pectojin - do you think it would be possible to support Glacier as a restore-only destination?