The compacting process is very dangerous!

I don’t know if it’s correct to say there’s “no way” to recover. Decryption doesn’t have to fail completely when a checksum error is detected; it just produces corruption in the decrypted data. Duplicati’s designers seem to have decided to treat that as a failure.
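To illustrate the distinction (this is not Duplicati’s actual code, just a sketch with Python’s `cryptography` package and made-up keys): an unauthenticated mode hands back garbled plaintext, while an authenticated mode refuses to decrypt at all when the ciphertext is damaged.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.exceptions import InvalidTag

key = os.urandom(32)
nonce = os.urandom(16)
plaintext = b"backup block contents " * 4

# Unauthenticated AES-CTR: a flipped ciphertext byte silently becomes a
# flipped plaintext byte -- decryption "succeeds" with corruption.
enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
ct = enc.update(plaintext) + enc.finalize()
ct = ct[:10] + bytes([ct[10] ^ 0xFF]) + ct[11:]   # simulate bit rot
dec = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
print(dec.update(ct) + dec.finalize())             # garbled but returned

# Authenticated AES-GCM: the same damage makes decryption fail outright,
# which is the "treat it as a failure" design choice.
aead = AESGCM(key)
gcm_nonce = os.urandom(12)
ct2 = aead.encrypt(gcm_nonce, plaintext, None)
ct2 = ct2[:10] + bytes([ct2[10] ^ 0xFF]) + ct2[11:]
try:
    aead.decrypt(gcm_nonce, ct2, None)
except InvalidTag:
    print("integrity check failed -- treated as unrecoverable")
```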

Compression though… corruption gets amplified to a far greater number of bits. And you didn’t even mention deduplication: corruption of a deduplicated block amplifies the damage even further, because every file that references that block is affected. Those are real downsides of using compression and dedupe.
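A rough demo of the amplification point, using Python’s `zlib` (the numbers are arbitrary): one flipped byte in a compressed stream can take out the rest of the block, not just one byte of the original data.

```python
import zlib

data = b"lots of backup data " * 200          # a few KB of redundant input
compressed = zlib.compress(data)

# Flip a single byte in the middle of the compressed stream (simulated bit rot).
corrupted = bytearray(compressed)
corrupted[len(corrupted) // 2] ^= 0xFF

try:
    zlib.decompress(bytes(corrupted))
except zlib.error as e:
    # Typically the whole block is unrecoverable; with dedupe in play, every
    # file that referenced this block would be hit by that one bad byte.
    print("decompression failed:", e)
```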

I was speaking more about native filesystem support for healing. The filesystems that support corrective action seem to require multiple drives, but I could be wrong.

Speaking of par2, there is an open issue and even some initial development work, but I haven’t seen any update in a few months.

par2 seems like a good way to make backups more resilient. In the meantime I don’t think you can beat the storage durability of the major cloud providers, but of course that comes with downsides (cost, etc.).
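A minimal sketch of what that could look like today, assuming the par2cmdline tool is on PATH and the backup volumes sit in a local directory before upload; the directory, the `*.zip.aes` glob, and the 10% redundancy figure are just illustrative:

```python
import subprocess
from pathlib import Path

backup_dir = Path("/backups/duplicati")          # hypothetical location

for volume in backup_dir.glob("*.zip.aes"):
    # Create recovery data able to repair up to ~10% damage in this volume.
    subprocess.run(
        ["par2", "create", "-r10", f"{volume}.par2", str(volume)],
        check=True,
    )

# Later, before a restore, you could verify/repair a damaged volume with:
# subprocess.run(["par2", "repair", f"{volume}.par2"], check=True)
```

The obvious catch is that this only protects copies you generate the parity for; until something like this lands in Duplicati itself, it’s an extra manual step alongside the backup job.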