After 2 years, I’m throwing in the towel on my write-only chattr +i attribute solution. I’ve since moved to a different approach.
The difficulty
I just had too many times when a backup would die midstream (such as a dropped internet connection from Xfinity). My process was to watch my duplicati-monitoring email reports, see something was amiss, go to Duplicati’s web interface, look at the logs, find the bad backup file that Duplicati wasn’t allowed to delete, manually delete that file on the remote end, then re-run the backup. Sometimes it would trip up on another backup file, so I’d manually delete that one too, and so on. It was just too time consuming.
The root problem
Duplicati storing plaintext passwords isn’t the problem. Duplicati being unable to work cleanly with a “no delete” remote server isn’t really the problem either. The problem is the same one standard ssh/rsync backup users have been facing for years and years: the server needs some form of recycle bin or snapshot system to hold onto deleted files instead of really deleting them.
I first came up with a bad fix, and then found the right fix.
The bad fix
Use symlinks on the server: let Duplicati access the symlinks, but not the actual files. The goal is that if an attacker deletes a backup file, only a symlink is removed, and an admin still retains access to the actual file.
The idea was that Duplicati would start by creating a backup file on the server as usual. Once the backup file is written, something on the server moves it into a protected directory that the Duplicati ssh account can’t delete anything from, then exposes the file in the directory Duplicati sees via a symlink. This way Duplicati works the same as before.
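Something along these lines is what I had in mind (a hypothetical sketch, never deployed; the paths, the inotify-tools dependency, and the permission scheme are all my own assumptions):

```sh
#!/bin/sh
# Hypothetical sketch of the symlink idea. EXPOSED is where the Duplicati
# ssh account writes; PROTECTED is owned by root and read-only for that
# account, so it can still read files through the symlinks but can't
# delete or overwrite them.

EXPOSED=/srv/backups/duplicati
PROTECTED=/srv/backups/protected

# Requires inotify-tools; fires each time a file is closed after writing.
inotifywait -m -e close_write --format '%f' "$EXPOSED" |
while read -r name; do
    [ -L "$EXPOSED/$name" ] && continue   # already swapped for a symlink
    mv "$EXPOSED/$name" "$PROTECTED/$name"
    chown root:root "$PROTECTED/$name"
    chmod 0644 "$PROTECTED/$name"         # world-readable, root-writable
    ln -s "$PROTECTED/$name" "$EXPOSED/$name"
done
```

Deleting the symlink removes nothing but the pointer; unlinking the real file would require write permission on the protected directory, which only root has.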
The right fix
Use the ZFS file system on the remote server and take advantage of ZFS snapshots. Snapshots can be taken at any point in time, older snapshots can be restored easily, and creating one adds minimal data overhead. With ZFS, you can grant the ssh user the right to create snapshots while denying it the right to destroy them. Having Duplicati create a snapshot at the end of a backup is easy to script…
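A minimal sketch of what that looks like, assuming a pool named tank, a backups dataset, and an ssh account named duplicati (all names are mine, adjust to taste):

```sh
# On the server, as root: delegate only the 'snapshot' permission to the
# duplicati user. Without 'destroy' delegated, that account (or an
# attacker holding its key) cannot remove snapshots.
zfs allow -u duplicati snapshot tank/backups

# From the client, after a successful backup (e.g. in a script wired to
# Duplicati's --run-script-after option), take a timestamped snapshot:
ssh duplicati@backupserver \
    "zfs snapshot tank/backups@duplicati-$(date +%Y%m%d-%H%M%S)"
```

If an attacker then deletes backup files over ssh, root can pull them back out of the dataset’s .zfs/snapshot/ directory or roll the dataset back to an earlier snapshot.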
So far the biggest downside of ZFS is that it’s just a different way of thinking about file systems. I’ve tripped up several times already getting started: I have to think in terms of a pool, datasets, and mounting datasets. I had another small issue in that I was using a 32-bit Raspberry Pi 2, which doesn’t support ZFS, so I needed to get a 64-bit Raspberry Pi 4 and a 64-bit OS for it.
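For anyone else tripping over the same mental model, it boils down to a couple of commands: a pool is built from disks, and datasets are carved out of the pool and mounted like file systems (the disk names and layout here are just an example):

```sh
# A pool is built from disks; here, "tank" mirrored across two drives.
zpool create tank mirror /dev/sda /dev/sdb

# Datasets are carved out of the pool and mounted like file systems;
# this one appears at /tank/backups by default.
zfs create tank/backups

# Datasets (not the pool) are what you snapshot and delegate.
zfs list
```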
Overall, Duplicati + ZFS feels like a match made in heaven. Duplicati is exactly what I want in a client-side backup program, and ZFS is exactly what I want in a server-side file system to protect my data.