My backup destination is an FTP server running Linux, and I have made a script that sets a protection attribute on files after upload: find "$BACKUP_LOCATION" -type f -mmin +60 -mtime -1 -exec chattr +i {} \; . That immutable flag protects a file from being changed or deleted. If the source server is compromised, an attacker could read the FTP credentials from the Duplicati configuration and delete my backup files.
I have set retention to 100 days, and the script also removes the protection attribute after 90 days: find "$BACKUP_LOCATION" -type f -mtime +90 -exec chattr -i {} \; .
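For clarity, the whole thing is roughly this cron script (the path and timings are just illustrative, and chattr has to run as root):

    #!/bin/sh
    # Illustrative version of the protection script; runs from root's cron on the FTP server.
    BACKUP_LOCATION=/srv/ftp/duplicati   # assumed upload directory

    # Lock files that finished uploading at least an hour ago (and are under a day old).
    find "$BACKUP_LOCATION" -type f -mmin +60 -mtime -1 -exec chattr +i {} \;

    # Unlock files older than 90 days so the 100-day retention can delete them.
    find "$BACKUP_LOCATION" -type f -mtime +90 -exec chattr -i {} \;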
The problem is that Duplicati can’t delete some backup files and fails to make new backups. I have checked a file that is marked for deletion, and it is only 23 days old.
No, because any files that have never changed still have their data in the original upload files.
EDIT 1:
If you did this after it ran a compact, the compact might already have decided to delete the file.
“Listed as Deleting” (I think) means it tried a delete that failed, so it’s going to try that again.
This sort of situation might also be possible in some error situations, e.g. an upload error gets retried under a new name, and the old file (whose integrity is unknown) might then be deleted.
If a maintenance window is possible (hoping no attacker slips in), that might be one solution…
EDIT 2:
Newer (currently not-quite-Beta) Duplicati versions have more secure ways to store the credentials.
I haven’t used it yet, and it’s probably not as solid as chattr, but chattr is sometimes too solid…
This means that Duplicati has decided the file should be deleted. After that, it will keep retrying the delete, and there is no flexibility when the delete fails.
I don’t think this is possible. It works initially, but after a compact has run, you may end up with a dblock file that holds blocks of different “ages”. At a later compact, that file may be slated for deletion, causing it to be deleted before enough time has passed.
If you disable compaction it could work, but the space used at the destination may become unacceptably large.
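If you want to try that, the relevant options look roughly like this (the remote URL, source path, and credentials handling are placeholders; check the option list of your version):

    # Sketch: 100-day retention with automatic compaction disabled.
    # Without compaction, a dblock file is only deleted after every backup version that
    # references it has expired, which is well past the 90-day chattr unlock.
    duplicati-cli backup "ftp://ftpserver/backup" /data \
      --keep-time=100D \
      --no-auto-compact=true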
That protects the secret, but I think the OP wants to prevent the files themselves from being deleted.
I moved to doing ZFS snapshots. There are probably some awkward ways to use delete access removal; maybe some FTP servers can do that. It also needs to block overwrites from attackers. Duplicati won’t overwrite backup files: new data gets uploaded, and obsolete data gets compacted.
If you need a faster way than full chattr removal for maintenance, maybe the folder sticky bit will do, combined with a chown of new files after upload. That can also prevent the file-overwrite attack.
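A rough sketch of what I mean, assuming the FTP account is called duplicati and uploads land in /srv/ftp/backup (both names made up):

    # Sticky bit on the upload directory: only a file's owner (or root) may delete or rename in it.
    chmod +t /srv/ftp/backup

    # Root cron job: once a file has settled for an hour, take ownership away from the FTP
    # account, so a compromised client can no longer delete or overwrite it. To honor the
    # retention, hand files back to the FTP account after 90 days, like the chattr -i step above.
    find /srv/ftp/backup -type f -mmin +60 -user duplicati -exec chown root:root {} \;
    find /srv/ftp/backup -type f -mtime +90 -user root -exec chown duplicati:duplicati {} \;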
EDIT:
If filesystem support is not available, and you have extra space, you can clone the destination to another place that is not reachable from the client system and its attacker. If you use rclone, you can put --backup-dir on your rclone sync to get both a clean sync and a way to survive a delete attack, although that folder would also need occasional cleaning to keep it from growing without limit.
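Something along these lines (the remote name and paths are placeholders):

    # Mirror the FTP destination to a host the client cannot reach. Files that were deleted
    # or changed at the source since the last run are moved into --backup-dir instead of
    # being removed, so a delete attack on the FTP server leaves recoverable copies here.
    rclone sync ftpremote:backup /mirror/duplicati \
      --backup-dir /mirror/deleted/$(date +%Y-%m-%d)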
Reconstructing the original directory might get kind of painful, but at least the files would be there. Extra dindex and dblock files that a compact had deleted probably wouldn’t bother a Direct restore.