Ransomware protection with chattr

My backup destination is an FTP server running Linux, and I made a script that sets a protection attribute on files after upload: find "$BACKUP_LOCATION" -type f -mmin +60 -mtime -1 -exec chattr +i {} \;. That flag protects a file from being changed or deleted. If the source server is compromised, an attacker could read the FTP credentials from the Duplicati configuration and delete my backup files.
I have set retention to 100 days, and I added a second command to that script to remove the protection attribute after 90 days: find "$BACKUP_LOCATION" -type f -mtime +90 -exec chattr -i {} \;.
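
Put together, the whole thing is one small script run hourly from cron (the path here is a placeholder, not my real location):

    #!/bin/sh
    # Placeholder path; adjust to the real FTP upload directory.
    BACKUP_LOCATION=/srv/ftp/duplicati

    # Lock files that finished uploading at least an hour ago (but are
    # less than a day old, so older, already-locked files are skipped).
    find "$BACKUP_LOCATION" -type f -mmin +60 -mtime -1 -exec chattr +i {} \;

    # Unlock files older than 90 days so the 100-day retention can
    # delete them.
    find "$BACKUP_LOCATION" -type f -mtime +90 -exec chattr -i {} \;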

The problem is that Duplicati can’t delete some backup files and then fails to make new backups. I checked the file that is marked for deletion, and it is only 23 days old.

$ stat duplicati-ba5a219a3b99341aeb03323593652887c.dblock.zip.aes
Access: (0644/-rw-r--r--)  Uid: ( 1001/  vsftpd)   Gid: ( 1001/ nogroup)
Context: system_u:object_r:default_t:s0
Access: 2024-11-15 02:22:20.174618454 +0100
Modify: 2024-10-29 12:51:49.424501876 +0100
Change: 2024-11-21 14:02:09.356630303 +0100
Birth: 2024-10-29 12:51:49.311501885 +0100
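
stat doesn’t show the immutable flag itself, by the way; lsattr does:

    # The immutable flag is not part of stat output; lsattr shows it as
    # an "i" in the attribute column.
    lsattr duplicati-ba5a219a3b99341aeb03323593652887c.dblock.zip.aes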

I have also added no-auto-compact=true, but it didn’t help.
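
In CLI form the job options are roughly these (the URL, user, and secrets are placeholders; I actually configure the job through the web UI):

    # Rough CLI equivalent of the job settings; placeholder values.
    duplicati-cli backup "ftp://backupuser@ftpserver/backup" /data \
        --auth-password=SECRET \
        --passphrase=SECRET \
        --keep-time=100D \
        --no-auto-compact=true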

    "2024-11-21 14:00:24 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started:  ()",
    "2024-11-21 14:00:24 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Completed:  (56 bytes)",
    "2024-11-21 14:00:24 +01 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoveUnwantedRemoteFile]: removing remote file listed as Deleting: duplicati-ba5a219a3b99341aeb03323593652887c.dblock.zip.aes",
    "2024-11-21 14:01:04 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Delete - Started: duplicati-ba5a219a3b99341aeb03323593652887c.dblock.zip.aes (6.63 MB)",
    "2024-11-21 14:01:04 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Delete - Failed: duplicati-ba5a219a3b99341aeb03323593652887c.dblock.zip.aes (6.63 MB)",
    "2024-11-21 14:01:04 +01 - [Information-Duplicati.Library.Main.BackendManager-DeleteFileFailed]: Failed to delete file duplicati-ba5a219a3b99341aeb03323593652887c.dblock.zip.aes, testing if file exists"
    "2024-11-21 14:01:04 +01 - [Warning-Duplicati.Library.Main.BackendManager-DeleteFileFailure]: Failed to recover from error deleting file duplicati-ba5a219a3b99341aeb03323593652887c.dblock.zip.aes\nNullReferenceException: Object reference not set to an instance of an object"
    "2024-11-21 14:01:04 +01 - [Error-Duplicati.Library.Main.Operation.BackupHandler-FatalError]: Fatal error\nWebException: The remote server returned an error: (550) 550 Delete operation failed.\r\n.",

Is there a way to make this work? I assumed that Duplicati only deletes files after the retention period.

No, because any files that have never changed still have their data in the original upload files.

EDIT 1:

If you did this after it ran a compact, the compact might already have decided to delete the file.

“listed as Deleting” means (I think) that it already tried a delete that failed, so it’s going to try that again.

This sort of situation might also arise from some errors, e.g. an upload error gets retried under a new name, and the old file (whose integrity is unknown) might then be deleted.

If a maintenance window is possible (hoping no attacker slips in), that might be one solution…
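
On the FTP server, that could be as simple as this sketch (reusing the BACKUP_LOCATION variable from the original script):

    # Maintenance window (sketch): unlock everything, let Duplicati do
    # its deletes, then lock the recent files again.
    find "$BACKUP_LOCATION" -type f -exec chattr -i {} \;
    # ... run the Duplicati backup now, while files are deletable ...
    find "$BACKUP_LOCATION" -type f -mtime -90 -exec chattr +i {} \;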

EDIT 2:

The newer, currently not-quite-Beta, Duplicati versions have more secure ways to store the credentials.

I haven’t used it yet, and it’s probably not as solid as chattr, but chattr is sometimes too solid…

This means that Duplicati has decided that this file should be deleted. After that, it will keep trying to delete the file, and there is no flexibility when the delete fails.

I don’t think this is possible. It works initially, but after a compact has run, you may end up with a dblock file that holds blocks of different “ages”. At a later compact, that file may be slated for deletion, causing it to be deleted before enough time has passed.

If you disable compaction it could work, but the size may become unacceptably large.

That protects the secret, but I think the OP wants to prevent the files themselves from being deleted.

I moved to doing ZFS snapshots. There are probably some awkward ways to remove delete access; maybe some FTP server can do that. It would also need to block overwrites from attackers. Duplicati itself won’t overwrite backup files: new data gets uploaded, and obsolete data gets compacted.
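
For the snapshot approach, a cron job like this covers it (the dataset name is made up):

    # Hourly read-only snapshot; an attacker who can delete files over
    # FTP still cannot touch the snapshots.
    zfs snapshot tank/backups@auto-$(date +%Y%m%d-%H%M)
    # Prune old snapshots later, e.g.: zfs destroy tank/backups@auto-...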

If you need a faster way than full chattr removal for maintenance, maybe the folder sticky bit will do, combined with a chown of new files after upload. That can also prevent the file-overwrite attack.
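
A rough sketch of that idea (the user names and path are placeholders, and this is untested):

    # Sticky bit on the directory: only a file's owner, the directory
    # owner, or root may delete or rename files inside it.
    chmod +t /srv/ftp/duplicati

    # After upload, hand new files to a different owner, so the FTP
    # account can neither delete them (sticky bit) nor overwrite them
    # (mode 644, owned by someone else). "backupkeeper" is hypothetical.
    find /srv/ftp/duplicati -type f -user vsftpd -mmin +60 \
        -exec chown backupkeeper: {} \;

    # For retention, give old files back so Duplicati can delete them:
    find /srv/ftp/duplicati -type f -mtime +90 -exec chown vsftpd: {} \;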

EDIT:

If filesystem support is not available and you have extra space, you can clone the destination to another place that is not accessible from the client system and its attacker. If you use rclone, you can use --backup-dir on your rclone sync to get both a clean sync and cover against a delete attack, although that folder would also need occasional cleaning to prevent it from growing without limit.
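
Something along these lines (remote names and paths are placeholders):

    # Mirror the FTP destination to a second copy the client cannot
    # reach. Anything sync would delete or overwrite is moved into a
    # dated folder instead of being lost.
    rclone sync ftpbackup:/duplicati safecopy:/duplicati \
        --backup-dir "safecopy:/duplicati-removed/$(date +%Y-%m-%d)"

    # Occasional cleanup so the archive doesn't grow without limit:
    rclone delete safecopy:/duplicati-removed --min-age 100d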

Reconstructing the original directory might get kind of painful, but at least the files would be there. Some extra dindex and dblock files that compact had already deleted might not bother a Direct restore.