Immutability feature for ransomware protection. S3 / B2 'Object Lock' or application keys without deleteFiles permission

An idea: to protect against malicious attacks, it would be great if our backup jobs could not delete files. If someone with malicious intent takes control of the machine running Duplicati, backup data can be corrupted (e.g. source paths and exclude filters changed, remote files deleted or altered).

Two approaches come to mind:
1) BackupOnly application keys
We could create B2 application keys that do not have the deleteFiles capability and use bucket lifecycle settings in B2 to delete files older than a desired number of days.
e.g.
b2 create-key --bucket bucketname --namePrefix duplicati BackupOnly listBuckets,listFiles,readFiles,shareFiles,writeFiles
refer to: Application Keys
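A matching lifecycle rule might look roughly like this (a sketch from memory; the exact b2 CLI flag and JSON field names should be checked against current Backblaze documentation):

b2 update-bucket --lifecycleRules '[{"fileNamePrefix": "duplicati", "daysFromUploadingToHiding": 365, "daysFromHidingToDeleting": 1}]' bucketname allPrivate

Here the prefix matches the key's namePrefix, so old Duplicati files are aged out by the bucket itself rather than by the delete-less application key.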

2) Immutability feature
Object Lock, a feature recently released in Backblaze B2 for Veeam (backup software), allows you to set the time for which a file is immutable. One could set a period of, say, one year.
refer to: Object Lock FAQs – Backblaze Help
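For illustration, a default retention period can be set through an S3-compatible API along these lines (a sketch; the endpoint URL is a placeholder, Object Lock generally has to be enabled when the bucket is created, and whether B2 accepts this exact call should be verified against its docs):

aws s3api put-object-lock-configuration --bucket bucketname --endpoint-url https://s3.REGION.backblazeb2.com --object-lock-configuration '{"ObjectLockEnabled": "Enabled", "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 365}}}'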

Feature request:
As an end user concerned with ransomware and other malicious hacking activity,
I would like to prevent Duplicati from being able to delete files,
So that an adversary cannot corrupt backups.

NB: The Backblaze account would have to be protected by multi-factor authentication to prevent an adversary from simply terminating your B2 account.


Please view a GitHub feature request here (probably a better place to discuss this): Feature request: immutability feature for randsomware protection. S3 / B2 'Object Lock' or application keys without deleteFiles permission · Issue #4364 · duplicati/duplicati · GitHub


You can prohibit deletion on the back end if you do two things: keep all backup versions (so version retention never deletes anything) and turn off automatic compacting (no-auto-compact).

Deleting older files via lifecycle settings, though, will probably break your backup, because data blocks in those files remain in use by current files.
Duplicati uploads file changes as they are found, but unchanged data is just referenced again.

Features (see Incremental backups and Deduplication)
How the backup process works
Compacting files at the backend (which you can’t do, as it deletes the compacted files)
The COMPACT command (more details – it’s not only version deletes that can do this)

So, as noted, the general plan for this is no version deletes and no automatic compacts.
One possible remaining delete is hard to avoid, but may or may not bring you problems.
If a file upload fails, the upload is retried using a new name, and the old name is deleted:

( $50 Bounty ) Attempting deletion of files before retention span is over

I’m kind of curious how backup software that can live with immutability manages to do it.
One simple crude way would be to store multiple full copies of files, but that’s inefficient.

It will definitely break your backup. Don’t use lifecycle settings or versioning on the backend.

This should be solved on the destination side: a file should not be treated as completely uploaded unless the destination marks it as such… Unfortunately, some storage backends keep partially uploaded files so the upload can be resumed…
I just checked the b2_upload_file function description for Backblaze and do not see how it might keep a partially uploaded file, so this shouldn't be an issue with B2…

I suspect that cloud services typically don't keep partially uploaded files, but the question is whether Duplicati's view is that it is uploading, then a failure happens, then it decides to delete. States run like this:
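(a sketch from memory of the RemoteVolumeState values; an assumption worth checking against the current code)

Temporary -> Uploading -> Uploaded -> Verified   (normal upload path)
Deleting -> Deleted                              (cleanup path)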

Duplicati’s marking scheme is (in part) shown above, and an old (2015) description of the flow is here.

Current code may or may not follow the 2015 description, but this is some of the delete-fail recovery logic:

and ultimately the proof of behavior is what is seen. I’ve seen some unexpected behavior in the above (unable to supply additional details, but I have a failure test program and some run notes somewhere).

( $50 Bounty ) Attempting deletion of files before retention span is over would need to be explained too.
Without actually seeing the database, it’s not certain what happened, but one known behavior is that at Duplicati backup start, it cleans up files that it was trying to delete on a previous run, i.e. state Deleting.

I actually make use of this to fix a problem that comes up when Compact is interrupted and it forgets it successfully deleted some dindex files. Doing a database transaction rollback is the suspected cause. Workaround is to set state of the dindex files to Deleting, which is what the no-issue dblock files had.
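In SQL terms the workaround amounts to something like this against the local job database (a sketch only; the file name is a placeholder, Duplicati should be stopped, and the database should be backed up first):

UPDATE "Remotevolume" SET "State" = 'Deleting' WHERE "Name" = 'duplicati-iPLACEHOLDER.dindex.zip.aes'; -- placeholder name for the affected dindex file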

If you’re willing to dig into the code more, feel free to take this to GitHub Issues to see if a developer can chat. There’s a major shortage of developers though, so an even better course would be to do a pull request.

There have been several forum user attempts to work with immutable destinations, and possibly even more attempts to work with cold storage, where I’m not sure whether the retry delete presents a problem.

The destination does what it does. Duplicati knows a few capability differences, but has to deal with them all.

If you want to poke at this more, beyond just reading code, an SQLite browser helps, and I can offer a script to make upload errors, to look at retry handling. That’s the test I mentioned trying a while ago.

Well, if I get time for this, I’ll look, but right now it’s not a priority. I use Duplicati in some capacity where I can avoid some shortcomings, but have no time to dig into older code… Maybe if there is an effort to move to .NET Core…
But I agree - supporting many different types of storage backends is not an easy task.

I found my old test notes, but an actual production backup log to OneDrive was more useful to show the effect.

This is the history of a dindex file which suffered an upload error, got set to Deleting, and was cleaned up later. Maybe. There’s nothing that shows the actual Backend event: Delete, so maybe it looked before trying that, which in the best case means everything is already all set for a no-deletes destination that won’t leave partials.

2020-09-28 18:20:00 -04 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Backup has started
2020-09-28 18:22:34 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (123, "duplicati-id36ed9475eae44f8a620a893db749b59.dindex.zip.aes", "Index", "Temporary", -1, 0, 0); SELECT last_insert_rowid();
2020-09-28 18:22:34 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (123, "duplicati-id36ed9475eae44f8a620a893db749b59.dindex.zip.aes", "Index", "Temporary", -1, 0, 0); SELECT last_insert_rowid(); took 0:00:00:00.005
2020-09-28 18:22:34 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-id36ed9475eae44f8a620a893db749b59.dindex.zip.aes (48.62 KB)
2020-09-28 18:23:15 -04 - [Retry-Duplicati.Library.Main.Operation.Backup.BackendUploader-RetryPut]: Operation Put with file duplicati-id36ed9475eae44f8a620a893db749b59.dindex.zip.aes attempt 1 of 2 failed with message: Failed to authorize using the OAuth service: Server error. If the problem persists, try generating a new authid token from: https://duplicati-oauth-handler.appspot.com?type=onedrivev2
2020-09-28 18:23:15 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Retrying: duplicati-id36ed9475eae44f8a620a893db749b59.dindex.zip.aes (48.62 KB)
2020-09-28 18:23:25 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Rename: duplicati-id36ed9475eae44f8a620a893db749b59.dindex.zip.aes (48.62 KB)
2020-09-28 18:23:25 -04 - [Information-Duplicati.Library.Main.Operation.Backup.BackendUploader-RenameRemoteTargetFile]: Renaming "duplicati-id36ed9475eae44f8a620a893db749b59.dindex.zip.aes" to "duplicati-iab6e6b556c3940ad82f7f834baa81b99.dindex.zip.aes"
2020-09-28 18:23:25 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: UPDATE "Remotevolume" SET "Name" = "duplicati-iab6e6b556c3940ad82f7f834baa81b99.dindex.zip.aes" WHERE "Name" = "duplicati-id36ed9475eae44f8a620a893db749b59.dindex.zip.aes"
2020-09-28 18:23:25 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: UPDATE "Remotevolume" SET "Name" = "duplicati-iab6e6b556c3940ad82f7f834baa81b99.dindex.zip.aes" WHERE "Name" = "duplicati-id36ed9475eae44f8a620a893db749b59.dindex.zip.aes" took 0:00:00:00.000
2020-09-28 18:23:25 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (123, "duplicati-id36ed9475eae44f8a620a893db749b59.dindex.zip.aes", "Index", "Deleting", -1, 0, 0); SELECT last_insert_rowid();
2020-09-28 18:23:25 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (123, "duplicati-id36ed9475eae44f8a620a893db749b59.dindex.zip.aes", "Index", "Deleting", -1, 0, 0); SELECT last_insert_rowid(); took 0:00:00:00.000
2020-09-28 19:20:00 -04 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Backup has started
2020-09-28 20:20:00 -04 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Backup has started
2020-09-28 20:20:37 -04 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: removing file listed as Deleting: duplicati-id36ed9475eae44f8a620a893db749b59.dindex.zip.aes
2020-09-28 20:20:37 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: SELECT "ID" FROM "Remotevolume" WHERE "Name" = "duplicati-id36ed9475eae44f8a620a893db749b59.dindex.zip.aes"
2020-09-28 20:20:37 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: SELECT "ID" FROM "Remotevolume" WHERE "Name" = "duplicati-id36ed9475eae44f8a620a893db749b59.dindex.zip.aes" took 0:00:00:00.000
2020-09-28 21:20:00 -04 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Backup has started

Not sure what you mean. I showed the very latest code. If you don’t like the 2015 commentary, you can ignore it.

You can find a pull request on GitHub that needs lots of help (but not in this area) in order to be ready to go. The major change in discussion so far seems to be replacing the current autoupdater, which doesn’t work well.

For maybe less time spent, you could test that on a small test backup with all versions kept and no-auto-compact, to see if it holds up. It will eventually build up a lot of wasted space, but (if B2 allows it, and you don’t mind some vulnerability) you could add deletion back in for cleanups when the waste gets too large.
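Such a test might be shaped like this (a sketch; the bucket, source path, and passphrase are placeholders, B2 credentials go in the usual backend options, and leaving the retention options unset is what keeps all versions):

duplicati-cli backup b2://bucketname/duplicati-test /path/to/small/source --passphrase=PLACEHOLDER --no-auto-compact=true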


I took a closer look at cleanup. Profiling logs get huge, so I use a regular expression filter, in this case:

2020-09-28 20:20.*(Backup has started|PreBackupVerify|Backend event: List|Unwanted|duplicati-ie0f0f89a6e164ac3aae8bcc8919e27fb.dindex.zip.aes)

2020-09-28 20:20:00 -04 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Backup has started
2020-09-28 20:20:28 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.BackupHandler-PreBackupVerify]: Starting - PreBackupVerify
2020-09-28 20:20:28 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started:  ()
2020-09-28 20:20:37 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Completed:  (141 bytes)
2020-09-28 20:20:37 -04 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: removing file listed as Deleting: duplicati-ie0f0f89a6e164ac3aae8bcc8919e27fb.dindex.zip.aes
2020-09-28 20:20:37 -04 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: removing file listed as Deleting: duplicati-id36ed9475eae44f8a620a893db749b59.dindex.zip.aes
2020-09-28 20:20:37 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: SELECT "ID" FROM "Remotevolume" WHERE "Name" = "duplicati-ie0f0f89a6e164ac3aae8bcc8919e27fb.dindex.zip.aes"
2020-09-28 20:20:37 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: SELECT "ID" FROM "Remotevolume" WHERE "Name" = "duplicati-ie0f0f89a6e164ac3aae8bcc8919e27fb.dindex.zip.aes" took 0:00:00:00.000
2020-09-28 20:20:38 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.BackupHandler-PreBackupVerify]: PreBackupVerify took 0:00:00:09.735

So again, at this level of detail it looks like it did a backend List and did not feel the need for a backend Delete.
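If you want to apply a filter like that outside a log viewer, plain grep works too (the log file name here is just a placeholder):

grep -E '2020-09-28 20:20.*(Backup has started|PreBackupVerify|Backend event: List|Unwanted|duplicati-ie0f0f89a6e164ac3aae8bcc8919e27fb.dindex.zip.aes)' duplicati-profiling.log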

If your test actually gets into Delete requests, it will probably get upset when the bucket fails those requests.
My test script (which replaced the rclone program with a wrapper that randomly fails) found a mess:

Version delete:
2020-08-20 09:50:32 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Delete - Failed: duplicati-20200820T131000Z.dlist.zip (1.61 KB)
2020-08-20 09:50:32 -04 - [Information-Duplicati.Library.Main.BackendManager-DeleteFileFailed]: Failed to delete file duplicati-20200820T131000Z.dlist.zip, testing if file exists
2020-08-20 09:50:32 -04 - [Warning-Duplicati.Library.Main.BackendManager-DeleteFileFailure]: Failed to recover from error deleting file duplicati-20200820T131000Z.dlist.zip
System.NullReferenceException: Object reference not set to an instance of an object.
   at Duplicati.Library.Main.BackendManager.ThreadRun()

Compact delete:
2020-08-20 10:30:42 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Delete - Failed: duplicati-icd07271238a346309f101338f9d4b083.dindex.zip (44.69 KB)
2020-08-20 10:30:42 -04 - [Information-Duplicati.Library.Main.BackendManager-DeleteFileFailed]: Failed to delete file duplicati-icd07271238a346309f101338f9d4b083.dindex.zip, testing if file exists
2020-08-20 10:30:42 -04 - [Warning-Duplicati.Library.Main.BackendManager-DeleteFileFailure]: Failed to recover from error deleting file duplicati-icd07271238a346309f101338f9d4b083.dindex.zip
System.NullReferenceException: Object reference not set to an instance of an object.
   at Duplicati.Library.Main.BackendManager.ThreadRun()

However, in theory a test without deletes from version retention or automatic compacting shouldn’t go there…
For a good understanding of the retry-delete handling, someone might have to go looking through the code.
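
For reference, a failure-injecting wrapper of the kind described above might look something like this (a hypothetical sketch, not the actual test script; it assumes the real rclone is at /usr/bin/rclone and that Duplicati is pointed at the wrapper instead of the real binary):

#!/bin/bash
# hypothetical stand-in for rclone that makes some backend calls fail at random
if [ $((RANDOM % 4)) -eq 0 ]; then
    echo "injected backend failure" >&2    # roughly 1 in 4 calls fails
    exit 1
fi
exec /usr/bin/rclone "$@"                  # otherwise run the real rclone unchanged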