Defense against trojans

Hi

Starting to use Duplicati again after years of absence (I had been using CrashPlan instead), I thought I'd think this through in depth.

The one threat I still see is that a particularly nasty trojan will steal my cloud logins and delete the blocks there. The only real solution I could think of so far is to back up via SFTP to a machine that prevents deletion by setting permissions accordingly (or by using a copy-on-write type FS). But obviously that brings the headache (and expense) of having to maintain such a machine vs. simply using cloud storage…

Is anyone aware of a solution to address this? (I was thinking of reaching out to Backblaze to ask if such a limited-permission account could be introduced to B2.)

Thanks a lot.

Gabriel, are you proposing nobody would be able to delete files? If so, then some of Duplicati's normal maintenance functions would have to be disabled, or you'll get a bunch of errors.

Through what method are you worried about your cloud logins getting stolen? A trojan on your machine, something that hacks the cloud provider, a man-in-the-middle attack or something else?

If you're that worried about the issue you might get better peace of mind by using Duplicati to back up to a local USB drive that is only connected during backups. That sort of air-gap security is REALLY good at stopping trojans. :slight_smile:

Of course an even better solution would be to use the cloud backup as normal AND have the local USB backup. That way you’re protected against cloud failures as well.

FWIW, some cloud providers have a feature where deleted files can be recovered within a grace period. How useful that is depends on how paranoid you are, I suppose.

There was a separate discussion on one-backup-source-to-multiple-targets which should help with the USB/cloud solution mentioned above.

In essence I am proposing that Duplicati would not be able to delete anything, meaning it would indeed not be able to prune backups.

What I am worried about is the user catching a nasty trojan somewhere which then goes on to mess with the cloud backup (admittedly a small risk, but who knows, ransomware is getting ever more sophisticated). The problem with the deletion grace period seems to be that as soon as you have access to the cloud account (at least this seems true for Backblaze, OneDrive and GDrive) you can very easily circumvent it…

I am not so worried about MITM or security issues at the cloud provider (if those happen, only a second or third provider can help you, and I am already considering using B2 in addition to OneDrive).

I did consider USB targets but honestly, if the users are my parents, chances are that will not happen all too often… I think I will reach out to Backblaze and see what they say.

I believe it is possible to configure Duplicati to not attempt any commands that involve a delete to the destination, but that wouldn’t stop a bad actor using the same authentication Duplicati uses to go out there and do whatever they want.

I think your best bet is to find a destination / cloud provider with fine enough permission controls to disallow deletes for whatever user account Duplicati connects with, and of course still configure Duplicati to not try to do any delete-type things.

You could also consider using a local always-connected USB drive with similar permissions appropriately set for ALL users except a special, otherwise unused admin account. That way the only things a trojan could do would be to write to the drive until it's full or reformat the drive altogether.

I read this as concerning ransomware attacks. Thankfully I have yet to hear about ransomware that disables and destroys backups, but I would not be surprised if that was a next move.

Duplicati can support this (as mentioned by @JonMikelV), because Duplicati is designed to never update a remote file. You need to have a backend that allows you to write new files, but not update or delete existing ones.

If you have that, you can simply set “keep backups forever” and --no-auto-compact and you are good to go.
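
For what it's worth, here is a minimal sketch of that combination from a script, assuming the Linux `duplicati-cli` wrapper and made-up source/destination values; the relevant parts are only the retention and compaction settings:

```python
# Sketch only: run a Duplicati backup that never deletes or rewrites remote files.
# The destination URL, source folder and passphrase are placeholders.
import subprocess

subprocess.run(
    [
        "duplicati-cli", "backup",
        "b2://my-bucket/backups",      # hypothetical destination URL
        "/home/user/documents",        # hypothetical source folder
        "--passphrase=CHANGE-ME",
        "--no-auto-compact=true",      # never compact/rewrite existing volumes
        # no --keep-time / --keep-versions / --retention-policy given,
        # i.e. "keep backups forever", so Duplicati never issues deletes
    ],
    check=True,
)
```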

I know that S3 can be set up with an IAM policy that prevents deletion, but it does not prevent overwriting a file (uploading a new zero byte file has the same effect really). You can set up S3 buckets to make a copy of new files, but this will cost you in storage fees.
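
One way to express that is a bucket policy (an IAM user policy works similarly); this is a rough sketch with placeholder bucket name and user ARN, assuming boto3 is installed and credentials are configured:

```python
# Sketch: backup user may list, read and upload, but every delete is explicitly denied.
# Note that s3:PutObject can still overwrite an existing key, as mentioned above.
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowBackupWrites",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:user/duplicati-backup"},
            "Action": ["s3:PutObject", "s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::my-backup-bucket",
                "arn:aws:s3:::my-backup-bucket/*",
            ],
        },
        {
            "Sid": "DenyDeletes",
            "Effect": "Deny",
            "Principal": {"AWS": "arn:aws:iam::123456789012:user/duplicati-backup"},
            "Action": ["s3:DeleteObject", "s3:DeleteObjectVersion"],
            "Resource": ["arn:aws:s3:::my-backup-bucket/*"],
        },
    ],
}

boto3.client("s3").put_bucket_policy(
    Bucket="my-backup-bucket", Policy=json.dumps(policy)
)
```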

If you have control over the server, you can set permissions to “read-only” a little while after a file has been uploaded to achieve the same effect.
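
A minimal sketch of that idea, assuming the uploads land in a folder you control on the server and that this runs periodically (e.g. from cron); the path and grace period are placeholders:

```python
# Sketch: strip write permissions from backup files once they are older than a grace
# period. Ideally run as root or a different owner, so the upload account cannot
# simply chmod the files back.
import os
import stat
import time

BACKUP_DIR = "/srv/duplicati-backups"   # hypothetical destination folder
GRACE_SECONDS = 15 * 60                 # leave freshly uploaded files writable for 15 min

now = time.time()
for root, _dirs, files in os.walk(BACKUP_DIR):
    for name in files:
        path = os.path.join(root, name)
        if now - os.path.getmtime(path) > GRACE_SECONDS:
            mode = os.stat(path).st_mode
            # remove all write bits -> owner/group/other can only read
            os.chmod(path, mode & ~(stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))
```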

I think some providers also offer “cold storage” features which essentially make a file inaccessible without manual intervention requesting its move back into “warm” or “hot” storage (I picture “cold storage” as a tape drive where somebody has to insert the tape before the files can be moved out to hard drives and become accessible again). That might be another option to look at if the provider doesn’t offer delete permission options.

Cold storage does not really protect files from deletion in most cases (you need to be able to manage it, after all). Neither do most versioning schemes, since the app can generally manage versions too.

I will reach out to Backblaze and ask about adding append only accounts…

I think when researching the Wasabi service I saw they had a no-delete option (useful for companies that need to meet regulatory retention requirements), which might address the OP's concerns.

S3 also recently added a bucket type (called a “vault” I believe) that doesn’t allow deletions for industries with regulatory retention requirements.

Feedback from Backblaze is that a permission system is in development and due to be released in a few months. Will look into Wasabi and S3 again.

Digging up this old topic…
Ransomware-safe backups become more important every day. Our current best practice includes using a dedicated username/password to access the backup destination. (Ransomware tries to access every network share it can find.)
The problem I see with Duplicati is that it saves the username/password in an accessible way, so in theory ransomware could go and read out the credentials.

Is there a way to encrypt the passwords Duplicati uses to access the backup destinations?

This would significantly increase the protection against ransomware.

This has already been discussed a little in

This is my main concern too.

On the one hand, Duplicati can access remote storage Windows normally does not know about (like SFTP),
but on the other hand, the credentials can be read quite easily.

Is there a file system / permission scheme available on Linux that only allows read and append operations, but not deleting or changing existing data?

Maybe run a “second” Duplicati instance on the back end to do things like compacting, smart retention, …
Just a thought.

I know, 100% security is not possible, but maybe we can achieve 99% :slight_smile:

@IngoM, using a Synology you could run Hyper Backup on the NAS itself to further back up the Duplicati backup. If you get the access permissions right, your client (from which the backup originates) has no way to access the backup of the backup.

It's certainly 99% safe, but I would still prefer a solution where no other backup tool is involved. There should be a way to make sure Duplicati does not store user credentials in a readable way or leak them somehow.

I agree, but this is actually not possible. Duplicati needs access to the destination, so the process that runs Duplicati needs the credentials in the clear.

On a compromised machine, ransomware can read such credentials, and do whatever Duplicati can.

Storing the credentials in an OS keychain only guards against cold attacks, not against attacks while Duplicati is running.

The only true way to guard against ransomware is to guard the server, and have a “no updates, no deletes” user. This means that backups grow forever, but that they are always intact.

Network shares do not offer this, AFAIK. Up to now, network shares have really been the killer path for ransomware, so for that reason I would recommend not using them if possible.

If you control the backup storage, you could always make sure that Duplicati can create files but not change them afterwards. Among other ideas, a cron job could simply chown them to a different user (while making sure the Duplicati user has no write permissions); some filesystems also have special attributes to achieve just that.
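
One concrete flavour of that idea, as an untested sketch with placeholder paths: a root cron job on the storage server marks every backup file older than an hour immutable with chattr +i, so the account Duplicati logs in with can neither change nor delete it (the chown-to-another-user variant works the same way).

```python
# Sketch: make finished backup files immutable on ext-style filesystems (chattr +i).
# Must run as root; repeated runs on already-immutable files are harmless.
import glob
import os
import subprocess
import time

BACKUP_DIR = "/srv/duplicati-backups"   # hypothetical destination folder
cutoff = time.time() - 3600             # leave very fresh uploads alone

for path in glob.glob(os.path.join(BACKUP_DIR, "*")):
    if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
        subprocess.run(["chattr", "+i", path], check=True)  # no write, delete or rename
```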

The question is much more salient with cloud object storage, where that is not really possible.

Feature Request for trojan safe backup?

Working with a file share that permits writing and reading, but no changes or deletes, is a very interesting and (in my eyes) the most achievable solution!
(I have not tried it so far.)

Having set this up, we “need” a Duplicati instance running on the back end (a NAS, server, …), which we have under our own control, to verify the back end locally, delete unfinished files from a stopped front-end operation, and do the smart retention automatically.
Is there a way to do this?

Maybe some locking mechanism would be fine (just a simple description):

  • front end starts and creates a file “fe_.lock”
  • front end has finished and creates a file “fe_.unlock”
  • back end looks for the *.lock and *.unlock
    • if a pair exists, it deletes it and does its job
    • …
  • …

Just an idea
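
A very rough sketch of that pairing idea as a small script on the back end; the fe_*.lock / fe_*.unlock names follow the description above and are purely hypothetical:

```python
# Sketch: the back end only runs maintenance when a matching .lock/.unlock pair exists,
# i.e. the front end has signalled both start and finish of its backup run.
import glob
import os

BACKUP_DIR = "/srv/duplicati-backups"   # hypothetical destination folder

for lock in glob.glob(os.path.join(BACKUP_DIR, "fe_*.lock")):
    unlock = lock[:-len(".lock")] + ".unlock"
    if os.path.exists(unlock):
        # no backup is running right now, so it is safe for the back-end instance
        # to verify / compact / apply smart retention
        os.remove(lock)
        os.remove(unlock)
        print("would run back-end maintenance for", os.path.basename(lock))
```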

One of the underlying ideas of Duplicati is that we can’t necessarily trust the backend, so “needing” something running there doesn’t really fit. Plus, many currently supported backends couldn’t run something even if we wanted them to.

However, I could see providing a client API to call a backend service should a person be working with one that supports it.

Or perhaps a “clean up” client setup, which I see working like this: the current / main client runs as usual with create-only access, but also drops a “here’s what I would clean up if I could” file at the destination.

A second client (which could be running at the destination, or on another, more secure machine) can then process that file and do the cleanup.
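
A sketch of that hand-off, assuming the main client writes a plain-text file named “cleanup-requests.txt” (a made-up convention, not a Duplicati feature) listing the destination files it would have deleted; the privileged second client then removes them:

```python
# Sketch: privileged cleanup client that only deletes files explicitly listed in the
# request file, and only inside the destination folder.
import os

DEST = "/srv/duplicati-backups"          # hypothetical destination folder
request_file = os.path.join(DEST, "cleanup-requests.txt")

if os.path.exists(request_file):
    with open(request_file) as fh:
        for line in fh:
            name = os.path.basename(line.strip())   # never follow paths outside DEST
            path = os.path.join(DEST, name)
            if name and os.path.isfile(path):
                os.remove(path)
    os.remove(request_file)
```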

Issues include:

  • concurrency (don’t want cleanup & backup to overlap)
  • complex reporting
  • complex destination user access setup (backup = create only, maintenance = full)
  • yet another part that has to be secured and could potentially fail

Not to be paranoid, but …
Just to think about some issues and avoid traps and caveats.

Why don’t we trust it?
Security?
Availability?
Safety?

(Just thinking about some cloud data that has to move to another country :wink: (hint: a fruit and a very big country :slight_smile:))

Running any service to enhance Duplicati only makes sense when we can trust the server because it is our own.

That was my first idea too, but what if a trojan creates that file?

In a first step that is not necessary, because how often do the smart retention and the cleanup really need to run?
I would have no problem starting this process manually on a weekend (maybe just once per month), when I know that no backup job is running.

To avoid a worst case, there is almost no other way.
Only a backup chain with versioning would help (simply rsync’ing the back end to somewhere else is not a good idea).

If you have your own server / NAS, wouldn’t you secure it at least a little?

I have two NAS at different places.
…

Because most of the supported backends are storage on someone else’s computer. It’s one of the defining features of Duplicati in the white paper :slight_smile:

As a user who only uses backends that I do not control I might be biased, but I think adding local services that only work on some backends is a waste. It would divert resources from development of the central Duplicati server, and the resulting local service would complicate usage of Duplicati, because you suddenly may need to control your destination to gain features that may not be in the Duplicati server itself.

To me, not having to manage a VPS in the cloud is a blessing and provides me with much more stability because someone else actually gets paid to manage the backup destination.