Any updates on the plaintext password security problem?

Thanks!

That makes sense now. :slight_smile:

A question for you: I'd like to contribute back to this project a bit. Would it be OK if I wrote a documentation page or two in the manual’s articles (here) on setting up SSH keys and Duplicati for backups to an SFTP backend? Additionally, a security section describing how to configure the backend to prevent unwanted deletions? I ask because it took me a few rounds of reading and thinking to get automatic backups functioning in a clean, secure state.

I’d just write it up, then do a git pull request for review.

Perhaps @brad could clarify what help was asked for

My goal is simple: configure the SFTP destination to allow read/append access, but prevent overwriting or deleting existing data. If the source machine is compromised, the attacker still cannot compromise the backup.
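On the server side, the building block I have in mind is roughly this in sshd_config (a sketch; “backupuser” and the path are placeholders, and note that SFTP alone can’t enforce append-only, so that part needs filesystem attributes on top):

# Lock the backup account down to chrooted SFTP only
Match User backupuser
    ChrootDirectory /srv/backups/%u    # must be root-owned; the user only ever sees this tree
    ForceCommand internal-sftp         # no shell access
    AllowTcpForwarding no
    X11Forwarding no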

Contributions are warmly welcomed. Code, documentation, testing. Anything goes :slight_smile:

Any help to improve the documentation is welcome!

If you want to explain how to configure a specific operation in Duplicati, posting it in the #howto section of this forum is your best choice.

If you want to improve something in the manual (better explanation, fix a typo), just do a PR on GitHub.

If you want to add some technical information about how the software works that’s applicable to any backend, please add an article to the Articles section of the manual and do a PR.

My guess is that a step-by-step guide on how to configure SSH/SFTP and how to block tampering with backend files fits best in the #howto section of the forums. There are some great documents with comparable content in that category (example).

But you’re the author of your files; if you think that your docs relate to general use of Duplicati (instead of a specific part, like a particular backend), feel free to submit a PR on GitHub!

I’ll try to get something written up this week. Also, I like to play well in the sandbox, so I’ll start with the #howto approach in the forums.

As a heads-up, I never knew #howto even existed until you pointed it out. I went straight to the online documentation, exploring the manual first and the articles second. I’ve always considered forums a place where people solve issues, rather than a home for FAQ guides. I bet others have taken the same approach as me.

Maybe it would make sense to add links to howto articles in the documentation.

That way users of the documentation will be guided the right way, and we won’t fill the actual documentation with information that isn’t strictly Duplicati information.

How does this look? SFTP/SSH backups to a Linux server with added security

Looks good, thanks for the article!

Kenkendk, I’ve got a question. I set up a solution that forbids overwriting but allows appending. Last night I realized appending gives an attacker a vector: just append some garbage bits to the end of a .zip.aes Duplicati file, and Duplicati chokes.

Are you saying that appending rights aren’t needed, and that with no-auto-compact, each file on the source gets exactly one file on the destination?

No source file “becomes” a file on the destination, since files are split into chunks before being uploaded.

Any file “chunks” that need to be uploaded are bundled into zip files. Any zip file will always have a unique name.

Duplicati doesn’t understand the concept of appending to files. It can only put, delete, get, and list.

Ah, perfect. I can update the guide then. Before, I used an IN_CREATE trigger to set an append-only option, but I should be able to change it to an IN_CLOSE_WRITE trigger that simply sets no-append and no-delete.

I’ve updated the #howto so that it prevents modification/deletion of a file after Duplicati writes it.
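For anyone following along, the updated trigger boils down to something like this (a sketch assuming inotify-tools and an ext4-style filesystem; the path is an example):

#!/bin/sh
# Make each file immutable once Duplicati finishes writing it
# (close_write corresponds to IN_CLOSE_WRITE). Immutable means no append,
# no modification, no deletion -- only root can clear the flag again.
BACKUP_DIR=/srv/backups/duplicati
inotifywait -m -e close_write --format '%w%f' "$BACKUP_DIR" |
while read -r file; do
    chattr +i "$file"
done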

Now it’s at the point where you could essentially freely publicize a properly configured SSH account, including the password, and not worry. Only the Linux admin can delete the files, and if you used a good encryption password, only you can read them. I suppose people could write new files into your backup directory, but that can’t modify or affect your existing backup.

One final question. In the #howto write-up, I’m using GUI instructions, and I tell readers to both

  • Set backups to “keep all backups”
  • Then use additional options to set “keep-versions” to zero.

I’m assuming both aren’t needed? Does “keep all backups” simply set “keep-versions”? Or does keep-versions override whatever retention settings you specified before?

Hi all. If I set --no-auto-compact and --keep-versions=0 when using GPG, does Duplicati still need to have the private key to make a backup?
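Concretely, I mean a run like this (a sketch; the destination URL and paths are placeholders):

# GPG encryption, no compacting, keep all versions
duplicati-cli backup \
  ssh://backup.example.com//srv/backups/duplicati \
  /home/me/data \
  --encryption-module=gpg \
  --no-auto-compact=true \
  --keep-versions=0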

I like the Duplicati idea very much. I think GPG could be the solution to this issue. Maybe Duplicati could use the public key for everything except restoring, verification, and database recreation.

I have a couple of ideas about how this could be implemented. For instance, some kind of journalling or append-only mode, in which what’s already in the backup doesn’t need to be read again in order to add data to it.

Another solution would be using one key (GPG) for chunks, and another (maybe symmetric) for metadata. This second key is perhaps not so critical and could be stored in the Duplicati DB.
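To illustrate the property I’m after (a sketch; the recipient, paths, and file names are made up): chunk uploads would only ever need the public key, while metadata could use a passphrase Duplicati already stores locally:

# Chunks: public-key encryption -- no private key needed on the backup machine
gpg --encrypt --recipient backup@example.com \
    --output chunk-0001.gpg chunk-0001

# Metadata: symmetric encryption with a locally stored passphrase
gpg --symmetric --batch --pinentry-mode loopback \
    --passphrase-file /etc/duplicati/meta.key \
    --output manifest.gpg manifest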

I hope someone is interested in discussing this, as I think it would greatly improve Duplicati’s security and, therefore, its usability.

Regards.

At least, running Duplicati as a LaunchDaemon on macOS as root puts the database in ~root/.config/Duplicati, and that can be protected by giving it user-only POSIX permissions. An attacker would then need root access before getting to the backup passwords, which protects a bit against attacks (not against physical theft, though, but for that you should use an encrypted file system anyway).
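Something like this, for instance (standard POSIX permissions; the path is where the root daemon keeps its config on my system):

# Restrict the Duplicati config and database to root only
sudo chmod -R go-rwx /var/root/.config/Duplicati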

Understanding the configuration storage in ~/.config/Duplicati. Can I remove stuff? wandered into the password protection problem (and subtle bugs from tight permissions), but this is a much better spot.

Oh well, for easy reference, here is the LaunchDaemon file which is called /Library/LaunchDaemons/com.duplicati.server.plist on my system:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.duplicati.server</string>
    <key>ProgramArguments</key>
    <array>
        <string>/Applications/Duplicati.app/Contents/MacOS/duplicati-server</string>
        <string>--webservice-port=8200</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>LaunchOnlyOnce</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
</dict>
</plist>

This starts Duplicati at boot time. It can still be managed via Safari at http://localhost:8200/, or on whatever port you set the server to start with. As this runs as root, the config is created in /var/root/, which is root’s home directory.
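For completeness, loading and unloading it by hand works with standard launchctl:

# Load (and, with RunAtLoad, immediately start) the daemon without rebooting
sudo launchctl load /Library/LaunchDaemons/com.duplicati.server.plist
# Unload it again
sudo launchctl unload /Library/LaunchDaemons/com.duplicati.server.plist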

And while we’re linking threads, anybody with an answer to How do I tell an S3 backend the exact region to use? (sorry, could not resist).

This solution has worked for me for over a year now. But I ran into a new glitch and wondered if there is a setting or a trick to avoid it.

The problem: some bad .aes files got left behind on the remote server, and Duplicati is trying to remove them but can’t. The server is configured to forbid deletions, and Duplicati refuses to proceed until it can delete those files.

Why this happened: I stopped a backup midway through (due to a Duplicati bug where it said the file had negative bytes remaining). Clicking the “X” in the client didn’t stop anything; Duplicati got stuck, so I had to restart the server’s Duplicati service. That apparently left a bad .aes file behind.

What I’m wondering: is there a way to tell Duplicati not to attempt to delete half-completed or bad .aes files? The server forbids deletion, so it would be nice if Duplicati could somehow recognize “just leave this file alone, it’s useless”.

Not that I know of. You can probably prevent half-completed or corrupted files with a different backend. Many newer protocols require uploads to pass a hash (e.g. MD5, SHA1, SHA256) to verify data integrity. A hash mismatch produces an error rather than a save, meaning there’s either a good file at the destination or none at all.
I’m not 100% sure that Duplicati looks before deleting, but I think it looks afterwards to see if the file is gone…

Overview of cloud storage systems from rclone gives an overview but doesn’t really detail how hashes are used.

If you want to get really tricky with scripts, you might be able to get an rclone backend to do uploads in two stages: first an upload to a temporary file name that isn’t a duplicati- file, then a quick rename. Instead of running rclone directly, you’d run the atomic-upload script, which would do those two operations.
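Something along these lines (an untested sketch; the remote name and paths are examples, and Duplicati would be pointed at this script rather than at rclone itself):

#!/bin/sh
# Two-stage "atomic" upload: send the file under a temporary name that the
# never-delete rules can ignore, then rename it into place.
SRC="$1"                               # local file to upload
NAME="$(basename "$SRC")"
rclone copyto "$SRC" "remote:backups/tmp-$NAME" || exit 1
rclone moveto "remote:backups/tmp-$NAME" "remote:backups/$NAME"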

Corrupted backup files can’t just be flagged in the local database, as that database can be recreated from destination files. Duplicati is very careful about checking for missing and extra files on the destination, and relaxing that care seems unwise. One might ask for a backend file redesign that “remembers” the ignores, but the design hasn’t changed in ages, and changing it would mean changing many programs to pick up the clue.

Duplicati is not a great fit for a never-delete destination. If nothing else, inability to compact eats up space. Using an occasional maintenance window can solve this – and also let it clean up the unwanted bad files.

Does it forbid renames? If not, just rename the bad file to something that doesn’t use a duplicati- prefix.
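For example, from an interactive sftp session (assuming the server permits renames; the account and file name are made up):

sftp backupuser@backup.example.com
sftp> rename duplicati-b1234.dblock.zip.aes quarantined-b1234.dblock.zip.aes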