Any updates on the plaintext password security problem?

Looks good, thanks for the article!

Kenkendk, I’ve got a question. I set up a solution that forbids overwriting but allows appending. Last night I realized appending gives an attacker a vector of attack: just append some garbage bits to the end of a .zip.aes Duplicati file, and Duplicati chokes.

Are you saying that appending rights aren’t needed, and that with no-auto-compact, every 1 file on the source gets 1 file on the destination?

No source file “becomes” a file on the destination, since source files are split into chunks before upload.

Any file “chunks” that need to be uploaded are bundled into zip files. Any zip file will always have a unique name.

Duplicati doesn’t understand the concept of appending to files. It can only put, delete, get, and list.

Ah, perfect. I can update the guide then. Before, I used an IN_CREATE trigger to set an append option. But I should be able to change it to an IN_CLOSE_WRITE trigger so that it simply sets no-append and no-delete.

I’ve updated the #howto so that it prevents file modification/deletion after Duplicati writes one.
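For anyone following along, the trigger ends up looking roughly like this. This is only a sketch assuming incron is installed on the server; the watched path and user name are placeholders, not what the #howto actually uses:

# incrontab entry for root (edit with: incrontab -e)
# When Duplicati finishes writing a backup file (IN_CLOSE_WRITE), mark it
# immutable so the SSH account can no longer modify or delete it.
/home/backupuser/duplicati-backup IN_CLOSE_WRITE /usr/bin/chattr +i $@/$#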

Now it’s at the point where you could essentially freely publicize a properly configured SSH account, including the password, and not worry. Only the Linux admin can delete the files, and if you used a good encryption password, only you can read your files. I suppose people could write new files to the same directory of your backup, but that can’t modify or affect your existing backup.

One final question. In the How-To writeup, I’m using GUI instructions, and I instruct readers to both

  • Set backups to “keep all backups”
  • Then use additional options to set “keep-versions” to zero.

I’m assuming both aren’t needed? Does “keep all backups” just set “keep-versions” behind the scenes? Or does keep-versions trump whatever retention setting you specified before?

Hi all. If I set --no-auto-compact and --keep-versions=0 when using GPG, does Duplicati still need to have the private key to make a backup?

I like the Duplicati idea very much. I think GPG could be the solution to this issue. Maybe Duplicati could use the public key for everything except restoring, verification, and database recreation.

I have a couple of ideas about how this could be implemented. For instance, some kind of journalling or append-only mode, in which what’s in the backup doesn’t need to be read again in order to add data to the backup.

Another solution would be using one key (GPG) for chunks, and another (maybe symmetric) for metadata. This second key is perhaps not so critical and could be stored in the Duplicati DB.

I hope someone is interested in discussing this, as I think it would greatly improve Duplicati’s security and, therefore, its usability.

Regards.

At least when running Duplicati as a LaunchDaemon on macOS as root, the database ends up in ~root/.config/Duplicati, and that can be protected by giving it user-only POSIX permissions. That means an attacker needs root access before getting to the backup passwords, which protects a bit against attacks (not physical theft, though, but then you should use an encrypted file system anyway).
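A minimal sketch of those permissions, assuming root’s home is /var/root as in the LaunchDaemon setup described below (the path may differ on your system):

# restrict the config directory to root only
sudo chown -R root:wheel /var/root/.config/Duplicati
sudo chmod -R u=rwX,go= /var/root/.config/Duplicati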

The thread Understanding the configuration storage in ~/.config/Duplicati. Can I remove stuff? wandered into the password protection problem (and the subtle bugs that tight permissions can cause), but this is a much better spot for the discussion.

Oh well, for easy reference, here is the LaunchDaemon file which is called /Library/LaunchDaemons/com.duplicati.server.plist on my system:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.duplicati.server</string>
    <key>ProgramArguments</key>
    <array>
        <string>/Applications/Duplicati.app/Contents/MacOS/duplicati-server</string>
        <string>--webservice-port=8200</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>LaunchOnlyOnce</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
</dict>
</plist>

This starts Duplicati at boot time. It can still be managed via Safari at http://localhost:8200/, or on whatever port you set the server to start with. As this runs as root, the config is created in /var/root/, which is root’s home directory.
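If you create the plist by hand, something like the following should load it without a reboot (standard launchctl usage, run as an admin):

sudo launchctl load /Library/LaunchDaemons/com.duplicati.server.plist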

And while we’re linking threads, anybody with an answer to How do I tell an S3 backend the exact region to use? (sorry, could not resist).

This solution has worked for me for over a year now. But I ran into a new glitch and wondered if there is a setting or a trick to avoid it.

The problem: Some bad .aes files got left on the remote server end, and Duplicati is trying to remove them, but it can’t. The server is configured to forbid deletions. Duplicati refuses to proceed until it can delete those files.

Why this happened: I stopped a backup midway through (due to a Duplicati bug where it said the file had negative bytes remaining). Clicking the “X” in the client didn’t stop anything; Duplicati got stuck. So I had to restart the server’s Duplicati service. That seemingly left a bad .aes file behind.

What I’m wondering: Is there a way to tell Duplicati to not attempt to delete halfway-completed files or bad .aes files? The server forbids deletion, so it would be nice if Duplicati could somehow recognize “just leave this file alone, it’s a useless file.”

Not that I know of. You can probably prevent halfway-completed or corrupted files with a different backend. Many newer protocols require that uploads pass a hash (e.g. MD5, SHA-1, SHA-256) to verify data integrity. A hash mismatch gets an error, not a save, meaning there’s either a good file at the destination or none at all.
I’m not 100% sure that Duplicati looks before deleting, but I think it looks afterwards to see if the file is gone…
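To illustrate the idea with generic S3-style tooling (not how Duplicati itself uploads; the bucket and file names here are made up): the client sends the hash along with the object, and the server refuses to store the data if the hash doesn’t match.

# compute the MD5 of the backup volume and pass it with the upload
md5=$(openssl dgst -md5 -binary duplicati-b1234.dblock.zip.aes | base64)
aws s3api put-object --bucket my-backups --key duplicati-b1234.dblock.zip.aes \
  --body duplicati-b1234.dblock.zip.aes --content-md5 "$md5"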

Overview of cloud storage systems from rclone gives an overview but doesn’t really detail how the hashes are used.

If you want to get really tricky with scripts, you might be able to get an rclone backend to do uploads in two stages: first an upload to a temporary file name that isn’t a duplicati- file, then a quick rename. Instead of running rclone directly, you’d run an atomic-upload script that does those two operations.
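A sketch of what such a wrapper might look like (the rclone remote name, destination path, and script name are all assumptions):

#!/bin/bash
# atomic-upload: upload to a temporary name, then rename server-side, so a
# duplicati- file only appears at the destination once it is complete.
src="$1"                                    # local file produced by Duplicati
name=$(basename "$src")
tmp="myremote:backup/partial-$name"         # temporary, non-duplicati name
dest="myremote:backup/$name"                # final duplicati- name

rclone copyto "$src" "$tmp" && rclone moveto "$tmp" "$dest"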

Corrupted backup files can’t just be flagged in the local database, as that database can be recreated from destination files. Duplicati is very careful about checking for missing and extra files on the destination, and relaxing that care seems unwise. While one might ask for a backend file redesign to “remember” the ignored files, the design hasn’t changed in ages, and changing it would mean changing many programs to pick up the new convention.

Duplicati is not a great fit for a never-delete destination. If nothing else, the inability to compact eats up space. Using an occasional maintenance window can solve this, and also lets Duplicati clean up the unwanted bad files.

Does it forbid renames? If not, just rename the bad file to something that doesn’t use a duplicati- prefix.
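For example, on the server side (the filename here is just an illustration):

# rename the broken volume out of Duplicati's naming scheme
mv duplicati-b1234.dblock.zip.aes broken-b1234.dblock.zip.aes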

Hmm…I was afraid of that.

I’m forging a path to see how a never-delete destination would work. Seems fair to say that occasional manual maintenance is required, with either some kind of root access to the file system or deletions enabled for a bit.

I tried that on 6 files. Duplicati wasn’t happy with me and wouldn’t let me run repair or purge. I tried variations to fix it, but in the end I just moved the local database file, then let Duplicati recreate the database from scratch. That did the trick.

I’m not sure exactly what you saw, but did you get past “Duplicati refuses to proceed until it can delete those files” and perhaps hit the next issue? I’m surprised Recreate survived, but that’s good news if there’s ever a large push to achieve your original wish of just leaving partial junk on the destination. Still seems dangerous.

I’m trying to remember everything I saw, in the order I saw it. Originally I would get a message in my log file that Duplicati couldn’t delete a duplicati-…aes file (this goes back to when Duplicati goofed, said I had negative bytes remaining on the remote, got stuck, and wouldn’t proceed). My attempted fix for this error was to use root on the remote end to delete the listed file, then try again. It happened for several more files, and each time I deleted them one by one. Eventually I just deleted all the files for that date.

Then it gave an error that 6 files were missing. It gave me a Repair button, and I tried clicking that, but no luck; it kept saying 6 files were missing. I dug deeper and saw an error message saying I should turn on “rebuild-missing-dblock-files” or just purge. I did turn on “rebuild-missing-dblock-files”, but that didn’t work either. I never tried purge because I think it said it would delete files, and the remote end doesn’t allow deletions.

That’s when I decided to move my local database and do a database recreate. It took a day. I got a couple of local errors: “Remote file referenced as duplicati-…aes by duplicati-…aes, but not found in list, registering a missing remote file”. But then the log gave me a green checkbox saying the backup worked.

Ultimately I’m still happier with this approach. I’m just wary that if my machine is compromised, my backup could be compromised too, since Duplicati stores passwords in plaintext. The idea of a no-delete remote end helps put me at ease, even if I have to manually manage things from time to time. (I also have a completely separate rsync backup I run every few months on all my files in case my Duplicati backup goes bad.)

After 2 years, I’m throwing in the towel on my write-only chattr +i attribute solution. I’ve since moved to a different approach.

The difficulty
I just had too many times when a backup would die midstream (such as a loss of internet connection from Xfinity). My process was to watch my duplicati-monitoring email reports, see something was amiss, go to Duplicati’s web interface, look at logs, find the bad backup file that Duplicati wasn’t allowed to delete, manually delete it on the remote end, then re-run the backup. Sometimes it would trip up on another backup file, so I’d manually delete that one too, and so on. It was just too time consuming.

The root problem
Duplicati storing plaintext passwords isn’t the problem. Duplicati being unable to work cleanly with a “no delete” remote server isn’t really the problem either. The problem is the same issue standard ssh/rsync backup users have faced for years and years: the server needs some form of recycle bin or snapshot system to hold onto deleted files instead of really deleting them.

I first came up with a bad fix, and then found the right fix.

Bad fix
Use symlinks on the server: let Duplicati access symlinks, but not the actual files. The goal being that if an attacker deletes a backup file, only a symlink is removed; an admin still retains access to the actual file.

The idea was that Duplicati would start by creating a backup file on the server. Once the backup file is written, something on the server then moves the file to a protected directory which Duplicati/ssh account can’t access. Then the server exposes the file in the directory Duplicati sees via a symlink. This way Duplicati works the same as before.

The right fix
Use the ZFS file system on the remote server and take advantage of ZFS snapshots. Snapshots can be created at any point in time, older snapshots can be restored easily, and minimal extra data overhead is needed to create one. With ZFS, you can give the ssh user the right to create snapshots while preventing that user from deleting them. Having Duplicati create snapshots at the end of a backup is easy to script…
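Here is roughly how the delegation looks; the pool, dataset, and user names are assumptions based on my setup:

# one-time setup, run as root on the server
zfs create tank/duplicati/brad-computer
zfs allow backupuser snapshot tank/duplicati/brad-computer
# 'destroy' is deliberately not delegated, so the ssh user can create
# snapshots but cannot remove them; only root can.
# The ssh user (or a post-backup script) can then run:
zfs snapshot tank/duplicati/brad-computer@after-backup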

So far the biggest downside of ZFS is that it’s just a different way of thinking about file systems. I’ve tripped up several times already getting started; I have to think in terms of a pool, datasets, and mounting datasets. I had another small issue in that my 32-bit Raspberry Pi 2 doesn’t support ZFS, so I needed to get a 64-bit Raspberry Pi 4 and a 64-bit OS for it.

Overall, Duplicati + ZFS feels like a match made in heaven. Duplicati is exactly what I want in a client side backup program, and ZFS is what I want on a server side file system to protect my data.


I also use filesystem snapshots. My duplicati backups are stored on a Synology NAS with a btrfs filesystem. Regularly scheduled filesystem snapshots are a great way to add an extra layer of insurance.

Since this topic has a handful of pageviews, I’m adding my conclusion:

I have the SSH server’s filesystem running ZFS, and the SSH server creates ZFS snapshots after every backup. Only the root user on the server can delete snapshots. Getting at prior snapshots is incredibly easy: just check the .zfs directory, and ZFS presents a view of the files and directories at that snapshot’s point in time. This approach protects me from an attacker wiping out my files and then SSHing into the server and wiping out my backup as well.
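For example, assuming the dataset is mounted at /home/brad-computer (the path is from my setup and may differ):

ls /home/brad-computer/.zfs/snapshot/
# each entry is a read-only view of the whole dataset at that snapshot's point in time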

While this is complicated, Duplicati is only a client-side backup tool. Duplicati’s design deliberately avoids managing and protecting the server side, so we need to come up with our own solutions.

The following is a bash script which automatically creates ZFS snapshots on the server side for multiple Duplicati backups. I run it from a cron job every hour to detect finished backups and then act. (There is simply no good way to invoke a server-side script the moment a Duplicati job completes, so cron it is.)

#!/bin/bash

# Bash script to create ZFS snapshots on a Linux backend with a ZFS filesystem
# This script assumes ZFS datasets are found in /tank/duplicati/name_of_dataset
# and also that client duplicati backups are stored in /home/name_of_dataset/duplicati-backup

# Add locations as needed
declare -a arr=("brad-computer" "brad-laptop" "family-samba")

for directory in "${arr[@]}"
do
  # Check the directory to ensure all files are older than 60 minutes.  This
  # is done to ensure we don't snapshot at the same time a duplicati backup is occurring
  if [[ -z $(find "/home/$directory/duplicati-backup" -type f -mmin -60) ]]; then
    # Confirmed all files in this directory are at least 60 minutes old.
    # Get the most recent ZFS snapshot
    snapshot=$(zfs list -t snapshot -o name -s creation -r "tank/duplicati/$directory" |& tail -1)
    if [[ "$snapshot" == "no datasets available" ]]; then
      # No snapshot yet, perhaps this will be the first snapshot?  Check if there is
      # at least a ZFS dataset for it
      if [[ -n $(zfs list -t filesystem "tank/duplicati/$directory" 2> /dev/null) ]]; then
        # Create the first ZFS snapshot for this dataset
        /usr/sbin/zfs snapshot "tank/duplicati/$directory@$(date +"%b-%d-%y_%H:%M:%S")"
      fi
    else
      # Check if this directory has any changes since the last ZFS snapshot
      diffresult=$(zfs diff "$snapshot")
      if [[ -n $diffresult ]]; then
        # Changes detected.  Create a snapshot.
        /usr/sbin/zfs snapshot "tank/duplicati/$directory@$(date +"%b-%d-%y_%H:%M:%S")"
      fi
    fi
  fi
done
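For reference, the hourly cron entry looks something like this (the script path is just wherever you saved the script above):

# root's crontab (crontab -e as root)
0 * * * * /usr/local/bin/duplicati-zfs-snapshots.sh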

I’ve run this solution for a week and it’s working great.
