CrashPlan user, just have a few questions

I couldn’t find answers to these questions online. My setup: both my brother and I use Linux and have big hard drives in our homes. We back up our data to each other’s houses. So far I’m happy with the switch; I love when a tool gives me freedom if I want it. My questions:

  • I put in a username and password to connect to his SSH server. How are that username and password stored for reuse? Plain text in the sqlite database in ~/.config/Duplicati/? Does Duplicati generate some kind of authorized_keys entry to log in without a password? Do I need to set a Duplicati master password to protect my credentials for my brother’s server?

  • The sqlite settings database, is that password-protected?

  • Should I be backing up the folder ~/.config/Duplicati/ in case something happens to it?

  • What’s the cleanest system to email me prior backup status (especially if my Linux server dies and no backups take place for a while)?

Welcome to the forum @brad, thanks for checking out Duplicati!

I’m going to leave the sqlite questions to somebody who knows that area better than I do, but I can answer the last two questions.


There is nothing in the Duplicati folder that can’t be regenerated. That being said, if you lose that folder you’ll have to manually re-create the backup job (unless you’ve exported the config to a file somewhere, in which case you can just import it), and the local sqlite backup database will have to be rebuilt from the remote destination (which can take a while).

If you do choose to back up the ~/.config/Duplicati/ folder, note that you may get file access errors when trying to back up the .sqlite file used during the backup process, since that file is already open for writing by the backup job itself.
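If you go that route, a simple approach is to copy the folder with rsync and just run it while no backup job is active, so the in-use job database isn’t a problem. A minimal sketch (the destination path is only an example):

    # Copy Duplicati's config folder to a safe location.
    # Run this while no backup job is active, otherwise the in-use job
    # database (a .sqlite file) may fail to copy or end up inconsistent.
    rsync -av ~/.config/Duplicati/ /mnt/safe-location/duplicati-config/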


I’m not sure what you have in mind when you say “prior backup status”, but at present there are two tools I know of being developed to provide reporting functionality beyond the current one-email-per-job option.

(My apologies for any options I may have missed.)
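For reference, the current per-job email is configured through the --send-mail-* advanced options. A rough sketch, with placeholder values (double-check the option names and accepted values against the options list in your version):

    # Placeholder example of per-job email report options.
    --send-mail-url=smtps://smtp.example.com:465
    --send-mail-username=you@example.com
    --send-mail-password=your-smtp-password
    --send-mail-to=you@example.com
    --send-mail-level=Warning,Error,Fatal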

Thanks Jon!

Regarding the password being encrypted, as best I can tell this is NOT happening yet: Fix not revealing stored passwords from the UI · Issue #2024 · duplicati/duplicati · GitHub

It sounds like the best approach is to create backup-specific accounts that are limited to writing only to the backup destination folder. That way, if your server gets hacked, the hacker could theoretically get into the sqlite database, get the account info, log into the remote server, and delete your backups there, but that’s all the account would have rights to do on the remote server.

I’d like the extra security of knowing a hacker couldn’t also wipe the remote backup, but I don’t see how to avoid it. Duplicati needs to write to the remote end unattended; to do that, Duplicati needs saved credentials, and a hacker could easily impersonate Duplicati and use those saved credentials the same way to log into the remote server.

Unless I’m missing some big security insight here, I think a limited-access remote account is the best avenue available.
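If anyone wants a concrete starting point for that kind of limited account, something along these lines in the remote server’s /etc/ssh/sshd_config restricts a dedicated user to SFTP inside a single directory. The user name and path are just examples, and the chroot directory itself has to be owned by root:

    # Example sshd_config snippet: jail a dedicated backup user to SFTP only.
    # "dupbackup" and the path are examples; the ChrootDirectory must be
    # root-owned and not group/other writable, with a writable subdirectory
    # inside it for the backup files.
    Match User dupbackup
        ForceCommand internal-sftp
        ChrootDirectory /srv/backups/dupbackup
        AllowTcpForwarding no
        X11Forwarding no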

I don’t know about “best”, but it’s probably the safest way to go. :slight_smile:

I’m no expert, but I assume the OAuth process used for cloud services helps avoid this particular issue.

Another, safer approach would be to generate SSH keypairs and for the OP and his brother to authorize each other’s public keys.

The private key is not stored in the backup job, just the file path to it.

Here’s a great step-by-step guide I used for this:
http://andykdocs.de/development/Linux/2013-01-17+Rsync+over+SSH+with+Key+Authentication
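For anyone else following along, the client-side key setup is basically this (the hostname, user, and file names are examples):

    # Generate a keypair for Duplicati to use. No passphrase here so backups
    # can run unattended; protect the private key with file permissions instead.
    ssh-keygen -t ed25519 -f ~/.ssh/duplicati_backup -N ""

    # Install the public key on the remote server so key login is accepted.
    ssh-copy-id -i ~/.ssh/duplicati_backup.pub backupuser@brothers-server.example

    # Then point the backup job's SFTP destination at the private key file
    # (the ssh-keyfile advanced option, if I remember the name right).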


Thank you, yes - keys would be more secure than passwords.

I look forward to checking out that link in detail!

I set up the SSH keys tonight. It worked pretty well!

The only catch is that it still insisted I enter a password. So I entered a password like “a”, and then it allowed the test connection to proceed.

I noticed that about the password too. It should probably be filed as a bug, but it doesn’t really impact the functionality; it’s just a hiccup in the UX.

We noticed the speed of backups isn’t…ideal.

My server is a little UDOO board (an x86 take on the Raspberry Pi, with a processor found in low-end laptops). My brother’s server is much beefier. In a little over 12 hours, mine backed up 90 GB and his server backed up about 400 GB.

I have about 1 TB total; he has about 3 TB. So we assume this backup will take a few days.

Is it normal to take this long? Or should it be going faster?

Due to the encryption, hashing, and compression, performance can vary greatly depending on hardware and settings. A large number of files can also affect the .sqlite database performance.

There are some hashing improvements in the testing stage, and starting with the 2.0.2.8 canary there’s an improved hashing function that might help.

You can also try reducing your compression level with --zip-compression-level=1 (or even 0). Choosing an appropriate compression format can help as well (for example, zip is generally less taxing than 7z).
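If it helps, those go in as advanced options on the job; something like the following, assuming I have the option names right (the values are just starting points to experiment with):

    # Trade compression/hashing cost for speed. Best decided before the first
    # backup; changing the hash algorithms on an existing job is not advisable.
    --zip-compression-level=1
    --block-hash-algorithm=MD5
    --file-hash-algorithm=MD5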

The database functionality is slowly being reviewed, but so far nothing major has been improved in the backup process (though there are some great advances in restore performance).

Removing compression and changing the hash algorithm sped things up for this user by 33% and 15% respectively…

Is the bottleneck your Internet circuit?

If it is, you could seed the initial backup using a local USB drive. Once that backup completes, take the USB drive to your brother’s house and copy the backup files to the ultimate backup destination. Then reconfigure the destination in your backup job to point to that location.
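Roughly, the seeding goes like this (paths are examples; the last step is just re-pointing the job’s destination):

    # 1. Run the first backup with the destination set to a folder on the USB drive.
    # 2. At the remote house, copy the backup files into the real destination folder:
    rsync -av /mnt/usb/duplicati-seed/ /srv/backups/brad/
    # 3. Change the backup job's destination to the remote folder (e.g. the SFTP
    #    path on the brother's server) and run the backup again.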

I did this when backing up my dad’s computer to my home NAS for the first time. Made a HUGE difference as the upload speed on his internet circuit is not that great.

@brad, drwtsn32’s post reminded me that the initial backup is the worst in terms of performance. Once completed, future backups should be MUCH faster due to the deduplication process cutting down on the amount of data needing to be processed.

The computers are sitting right next to each other, connected through a gigabit Ethernet link (with a decent switch in between that can sustain 1 Gbps transfers). I personally have my backup on a USB 3.0 6 TB drive, so that’s not the issue either. Unfortunately, it seems that Duplicati is simply slow with big data plus a low-end processor. Even a high-end processor isn’t backing up data as fast as we would like, though we expect his 3 TB backup to complete in a little over 48 hours total.

Yes, subsequent backups will go faster once the initial seed is done.

Is it really this slow?

So after about 6 days, it backed up roughly 750 GB out of 850 GB. I took my computer back home, set it back up, and now…it needs to revalidate the backup? I hooked it up, it’s counting the files, and the file count and total size are decrementing. But it doesn’t say why (though my network monitor indicates it doesn’t appear to be actually sending all the data again).

Does it revalidate the backup? At this pace it has validated 15 GB in 1 hour, so would I need about 2.5 days to validate it again?

Is there a better way to see exactly what the program is doing at any given time?

It’s most likely rescanning your files to identify changes that need to be backed up. Since the first 750 GB was already backed up, it likely won’t find anything.

There are some advanced parameters you can set to do things like only check file timestamps, lower compression levels, and use different hashing algorithms, which can speed things up.
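For example, the timestamp-only check is an advanced option along these lines (assuming I have the name right):

    # Decide whether a file changed by timestamp alone, skipping size/metadata
    # checks. Faster scans, but changes that keep the old timestamp can be missed.
    --check-filetime-only=true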

The main menu “Show Log” option has a “Live” selector offering various detail levels up to “Profiling”. Note that there appears to be an issue with live logs not displaying correctly in Chrome with at least the 2.0.2.12 and 2.0.2.13 canary versions.