Backup Plan Suggestions

Hey All,

Just started playing with Duplicati, but so far, I’m really liking it. A couple minor annoyances, but nothing I can’t work around. My biggest question relates to finding the preferred method for making both local and off-site backups.

Currently I have a desktop system that’s my daily driver, and I’ve been working to bring an unRAID server online (well, it’s online, but I’m constantly working on it to add features). In my testing of Duplicati so far, I’ve just been backing up my user files from my desktop to a share on unRAID. All good. But eventually, I’m going to want Duplicati backing up these files, as well as the other data on my unRAID server, to a remote location.

Given this context, I see two options:

  1. Create a second backup set on my desktop that backs up to the remote store, so I have two backup jobs running locally, plus create a backup set on unRAID to back up all the other stuff (i.e. both instances manage their own remote backups).

  2. Have the unRAID Duplicati instance back up the Duplicati data that my desktop has created on the server (i.e. Desktop -> Local server -> Remote).

Talking myself through it now, it seems like option 1 would be the better way to go; it just means some increased management overhead. But I’d be curious what the community has done.

Hello @Caldorian, welcome to the forum!

I use SFTP on my unRAID box as a backup destination for both local and off-site machines. I’ve tested with a NextCloud Docker container but haven’t found a driving need to shift to it. I also have a Minio Docker container running but never got around to testing with it.

As for setting up one source to go to multiple destinations, the easiest method is your option 1 - two jobs with the same sources going to different destinations. A potential bonus of this design is that you can use different schedules and retention policies if you have storage or bandwidth limitations in one place vs. the other.
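If you ever drive this from scripts rather than the GUI, the same idea carries over to duplicati-cli. Here’s a minimal sketch (not something from this thread) using a small Python wrapper; the source path, destination URLs, passphrase, and retention strings are all placeholders, so check the option names against your installed Duplicati version:

```python
#!/usr/bin/env python3
"""Sketch: the same source backed up by two independent Duplicati jobs,
one local and one off-site, each with its own retention policy.
All values below are placeholders."""
import subprocess

SOURCE = "/home/me"  # whatever you back up from your desktop

JOBS = [
    # (destination URL, retention policy string)
    ("ssh://unraid//mnt/user/backups/desktop", "1W:1D,4W:1W,12M:1M"),  # local unRAID share over SFTP
    ("b2://my-bucket/desktop", "4W:1W,12M:1M"),                        # thinner retention off-site
]

for destination, retention in JOBS:
    # duplicati-cli ships with Duplicati; verify the option names for your version.
    subprocess.run(
        [
            "duplicati-cli", "backup", destination, SOURCE,
            "--passphrase=CHANGE-ME",
            f"--retention-policy={retention}",
        ],
        check=True,  # stop if a job fails
    )
```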

Another option a few people have used is local backups (either on the local machine or to a local server) and then synchronizing that backup to a remote location with something like rsync or Syncthing (which does have a Docker container); there’s a minimal sketch of this after the list below.

Note that this option has a few side effects to keep in mind before choosing it:

  1. If you want to restore ANYTHING from the remote sync, you pretty much need to get ALL the remote files to somewhere that Duplicati can see. (Alternatively, you could run the Python-based restore script at the remote sync location.)

  2. Be aware that if something happens to your “local destination” that causes files to be deleted, those deletes will be synchronized to the remote location as well.
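Here’s that minimal sketch of the “back up locally, then sync it off-site” approach, using rsync via a small Python wrapper. The paths and remote host are made up for illustration, and note that --delete is exactly what produces side effect 2 above: anything removed from the local destination is removed from the remote copy too.

```python
#!/usr/bin/env python3
"""Sketch: mirror the local Duplicati destination to a remote host with rsync.
Paths and host are placeholders. The --delete flag keeps the mirror exact,
which is the side effect in point 2: local deletions propagate remotely."""
import subprocess

LOCAL_DESTINATION = "/mnt/user/backups/desktop/"                # where Duplicati writes its files
REMOTE_TARGET = "backup@offsite.example.com:/backups/desktop/"  # made-up remote host

subprocess.run(
    ["rsync", "-av", "--delete", LOCAL_DESTINATION, REMOTE_TARGET],
    check=True,
)
```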

Hi @Caldorian,

Just as @JonMikelV described, I always use 2 jobs for the same set of data and back up to a local NAS and off-site.

The overhead you mentioned is (at least in my case) minimal: I set up the local job, test it, and once it is running fine I just copy the thing (export the job and import it as a new job in Duplicati, adjusting the destination and perhaps the job time and frequency) and all is set.

I’ve done this for a while now with around 6 jobs and think this is the best option (unless Duplicati gains support for more than one destination per job ;-)).
So you end up with 2 independent backups - you never know :wink:

The only other thing I’ve done (and others as well) is, for the big off-site jobs, to deselect some files and directories on the first run and add them back over time to minimize the initial run time (depending on your connection), since the local jobs will wait till the off-site job completes.
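For what it’s worth, the same staged approach can also be expressed with Duplicati’s exclude filters if you drive it from the command line: start the off-site job with the largest directories excluded, then shrink the exclusion list on later runs. This is only a rough, hypothetical sketch; the paths and URL are placeholders and the filter syntax should be checked against your Duplicati version.

```python
#!/usr/bin/env python3
"""Hypothetical sketch of a staged first off-site run: back up with the
largest directories excluded, then remove entries from EXCLUDES on later
runs once the initial upload has caught up. All values are placeholders."""
import subprocess

SOURCE = "/mnt/user"                   # e.g. everything on the unRAID server
DESTINATION = "b2://my-bucket/unraid"  # off-site backend of your choice

# Directories to leave out of the first run(s); shrink this list over time.
EXCLUDES = ["/mnt/user/media/", "/mnt/user/isos/"]

cmd = ["duplicati-cli", "backup", DESTINATION, SOURCE, "--passphrase=CHANGE-ME"]
cmd += [f"--exclude={path}" for path in EXCLUDES]  # check the filter syntax for your version

subprocess.run(cmd, check=True)
```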

Thanks guys. As I was typing my post, I had pretty much convinced myself that using dual sets on my daily system was the right option. If for nothing else, so that I could restore my daily system directly in case of catastrophe, rather than having to restore the unRAID server first and then restore from there.
