Local+offsite (B2) backup destinations

Howdy!

I’m just starting to monkey around with Duplicati, and one question I have is this: What is the best way to implement a backup scheme that includes local and off-site storage?

I realize that Duplicati supports both local and B2-based storage, but I don’t think it makes sense to run every backup operation twice (and I also expect, although I haven’t confirmed, that operating off of a live backup on B2 would use more transactions than rclone does simply copying out the new/updated files).

This thread discusses the topic to some extent, but doesn’t entirely cover the restore options that would be available.

If I were to back up to a local path first and then mirror the data to B2 (rclone, perhaps?), I would get the benefit of restoring locally when possible, with a full disaster recovery option still available off-site. If I did experience a total loss of my local storage, would it be possible to use Duplicati to restore selected files directly from B2, or would I need to download the entirety of my data to local storage before I could attempt a restore?
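
For what it’s worth, here’s roughly the mirroring step I had in mind; a minimal sketch, assuming an rclone remote named `b2`, with the bucket and paths as placeholders:

```bash
# Step 1: run the normal Duplicati backup job against local storage
# (e.g. /mnt/backups/duplicati as the destination).
# Step 2: mirror the finished backup files to B2.
# "b2:my-backups/duplicati" is a placeholder remote/bucket/prefix.
rclone sync /mnt/backups/duplicati b2:my-backups/duplicati --verbose
```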

With a full mirror of your local backup you can simply point any Duplicati instance at your B2 mirror and start restoring from it. It’ll take a minute or two to build a local database, but then you can start restoring just the files you need.
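
In case a sketch helps, a direct restore from the mirror looks something like this on the command line (bucket, credentials, file path, and passphrase are all placeholders; double-check the B2 option names against the Duplicati docs):

```bash
# Restore a single file straight from the B2 mirror. Duplicati
# rebuilds a local database from the remote files first, then
# downloads only the volumes needed for the requested file.
Duplicati.CommandLine.exe restore "b2://my-backups/duplicati" \
  "*/Documents/important.docx" \
  --b2-accountid=ACCOUNT_ID \
  --b2-applicationkey=APPLICATION_KEY \
  --passphrase="backup-passphrase" \
  --restore-path="/tmp/restored"
```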

Mirroring a local backup to a cloud destination is probably the cheapest approach too, if you’re billed on downloads as well as storage, since Duplicati’s routine verification downloads then only touch the local copy.

The downside to mirroring the local backup is that if the local backup were somehow corrupted without Duplicati noticing, that corruption would be replicated to the off-site backup.

However, Duplicati validates its backups frequently, and even corrupted backups can usually be mostly restored, so it’s not a huge risk.

Oh, one more thing: if you never download the B2 mirror files to verify their integrity or run restore tests on them, you don’t really know whether the B2 backup will actually work.
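
If you want a middle ground short of a full restore, Duplicati’s command-line `test` command can download and verify a sample of the remote volumes; roughly (same placeholder credentials as above):

```bash
# Download 10 sample volumes from the mirror and verify them
# against the recorded hashes; use a larger number (or "all")
# for a more thorough check, at the cost of more downloads.
Duplicati.CommandLine.exe test "b2://my-backups/duplicati" 10 \
  --b2-accountid=ACCOUNT_ID \
  --b2-applicationkey=APPLICATION_KEY \
  --passphrase="backup-passphrase"
```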

Awesome! It looked like this would work, and I’ll do a bit of real-world testing; I just wanted to make sure there weren’t any big gotchas (like using different file structures for different backends). Thanks!

I’m willing to chance this one, as the files are hashed as they’re uploaded (and B2 then verifies the hash). Bitrot or other corruption on B2 after upload is not impossible, but it’s within my acceptable risk threshold.

B2 seems to have their end covered, so I wouldn’t worry either. But in the spirit of Duplicati’s trust-no-one architecture, it ought to be mentioned 🙂

Indeed. I’ll do some smaller tests, but ultimately, once I go live, doing a full restore purely for verification purposes will be impractical due to the download costs.

As a side note, they have a new partnership that offers compute resources with no bandwidth/transfer charges for content moved out of B2. This might make it feasible to test the entire backup without incurring download costs. I haven’t explored the offering enough to know whether there are any gotchas, but it seems like it would be quite possible to either test the files directly or hash everything and compare against local storage. A project for another day.
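
If I get that far, I imagine the hash-everything approach could be as simple as something like this; a sketch, assuming the same placeholder remote/bucket, with the re-hashing run from the partner compute instance:

```bash
# On the compute instance (free transfer from B2): download and
# re-hash every remote file, rather than trusting B2's stored hashes.
rclone hashsum sha1 --download b2:my-backups/duplicati > b2-sha1.txt

# At home: hash the local backup the same way, then diff the lists.
rclone hashsum sha1 /mnt/backups/duplicati > local-sha1.txt
diff <(sort b2-sha1.txt) <(sort local-sha1.txt)
```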

If you try this out please consider keeping some notes on how you did it as I think it might make a useful How-To topic!

Definitely! I’m going to be winding down a couple of my public servers in a month or so, and I’ll see if I can figure out how to grab a backup and bring it home (cheaper than downloading) at that time. If so, the next logical step would be to use that same connection to verify a backup within B2.

(Or, if I have some time to kill, I’ll poke at it before then. That’s about 50/50 these days.)