Local and Offsite redundancy

I’m trying not to duplicate posts here, nor am I trying to reinvent the wheel :slight_smile:

I have a use case of wanting backups local to my environment (got this) that are also further backed up “to the cloud” (looking at Backblaze B2) for offsite redundancy. Local for speed and easy restores on day-to-day ops kinds of things, and remote for “the house burned down” kind of stuff. There have been several discussions in the forum concerning this kind of thing, but I haven’t seen a particular “answer” - please forgive me if I missed it.

As I see it, I currently have a couple of options:

  1. Configure multiple backup jobs on the different clients to back up both locally and remotely
    a) this has the issue of duplicate work (compression/encryption/bandwidth/configuration)
  2. Backup local and then somehow sync the backups offsite
    a) this has the open question of “is this safe?” How resilient is the format if I am copying files offsite and the backup job runs locally again?
    b) I’m currently using Minio as my on-site “host” program - easier to configure in my environment than SFTP, etc., which would also work. Minio offers a “mirror” command to copy a bucket offsite and keep it mirrored (see the sketch after this list) - has anybody tried that?
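
For reference, here’s roughly what I’d be running with the Minio client (`mc`) - a sketch only; the alias names, bucket name, B2 endpoint/region, and credentials are all placeholders I’d fill in for my setup:

```
# Placeholder aliases for the on-site MinIO server and B2's S3-compatible endpoint
mc alias set localminio http://minio.lan:9000 LOCAL_ACCESS_KEY LOCAL_SECRET_KEY
mc alias set b2 https://s3.us-west-004.backblazeb2.com B2_KEY_ID B2_APP_KEY

# One-shot mirror of the local backup bucket to B2:
# --overwrite updates changed objects, --remove deletes objects gone locally
mc mirror --overwrite --remove localminio/duplicati b2/duplicati
```

Running that from cron after the local backup job finishes (rather than continuously with `--watch`) would at least reduce the odds of copying files mid-backup.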

Any other ideas or pointers?

Thanks! (and obviously, thanks for a great product!)

Bruce

I have the exact same situation that you have. I believe the general consensus on #2 was that it would be difficult to ensure the backup file block set was not in use when you are making your duplicate copy offsite.

I have opted for #1, and have 2 separate Duplicati jobs (local and remote) on all my systems. It’s been working great so far (several months of use). I believe your big bottleneck would be upload bandwidth, not encryption/compression. Configuration is a one-time effort, so the overhead there is negligible.
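
In case it helps, the two jobs boil down to something like this (a sketch assuming Duplicati’s command-line interface - I actually set mine up through the GUI; bucket names, paths, and the passphrase are placeholders):

```
# Job 1: frequent local backup to the on-site MinIO bucket
# (MinIO endpoint passed as URL options; credentials omitted for brevity)
duplicati-cli backup "s3://duplicati-local/pc1?s3-server-name=minio.lan:9000&use-ssl=false" \
  /home/bruce --passphrase=SECRET

# Job 2: same source backed up independently to Backblaze B2
duplicati-cli backup "b2://duplicati-offsite/pc1" \
  /home/bruce --passphrase=SECRET
```

Because each job encrypts and uploads independently, you do pay the compression/encryption cost twice - but as I said, upload bandwidth is the real bottleneck.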

Yeah… probably the most reliable way - and for backups, reliability counts :slight_smile:

I looked into it a little but haven’t gotten around to trying it.

While I agree with @handyguy that a block set in use when mirroring offsite would be a “bad thing”, the reality to me is that if that were to happen, I would expect only a single archive file to be affected.

As I said, I haven’t even tried any of this yet and I could be completely wrong, but the way I see it, the affected file would most likely be either the newest backup or an old one being compacted. In either case, for my data use (strictly non-business), it’s very likely I’d have slightly older versions that would be “good enough”, or much newer ones such that losing an old version wouldn’t be too painful.

Obviously the amount of what’s potentially lost varies by dblock (archive) and block size - and the pain of losing data would vary depending on what you’re backing up.
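
If a mirrored dblock did turn out to be bad, Duplicati’s `affected` command should show which source files and backup versions depend on it (a sketch - the storage URL and the dblock filename are placeholders):

```
# List what would be lost if this particular archive file were damaged
duplicati-cli affected "b2://duplicati-offsite/pc1" duplicati-bXXXXXXXX.dblock.zip.aes
```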

Again, for me, the relatively low chance of losing my onsite backup, combined with the relatively low chance of the offsite mirror being out of sync and the relatively low chance of a single offsite archive file being bad, makes this a relatively “safe bet”. At least compared to onsite only (or no backup at all).

Then again, have we ever confirmed whether the Minio mirror takes in-use files into account and simply doesn’t sync them until the file is closed? :thinking:

Of course some huge benefits of two backup jobs include:

  • if an offsite restore is needed you don’t need to figure out how to connect to the offsite mirror or bring the files local
  • you can schedule different backup AND retention frequencies (such as often & short for local backup vs. less-often & longer for offsite - see the sketch after this list)
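
As a sketch of that second point (schedule times and retention strings are purely illustrative), driving the two jobs from cron with Duplicati’s `--retention-policy` option might look like:

```
# Local job: nightly at 01:00 - keep daily versions for a week, weekly for a month
0 1 * * * duplicati-cli backup "s3://duplicati-local/pc1?s3-server-name=minio.lan:9000&use-ssl=false" /home/bruce --retention-policy="1W:1D,4W:1W"

# Offsite job: Sundays at 03:00 - keep fewer, longer-lived versions
0 3 * * 0 duplicati-cli backup "b2://duplicati-offsite/pc1" /home/bruce --retention-policy="4W:1W,12M:1M"
```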

I agree that this is a big benefit… some things I might not rank as important enough to include in (paid) offsite storage, and regardless of that, a different frequency sure makes sense.
