The idea is simply to allow two destinations in a single job. The two backup filestores would be kept 100% identical. As long as nothing goes wrong, it's a quick task to check that they are in sync and to push the changes to each location: just upload/delete each new or changed file from the temp directory to both targets as you go, running the hashing and compression only once.
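Here's a minimal sketch of that single-pass idea in Python. Everything in it is made up for illustration (`LocalTarget`, `compress_and_hash`, the one-blob-per-file layout), not any real backup tool's internals; the point is only that the expensive local work runs once while the upload fans out:

```python
import gzip
import hashlib
import shutil
from pathlib import Path

class LocalTarget:
    """Stand-in for one backup destination (a local dir here; could be SFTP, S3, ...)."""
    def __init__(self, root: Path):
        self.root = root
        self.root.mkdir(parents=True, exist_ok=True)

    def upload(self, blob: Path, digest: str) -> None:
        shutil.copy2(blob, self.root / blob.name)
        (self.root / (blob.name + ".sha256")).write_text(digest)

def compress_and_hash(src: Path, tmp_dir: Path) -> tuple[Path, str]:
    """Do the expensive local work (hashing + compressing) exactly once per file."""
    out = tmp_dir / (src.name + ".gz")
    h = hashlib.sha256()
    with open(src, "rb") as f, gzip.open(out, "wb") as g:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
            g.write(chunk)
    return out, h.hexdigest()

def backup_changed_files(changed: list[Path], tmp_dir: Path,
                         targets: list[LocalTarget]) -> None:
    """Fan each prepared blob out to every target as you go."""
    for src in changed:
        blob, digest = compress_and_hash(src, tmp_dir)
        for target in targets:        # one or two destinations, same blob either way
            target.upload(blob, digest)
        blob.unlink()                 # temp copy no longer needed
```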
IF something goes wrong, i.e. the two targets are not identical when the job starts (or when it finishes…), you could either issue a warning and fail, or offer to sync. Whether to sync target A to B or B to A would be decided as follows (a rough sketch of this decision logic follows the list):
- if one target is corrupt -> copy the non-corrupt target over the corrupt one
- if both targets are OK but don't have the same last job run -> copy the newer over the older
- if both are corrupt… well, that's obvious
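A minimal sketch of that decision table, assuming a hypothetical `TargetState` produced by whatever verification pass the tool runs before and after the job:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class TargetState:
    """Hypothetical result of verifying one target before/after a job."""
    name: str
    corrupt: bool
    last_run: datetime

def choose_sync_direction(a: TargetState, b: TargetState):
    """Return (source, dest) to copy, None if already in sync; raise if hopeless."""
    if a.corrupt and b.corrupt:
        # both corrupt... well, that's obvious
        raise RuntimeError("both targets corrupt; restore from the source data")
    if a.corrupt:
        return b, a               # copy the non-corrupt target over the corrupt one
    if b.corrupt:
        return a, b
    if a.last_run == b.last_run:
        return None               # same last job run: nothing to sync
    # both OK but one missed a run: copy the newer over the older
    return (a, b) if a.last_run > b.last_run else (b, a)
```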
What I see from testing is that my backup jobs run for many hours each day, even when only doing incremental updates rather than full backups; a full backup takes over a week. So what I'm VERY interested to know is how much time could be saved doing it this way instead of running each job twice, which currently exactly doubles the total time compared to running a single destination.
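As a back-of-envelope estimate (my assumption, not a measurement): split the per-destination job time into local work $T_{\text{local}}$ (scan, hash, compress) and transfer $T_{\text{up}}$. Then:

```latex
T_{\text{twice}} = 2\,(T_{\text{local}} + T_{\text{up}}),
\qquad
T_{\text{dual}} = T_{\text{local}} + 2\,T_{\text{up}}
\quad\Rightarrow\quad
T_{\text{twice}} - T_{\text{dual}} = T_{\text{local}}
```

So the saving is roughly the entire local-work share of the job; and if the two uploads can run in parallel, $T_{\text{dual}}$ approaches $T_{\text{local}} + T_{\text{up}}$, i.e. close to single-destination time.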
A really cool thing about this approach is that you could choose to run some backup jobs to one location and some to two, deciding per job when setting it up, depending on your type of data, its size and so on. Yes, in two-target jobs the block sizes, retention, frequency of running the job and so on would have to be identical across both targets, but that sounds extremely OK to me (a hypothetical config sketch follows).
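Purely hypothetical config sketch (not any real tool's format) of what per-job destination choice could look like; keeping the shared settings at the job level is what forces them to be identical across both targets:

```python
jobs = {
    "photos": {                      # big, precious: two destinations
        "targets": ["nas", "cloud"],
        "block_size": "1MiB",        # shared settings live at job level,
        "retention": "90d",          # so both targets stay identical by construction
        "schedule": "daily",
    },
    "scratch": {                     # easily recreated: one destination is enough
        "targets": ["nas"],
        "block_size": "4MiB",
        "retention": "7d",
        "schedule": "weekly",
    },
}
```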