How to duplicate a backup with its history and so on

I have a number of backups, each with about 2 years of version history.

Now I want to “duplicate” these backups into the cloud.

So I have Backup A with some sources and a local DIR as target, and I want a SECOND backup with the same sources but with a cloud target. BUT: as a first step, I want to copy the local backup files into the cloud.

So after this step, a Backup A-1 should exist with the history and so on, with a local DIR as target, and a Backup A-2 should exist with the same history and so on, with a cloud DIR as target.

Same for Backup B, C, D, E, F, …

What is the easiest way to do this in Duplicati for 8 to 10 backup jobs?

Personally I would use rclone for this and have it regularly sync your primary backup target to the cloud. Or if you are targeting a Synology NAS, you can use their “Cloud Sync” package.

This is what I do and it works great. If a disaster happens I can simply repoint the backup job’s “back end” to the secondary cloud copy and do restores. I find this better than doubling up the backup jobs in Duplicati itself (one for each target).
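A minimal sketch of that setup, assuming a remote named `cloud` has already been configured with `rclone config` (the remote name and all paths here are placeholders, not your actual layout):

```shell
# One-way sync of a local Duplicati target folder to the cloud remote.
# "cloud" is a placeholder remote created earlier via: rclone config
# Keep --dry-run to preview what would change; drop it for the real run.
rclone sync /backups/jobA "cloud:duplicati/jobA" --dry-run

# Repeat (or loop) for the other jobs:
for job in jobB jobC jobD; do
  rclone sync "/backups/$job" "cloud:duplicati/$job" --dry-run
done
```

Run on a schedule (cron, Task Scheduler) so the cloud copy stays current after each local backup finishes.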


Good point. But the cloud is OneDrive (Office 365 Family => 6 x 1 TB) and I think there is no rsync for it :frowning:

My plan for now is:

  • A batch file with calls to BackendTool.exe PUT to upload all the Duplicati files to a temporary folder on OneDrive (this works).
  • Export the backup configuration.
  • Import the backup configuration under a different name, change the target to OneDrive, then start one backup and cancel it so that a local database is created.
  • Delete the local database of the new job.
  • Delete the files in OneDrive and move the temporary files to the right directory.
  • Copy the database of the old job and rename it to the database name of the new job.

I think this should work.
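The upload step might look roughly like this, if I read the backend tool's usage correctly. This is a hypothetical sketch: the target URL format, folder names, and `authid` are placeholders; the UI's “Export → As Command-line” shows the exact target URL for a job.

```shell
# Hypothetical sketch of the PUT loop, run from the Duplicati install folder.
# The onedrivev2 URL and authid below are placeholders, not working values.
TARGET="onedrivev2://Backups/jobA-temp?authid=PLACEHOLDER"

# Upload every dblock/dindex/dlist file of the local backup.
for f in /backups/jobA/duplicati-*; do
  Duplicati.CommandLine.BackendTool.exe put "$TARGET" "$f"
done
```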

The suggestion was rclone, not rsync. Even if you prefer independent backups (rather than a local backup plus an offsite clone), the initial file copy might be easier and faster with rclone, or with Cyberduck if you prefer a GUI. For database cloning, the easier path is to copy the original database to where the cloned job expects it, after the remote is set up but before the first backup runs.

The database path is visible on the job's Database page before the database exists, so there is no need to find it with the run/cancel/delete-DB sequence. I guess that sequence is where the temporary folder comes in: it ensures the backup run done just to learn the DB name does no damage.

  1. Export and import the config, modifying the target to the OneDrive folder
  2. Copy the local files to the new remote folder using something like rclone or Cyberduck
  3. Copy the old database to the path shown on the new job’s Database page
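Step 3 is just a file copy. A demo sketch using a throwaway directory (the real databases live under Duplicati's data folder, e.g. `~/.config/Duplicati` on Linux or `%LOCALAPPDATA%\Duplicati` on Windows, and the filenames here are stand-ins):

```shell
#!/bin/sh
set -eu

# Throwaway directory standing in for Duplicati's data folder.
DATA_DIR="$(mktemp -d)"

# Stand-in for the old job's existing database file.
printf 'fake sqlite content' > "$DATA_DIR/OLDJOB.sqlite"

# Copy it to the filename the NEW job expects,
# as shown on the new job's Database page in the UI.
cp "$DATA_DIR/OLDJOB.sqlite" "$DATA_DIR/NEWJOB.sqlite"
```

Do this only while Duplicati is not running a backup, so the database is not being written to mid-copy.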

I think so too, but you were looking for the easiest way, so I’m suggesting some simplifications of the plan. It means more work in the short term, but one way to avoid maintaining duplicate job settings afterwards would be to use a parameters-file.

Or, given the rclone misunderstanding, perhaps you’ll prefer the local-backup-plus-files-clone plan instead. That might be even easier to set up and maintain, but it offers less redundancy than two fully independent backups.


Oh, sorry, I read it as rsync. OK, this sounds good. I have installed Cyberduck and the upload is running.

And thank you for the advice about the parameters-file, and for pointing out that it isn’t necessary to create the database first. This helps!


Wow, wow, wow! Thank you for the tip about rclone! This is such a cool tool! I have been using it for 2 or 3 hours now and I am fascinated by what it can do.


Yep, it’s awesome! Hopefully it will do what you need!