Backup rotation on removable drives fails after drive change

We wish to back up to removable hard drives; however, after the first backup and drive change, backups fail with the message:

“Found 471 files that are missing from the remote storage, please run repair”

The repair button doesn’t seem to do anything.

All drives are the same, have the target directory, and use the same drive letter. The label on each drive is the day of the week, though.

We are running 2.0.4.12_canary_2019-01-16.

Helpful clues as to how to get this resolved would be appreciated…

Thanks… Bob

A Duplicati backup expects the source, the destination, and the local database to stay in sync. Changing the destination to a suddenly blank drive produces the sort of complaint you saw: the destination no longer matches the database. Duplicati needs that match because it only backs up changes and builds heavily on previous work.

Unfortunately, Duplicati is not currently well suited to a day-of-the-week plan. I think you’d start by using Export/Import to clone the backup configuration, one job per drive. To help avoid accidentally mismatching a job and a drive (which, if you’re lucky, will complain as it did here, and if you’re unlucky might make a major mess),
Windows Drive Letters describes how --alternate-destination-marker and --alternate-target-paths might help.
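
As a rough, untested sketch of the marker idea (the marker file name and the F:\Backups path are just examples, not required names): put an identically named marker file on every rotation drive and tell the job to require it, so a backup only runs against a drive that carries the marker.

    :: Untested sketch: create the same marker file on each rotation drive
    :: ("duplicati-marker.txt" and F:\Backups are arbitrary examples).
    echo rotation drive marker > F:\Backups\duplicati-marker.txt

    :: Advanced options to add to the backup job:
    ::   --alternate-destination-marker=duplicati-marker.txt
    ::   --alternate-target-paths=F:\Backups;G:\Backups
    ::   (the second option only matters if the drive may mount under more
    ::    than one letter)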

Since my first message I have had “AllData Starting backup” at the top of the console page. It has also created a file in the target drive and directory: “duplicati-20190117T143728Z.dlist.zip.aes”.

Windows has been told that all of the drives in play shall mount as drive F:, and tests of drive switching do show that is the case.

It also seems to be using processor and memory resources, but there is no evidence that it is going to create a new base collection of files to use for differential reference on that drive.

Having reviewed your first reference, I get that it only does differentials off the first pass (which creates a full backup) and, from my observations and checking, works very well (somewhat awesome, in fact).

I will see if either of those “alternate” options helps.

Our goal is that the last drive in a weekly series will be held in a 4-week rotation in addition to the daily drives. To be able to restore the data in its entirety after a disaster, we need the full backup and related differentials on every drive, to avoid having to go through every drive (which I would hope works… it did on the test we did on one drive with many backup runs for testing purposes).

Bob


The usual backup output to the destination is a stream of dblock files of the default 50 MB size, with a small dindex for each dblock, and a dlist at the end whose filename contains the date and time of the backup’s start.
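
For illustration (hypothetical names, apart from the dlist name you quoted), a destination folder after a single small backup run might look roughly like this:

    duplicati-20190117T143728Z.dlist.zip.aes
    duplicati-b1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d.dblock.zip.aes
    duplicati-i1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d.dindex.zip.aes

A larger backup would show many dblock/dindex pairs before the single dlist.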

Because it’s no longer 20190117, the dlist you see (is it new?) seems like it would be from before. If this is the empty drive swapped in: repair can often rebuild missing dindex and dlist files from the local database, but it can’t rebuild dblocks. Those contain the actual backup data, and I don’t think a repair will run a backup, though there is currently an issue where the start of non-backup operations is announced in the UI with the word “backup”.

Could you check the drive with the first backup to see if it has a file by the same name? That would fit the theory.

Every backup saves changes from the one before. The first backup has a lot of changes from nothing at all.

Complete, incremental or differential gets into this somewhat, for those who know traditional backup terminology.

How the restore process works says how files are rebuilt from blocks. There’s no full/differential/incremental, and disaster recovery only needs one drive. Other drives would be additional complete backups (for safety). Sometimes off-site storage needs can influence plans. If a disaster takes out the primary, how old is too old?

OK, so far I have confirmed my picture of how Duplicati works… i.e., after any backup job the actual data on the backup media is entirely current.

So if that drive is taken off site, it will be usable to restore data up to the last time it was used for a backup.

So I will now see if we can figure out a way to do that with each of the rest of the drives… I think it can be done by using the command line to run the appropriate job when the drive changes, based on the drive labels… This would run before the scheduled backup times…

Bob


That could work, but you have to create a backup job for each backup target you use.

I’m wondering what would happen if you use the advanced option --dbpath to change the location of the local DB to the backup drive itself. The result would be that each backup drive contains a database that is in sync with the DBLOCK/DINDEX/DLIST files on that drive.
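
As an untested sketch (the install path, source folder, and file names below are only examples), the command-line form of that idea might look like:

    :: Untested sketch: keep the job's local database on the backup drive
    :: itself by pointing --dbpath at a folder on that drive.
    "C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe" backup ^
      "F:\Duplicati\Backup" "C:\Data" ^
      --dbpath="F:\Duplicati\Database\alldata.sqlite"
    :: (plus whatever encryption/passphrase options the job normally uses)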

I’ve never tried this, but I guess there could be some pitfalls:

  • A performance drop could be expected: database queries use the same bus as the backup data that is read from / written to the external disk.
  • Before starting any operation, one of the external disks must be attached. Basic operations, like displaying a list of backed up folders, need access to the local DB.

Because Duplicati is basically an “incremental forever” type of backup solution, it won’t want you to rotate the backup destination drive. It’s just not built for that type of usage.

If you really want to take a physical copy off site, what you might consider is something like this:

  • Leave your main backup drive in place at all times so as not to confuse Duplicati
  • Every so often plug in your second backup drive and synchronize the data from the main backup drive
  • You’ll want to sync everything exactly, including any file deletions that have happened on the main backup drive (robocopy /mir is your friend; see the sketch after this list)
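
For example (drive letters and folder paths are hypothetical), a mirror of the primary backup folder to the off-site drive might look like:

    :: Mirror the primary backup folder to the off-site drive, including
    :: deletions, so the copy matches the primary exactly.
    robocopy F:\Duplicati\Backup G:\Duplicati\Backup /MIR /R:2 /W:5

Only run it after the Duplicati job has completed, so the mirror never captures a half-finished backup.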

One caveat is that the local database will get out of sync with this secondary backup pretty quickly. Not a showstopper but it may make restores more complicated (you may need to rebuild the database from the secondary backup copy).

Other backup software, like Windows Server Backup or Altaro VM Backup, are also incremental-forever backup solutions, and both support backup rotation on removable drives. So I guess it’s a valid question to ask for support for this in Duplicati.

Using a third-party sync tool could work, but you have to connect 2 external drives and manually sync the files after the backup completes. Native support for drive rotation would be a lot more comfortable.

When I have time, I’ll test how backups work with local DBs located on the backup target.

Device mount detection (USB or otherwise) is a feature request where someone had a partial solution that used an event to start a task which ran Duplicati. If your scripting is good, you could probably build on that. Using Duplicati from the Command Line runs independently from GUI tasks, so be careful to avoid collisions.
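
As a very rough, untested sketch of that idea (volume labels, paths, and database names here are all hypothetical), a script run before the scheduled time could pick the job by the label of the mounted drive:

    @echo off
    :: Untested sketch: run the backup whose local DB matches the volume
    :: label of the rotation drive mounted as F:. All names are examples.
    set DUP="C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe"
    vol F: | find /i "MONDAY"  >nul && %DUP% backup "F:\Backup" "C:\Data" --dbpath="C:\DuplicatiDB\monday.sqlite"
    vol F: | find /i "TUESDAY" >nul && %DUP% backup "F:\Backup" "C:\Data" --dbpath="C:\DuplicatiDB\tuesday.sqlite"
    :: ...one line per weekday label, plus any passphrase/retention options
    :: the job needs.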

One thing that sounds nice about that arrangement is that it should avoid the sometimes extremely slow database Recreate that would be required if the local DB is lost along with the rest of the source system. However, that nice “all-in-one” package might also (as you suggested) offer you “nothing at all” without it.

Duplicati runs into somewhat similar bumps with cloud storage systems that use “cold storage”, meaning destination files aren’t instantly available. There are probably some forum posts about that, but taking the further step of not even having the database there might get Duplicati more upset. Happy testing. :wink:

Yes, but what complicates this is deduplication: each unique block is only backed up once. (I’m not familiar with Altaro, but Windows Server Backup doesn’t use dedupe.)

Say Duplicati’s behavior were altered so that it did not complain when a target drive is swapped out. New blocks of data could be stored on the newer backup drive, but it would not be able to prune backups and delete blocks that are only present on the older (now disconnected) backup drive. You’d also probably need BOTH drives connected in order to restore data.

It would just get insanely messy.

Maybe another possible solution: set up two independent backup jobs, one targeting a drive that is always on site, the other targeting a drive that is taken off site regularly.

Personally I think targeting cloud storage is the best way to get backups off site!

Solved!

Windows Server solution:

  1. Open Control Panel, Administrative Tools, Computer Management, Storage, Disk Management.

  2. Insert the first drive and change its drive letter to another letter (preferably one in a series of available letters).

  3. Replace the drive with the next drive and repeat step 2, choosing a unique letter.

  4. Repeat steps 2 and 3 as required.

  5. Set up a backup task for each drive (one per drive), changing the drive letter in the storage path, and you should be good to go.

This appears to work reliably and is simple to implement. It also works the same way if you set the drives to mount on empty directories under a single drive letter.
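
For the mount-on-a-folder variant, something like the following could be used (the folder and the volume GUID below are placeholders; running mountvol with no arguments lists the real GUIDs):

    :: List volume GUID paths and current mount points.
    mountvol

    :: Mount a rotation drive on an empty NTFS folder instead of a letter.
    :: Replace the GUID with the one reported by "mountvol" for your drive.
    md C:\BackupDrives\Monday
    mountvol C:\BackupDrives\Monday \\?\Volume{00000000-0000-0000-0000-000000000000}\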

Hope this helps others with similar requirements…

I had a scare when the update from .12 to .13 occurred…

and will now go off and deal with that issue

Bob


I’ve tried it for a couple of days and it works surprisingly well! This is what I did to configure the backup:

  • Created 3 virtual harddisks (VHDX) to simulate 3 external disks as backup target.
  • On each disk, configured the same drive letter (F:) in Disk Management and created the folders F:\Duplicati\Backup and F:\Duplicati\Database.
  • With 1 of the 3 disks attached, created a new backup job. Destination type: “Local folder or drive” and path for backup files: F:\Duplicati\Backup.
  • In the main view, clicked to expand the backup job and clicked Database… Moved the Local database path to F:\Duplicati\Database\xxxxxxxxxxxxxxxxxxxx.sqlite.

After the backup had been set up, this is what I did to test it:

  • Started the initial backup (about 20 GB) which worked fine. The local DB was automatically created in F:\Duplicati\Database after the first backup was started.
  • After the first backup completed, swapped backup disk 1 with backup disk 2 and started a new backup. This worked without problems too. Local DB created automatically on the second disk and initial backup completed successfully. Same for backup disk 3.
  • Made lots of changes to the source files (additions and deletions of files of different sizes) and started backups using random backup drives. None of them had an issue.
  • Deleted the local DB from one of the backup drives and started a new backup. DB was recreated automatically and backup completed successfully.
  • Executed some commands, like COMPARE and DELETE. All operations worked as expected.
  • Restored some files from different drives. No issues.

I expected that backup/restore operations would be significantly slower (source files and VHDX containing local DB and backup data are on the same spinning HDD), but I didn’t notice a performance drop.

Synchronizing backup files and creating multiple backup jobs have a few disadvantages:

  • Synchronizing backup files is an additional (manual?) task that must be maintained outside the Duplicati environment. You also need 2 backup targets to be available at the same time.
  • If Synchronizing backup files is scheduled somehow, there is a risk that the sync process starts before the Duplicati backup completes, resulting in inconsistent backup data at the “backup-of-the-backup” location.
  • If Synchronizing backup files is configured incorrectly, or aborted, data at the “backup-of-the-backup” location could become unusable.
  • In case of a system crash, the local DB has to be recreated.
  • When using Multiple backup jobs, the job list could become confusing if there is more than 1 backup job that is replicated for every backup target.
  • When using Multiple backup jobs, a local DB for every backup job to every backup disk is stored on the Duplicati host. Host drive could be filled up with a bunch of local DB files.

There are a couple of benefits for this strategy, compared with using multiple backup jobs or synchronizing backup files to another location:

  • There are (in my scenario) 3 independent remote file sets. If one of them becomes corrupt (deleted/corrupted DBLOCK files), I still have 2 fully working backup sets.
  • If a database becomes corrupt, recreating the local DB is not required to restore files; just use another backup disk.
  • No need for configuring/maintaining additional/manual tasks for replicating backup files.
  • Just one backup job for every source fileset.

DISCLAIMER: I tested this for just a few days, so use this strategy at your own risk! However, I did not have any issues.