Device mount detection (USB or otherwise)

One of the features I liked about CrashPlan was the ability to plug in a USB drive and have a backup start.

Has there been any discussion about supporting drive mount detection as a trigger for a backup?

I think that combined with --alternate-destination-marker would make for a great way to allow simple “even my mom can do it” local backups.

I have not seen any mention of this (or at least I do not recall it).

I do not know how to install such triggers, but I like the idea.

@kenkendk, OK - I’ll bite. :slight_smile:

I’ve done C# work on Windows but not in any other environment, so assuming Mono takes care of the .NET translation on Linux and BSD (macOS), I might be able to test some ideas out. :crossed_fingers:

:fish: :tropical_fish: :fishing_pole_and_fish:

Sounds great! And yes, Mono takes care of all that; you mostly just have to use System.IO.Path.Combine and not assume a case-insensitive filesystem.
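
For illustration, a minimal sketch of what that means in practice - the file names below are just placeholders:

using System;
using System.IO;

class PortablePathExample
{
    static void Main()
    {
        // Let the framework pick the separator ('\' on Windows, '/' elsewhere)
        // instead of hard-coding one.
        string dbFolder = Path.Combine("data", "duplicati", "db");

        // Don't assume the filesystem ignores case: pick the comparison explicitly.
        string a = "Backup.sqlite";
        string b = "backup.sqlite";
        bool equalIgnoringCase = string.Equals(a, b, StringComparison.OrdinalIgnoreCase); // true
        bool equalCaseSensitive = string.Equals(a, b, StringComparison.Ordinal);          // false

        Console.WriteLine($"{dbFolder}: {equalIgnoringCase} / {equalCaseSensitive}");
    }
}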

I think for the auto-mount, you will probably do something that is very Windows specific and then have it trigger some code that can (eventually) be used on other platforms (when they get triggers implemented too).

Former CrashPlan (CP) Home user here. I found Duplicati to be a useful and powerful alternative to CP. It’s not as easy to use yet, but with forums and threads like this, it’ll get there.

Windows 10 and Duplicati 2 user here. I have a backup set going to a normally detached USB HDD, and I found a Windows event that is triggered when the HDD is plugged in (see picture).

I have a Windows Task Scheduler task set to run a script when this event fires. The script is a very simple BAT file containing the “Export as command-line” output of my already-configured backup set. The script works, and the backup starts when I plug in my drive. The problem is that I think it is building another remote backup set instead of using the one established from the web interface: the script-triggered backup took far longer to run than a quick one started via the web interface, and the free space on my drive kept decreasing. It would be great to have it use the same set.

I hope this helps someone with more coding experience than I have.

Beller0ph1, that sounds like a great workaround until I (or somebody faster) get it into the code. To be honest, so far I haven’t even been able to get the checked-out code to compile (something about duplicate NuGet packages; I hope to post in the Developers forum about it soon).

My GUESS about the disk space issue is that perhaps your event is running under a different user account than your web GUI, which might be storing things in a different location (and thus indeed building a second backup set).

If that IS the case, you should be able to resolve it by making sure the Windows event runs under the same account as the web GUI, and/or (I THINK) by adding some advanced parameters that tell it where to store its SQLite database and such.

It MIGHT be related to the --tempdir parameter, but I’m not sure.

Please do post it. It should just run straight from checkout.

I would love to see this feature because it was one of the many things I liked when I had a MacBook and used Time Machine, the backup tool that comes with macOS. You just plug in the USB drive you configured with Time Machine, and the backup process starts without any user intervention.

Thanks for your input! Unfortunately, I don’t know that anybody has had the time to implement this yet, but it’s good to know there are more people who would find it useful. :slight_smile:

On Windows I know I could set up an AutoRun event that runs the backup via the command line, but that’s a lot of effort and may not work in newer Windows versions, where I think AutoRun is disabled by default (so it’s even more work to turn it on AND it potentially exposes the machine to more risk).

Unfortunately, I don’t know whether such a workaround is even doable on Linux or macOS.

Why not just poll?
Every minute or so, check whether the backup destination is available; if it is, run the backup. This would work not only with USB drives but also with network destinations, so the backup starts as soon as the server is reachable.

That would be an OK alternative to native Duplicati support, but at the moment I don’t know how one would implement polling other than through something like a --run-script-before call with whatever polling method they feel like building.

This post mentions an existing feature that might work…

Wow, you’re fast at responding!
I thought of it as being implemented inside Duplicati. There is no need to get into notifications from the OS, so it should be quite easy to implement. The only thing that has to be taken care of is that we don’t get stuck in any network timeouts, so maybe an extra thread for this polling would be good.
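
To make this concrete, here is a rough sketch (purely hypothetical, not Duplicati code) of a polling loop on its own thread that checks for a marker file on the destination - think --alternate-destination-marker - and fires a callback when the file appears, with a timeout so a hanging network share cannot block the loop:

using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;

class DestinationPoller
{
    private readonly string _markerPath;     // e.g. the marker file on the USB drive or share
    private readonly TimeSpan _interval;     // how often to poll
    private readonly TimeSpan _checkTimeout; // give up on a single check after this long
    private readonly Action _onAvailable;    // e.g. queue the backup job

    public DestinationPoller(string markerPath, TimeSpan interval, TimeSpan checkTimeout, Action onAvailable)
    {
        _markerPath = markerPath;
        _interval = interval;
        _checkTimeout = checkTimeout;
        _onAvailable = onAvailable;
    }

    public void Run(CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            // Do the filesystem check on a worker task so a hanging network
            // share cannot block this polling thread forever.
            var check = Task.Run(() => File.Exists(_markerPath));
            if (check.Wait(_checkTimeout) && check.Result)
                _onAvailable();

            token.WaitHandle.WaitOne(_interval);
        }
    }
}

// Hypothetical usage: poll every minute for a marker file on drive E:
// new Thread(() => new DestinationPoller(@"E:\duplicati-marker.txt",
//     TimeSpan.FromMinutes(1), TimeSpan.FromSeconds(10),
//     () => Console.WriteLine("Destination available - start backup")).Run(CancellationToken.None)).Start();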

Just lucky timing. :slight_smile:

Unfortunately, I don’t think it’s quite that simple.

For example, Duplicati currently runs backups linearly - so if you have a backup scheduled for 10 AM daily and another for 11 AM daily, the 11 AM one won’t start until the 10 AM has finished.

That means if the 10 AM job is polling for a USB drive that only gets attached every few days, the 11 AM job won’t even get a chance to run for a few days.

If there’s only a single backup job, this isn’t an issue - but as soon as there is more than one, the entire queuing design has to be rewritten to something like “if a job gets shifted into the polling position, take it out of the queue and start a second thread for the polling that puts the job back into the queue when the poll requirement is satisfied.”

Or something like that.
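
Very roughly, the kind of hand-off I mean might look like the sketch below - all the types and names are made up for illustration and have nothing to do with Duplicati’s actual scheduler:

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;

class BackupJob
{
    public string Name;
    public Func<bool> DestinationAvailable; // e.g. checks for a marker file
    public Action Run;
}

class SimpleScheduler
{
    private readonly ConcurrentQueue<BackupJob> _active = new ConcurrentQueue<BackupJob>();
    private readonly List<BackupJob> _polling = new List<BackupJob>();

    public void Enqueue(BackupJob job) => _active.Enqueue(job);

    // Called from the main worker loop: run the next job whose destination is
    // reachable and park the others, instead of letting one job block the queue.
    public void RunNext()
    {
        while (_active.TryDequeue(out var job))
        {
            if (job.DestinationAvailable())
            {
                job.Run();
                return;
            }
            lock (_polling) _polling.Add(job); // shift to the "polling position"
        }
    }

    // Called periodically from a second thread: put parked jobs back into the
    // active queue once their destination shows up again.
    public void PollParkedJobs()
    {
        lock (_polling)
        {
            for (int i = _polling.Count - 1; i >= 0; i--)
            {
                if (_polling[i].DestinationAvailable())
                {
                    _active.Enqueue(_polling[i]);
                    _polling.RemoveAt(i);
                }
            }
        }
    }
}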

Well, I hadn’t thought about that; I just have one backup job.
But it should not be a complete showstopper. I don’t know Duplicati’s internals, but as I would design it, the polling is not part of a job. There should be a scheduler completely independent of all jobs, which decides to start job X at time Y. It also does the polling for all jobs that have polling enabled. If the device becomes reachable while another job is running, bad luck, it has to wait (until Duplicati supports multiple jobs at a time :slight_smile: ).

There should also be a minimum time between backups: do the backup when the device is reachable and the last backup was more than 6 hours ago.

That makes sense to me.

Of course that also means we now have a possible scenario of:

  1. destination is offline, shift from active queue to polling queue
  2. next active queue job starts
  3. polling queue detects available destination so moves job back to active queue
  4. destination of moved job goes back offline
  5. active queue job finishes
  6. moved job (now in active queue) re-starts, destination is offline, shift from active queue to polling queue…goto 1

It’s not very realistic, but in theory we end up with a scenario where the “offline destination” job never actually gets to complete because other jobs keep taking its run-time when the destination IS available.

Not that I’m saying it shouldn’t be done the way you described, just that there’s always an edge case to watch out for and decide how to handle. :wink:

Once polling or mount detection is built in, that would make sense - though it’s kind of like just scheduling the job to run every 6 hours, isn’t it?

I agree that we have to watch out for edge cases, but as far as I can see there will always be a scenario where the backup won’t run, and right now it’s even more likely. Currently we can get into this situation quite simply. (I only have one job running on each computer; I think this is also the most common case.)

  1. Backup is scheduled for 8 o’clock, but the computer is turned off
  2. At 9 o’clock the computer is turned on and the backup starts
  3. At 9:30 the computer is turned off and the backup gets interrupted
  4. From 12 to 16 o’clock the computer is running, but the backup is not started again

Yes, that’s how I meant it.

You know what would be useful to add to the scheduler? A “run only if it’s been more than X minutes since the last successful backup completed” and/or “run only if it’s a different day than the last completed backup” option.

I imagine it being used something like this (see the sketch after the lists below):

  • job scheduled to run every 5 minutes using --alternate-destination-marker and --alternate-target-paths so it only backs up when the destination (let’s say a USB drive) is found (This approximates a “USB drive was inserted, start a backup!” effect)
  • once backup is complete the every-5-minutes schedule will still fire, but only kick off the backup IF it’s been more than X hours (or a new day) since the last backup finished

This would:

  • allow for no more than 1 backup per day that would start within 5 min. of the drive being inserted
  • NOT block other potential jobs like a high --retry-count would
  • potentially “fail” if “not same day” is used, since a backup that starts today but ends tomorrow wouldn’t start again until the following day

Unknowns include:

  • how expensive is it to start a job (though not necessarily a backup) that frequently
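
Here’s a sketch of the guard I have in mind, covering both the “minimum hours since last success” and the “different day” variants - the class and field names are invented, and nothing like this exists in Duplicati today:

using System;

class BackupGuard
{
    public DateTime? LastSuccessfulBackup;  // would come from the job history (hypothetical)
    public TimeSpan MinimumInterval = TimeSpan.FromHours(6);
    public bool RequireDifferentDay = false;

    // Called every time the every-5-minutes schedule fires; the backup only
    // actually starts when the destination marker was found AND enough time
    // has passed since the last completed run.
    public bool ShouldRun(bool destinationMarkerFound, DateTime now)
    {
        if (!destinationMarkerFound)
            return false;

        if (LastSuccessfulBackup == null)
            return true;

        if (RequireDifferentDay)
            return now.Date != LastSuccessfulBackup.Value.Date;

        return now - LastSuccessfulBackup.Value >= MinimumInterval;
    }
}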

It would be good to add an option to send a warning email if a backup has not been completed for more than X days, starting from the timestamp of the last scheduled but unsuccessful run (very useful for removable drives). This would cover edge cases and allow user intervention.

Agreed, though it doesn’t cover situations where the Duplicati server itself has crashed.

It’s probably better to have a positive notification every X days no matter what - that way if you don’t even get the “Everything backed up fine” message, you know there’s a problem.
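
For what it’s worth, the check itself would be cheap; a sketch of the “no backup for X days” warning plus the positive heartbeat might look like this (the SMTP server, addresses, and where the timestamps come from are all placeholders):

using System;
using System.Net.Mail;

class BackupWatchdog
{
    // Both timestamps would have to come from Duplicati's job history (hypothetical source).
    public static void CheckAndNotify(DateTime lastSuccessfulBackup, DateTime lastHeartbeatSent)
    {
        var now = DateTime.UtcNow;
        using (var smtp = new SmtpClient("smtp.example.com"))
        {
            // Warning: no completed backup for more than 3 days.
            if (now - lastSuccessfulBackup > TimeSpan.FromDays(3))
                smtp.Send(new MailMessage("duplicati@example.com", "me@example.com",
                    "Backup warning", "No backup has completed in the last 3 days."));

            // Positive heartbeat every 7 days, so silence itself becomes a signal.
            if (now - lastHeartbeatSent > TimeSpan.FromDays(7))
                smtp.Send(new MailMessage("duplicati@example.com", "me@example.com",
                    "Backup heartbeat", "Everything backed up fine."));
        }
    }
}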

For device notification on Windows, you can take a look at how I implemented this to detect audio devices plugged in/out:

It worked rather well, and the linked code has a parameter to detect only USB devices (probably relevant for Duplicati). In my case I ended up using the MMDeviceEnumerator class because it provides more details for audio devices, but that is not relevant here.

Example calling and listening code is here:

Sample message filter function:

protected override void WndProc(ref Message m) {
    base.WndProc(ref m);
    switch (m.Msg) {
        // WM_DEVICECHANGE (0x0219): broadcast when the hardware configuration changes
        case DeviceNotification.WmDevicechange:
            switch ((int)m.WParam) {
                // DBT_DEVICEREMOVECOMPLETE (0x8004): a device has been removed
                case DeviceNotification.DbtDeviceRemoveComplete:
                // DBT_DEVICEARRIVAL (0x8000): a device has been inserted and is ready for use
                case DeviceNotification.DbtDeviceArrival:
                // DBT_DEVNODES_CHANGED (0x0007): a device node has been added or removed
                case DeviceNotification.DbtDevNodesChanged:
                    this.InvokeOnDeviceChangedAsync();
                    break;
            }
            break;
            //case UsbNotification.WmDisplayChange:
            //    this.InvokeOnDeviceChangedAsync();
            //    break;
    }
}

During my tests, I noted that the above message filter would run even if we had not called DeviceNotification.RegisterDeviceNotification(this.Handle);

It could be that these events are already provided as part of the default message loop for applications with a GUI, so they just need to be filtered, and the DeviceNotification class is not even necessary.

I hope it helps.
