Backup only when host is reachable; retry backup that was missed due to connection issues

Hi,

I am one of the people looking for alternatives to CrashPlan. CrashPlan was the more complete all-in-one package, but Duplicati is very promising and is shaping up to be not only a replacement but an improvement. Keep up the good work!

What I really miss in Duplicati is the ability to run a backup once the connection becomes available, i.e. to retry until it is. The use case is as follows:

A laptop is not always connected and/or the target host is not always reachable. I set up Duplicati to run once a day at 2pm. When I turn the laptop on at 3pm, the missed backup is run. However, when Duplicati cannot connect to the storage host, the backup fails and is not attempted again until the next day. It would be nice if Duplicati did not treat this as a failed backup, but as a backup to retry again shortly. If I only turn the laptop on at 3pm every day for a week, there will not be a single backup for that whole week…

I think this is a valid use case for Duplicati, and it shouldn’t be too complicated to add to the Duplicati scheduler. Does anyone have an idea how to script this in the meantime? (I am running Duplicati with the backup configured in the GUI.)

I would suggest adding the feature to the Duplicati vs. CrashPlan comparison as well (Duplicati vs. CrashPlan Home).


I was thinking the same thing yesterday!

I’m not fully clear on how the scheduling in Duplicati works, but it seems once a job is interrupted (for whatever reason) it doesn’t start up again until the next scheduled start time.

I’m imagining something like a --destination-retry-interval type setting that would:

  1. check if there’s an incomplete or past-due scheduled backup
  2. if so, ping the destination every --destination-retry-interval
  3. if the destination responds, continue the incomplete (or start the past-due) backup

Exactly. I was thinking of handling this with pre-run and post-run scripts as a workaround: configure the backup job to run very frequently (every 30 minutes or every hour). Then, in a pre-run script, check connectivity and whether a run has already completed today. If there is no connectivity, or if a job has already completed today, return non-zero to abort the job. On every successful run, the post-run script could create a marker file whose existence the pre-run script checks.
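A rough sketch of that approach (Linux, bash), assuming both --run-script-before-required and --run-script-after point at the same file and that Duplicati's run-script module sets DUPLICATI__EVENTNAME as documented in its run-script example; the hostname and marker path are placeholders to adapt:

#!/bin/bash
# Pre/post-run workaround: only let the backup run when the destination
# answers ping and no backup has completed yet today.
DEST_HOST="backupserver.example.com"      # placeholder: your storage host
MARKER="/var/tmp/duplicati-last-success"  # placeholder: marker file location

case "$DUPLICATI__EVENTNAME" in
  BEFORE)
    # Already completed a run today? Skip this scheduled run.
    if [ -f "$MARKER" ] && [ "$(date -r "$MARKER" +%F)" = "$(date +%F)" ]; then
      exit 1   # non-zero aborts the job
    fi
    # Destination unreachable? Skip now; the next frequent run retries.
    if ! ping -c 1 -W 2 "$DEST_HOST" > /dev/null 2>&1; then
      exit 1
    fi
    exit 0     # destination is up and no run yet today: let the backup start
    ;;
  AFTER)
    # Record the run so later runs today are skipped (a real version should
    # only do this when the backup actually succeeded).
    touch "$MARKER"
    ;;
esac
exit 0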

Too bad these kinds of scripts are platform-dependent.

A potentially related topic on general (not just destination) interruptions is also here:

Hi,

Below, I provide a script that addresses my initial use case. It is a very quick workaround for this feature. I hope the feature makes it into the next release, as I am convinced this is a very common use case when running Duplicati in a mobile laptop environment.

To use the script (Linux only), set up Duplicati to run very frequently. I tested it with “every 6 minutes”; 30 minutes or hourly might work as well. Point the options “run-script-after” and “run-script-before-required” at the script.

When called before the backup, the script checks whether a backup run has already succeeded today. If not, it checks whether the remote host is up. If it is up, it lets Duplicati run the backup; if it is down, it aborts so that the next scheduled run retries.

However, there seems to be a scheduling issue. If a backup is still running when the scheduler starts another one (i.e. the backup takes longer than 6 minutes), the scheduler appears to queue that run. That means all missed runs are executed one after another once the current backup finishes. Does anyone know how to prevent this?

The check whether the backup was successful still needs some improvement, though :slight_smile:

Improvements are welcome.
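One possible refinement, assuming the DUPLICATI__EVENTNAME and DUPLICATI__PARSED_RESULT variables described in the run-script example shipped with Duplicati are available in your version: in the after-script, only write the “done for today” marker when the backup actually reports success.

# Sketch only; variable names taken from Duplicati's run-script example,
# marker path is a placeholder.
if [ "$DUPLICATI__EVENTNAME" = "AFTER" ] && [ "$DUPLICATI__PARSED_RESULT" = "Success" ]; then
  touch /var/tmp/duplicati-last-success
fi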

duplicati-pre-post-run-2.sh.zip (929 Bytes)

(.zip because it is the only archive type allowed by the forum)


I’m revisiting this now that I have a slightly better idea of how scheduling works.

I’m imagining --destination-retry-interval and --destination-retry-interval-silent (same but NOT reporting missing destination) parameters that would work like this:

  1. Job runs but sees no destination
    2a. If there is no --destination-retry-interval parameter, abort and re-queue at the next scheduled time, as it does now
    2b. ELSE abort the job (potentially silently) and re-queue it retry-interval later (unless that would be later than the next scheduled run)

This allows the use of destinations that are not continuously available (including USB drives) without hogging the queue and blocking other jobs.

Possible issues include NEVER being notified if a destination is “permanently” offline. This could be addressed by adding some sort of failure-notification-interval-max setting that would alert if more than that interval has passed since the last successful job run.


Or you could use a monitoring solution like Duplicati Monitoring.

I have set up all my jobs there, and it also bothers me that a missing destination leads to a failed backup (Failed: The folder \\host\share\folder does not exist), while a missing source folder leads to an error, or, with --ignore-missing-source, to a warning.

For me it would be OK to have the option to run a pre-backup script and, depending on the errorlevel this script returns, have the backup started or skipped. That would help!

I have a backup computer in a friend’s house. This computer is not running 24/7, but it normally runs from 8am to 8pm, so it is not a problem to back up with “run every 10 hours”. But I get errors every night.

I am also backing up folders from my laptop. The laptop runs from 6am to 10pm and the backups are made every one or two hours, so I get many errors or warnings every night.

To me it makes sense that not having a source to back up or a destination to which the backup should be saved means a backup failure. Can you help me understand what you think should happen (other than a failure) in these scenarios?

That should certainly be doable with a --run-script-before-required parameter - the “tricky” part is knowing how to verify the availability of your destination.
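A few checks a pre-run script could use, depending on the destination type (hostnames, shares, and mount points below are placeholders):

# Network host (e.g. an SFTP/WebDAV target): does it answer ping?
ping -c 1 -W 2 backupserver.example.com > /dev/null 2>&1 || exit 1

# SMB/CIFS or NFS share: is the expected mount point actually mounted?
mountpoint -q /mnt/backupshare || exit 1

# USB drive: is the expected directory present and writable?
[ -d /media/backupdisk ] && [ -w /media/backupdisk ] || exit 1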

It should retry! Duplicati claims to be ready for laptop environments. So assume you are travelling and the laptop is not connected for a few hours. If the backup is set up to run once a day and happens to start in this disconnected period, it simply fails and is never retried. Retrying would run the backup once the laptop is connected again. Of course, retrying endlessly makes no sense either; you could retry for a certain period of time, or until the next scheduled run, and only then stop and report a failed backup.

Like DennisDD mentioned: Retry later!

Or, alternatively, a switch that allows doing nothing! There is already a switch for the source, --ignore-missing-source, but it does not really ignore it; it reports a warning. For my monitoring a warning is better than a failure or an error, but a real “ignore” is not possible at the moment.

Why not handle this with a --handle-missing-source tristate: error, warning, ignore? And ignore means ignore, so just skip it.

And a similar solution would be fine for the target: --handle-missing-target. It could handle it like now, ignore it, or ignore it and reschedule within a given timespan, so other jobs do not have to wait for it.

Can you explain? A “wait” can be done in such a script, but is it possible for the script to “cancel” a job run?

While I agree with you and @DennisDD that internal support for retries would be great (see my older post above), it’s not likely to appear in the short term.

So to get around that, a --run-script-before-required script can be run. If that script returns anything other than 0, the job should abort. (Note that I’m not sure what state that is treated as: normal end, warning, error, or fatal.)

Thank you, I didn’t know that. For your information, here is the result if the batch file returns exit code 1:

CLI:
C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe exited with error code 100.

GUI:
Duplicati Backup report for xxxx

Failed: The script "\\thorn1\c_rw\Users\server\AppData\Local\Duplicati\precheck.cmd" returned with exit code 1
Details: Duplicati.Library.Interface.UserInformationException: The script "\\thorn1\c_rw\Users\server\AppData\Local\Duplicati\precheck.cmd" returned with exit code 1
at Duplicati.Library.Modules.Builtin.RunScript.Execute(String scriptpath, String eventname, String operationname, String& remoteurl, String[]& localpath, Int32 timeout, Boolean requiredScript, IDictionary`2 options, String datafile)
at Duplicati.Library.Modules.Builtin.RunScript.OnStart(String operationname, String& remoteurl, String[]& localpath)
at Duplicati.Library.Main.Controller.SetupCommonOptions(ISetCommonOptions result, String[]& paths, IFilter& filter)
at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Action`1 method)

The job ends in a FAILED state, and this is also reported via HTTP (in my case to www.duplicati-monitoring.com).

So it does not help me, but it is nevertheless good to know that this is possible! I think I will implement an external quota pre-check for the target.

Thanks for letting me know!

I would need to ask around, but I wonder whether there are side effects I’m not thinking of if we allow something like a negative exit code from the script to abort the backup job without the failed state… or perhaps just without the notification…

Similar request here:


As another CrashPlan refugee, the key feature I’m missing from CrashPlan is the ability to back up when the destination is present. I was thinking of achieving this another way, by having another program monitor the destination and kick off the backup, but I really like the idea suggested above of an option that lets Duplicati ignore an unreachable destination and not report it as a failure. Is it possible to add something like that simply? I’m just thinking it would be an extra condition before entering the error-reporting section of the code, e.g. if --ignore-destination-missing is present and the error is “destination missing”, do nothing.

Have you looked at the link kees-z posted?

For me this would solve some problems with monitoring Duplicati, because it would be possible to avoid errors when the target is not reachable.

At the moment I have about 15 to 20 message boxes with errors in the Duplicati GUI every morning, because some of my targets are not reachable at night (computers in private households which run several hours a day, but normally not at night).

@thommyX Yes, I added my vote of support to that idea as I think it would help greatly. Thanks

For Windows, something like this could help:

Note that in the current version the backup job will fail if the script exits with errorlevel 1 or higher.
Alternatively, you can run a similar script using an external task scheduler and start Duplicati using the command line if the destination is available.

And that is my problem :wink: My Duplicati web UI is full of message boxes when I open it in the morning, and I have to click away 20 of them or so.

Yes, until handling of errorlevels is improved, you can start a script using an external task scheduler and launch the backup job from the command line if the script has detected the backend.
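A minimal Linux sketch of that approach, assuming the duplicati-cli wrapper from the Linux packages and a cron entry that runs this every hour or so (the hostname, destination URL, source path, and passphrase are placeholders; on Windows the same idea would be a batch or PowerShell script run from Task Scheduler):

#!/bin/bash
# Only launch the backup when the target answers; otherwise exit quietly,
# so no failed run is recorded anywhere.
DEST_HOST="backupserver.example.com"                        # placeholder
DEST_URL="sftp://backupserver.example.com/backups/laptop"   # placeholder
SOURCE="/home/user/Documents"                               # placeholder

ping -c 1 -W 2 "$DEST_HOST" > /dev/null 2>&1 || exit 0

duplicati-cli backup "$DEST_URL" "$SOURCE" --passphrase="your-passphrase"

Note that a backup run from the command line like this is configured independently of any job defined in the GUI.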