A potentially related topic on general (not just destination) interruptions is also here:
Below, I provide a script that addresses my initial use case. It is a very quick workaround for this feature. I hope that the feature will make it into the next release, as I am convinced this is a very common use case when running Duplicati on a laptop.
To use the script (Linux only), set up Duplicati to run very frequently. I tested it with “every 6 minutes”; 30 minutes or hourly might work as well. Point the options “run-script-after” and “run-script-before-required” to the script.
When called before the backup, the script checks whether a prior backup run for today was successful. If not, it checks whether the remote host is up. If it is up, it lets Duplicati run the backup. If it is down, it postpones the backup to the next scheduled run to retry then.
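The logic above can be sketched roughly like this (this is a sketch of the idea, not the attached script itself; the stamp path and remote host name are placeholder assumptions, and the `DUPLICATI__*` environment variables reflect my understanding of Duplicati's scripting interface):

```shell
#!/bin/sh
# Pre/post script sketch: skip the run if today's backup already succeeded,
# postpone it if the remote host is down. Names below are placeholders.
STAMP="${STAMP:-/var/tmp/duplicati-last-success}"
REMOTE="${REMOTE:-backup.example.com}"

# True if the stamp file was touched today (GNU date's -r reads a file's mtime).
backed_up_today() {
    [ -f "$STAMP" ] && [ "$(date -r "$STAMP" +%F)" = "$(date +%F)" ]
}

# Duplicati exports DUPLICATI__EVENTNAME (BEFORE/AFTER) and, after a run,
# DUPLICATI__PARSED_RESULT into the script's environment.
case "${DUPLICATI__EVENTNAME:-}" in
    BEFORE)
        backed_up_today && exit 1                            # already done today: skip
        ping -c 1 -W 2 "$REMOTE" >/dev/null 2>&1 || exit 1   # host down: postpone
        exit 0                                               # host up: run the backup
        ;;
    AFTER)
        # Record success so later runs today are skipped by the BEFORE branch.
        [ "${DUPLICATI__PARSED_RESULT:-}" = "Success" ] && touch "$STAMP"
        ;;
esac
```

With “run-script-before-required”, any non-zero exit aborts the run, which is what makes the “skip/postpone” branches work.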
However, there seems to be a scheduling issue. If a backup is still running when the scheduler starts another one (i.e. the backup runs longer than 6 minutes), the scheduler seems to queue the run. That means all missed runs are executed one after another once the backup has finished. Does anyone know how to stop this?
The check for whether the backup was successful still needs some improvement, though.
Improvements are welcome.
duplicati-pre-post-run-2.sh.zip (929 Bytes)
(.zip because it is the only archive type allowed by the forum)
Improvements for --run-script-before/after options
I’m revisiting this now that I have a slightly better idea of how scheduling works.
I suggest --destination-retry-interval and --destination-retry-interval-silent (the same, but NOT reporting the missing destination) parameters that would work like this:
1. Job runs but sees no destination
2a. If no --destination-retry-interval parameter is set, abort and put the job back in the queue on schedule, as it does now
2b. ELSE abort the job (potentially silently) and put it back in the queue for retry-interval later (unless that is greater than the schedule-based time)
This allows for use of non-continuously available destinations (including USB drives) without hogging the queue from other potential jobs.
Possible issues include NEVER being notified if a destination is “permanently” offline. This could be addressed by adding some sort of --failure-notification-interval-max parameter that would alert if more than the interval duration has passed since the last successful job run.
Solved: SFTP "finishes" but does not complete, use FTP instead?
Or you use a monitoring solution like Duplicati Monitoring
I have set up all my jobs there. For me it is also disturbing that a missing destination leads to a failed backup (Failed: The folder \host\share\folder does not exist), and that a missing source folder leads to an error, or, with --ignore-missing-source, to a warning.
For me it would be OK to have the possibility to run a pre-backup script and, depending on the errorlevel this script returns, either start or skip the backup. That would help!
I have a backup computer in the house of a friend. This computer is not running 24/7; it normally runs from 08:00 to 20:00. So it is not a problem to make a backup with “run every 10 hours”, but I get errors every night.
I am also backing up folders from my laptop. The laptop runs from 06:00 to 22:00 and the backups are made every one or two hours, so I get many errors or warnings every night.
To me it makes sense that not having a source to back up or a destination to which the backup should be saved means a backup failure. Can you help me understand what you think should happen (other than a failure) in these scenarios?
That should certainly be doable with a --run-script-before-required parameter; the “tricky” part is knowing how to verify the availability of your destination.
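One way to sketch that verification, assuming a pinged host or a mounted path stands in for “destination is available” (host name and mount path below are placeholders, not real Duplicati options):

```shell
#!/bin/sh
# Destination availability check for use with --run-script-before-required.
# Placeholder destination details; adjust to your backend.
DEST_HOST="${DEST_HOST:-backup.example.com}"
DEST_MOUNT="${DEST_MOUNT:-/mnt/backup}"

# Returns 0 when the destination looks reachable.
dest_available() {
    # Local/USB case: the path must be an active mount point.
    mountpoint -q "$DEST_MOUNT" && return 0
    # Network case: the host must answer a single ping.
    ping -c 1 -W 2 "$DEST_HOST" >/dev/null 2>&1
}

# The script's exit status is that of its last command; a non-zero status
# makes a --run-script-before-required job abort instead of running.
dest_available
```

For backends where ping is not meaningful (cloud storage behind load balancers, for example), the check would have to probe the actual service instead.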
It should retry! Duplicati claims it is ready for laptop environments. So assume you are travelling and the laptop is not connected for a few hours. If the backup is set up to run once a day and happens to start in this disconnected period, it just fails and will never retry. Retrying would run the backup once the laptop is connected again. Of course, retrying endlessly makes no sense either; you might retry for a certain period of time, or until the next scheduled run, and then stop and report a failed backup.
Like DennisDD mentioned: Retry later!
Or, alternatively, a switch that allows doing nothing! There is already a switch for the source, --ignore-missing-source, but that switch does not actually ignore it; it reports a warning. For my monitoring a warning is better than a failure or an error, but a real “ignore” is not possible at the moment.
Why not handle this with a tristate --handle-missing-source: error, warning, ignore? And ignore means ignore: just skip it.
And for the target a similar solution would be fine: --handle-missing-target. It could be handled as it is now, ignored, or ignored and rescheduled within a given timespan, so other jobs would not have to wait for it.
Can you explain? A “wait” can be done in this script, but is it possible for the script to cancel a job run?
While I agree with you and @DennisDD that internal support for retries would be great (see my older post above) it’s not likely to appear in the short term.
So to get around that, a --run-script-before-required process can be run. If that script returns anything other than 0, the job should abort. (Note that I’m not sure what state that is treated as: normal end, warning, error, or fatal.)
Thank you, I didn’t know that. For your information, here is the result if the batch file returns exit code 1:
C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe exited with error code 100.
Duplicati Backup report for xxxx
Failed: The script "\\thorn1\c_rw\Users\server\AppData\Local\Duplicati\precheck.cmd" returned with exit code 1
Details: Duplicati.Library.Interface.UserInformationException: The script "\\thorn1\c_rw\Users\server\AppData\Local\Duplicati\precheck.cmd" returned with exit code 1
   at Duplicati.Library.Modules.Builtin.RunScript.Execute(String scriptpath, String eventname, String operationname, String& remoteurl, String& localpath, Int32 timeout, Boolean requiredScript, IDictionary`2 options, String datafile)
   at Duplicati.Library.Modules.Builtin.RunScript.OnStart(String operationname, String& remoteurl, String& localpath)
   at Duplicati.Library.Main.Controller.SetupCommonOptions(ISetCommonOptions result, String& paths, IFilter& filter)
   at Duplicati.Library.Main.Controller.RunAction[T](T result, String& paths, IFilter& filter, Action`1 method)
The job has a FAILED state and this is also reported to http (in my case www.duplicati-monitoring.com)
So it does not help me, but it is nevertheless good to know that this is possible! I think I will implement an external pre-quota check for the target.
Thanks for letting me know!
I would need to ask around, but I wonder about side effects I’m not thinking of if we allowed something like a negative code from the script to abort the backup job without the failed state… or perhaps just without the notification…
Similar request here:
As another CrashPlan refugee, the key feature I’m missing from CrashPlan is the ability to back up when the destination is present. I was thinking of other ways of achieving this, such as having another program monitor the destination and kick off the backup, but I really like the idea of an option, as suggested above, that lets Duplicati ignore a missing destination and not report it as a failure. Is it possible to add something like that simply? It would just be an extra condition before entering the error-reporting section of the code, e.g. if --ignore-destination-missing is present and the error is “destination is missing”, do nothing.
Have you looked at the link kees-z posted?
For me this would solve some problems with monitoring Duplicati, because it would be possible to avoid errors when the target is not reachable.
At the moment I get about 15 to 20 message boxes with errors in the Duplicati GUI every morning, because some of my targets are not reachable at night (computers in private households that run several hours a day, but normally not at night).
@thommyX Yes, I added my vote of support to that idea as I think it would help greatly. Thanks
For Windows, something like this could help:
Note that in the current version the backup job will fail if the script exits with errorlevel 1 or higher.
Alternatively, you can run a similar script using an external task scheduler and start Duplicati using the command line if the destination is available.
And that is my problem: my Duplicati web screen is full of message boxes when I start it in the morning, and I have to make 20 clicks or so to dismiss them.
Yes, until handling of errorlevels is improved, you can start a script from an external task scheduler and launch the backup job from the command line once the script has detected the backend.
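As a rough illustration of that approach (the host name, the wrapper path, and the exported command-line file are all assumptions), a cron-driven wrapper could look like this:

```shell
#!/bin/sh
# duplicati-if-reachable.sh: run a job exported from the GUI as a command line,
# but only when the backend answers. All names and paths here are placeholders.
HOST="${HOST:-backup.example.com}"
CMDFILE="${CMDFILE:-$HOME/duplicati-job.sh}"   # saved "Export > As Command-line" output

run_if_reachable() {
    if ping -c 1 -W 2 "$HOST" >/dev/null 2>&1; then
        sh "$CMDFILE"
    else
        echo "destination $HOST not reachable, skipping" >&2
        return 0    # skip quietly instead of producing a failed backup
    fi
}

run_if_reachable

# Example crontab entry: try every 15 minutes between 08:00 and 20:00,
# with the job's own schedule disabled in the Duplicati GUI:
# */15 8-20 * * * /usr/local/bin/duplicati-if-reachable.sh
```

Because the wrapper returns 0 when it skips, cron stays silent and no failure is recorded anywhere.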
It’s not a good idea to use the built in command line if you intend to continue to use the web UI since it doesn’t really communicate with the Duplicati server. You’d need to repair your database every time you need to use the server to run jobs.
A better way to initiate the backups would be to have the cron job interact with the server’s web API to start the task. I made a script that can help with that.
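For reference, a minimal curl sketch of that idea; the port, job id, endpoint, and xsrf-token handling reflect my understanding of Duplicati's web API and may differ between versions:

```shell
#!/bin/sh
# Ask the running Duplicati server to start job $JOB_ID through its web API,
# so the web UI and local database stay in sync. Values below are assumptions.
SERVER="${SERVER:-http://localhost:8200}"
JOB_ID="${JOB_ID:-1}"
JAR="$(mktemp)"

# extract_token pulls the xsrf-token value out of a curl cookie jar.
extract_token() {
    sed -n 's/.*xsrf-token[[:space:]]*//p' "$1"
}

# First request makes the server set the xsrf-token cookie...
curl -s -c "$JAR" "$SERVER/" >/dev/null 2>&1
TOKEN="$(extract_token "$JAR")"
# NB: the cookie value may be URL-encoded; decode it if the server
# rejects the raw value.

# ...which must be echoed back in the X-XSRF-Token header to start the job.
curl -s -b "$JAR" -H "X-XSRF-Token: $TOKEN" \
     -X POST "$SERVER/api/v1/backup/$JOB_ID/run" >/dev/null 2>&1
rm -f "$JAR"
```

Since the server itself runs the job this way, version counts and the rest of the UI metadata stay correct, unlike a plain command-line run.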
As far as I know, there’s no problem running a backup job from the command line that’s exported from the GUI. Both can be used in turn without issues.
The only small drawback I’m aware of is that the number of available versions shown on the main screen is not updated when the job is run from the command line.
When using an external scheduler, the schedule in the backup job configuration can be disabled.
This video demonstrates how it works
(btw your Duplicati client is great!)
Ah, of course. If you specify the dbpath it should work the same. Then yes, it’s just the metadata that’s wrong. My bad!