I have six jobs that run on a schedule. However, I do not always have Duplicati running. When I do launch the program, it immediately kicks off the jobs instead of waiting for the schedule. It’s almost like it has logic that says, “Oh, I missed the last job run. Let me run now.” This is a big problem for me because I then have to sit there canceling the jobs (“Stop Now”) and wait upwards of 30 minutes to get through all the jobs, as each one sits at “Stopping after the current file” for about 5 or more minutes regardless. If this is intentional functionality, is there any way to prevent Duplicati from launching all the jobs at startup and just have it run them the next time they are scheduled?
Please look at the bottom line in the image. What you said is pretty much what it says.
How can I disable start of missed backups when duplicati start? is a Features topic with ideas.
Add ability to skip missed scheduled backups #3251 which it mentioned has other discussions.
Issue is not closed though, which I take to mean there’s no easy option to get what you’re after.
How can I stop a stuck backup job in Duplicati without uninstalling the entire program?
talks about a very convoluted method; basically, catchup-at-start is sometimes a pain to get around.
Using Settings to pause a little while (to allow for manual work) helps, but is not a great answer.
There are also some bugs (I think) where changing the schedule in the GUI doesn’t clear the old schedule.
What OS is this? Trying to script something would be easier on Linux than in a .bat file.
PowerShell is rather powerful, I think, but I don’t know it well. Basically, you can do this:
Example Scripts shows how to use the run-script-before exit code to keep backups from running.
Technically, I guess each one gets far enough to figure out not to continue with the backup.
The trick will be deciding when to nullify the backup. Maybe run none for some minutes after start?
Linux and PowerShell can get the start time, then I guess it’s time math to set the exit code.
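As a rough sketch of that idea on Linux (assumptions: run-script-before treats exit code 5 as “don’t run this backup,” as the scripts later in this thread use, and system uptime is an acceptable proxy for Duplicati’s start time; the path and threshold are placeholders):

```shell
# Sketch: install a guard script that skips any backup during the first
# 5 minutes after boot, since catch-up runs fire shortly after Duplicati starts.
cat > /tmp/skip-if-just-booted.sh <<'EOF'
#!/bin/bash
# First field of /proc/uptime is seconds since boot (Linux only).
read -r uptime_s _ < /proc/uptime
threshold="${1:-300}"                 # seconds; optional override for testing
if [ "${uptime_s%.*}" -lt "$threshold" ]; then
    exit 5                            # too soon after boot: skip this backup
fi
exit 0                                # otherwise let the backup proceed
EOF
chmod +x /tmp/skip-if-just-booted.sh
```

Pointing run-script-before at a file like this would let catch-up runs exit quickly instead of backing up; the 5-minute threshold is arbitrary.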
Wow. First, I have never noticed that line. My eyes must have just passed right over it a dozen times. Second, after looking through the Feature topic and the GitHub issue, it looks like this request has been stuck in the mire of “over-engineering.” Now, obviously I do not know how things work under the hood, but the idea of a simple checkbox (or CLI switch on the server process) that instructs the system to compare the current timestamp with the scheduled timestamp and forgo the backup accordingly seems fairly simple from an engineering perspective. But again, perhaps there is more to it than that. Or perhaps more was asked than simply that (which obviously means more time, which is the most precious commodity on the planet). Third, I use Debian Linux. I have fairly advanced scripting skills and will take a look at that. But if it proves to be too convoluted for a barely desirable outcome, I may have to settle for just launching Duplicati on demand, a horrible practice of course, introducing human error into the equation.
One concern is that a run queue is kept, because only one operation can run at a time. Interference between jobs can make runs late, but maybe a late backup is better than never.
I have a daily backup in progress now, and will look at About → System info:
proposedSchedule : [{"Item1":"1","Item2":"2025-01-27T12:20:00Z"}]
schedulerQueueIds : []
While that’s running, I’ll test a manual Run now of another job (limited by what’s already running):
proposedSchedule : [{"Item1":"1","Item2":"2025-01-27T12:20:00Z"}]
schedulerQueueIds : [{"Item1":7,"Item2":"2"}]
This information does not seem meant for easy human reading, and I don’t know the scheduling code.
People on the forum frequently want a particular run order for the backlog. Maybe you’ve seen that, because it sounds like several of your jobs spring to life at once. Is the order top-down as shown on screen?
If so, it’s probably based on the job ID number, but people also want to rearrange the screens.
Could you settle for the current scheme, which is intended to be pretty insistent on making the backups? Perhaps someday a feature for better control will happen, but there’s always a work backlog…
Thank you for the insight into how things run. I read something about a queue, which agreed with my previous observation of only one job running at a time.
I am typically slow to hit the forums about an issue, preferring to deal with the design/limitations of a system until it becomes unbearable. Unfortunately, I have run into too many instances where this has become a problem for me. I understand the design notion of a system that is insistent on making backups. But for me, a day missed is no issue. My personal RTO/RPO are flexible. I have divided my data into “most important”, “less important”, and “least important” with various schedules configured accordingly. For me, backups run every day, some even intradaily. But if a day is missed due to my computer usage for that day, that is entirely acceptable. But multiple days missed obviously is not, nor a pattern of missed days.
I think the current design issue, for me, is the inflexibility. A system that assumes a schedule is fixed and mandatory and must absolutely be caught up if missed is a system that assumes too much. If such design suits the larger user base, then a simple “cancel and clear queue” process needs to be available to suit the other users, such as me, who find this assumption unrealistic. This process could be integrated into the UI, made available through a CLI command, or even codified into an API call (I forget at the moment if Duplicati exposes an API interface).
Hopefully, none of what I have written is taken as anything more than an expression of my perspective. I have nothing but appreciation for the countless hours and years of devotion to this amazing program. It is a small thing for me to deal with this issue, or create a workaround for myself.
As a support (not quite a proof) of concept, you can test the idea with this one-liner, and then put the $(( )) part in a bash if to set the exit code:
echo $(( $(date +%s) - $(stat -c %Y /proc/$PPID) < 10 ))
That’s meant to be the current time minus the start time of Duplicati, which I hope is the script’s direct parent; it prints 1 when less than 10 seconds have passed. I’ve only run this test with a bash started from a bash, though. If 10 seconds is too low, adjust.
This in a run-script-before is intended to let missed jobs rapidly drain away as you prefer.
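Assembled into a sketch of such a run-script-before (assumptions: Duplicati is the script’s direct parent, and exit code 5 signals “don’t run,” matching the scripts later in this thread; the helper name is mine):

```shell
#!/bin/bash
# Sketch: a helper that tells whether the parent process (hopefully Duplicati)
# started less than a given number of seconds ago.
too_soon() {
    local limit="${1:-10}"                    # seconds since parent start
    local now parent_start
    now=$(date +%s)                           # current time, epoch seconds
    parent_start=$(stat -c %Y "/proc/$PPID")  # /proc/<ppid> mtime ~ start time
    [ $(( now - parent_start )) -lt "$limit" ]
}
# The real run-script-before would then end with:
#   too_soon 10 && exit 5    # freshly started: tell Duplicati not to run
#   exit 0                   # otherwise run the backup normally
```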
So I think this may work…
I created the simple script below and configured it in the “run-script-before-required” advanced option. I also enabled the “run-script-with-arguments” advanced option.
#!/bin/bash
# Exit 0 (run the backup) only when the current time is inside the given range.
current_time=$(date +'%k%M')   # current time as HMM/HHMM, hour not zero-padded
lower_bound="$1"
upper_bound="$2"
if [ "$current_time" -ge "$lower_bound" ] && [ "$current_time" -lt "$upper_bound" ]; then
    exit 0
fi
exit 5
Example: nowIsBetween.sh "1655" "1715"
Result: exits 0 if the current time is between 4:55pm and 5:15pm, or 5 if not.
It errors out if I attempt to run the job outside of scheduled hours, which is desired behavior. I will know tomorrow if it lets the job run when the job kicks off within the specified range. Will report back.
It is working fine. Thank you for pointing me in the direction of scripting!
So I realized that it would be slightly inconvenient to run a manual backup. Therefore, I updated the script to return 0 when no params are specified. This means to run a manual backup, I only have to expand the job in the UI, click Commandline… under Advanced, and then eliminate the params from the “run-script-before-required” field before clicking Run “backup” command now. This is easy enough for those rare moments I want to run a manual backup.
New script:
#!/bin/bash
# With no params: exit 0 so a manual backup always runs.
# With two HHMM params: exit 0 only when the current time is inside the range.
current_time=$(date +'%k%M')
if [ "$#" -eq 0 ]; then
    echo "No params specified, so returning with val 0."
    exit 0
elif [ "$#" -ne 2 ]; then
    echo "You must provide no params for an immediate return val 0 or two params for testing between a range of time."
    exit 5
fi
lower_bound="$1"
upper_bound="$2"
if [ "$current_time" -ge "$lower_bound" ] && [ "$current_time" -lt "$upper_bound" ]; then
    echo "Current time between range, so returning with val 0."
    exit 0
fi
echo "Current time outside of range, so returning with val 5."
exit 5
It is certainly not hard to do the way you describe.
The reason why it is implemented this way is to cater to non-professional users. If you set a backup to run each Monday, but do not turn on your machine one Monday, it will run the backup when you turn it on.
This is less accurate than only running when requested, but more convenient if you do not have a deep desire for control or knowledge about scheduling.
If you prefer to have control over the backups, you can use cron or similar to start the jobs when you like. Simply set the backups to have no scheduled run in the UI.
Then from the cron job run this when you want it to run:
duplicati-server-util run "Backup name"
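As a sketch, the crontab entry could look like this (the job name “Documents” and the 02:00 time are placeholders):

```
# m h dom mon dow  command — run the "Documents" job daily at 02:00
0 2 * * * duplicati-server-util run "Documents"
```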
If you need to run it manually, no extra work required, just click in the UI.