A couple of suggestions

Hi,

I’ve been happily using Duplicati for a while now, but I’ve found a couple of things which (hopefully) could be implemented relatively simply, and would make things easier for me (and hopefully others).

  1. Provide a means of stopping any queued jobs from running. Whenever there is (for example) a mono update, I’d like to be able to signal the current job to stop after uploading, and prevent any further queued jobs from running. Then, once Duplicati goes idle, I can stop the service and perform the update. At the moment I can signal the current job to stop, but if there are any further jobs queued they will start, and I have to signal each of them to stop (individually, as each queued job starts) and then wait for them to abort. (A rough sketch of both ideas follows after this list.)

  2. Avoid adding the same entry to the queue more than once. I have a number of jobs that take a long time to run. Sometimes I’ll run one manually, and while it’s running the timer for its automatic run will expire. This seems to cause the job to run again immediately once it completes. I think it’d be preferable to only ever have one instance of each job in the queue at any time.
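
To illustrate what I mean, here’s a rough sketch in Python (purely hypothetical; I have no idea how Duplicati’s scheduler is actually structured) of a queue with both behaviours: a disable switch that lets the running job finish but starts nothing new, and de-duplication on enqueue:

```python
from collections import deque

class JobQueue:
    """Hypothetical sketch of the scheduler behaviour described above."""

    def __init__(self):
        self._pending = deque()
        self.enabled = True  # suggestion 1: a switch to stop anything new from starting

    def enqueue(self, job_id):
        # Suggestion 2: never hold more than one pending instance of a job.
        if job_id in self._pending:
            return False
        self._pending.append(job_id)
        return True

    def next_job(self):
        # Suggestion 1: when disabled, nothing new starts, but the queue is
        # kept intact so processing can resume after the upgrade.
        if not self.enabled or not self._pending:
            return None
        return self._pending.popleft()

q = JobQueue()
q.enqueue("daily-backup")
q.enqueue("daily-backup")    # ignored: already queued once
q.enabled = False            # "disable the queue" before a mono upgrade
assert q.next_job() is None  # nothing new starts while disabled
```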

I’d appreciate any other thoughts on the above.

Thanks

Andy


I’d love to see this! Currently Duplicati can go into paused mode for some time when it starts; perhaps this could be extended in some fashion to stop the scheduler from starting anything new?

I’d definitely want current jobs to finish, and I think I’d want to be able to start my own jobs from the command line if that were feasible.

I think there was a discussion about this sort of thing but I’m having trouble finding it at the moment. Using “Pause” as suggested by @TheDaveCA might do the trick, but I’m actually wondering if an update of mono while Duplicati is paused might interrupt the pause…

I know this one has been discussed a few times and the issue is that the active job isn’t considered part of the queue.

I think that functionally this is kind of related to How can I disable start of missed backups when duplicati start?

I’m not finding it now but I believe there are also some posts suggesting a “don’t run if it’s been less than X time since the last successful run” feature.

Hi.

I want to be able to ‘disable’ the queue so that I can get to a point where I can stop the Duplicati service and carry out the mono upgrade. When Duplicati restarts, I wouldn’t be too bothered if the queue automatically started up again.

I also want to do this when a Duplicati upgrade needs to be installed. I don’t think ‘pause’ will allow you to carry out the Duplicati upgrade, as it will refuse if any active jobs are present.

Andy

I believe you are correct. I’d need to test, but I assume that once paused you can’t stop the current job.

So you are looking for a button somewhere that does something like “pause Duplicati queue and stop current task (if applicable)”?

Hi,

Doesn’t necessarily need to be the same button. I’d be happy to have a way to prevent further queued jobs from running, and then separately use the existing controls to stop the current job once uploads have finished.

I would then shut down Duplicati and do any upgrades I need to do. When Duplicati restarts, if I had to re-enable the queue again then that’d be perfectly acceptable.

So I think all I’d like is a button up at the top (near the existing pause and stop controls) to disable execution of entries that are either currently in the queue or added to the queue later. Ideally this button should indicate the current state, so I can tell whether I need to re-enable processing again.

Thanks

Andy

This has started causing me issues again. Due to (presumably) a mono upgrade, Duplicati suddenly had to examine every file in all my backup sets, as it thought the timestamps had changed.

As a result, there now seem to be about 3 or 4 copies of each of my daily backups in the queue, so there’s a good chance it’d never catch up by itself!

I’ve temporarily disabled all the schedules, and once the queue goes idle I’ll enable them again. However, it seems pointless scheduling another copy of a backup job if one is either already running or queued up.

Any thoughts?

Andy

Can you go to the main menu “About” → “System info” page, scroll down to “Server state properties”, and share the values of your “proposedSchedule” and “schedulerQueueIds” fields?

I think in another topic we figured out how a job could be in there twice (one running, one scheduled) but there shouldn’t be a way to get multiple schedules of the same job…
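
If digging through the web UI gets tedious, the same fields can probably be read over HTTP. A minimal sketch, assuming the default port (8200), no UI password, and that the /api/v1/serverstate endpoint the web UI polls exposes these fields (any of which may differ on your setup):

```python
import json
import urllib.request

# Assumes Duplicati's web server on its default port with no password set.
url = "http://localhost:8200/api/v1/serverstate"
with urllib.request.urlopen(url) as resp:
    state = json.load(resp)

# The System info page shows camelCase names; the raw JSON casing may differ.
for name in ("proposedSchedule", "schedulerQueueIds"):
    value = state.get(name, state.get(name[0].upper() + name[1:]))
    print(name, ":", value)
```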

I think an adjustable “cooldown timer” on any given backup job would be a great idea, and maybe not too hard to implement(?).

That would be useful for the “last job run went long and now next job is starting immediately” scenario, but I’d really like to figure out this “queue buildup” problem before adding a feature that would mostly hide some underlying issue.

[quote="JonMikelV, post:8, topic:3255, full:true"]
Can you go to the main menu “About” → “System info” page, scroll down to “Server state properties”, and share the values of your “proposedSchedule” and “schedulerQueueIds” fields?[/quote]
This is what it’s currently showing:

proposedSchedule : [{"Item1":"4","Item2":"2018-05-22T01:00:00Z"}]
schedulerQueueIds : [{"Item1":26,"Item2":"6"},{"Item1":28,"Item2":"4"}]
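
(A quick way to check fields like these for duplicates, assuming "Item2" in schedulerQueueIds is the backup job ID, which is only a guess based on how the numbers line up:)

```python
import json
from collections import Counter

# The schedulerQueueIds value from above, pasted as-is.
queue = json.loads('[{"Item1":26,"Item2":"6"},{"Item1":28,"Item2":"4"}]')

# Guess: Item1 is the queue entry ID, Item2 the backup job ID.
jobs = [entry["Item2"] for entry in queue]
dupes = [job for job, count in Counter(jobs).items() if count > 1]
print(dupes if dupes else "no duplicate jobs queued")
```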

It doesn’t appear to have multiple entries in it at the moment, but I’ve disabled all the scheduled backups for now to try to let it catch up.

Will keep an eye on it.

Thanks

Andy

Ok, I didn’t see any duplicate entries in the About → System Info page, but Duplicati has just completed a backup and then immediately started running the same one again. This backup is scheduled to run daily, but there have been a couple of long backups running (one took about 3.5 days, the other about 20 hours).

So it definitely seems to have either had two copies of this job in the queue, or added another instance immediately after the queued one finished.

Andy

Thanks for the details.

Since you’re not seeing duplicates in the system info, it sounds like you’re running into the known scenario where:

  1. a job starts (which takes it out of the queue)
  2. this puts the next run of the job in the “scheduled” state (all as it should be)
  3. the running job runs for so long that it passes the start time of the “scheduled” one, causing the “scheduled” one to be put into the pending queue (so it runs as soon as the currently running job ends)

There have been some suggestions on how to handle this scenario (including a “cooldown timer” @drakar2007 mentioned) but so far nothing has been implemented.
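
To make the cooldown idea concrete, the check might look something like this (a sketch only; the names and the “last successful run” lookup are invented for illustration):

```python
from datetime import datetime, timedelta, timezone

COOLDOWN = timedelta(hours=6)  # "don't run if it's been less than X since the last success"

def should_run(last_successful_run, now=None):
    """Skip a scheduled run that lands too soon after the previous success."""
    now = now or datetime.now(timezone.utc)
    if last_successful_run is None:
        return True  # never run before, so go ahead
    return now - last_successful_run >= COOLDOWN

# A job that finished 20 minutes ago would be skipped under a 6-hour cooldown.
recent = datetime.now(timezone.utc) - timedelta(minutes=20)
print(should_run(recent))  # False
```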



I don’t think that’s the case in this instance. The job is scheduled for 2am every day.

Today, it ran from 22/05/2018 15:18:55 (1526998735) to 22/05/2018 16:16:49. It then ran again from 22/05/2018 16:16:53 (1527002213) to 22/05/2018 17:35:32 (1527006932).

As you can see, the first instance of the job today did not straddle the configured schedule time (scheduled for 2am, but the job ran from approximately 3pm to 4pm). However, almost immediately after that first instance completed, a second one started (about 4 seconds later, in fact).
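
For reference, converting those epoch values (output is UTC; the log times above are local, UTC+1):

```python
from datetime import datetime, timezone

# Epoch seconds from the job logs above.
for ts in (1526998735, 1527002213, 1527006932):
    print(ts, datetime.fromtimestamp(ts, tz=timezone.utc).isoformat())
# 1526998735 -> 2018-05-22T14:18:55+00:00  (15:18:55 local)
# 1527002213 -> 2018-05-22T15:16:53+00:00  (16:16:53 local)
# 1527006932 -> 2018-05-22T16:35:32+00:00  (17:35:32 local)
```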

Hope this helps track things down. If I can provide any further information then please let me know.

Andy