I’ve been happily using Duplicati for a while now, but have found a couple of things which (hopefully) could be implemented relatively simply, and would make things easier for me (and hopefully others).
Provide a means of stopping any queued jobs from running. Whenever there’s (for example) a mono update, I’d like to be able to signal the current job to stop after uploading, and prevent any further queued jobs from running. Then once Duplicati goes idle, I can stop the service and perform the update. At the moment, I can signal the current job to stop, but if there are any further jobs queued they will then start, and I have to signal them to stop (individually, as each queued job starts) and then wait for them to abort.
Avoid adding the same entry to the queue more than once. I have a number of jobs that take a long time to run. Sometimes, I’ll run one manually, and while it’s running the timer for it to run automatically will expire. This seems to cause the job to immediately run again once it’s complete. I think it’d be preferable to only have one instance of each job in the queue at any time.
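The deduplication behaviour I’m after could be sketched roughly like this. This is just an illustration in Python with made-up names (`JobQueue`, `enqueue`), not Duplicati’s actual scheduler code:

```python
# Hypothetical sketch of enqueue-time deduplication (not Duplicati's real scheduler).
from collections import deque

class JobQueue:
    def __init__(self):
        self._queue = deque()
        self._queued_ids = set()
        self.running_id = None  # ID of the currently running job, if any

    def enqueue(self, job_id):
        """Add a job unless it is already queued or currently running."""
        if job_id in self._queued_ids or job_id == self.running_id:
            return False  # skip the duplicate
        self._queue.append(job_id)
        self._queued_ids.add(job_id)
        return True

    def next_job(self):
        """Pop the next queued job and mark it as running."""
        if not self._queue:
            return None
        job_id = self._queue.popleft()
        self._queued_ids.discard(job_id)
        self.running_id = job_id
        return job_id
```

With something like this, a schedule firing while the same job is running (or already queued) would simply be a no-op instead of piling up another run.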
I think there was a discussion about this sort of thing but I’m having trouble finding it at the moment. Using “Pause” as suggested by @TheDaveCA might do the trick, but I’m actually wondering if an update of mono while Duplicati is paused might interrupt the pause…
I know this one has been discussed a few times and the issue is that the active job isn’t considered part of the queue.
I want to be able to ‘disable’ the queue so that I can get to the situation where I can stop the Duplicati service to carry out the mono upgrade. When Duplicati restarts, I wouldn’t be too bothered if the queue automatically restarted again.
I also want to do this when a Duplicati upgrade needs to be installed. I don’t think ‘pause’ will allow you to carry out the Duplicati upgrade, as it will refuse if any active jobs are present.
Doesn’t necessarily need to be the same button. I’d be happy to have a way to prevent further queued jobs from running, and then separately use the existing controls to stop the current job once uploads have finished.
I would then shut down Duplicati and do any upgrades I need to do. When Duplicati restarts, if I had to re-enable the queue then that’d be perfectly acceptable.
So I think all I’d like is a button up at the top (near the existing pause and stop controls) to disable execution of entries that are either currently in the queue, or added to the queue later. Ideally this button should indicate the current state so I can tell if I need to re-enable processing again.
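In pseudocode terms, the toggle I have in mind is just a flag that gates the scheduler before it starts queued work. All names here (`Scheduler`, `toggle_queue`, `start_next`) are illustrative, not anything in Duplicati:

```python
# Hypothetical "disable queue" toggle gating the scheduler (names are illustrative).
class Scheduler:
    def __init__(self):
        self.queue_enabled = True  # state shown by the proposed toolbar button
        self.pending = []          # queued job IDs

    def toggle_queue(self):
        """Flip the proposed enable/disable button; returns the new state."""
        self.queue_enabled = not self.queue_enabled
        return self.queue_enabled

    def start_next(self):
        """Only start queued work while the queue is enabled."""
        if not self.queue_enabled or not self.pending:
            return None
        return self.pending.pop(0)
```

The key point is that disabling the flag leaves the queue contents alone; jobs just don’t start until it’s re-enabled (manually, or automatically on restart).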
Again this has started to cause me issues. Due to (presumably) a mono upgrade, all of a sudden Duplicati had to examine every file in all my backup sets, as it thought the timestamp had changed.
As a result of this, my daily backups now seem to have about 3 or 4 copies of each one in the queue, so there’s a good chance it’d never catch up by itself!
I’ve temporarily disabled all the schedules, and once the queue goes idle I’ll enable them again. However, it seems pointless scheduling another copy of a backup job if one is either already running or queued up.
That would be useful for the “last job run went long and now next job is starting immediately” scenario, but I’d really like to figure out this “queue buildup” problem before adding a feature that would mostly hide some underlying issue.
[quote=“JonMikelV, post:8, topic:3255, full:true”]
Can you go to the main menu “About” -> “System info” page, scroll down to “Server state properties” and share the values of your “proposedSchedule” and “schedulerQueueIDs” fields?[/quote]
This is what it’s currently showing:
Ok, I didn’t see any duplicate entries in the About -> System Info page, but it’s just completed a backup, and then immediately started running the same one. This backup is scheduled to run daily, but there have been a couple of long backups running (one about 3.5 days, the other about 20 hours).
So it definitely seems to have either had two copies of this job in the queue, or added another instance immediately after the queued one finished.
I don’t think that’s the case in this instance. The job is scheduled for 2am every day.
Today, it ran from 22/05/2018 15:18:55 (1526998735) to 22/05/2018 16:16:49. It then ran again from 22/05/2018 16:16:53 (1527002213) to 22/05/2018 17:35:32 (1527006932).
As you can see, the first instance of the job today did not straddle the configured schedule time (scheduled for 2am, but the job ran from approximately 3pm to 4pm). However, almost immediately after that first instance completed, a second one started (about 4 seconds later, in fact).
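For anyone checking the arithmetic, the epoch timestamps from my logs line up with the wall-clock times above:

```python
# Epoch timestamps taken from the log entries quoted above.
first_start = 1526998735   # 22/05/2018 15:18:55 (start of first run)
second_start = 1527002213  # 22/05/2018 16:16:53 (start of second run)
second_end = 1527006932    # 22/05/2018 17:35:32 (end of second run)

# The second run started 57m58s (3478s) after the first began...
print(second_start - first_start)
# ...and itself took 1h18m39s (4719s).
print(second_end - second_start)
```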
Hope this helps track things down. If I can provide any further information then please let me know.