My understanding is that a single job will never queue up more than once. If a run is queued, doesn’t get a chance to start (for whatever reason), and the run time comes around again (say the next day), then the previously queued-but-not-started run will be replaced (and potentially queued) by the new one.
So no, you shouldn’t get double runs of the small job.
As for the queue lasting through a reboot, I believe it does not. I did a quick test: I paused my Duplicati, “started” a backup (essentially queuing it), stopped the server, then started the server back up. The queued job did NOT survive the server stop/start.
Correct. If a start time is missed (say, due to shutdown), it is not noticed at power-on, and the next backup won’t occur until the next scheduled time. Note that this can also happen whenever the Duplicati server (whether the tray icon or a service) is not running. So if you run Duplicati SOLELY via the tray icon and turn your computer on but don’t log in (logging in is what starts the tray icon), no backups will run.
Note that if the server is not actually stopped (for example, the computer was just asleep during the scheduled run time), then on wakeup Duplicati WILL notice the missed event and go ahead and run the backup. (This happens on my laptop all the time.)
Personally, I’d like a setting that lets Duplicati start a “missed” backup if the last run time is OLDER than the last scheduled time. This would help in scenarios that are the inverse of yours, such as weekly backups, where missing a scheduled start time due to being powered down means no backup for up to another whole week!
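The proposed check is simple to state. Here’s a minimal sketch in Python (the function name and the setting itself are hypothetical; this is not an existing Duplicati option):

```python
from datetime import datetime

def should_run_missed_backup(last_run, last_scheduled):
    """Hypothetical check for the setting proposed above: at startup,
    run a "missed" backup if the last completed run is older than the
    most recent scheduled start time."""
    return last_run < last_scheduled

# Weekly backup scheduled Mondays at 1 PM; the machine was off this Monday,
# so the last completed run is a week behind the last scheduled slot.
print(should_run_missed_backup(datetime(2018, 1, 1, 13, 0),
                               datetime(2018, 1, 8, 13, 0)))  # True
```

With a weekly schedule this would trigger a catch-up run at power-on instead of waiting up to another week.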
As for scheduling hourly backups, that would help work around the current behavior. Note that by default Duplicati doesn’t save a new backup version if no files have changed, so you shouldn’t end up with 24 backups every single day (unless you’re including temp files and the like).
If you do go this route, you might also want to look at the --retention-policy feature that was added in one of the more recent canary versions. It basically lets you “thin out” versions as they get older. For example, you could keep your hourly backups for only a day (leaving 1 daily backup for anything older than a day), then keep daily backups for only a week (leaving roughly 4 weekly backups for a month), and so on.
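To illustrate the thinning idea, here’s a rough sketch in Python. This is only an illustration of the concept, not Duplicati’s actual retention-policy implementation or its option syntax; the function and policy format are made up:

```python
from datetime import datetime, timedelta

def thin_backups(timestamps, policy, now):
    """Sketch of retention-policy style thinning (illustration only,
    not Duplicati's actual code). `policy` is a list of
    (max_age, interval) pairs, checked in order: within each timeframe,
    keep at most one backup per interval. Backups older than every
    timeframe are dropped entirely."""
    keep = []
    for ts in sorted(timestamps, reverse=True):  # newest first
        age = now - ts
        interval = next((step for max_age, step in policy if age <= max_age), None)
        if interval is None:
            continue  # older than all timeframes: delete
        if not keep or keep[-1] - ts >= interval:
            keep.append(ts)  # newest backup, or gap since last kept is wide enough
    return keep

# Example: hourly backups for two days, thinned to one per 6 hours for
# the first day and one per day for the first week.
now = datetime(2018, 1, 10, 0, 0)
backups = [now - timedelta(hours=h) for h in range(1, 49)]
policy = [(timedelta(days=1), timedelta(hours=6)),
          (timedelta(days=7), timedelta(days=1))]
print(len(thin_backups(backups, policy, now)))  # prints 5
```

So 48 hourly versions collapse to a handful as they age, which is the effect the option is after.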
I performed the following test: I scheduled a backup at 11 PM daily and turned off the computer (a Windows 10 laptop) before that time. The next morning (today), when I turned on the computer, it immediately performed the “missed” backup, and in the interface the “next” backup was shown as the following night (nothing indicated the missed run; it just ran it).
But that was not what occurred in my test above.
Note: I’m not running as a service, just the trayicon and the server.
I think this is because in Windows 10 a “shutdown” is actually not a full shutdown but a hybrid shutdown (Fast Startup), which behaves more like hibernation. Windows only does a true cold boot if you “restart” it.
I also prefer it this way; for me the current behavior is fine!
The scheduler is fairly simple, but schedulers are always confusing.
When you create the backup, and set a time to run, Duplicati records the “next time” it should run.
Once that time comes, it will send the backup into the queue of tasks to run, and update the “next time” it will run.
If the scheduler is restarted (e.g. by a reboot) it will look at all the “next time” values and send items into the queue.
When sending an item into the queue, Duplicati will first check if it is already in the queue, and if so, just not put it in there (otherwise there could be a case where the queue contains millions of repeated entries).
Once a task completes, the “next time” is committed to the database. This ensures that if a task was in the queue but did not run, the “next time” will automatically revert on restart and the item will be queued again.
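The steps above can be sketched roughly like this (a minimal Python illustration; the class and method names are hypothetical, and Duplicati’s real implementation is C# and differs in detail):

```python
from datetime import datetime, timedelta

# Illustrative sketch of the scheduler behavior described above.
# Names and structure are hypothetical, not Duplicati's actual code.

class Scheduler:
    def __init__(self):
        self.queue = []       # pending backup tasks, in order
        self.next_time = {}   # in-memory "next time" per job
        self.committed = {}   # "next time" values persisted to the database

    def tick(self, job, now, interval=timedelta(days=1)):
        """When a job's "next time" arrives, enqueue it (with
        deduplication) and advance the in-memory "next time"."""
        if now >= self.next_time.get(job, now):
            if job not in self.queue:  # never queue the same job twice
                self.queue.append(job)
            self.next_time[job] = self.next_time.get(job, now) + interval

    def complete(self, job):
        """Only a *completed* run commits the new "next time", so a
        queued-but-never-run job reverts to the old value on restart."""
        self.queue.remove(job)
        self.committed[job] = self.next_time[job]

    def restart(self):
        """A restart drops the in-memory queue and reloads "next time"
        from the committed values; any time that has already passed
        gets the job re-queued on the next tick."""
        self.queue = []
        self.next_time = dict(self.committed)
```

The key property is in `complete` and `restart`: because the advanced “next time” is only written to the database after a successful run, a restart rewinds any queued-but-unrun job to its old “next time”, which re-queues it.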
That clears up how the queue works and how it is effectively retained across restarts, but I’m still not clear what happens in this scenario (sorry if I’m just not thinking straight):
Job is set to run daily at 1 PM
1 PM today comes around so the job runs to completion and tomorrow 1 PM is added as the “next time” for this job
When tomorrow 1 PM comes around, the PC happens to be powered down so the “next time” event is missed
Tomorrow 2 PM comes around and the PC is powered back on
How does the scheduler handle the historical / missed “next time” event? Ignore it? Replace it with a future version (based on the job schedule)? Retroactively add it as something needing to be started ASAP? Go crazy and attempt to take over the world?
I also wanted to verify this scenario:
Job is set to run daily at 1 PM
1 PM today comes around so the job runs NOT to completion (PC is powered down) so tomorrow 1 PM is NOT added as the “next time” for this job
Tomorrow 10 AM comes around and the PC gets powered back on
Does the scheduler see the unfinished job and start it again? Since “next time” never got set due to the incomplete previous run, how does it know to kick off at 1 PM again?