How to clear the Duplicati scheduler queue

I’ve copied my Duplicati config and databases to a new computer. There are several actions listed in the scheduler queue on the System Info page.

I don’t want these jobs to kick off. I have other tasks I want to run now. How do I delete/clear these queued jobs (permanently)?

Thank you… Steve

Hello

First, thanks for pointing this out to me; I had never thought to scroll down to this part of the Duplicati interface. I have argued recently that Duplicati is intuitive, and that's generally true, but unfortunately not so much for this part.

You surely know that you can pause all jobs; this part is just the job schedule, described in a rather cryptic form. If you go into each job and clear the 'Automatically run backup' checkmark on the fourth tab (Schedule), this list will become empty.

If a job is already queued, disabling the automatic run shouldn't stop it. I tested this with a queued backup that had that option disabled, and it still ran after the first backup finished. Toggling the option might conceivably behave differently, but I wouldn't hold my breath on that; it would be accidental behavior that happened to be guessed right, with 50/50 odds either way.

Using Pause will also pause the backup currently running so that it never finishes. That’s also a problem.

I don't see a valid way of removing queued jobs from there; anything that works looks accidental or hacky. For instance, you could disable the internet connection or pull the target drive right before the jobs run, and that would "solve" it too.

Thanks for the discussion. The situation is that I'm rebuilding a database, which has taken several days so far, and I don't want other already-queued jobs to start automatically once it completes. None of my configs are set to run automatically at this point, until I clean up what I have moved, and, as mentioned, pausing Duplicati is not the answer.

Seems like there should be one or more Duplicati commands to manage the queue, but I didn't see any in a quick look. It would be nice if there were a way to edit the queue directly.

Any help?

Thank you… Steve

I don't fully understand the problem with letting a queued job start, but you could arrange for it to start and fail without changing the backup.
For example, if the destination has an AuthID, add a readable note to the start of that value, so the job fails at authentication.

I think the OP should be okay, but I figured I'd add the following as a reference for others on where Duplicati stores these items.

When you poke at /serverstate you'll find two queues and an active task:

- ActiveTask lists the actively running job and its task Id.
- SchedulerQueueIds holds the Ids of manually added jobs (not normally scheduled), submitted via "Run now" or the CLI.
- ProposedSchedule holds jobs scheduled via Duplicati's scheduler (step 4 of the job editor); when they come due, they are moved to the SchedulerQueueIds queue.
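To make that concrete, here's a minimal Python sketch of pulling those three items out of a /serverstate response. The field names are what I observed while poking around, and the sample payload below is entirely made up for illustration, so treat the shapes (the Item1/Item2 pairs in particular) as an assumption, not documented API.

```python
import json

# Hypothetical /serverstate payload; ids and shapes are illustrative only.
payload = json.loads("""
{
  "ActiveTask": {"Item1": 5, "Item2": null},
  "SchedulerQueueIds": [{"Item1": 6, "Item2": "2"}],
  "ProposedSchedule": [{"Item1": "1", "Item2": "2024-01-01T00:00:00Z"}]
}
""")

active = payload["ActiveTask"]          # the currently running task, if any
queued = payload["SchedulerQueueIds"]   # "Run now" / CLI submissions waiting to run
proposed = payload["ProposedSchedule"]  # scheduler entries waiting for their due time

print(active["Item1"], [q["Item1"] for q in queued], [p["Item1"] for p in proposed])
```

In a real script you'd fetch the JSON from the server (with whatever authentication your setup needs) instead of hardcoding it, but the three fields are the interesting part.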

I'm currently knee-deep in trying to make my PowerShell scripts "wait until current or scheduled jobs are complete before submitting themselves", which is what led me to discover the above. If I finish my random-file-generator script (I got tired of 'changing' my data so a backup would run for more than 10 seconds, so now I just make new data), I'll see if I can write something to manually clear SchedulerQueueIds. Clearing ProposedSchedule will probably take a bit more than nulling out its array: likely finding the scheduled jobs, editing them so they are no longer scheduled, and then clearing the ProposedSchedule array.
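For the "wait until idle" part, the check itself is simple once you have the serverstate data as a dict. Here's a sketch in Python rather than PowerShell; the field names are the ones observed above and are an assumption, and `fetch_state` is a hypothetical callable you'd supply to actually query the server:

```python
import time

def is_idle(state):
    """True when nothing is running or queued, judging by the three
    serverstate fields discussed above (field names are my observation)."""
    return (not state.get("ActiveTask") and
            not state.get("SchedulerQueueIds") and
            not state.get("ProposedSchedule"))

def wait_until_idle(fetch_state, poll_seconds=30, timeout=3600):
    """Poll fetch_state() until the server looks idle, or give up
    after timeout seconds. Returns True if idle was reached."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if is_idle(fetch_state()):
            return True
        time.sleep(poll_seconds)
    return False
```

Whether you'd want to wait on ProposedSchedule too depends on your goal; for "don't collide with a running or imminent job", checking ActiveTask and SchedulerQueueIds alone may be enough.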

I wrote one in Python last April for similar reasons. You’re welcome to it, but FYI I’m new at Python…

Cheers, that sounds great. It’s been quite a few years since I’ve played with Python but it’s always a fun language.

"Finished code" is a relative thing… the script works fine if you don't mind editing it manually. In trying to set it up so it prompts for the number of files to create and the storage units (kb/mb/gb/tb/pb) to use, I've somehow broken the data validation; remove the prompts and it works fine. I personally don't need the prompts, but I figured others might prefer to be asked, so I started adding them in.

Python has some interesting standard capabilities which I'm using, and it's used in Duplicati too (sorry macOS).
The script below is random_file2.py, though I'm still not sure what tooling I'll put around it. The focus here is on change (potentially Duplicati-style), efficiency, predictability, and the flexibility to make random files or change existing ones randomly, either within the current size or extending it, depending on how the options are set.

#!/usr/bin/python3
# usage: random_file2.py path [--block-size=] [--change-size=] [--percent=] [--create=]
import os
import random
import argparse
import sys

parser = argparse.ArgumentParser()
parser.add_argument('path')
parser.add_argument('--block-size', type=int, default=102400, help='size of each block of this file')
parser.add_argument('--change-size', type=int, default=2, help='size of change at start of a block')
parser.add_argument('--percent', type=int, default=10, help='percent of blocks to get changed')
parser.add_argument('--create', type=int, help='create file with this many blocks', dest='blocks')
args = parser.parse_args()

# os.O_BINARY exists only on Windows; use 0 elsewhere so open flags still work
O_BINARY = getattr(os, 'O_BINARY', 0)

if args.blocks is not None:
    size = args.block_size * args.blocks
    fd = os.open(args.path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC | O_BINARY)
    os.ftruncate(fd, size)
    blocks = args.blocks
elif os.path.exists(args.path):
    fd = os.open(args.path, os.O_WRONLY | O_BINARY)
    size = os.path.getsize(args.path)
    blocks = size // args.block_size
else:
    print('Please supply path of an existing file, or use --create to create a new file.', file=sys.stderr)
    sys.exit(1)

changes = blocks * args.percent // 100

# overwrite the first change-size bytes of a random sample of blocks
for index in random.sample(range(blocks), k=changes):
    os.lseek(fd, index * args.block_size, os.SEEK_SET)
    os.write(fd, random.randbytes(args.change_size))  # randbytes needs Python 3.9+

os.close(fd)

Efficiency comes from changing only a small amount within each "block", which need not match Duplicati's block size.
As little as 1 byte is "different", although changing just 1 byte will eventually run out of unique blocks…
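As a worked example of just how little gets rewritten with the script's defaults (block size 102400, 10 percent of blocks, 2-byte changes) on a hypothetical 10 MiB file:

```python
block_size = 102400
change_size = 2
percent = 10

size = 10 * 1024 * 1024            # a 10 MiB file, for illustration
blocks = size // block_size        # 102 whole blocks
changes = blocks * percent // 100  # 10 blocks get touched per run
print(blocks, changes, changes * change_size)  # 102 10 20
```

So a single run rewrites only 20 bytes, yet (assuming the changed blocks land in different Duplicati blocks) it can dirty 10 of them for the next backup.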

Predictability comes from being able to align changes to Duplicati blocks, unlike trying to predict where random byte changes will land.

Flexibility lets you make a completely random-content file of arbitrary size if you want, and I "think" you can probably also extend an existing file (though only so far, and I haven't tested that).

An ambitious workload simulator would also add and delete files, with this script making the changes inside them.
I've got a very old, very idle PC I could run tests on, if anybody can figure out how to make it a workload.
