How can I stop a stuck backup job in Duplicati without uninstalling the entire program?

My setup: Fresh computer/duplicati install on Windows 11, imported configs from file, and Duplicati is set to pause for 10 minutes on startup.

I want to delete the queued backup jobs before Duplicati unpauses, but there’s no option to get rid of them.

My only choice is to let it start and immediately click “Stop Now”. It has been stuck on “Stopping after the current file: remote-cloud-backup” for about an hour.

Apart from uninstalling the whole damn thing, how can I get it to stop these queued backup tasks?

I’ve even tried to delete the whole backup config, and it still starts backing up.

I just want my files back, and for it to not run a backup until they’re restored.


Well, after reading your post three times, I still can’t really understand what the situation is or what you are trying to do.

There is no magic in Duplicati: if you have removed the backup job and stopped the application, it will start in a blank state. Killing the process in Task Manager is not generally recommended, but if you have never backed anything up it doesn’t matter anyway. The same goes for uninstalling and reinstalling in that situation (no existing backup).

If what you want to do is install Duplicati on a new computer to restore data backed up from another computer (disaster recovery, in other words), import the job from the saved JSON file, disable the schedule before saving, then save. It will not start automatically.
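As an alternative sketch: if the export is the unencrypted JSON variant, the schedule could be cleared in the file itself before importing it. This assumes the export has a top-level "Schedule" key alongside "Backup" (the shape I’ve seen in job exports — verify against your own file first); the sample dictionary below is purely illustrative.

```python
import json

def disable_schedule(config: dict) -> dict:
    """Return a copy of an exported job config with its schedule removed,
    so the imported job will not run until started manually."""
    cleaned = dict(config)
    cleaned["Schedule"] = None  # no schedule -> no automatic start
    return cleaned

# Hypothetical minimal export, for illustration only
exported = {
    "CreatedByVersion": "2.0.7.1",
    "Schedule": {"Time": "2023-10-19T12:20:00Z", "Repeat": "4h"},
    "Backup": {"Name": "remote-cloud-backup"},
}

print(json.dumps(disable_schedule(exported)["Schedule"]))  # prints: null
```

Writing the cleaned dict back out with `json.dump` and importing that file should give the same effect as unchecking the schedule in the import screen.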

Then test if your setup is working, because Duplicati hanging in the last backup phase usually happens because something is wrong - it may have worked on another setup, but possibly a firewall or antivirus is blocking it.

Once testing succeeds, mind to set an appropriate timeout in the advanced options: say, one minute for an HTTP cloud backend, more for SFTP, especially if you have a slow upload; if the network is very unreliable, expect slow retries anyway.

All this will help Duplicati avoid being ‘stuck’ for long periods when the backend or network is unreliable.

Once all this is done, test your backup manually and, if it succeeds, set up a schedule.

Thanks for your response. It’s too late to re-save a config.

I have two configs set up that I saved ages ago, just the way I like them, one for my lan and one for the cloud.

As soon as I import these saved configs, it goes to rebuild the database (takes forever, fair enough).

I come back later and it has backups queued, obviously, and there’s no way to remove these backups from the queue, apart from letting them run and immediately cancelling them.

After all that, I give up and try to delete both backups, but it insists on backing up first before it deletes them, because they’re higher in the queue.

I ended up just uninstalling and deleting the AppData. It became pretty ridiculous.

Now, if I instead Restore directly from the config file, again it takes forever to build a temporary database, which is lost at the end of the restore.

Essentially, it’s a lot of messing around, when you’d think you could just import a config file and tell it what you want to do next before it goes ahead and schedules unnecessary jobs that take hours to complete.

It’s absurd that it insists on finishing a backup job before it’s deleted.

Essentially, I’d like to know how to delete queued tasks without having to wait until they are actually running so I can press cancel.

Another scenario where this would be useful is if you have made a mistake with a queued job and you want to cancel it, without having to wait around for hours for the earlier jobs to complete.

Read my previous post: it’s just a matter of importing the job, going to the fourth tab, and unchecking automatic scheduling; Duplicati will not start anything until you ask for it.

What you mean by that is unclear. It sounds like ‘how to stop a task before it is running’. While it is not running, you hardly need to stop it. Just uncheck automatic scheduling and it will not run.

Good advice for an export with a schedule, as it may be past its scheduled time by the time it’s imported.
I don’t really want it going immediately. The downside of unchecking is that one must enter the Schedule again.
Possibly unchecking the current day of the week will work, but I haven’t tested it.
I did test the start of an import (no save) of an old export with its schedule, and it came up with the old next time.
Probably the thing to do there is to get the schedule looking the way you want before doing the save.

One might think so, but it runs anyway, at least in some cases, such as when the schedule was already missed.
There have been some other posts around seeking to deal with this, and also with how to control order.
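The missed-schedule behavior above — a past-due job firing shortly after startup, then the schedule advancing to the next future slot — can be sketched roughly like this. This is my assumption about the logic, not Duplicati’s actual code; the times mirror the 8:20 schedule and 4-hour repeat from the test later in this thread.

```python
from datetime import datetime, timedelta

def plan_run(scheduled: datetime, repeat: timedelta, now: datetime):
    """Sketch of missed-schedule handling: if the scheduled time is
    already past, run immediately; either way, propose the next
    future slot by advancing in repeat-sized steps."""
    run_now = scheduled <= now
    next_slot = scheduled
    while next_slot <= now:
        next_slot += repeat
    return run_now, next_slot

# Example: 8:20 schedule missed, Duplicati started 8:43, 4-hour repeat
run_now, nxt = plan_run(datetime(2023, 10, 19, 8, 20),
                        timedelta(hours=4),
                        datetime(2023, 10, 19, 8, 43))
print(run_now, nxt)  # True 2023-10-19 12:20:00
```

That next slot (12:20 local, UTC-4) matches the 16:20Z `proposedSchedule` entry seen below, which is why the proposed schedule looks like the one AFTER the missed run.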

I set Duplicati for 10 minute pause on startup, set a backup to a time shortly in the future, and Quit.
After missing that scheduled time, I started Duplicati to see what it thought it was going to do when.
About → System info is a good spot to look, but it’s human-unfriendly, and I’m not an expert reader.
End result: I unchecked, saved (and, to be extra sure, tripped back to Edit to look), and still saw it run at pause end.

I did also save a server database trail at significant points, if it comes down to looking at DB insides where it tracks things like the schedule, its last run time, and so on. Not sure it tracks next run time, however it seems to set a schedule at startup, and after that, it acts sticky despite a job deschedule.
There is possibly a two-part schedule, as the proposed schedule looks like the one AFTER this one.

before quit
Next scheduled task: Name Today at 8:20 AM
(I forgot to note Server state properties)

(Quit, wait, start after 8:20 with 4 hour schedule in zone UTC-4)

during pause
Next task: Name
proposedSchedule : [{"Item1":"4","Item2":"2023-10-19T16:20:00Z"}]
schedulerQueueIds : [{"Item1":2,"Item2":"4"}]

after deschedule
Next task: Name
schedulerQueueIds : [{"Item1":2,"Item2":"4"}]

during run
Name: Starting backup
proposedSchedule : []
schedulerQueueIds : []

after run
No scheduled tasks
proposedSchedule : []
schedulerQueueIds : []

It doesn’t update the server database until after the run. I think an expert once said that’s intentional.
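For anyone else decoding those Server state properties by hand, the `proposedSchedule` value is plain JSON; a few lines make it human-friendly. The field meanings here are my reading of the snapshots above (Item1 looks like the job’s internal ID, Item2 the proposed UTC time), not documented names.

```python
import json
from datetime import datetime, timezone

# proposedSchedule value copied from About -> System info
raw = '[{"Item1":"4","Item2":"2023-10-19T16:20:00Z"}]'

for entry in json.loads(raw):
    # fromisoformat (3.7+) wants an explicit offset instead of 'Z'
    when = datetime.fromisoformat(entry["Item2"].replace("Z", "+00:00"))
    print(f"job {entry['Item1']} proposed for {when}")
    # prints: job 4 proposed for 2023-10-19 16:20:00+00:00
```

16:20Z is 12:20 in UTC-4, consistent with the 4-hour repeat after the missed 8:20 run.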

Are you doing something like migration to new computer, and relying on Duplicati for its file restore?
Backing up an empty area would probably error out quickly, with no damage done, unless you have


Use this option to continue even if some source entries are missing.

in which case it might make an empty backup that you wouldn’t really want but which you can delete.

Migration to new PC - faq wanted How-To has one plan, although it’d be nice if the manual covered it.

I did not think of that, thanks. This seems like a bug. Did you file one?

I’m not sure there’s no design reason for it, but it certainly can be a surprise. No issue filed yet.

Good UI is not (too much) surprising.

Yes, confirmed problem. The workaround is pretty simple: jobs are read at start, so closing Duplicati and restarting it after changing the schedule while it’s still paused will read the job table again and avoid the spurious run.


Thanks ts678 for the considered responses.

Despite deleting the entire backup profile and exiting the program, Duplicati still initiates the backup before completing the deletion.

I can only imagine most backups run on a schedule, so it seems there’s a bit of magic involved in how to manage imports and restores.

Currently, the only way to remove a job is to wait for it to start and then cancel it. This could be hours if it has another job in front of it, and it may take more hours again to stop once it is running. All needless time/work if you don’t want any of them running. A feature to delete jobs from the queue could be a straightforward solution.

Please list the steps in order, as the sequencing is getting confusing again. Reading left to right sounds like Duplicati is totally stopped and deconfigured yet somehow deletes something that’s already deleted except the real problem is it did a backup first. Step by step please, one action at a time, well described.

Typically you just remove a job whenever it’s not running, which should be most of the time. There might be something you’re not saying, but the quoted statement definitely seems to not be the way that it goes. For proof of that I just tried it on a scheduled backup. I deleted it between runs and that was the end of it.

The import case where the export has missed its schedule by the time of import has been covered, but a hard spot is if there’s no pause already in Settings, any overdue job will start shortly after Duplicati starts.

Please give clear steps, in order, of how to create the problem you see and the straightforward solution which doesn’t throw off the whole concept of schedules, e.g. if you dequeue a run, what about its next?


Also, please don’t give steps for a problem where the workaround is already given, e.g. here which may suit the original post where you wanted to deschedule some jobs before Duplicati had finished its pause.

Original situation was never well described (migration possibly?), and current isn’t well described either.


Restore is coming up again, but still without any good context. I already covered migration suggestions.
Imports were also covered. If you don’t like the schedule, fix it to what you want it to be during its import.
That’s not magic, however there are some operations that do take a bit of knowledge in order to do well.

It should take approximately as long as it takes to finish whatever uploads it has in progress. Not long.
Is your Internet speed too low, or is your Options screen 5 “Remote volume size” way past the usual 50MB?
There are other ways for Advanced options to slow stops down. What destination storage type is this?

I’m still trying to guess at what you’re actually doing. Ignoring the import, one way to have late manual operations such as Restore is for scheduled jobs to get in line first, for example after an overnight stop. Come morning, those get in line, and your manual work gets at the end, so waits. Is this getting close?

Things work nicely when there is a proposed schedule, but no line yet, as manual work is first in line…

I ran an expanded test with scheduled backup at 9:41, another (with a lower internal ID, to test) at 9:42, Quit at 9:40, Start at 9:43, set up manual Restore and it gave me the following message on my screen:

Waiting for task to begin
Server is currently paused, resume now

I resumed and Restore screen said Waiting for task to begin

9:41 backup ran first, then 9:42, then last-in-line Restore. Logged start times were 9:48, 10:08, 10:36.

If that’s the problem, you might be able to use either the workaround given, or a modification that I use when I don’t want to lose the whole schedule. Maybe turning off that day will do (then do the rest of the steps).

@ts678 @gpatel-fr
Below link is an example of what happens. It’s a 4 TB hard drive attached via USB 3.
You can also see a separate issue where I have to restart Duplicati/Docker a few times to get it to run. It doesn’t seem to be a permissions issue, as it happens regardless of the Docker stack and folder permissions (root or otherwise).
Anyway, the main thing is it’d be great to have some sort of ‘erase from queue’ function, rather than having to go in and remove the scheduling and then re-add it.

Thanks for the video

This is not linked to Docker. While I have never tried to track it down, I have seen it myself on an LXD container where I start the server as a non-root user; as far as I recall, I have seen it only when starting as non-root. It’s annoying, yes. I suspect it is related to the web server responding to requests before the server (the C# part) is fully initialized. I have never seen it on Duplicati started as a service under Windows (though I have never tried to configure the service to run as a standard user).
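If the web server really does answer before the C# side is ready, one client-side workaround is to retry a probe until it succeeds before trusting the UI or API. A minimal sketch, with the probe left injectable; the real probe would be something like an HTTP GET against the UI port (default 8200 — an assumption, check your setup), and the fake probe below exists only so the example runs standalone.

```python
import time
from typing import Callable

def wait_until_ready(probe: Callable[[], bool],
                     attempts: int = 10,
                     delay: float = 0.0) -> int:
    """Call probe() until it returns True; return the attempt count.
    Raises TimeoutError if the server never becomes ready."""
    for i in range(1, attempts + 1):
        if probe():
            return i
        time.sleep(delay)
    raise TimeoutError("server did not become ready")

# Fake probe for illustration: reports 'ready' on the third call
state = {"calls": 0}
def fake_probe() -> bool:
    state["calls"] += 1
    return state["calls"] >= 3

print(wait_until_ready(fake_probe))  # prints: 3
```

The same idea applies in the Docker case: have a health check or entry script poll the UI port before treating the container as up.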

Your video does seem to show that as soon as your jobs appear, a schedule starts. Why did you not invalidate the schedule, as you were already advised? If you don’t have a database, it will be recreated from the backend as soon as your schedule kicks in, and trying to stop that recreation step may not work (I’ve never tried, to be fair, but it seems unlikely that this sort of work can be stopped).

Recreating the database from a 150 GB job could take time: a few minutes if your backend is in good shape, considerably more if it is heavily damaged, something that can happen rather easily when jobs are interrupted brutally (there is a fix for this recreation slowness, but I will never be able to post a new Canary if I constantly procrastinate by replying to posts on this forum…).

What happens if you just give it a wait instead, ideally with the Pi CPU and disk pretty idle?
Is your OS Debian-based? If so, Duplicati might be running at the lowest possible priorities.


I suppose it’s more complicated for Docker (whose image, and is it always in one?), as the chance of
having the systemd unit file present in there is small. Still – did they copy the priority plan?


Looking at PR and GitHub, Fedora’s systemd unit file was changed too, but does it matter?
Regardless, test is still needed to see if simple slow response is the problem you’re seeing.

There is no systemd in the Duplicati Docker image.

We don’t know what image is in use. Often it’s LinuxServer, whose general practice on images is:

Amusing comment on how to build images:

full-on systemd if you’re completely mad

We also don’t know what priority scheme they used. Does Duplicati’s Docker stay at Linux defaults?

@adamlove can watch requests being responded to, using browser web developer tools (often at F12).

The backups request is probably where the home page list of backups is from, but there’s an earlier one.
I might be seeing About → System info → Server state properties coming in before.
We’ve heard of those being slow to fill with current actual data, but probably never found a cause.

If we have an “it’s slow” here, we have a chance to chase. If it never comes in, then never mind…

Here’s another example. Just to refresh you, I back up daily to google cloud and home network server, and backup is paused for 15 minutes when I start up my computer.

I’ve been trying to restore music for work from the backup server on my home network. However, because I didn’t back up yesterday (actually restoring is more of a priority) I have to stop the backup first o̶r̶ p̶l̶a̶y̶ a̶r̶o̶u̶n̶d̶ w̶i̶t̶h̶ t̶h̶e̶ s̶c̶h̶e̶d̶u̶l̶e̶ to get anywhere. Funny thing is, that even deleting the schedule won’t get rid of the job (hence the original title of this thread).

I restored half of the files yesterday before I had to leave for work, but it took about half an hour to cancel my google cloud backup (even when I click cancel at the very beginning), before it would even let me start looking at the restore I urgently needed. It needlessly locks up the database so I can’t even look up what I want to restore; it is so counterproductive.

I just want it to stop. Forget it, I don’t need this backup to start in the first place once I unpause Duplicati (let alone take half an hour for it to cancel).

A command to remove these jobs from the queue before Duplicati is unpaused (i.e. before a job starts) would fix everything.