Manually Canceling a Backup on Duplicati 2

Duplicati wants to run a backup when I resume it, but I don’t want it to. If I cancel the backup as soon as it starts, it just sits there at 50% CPU and has done nothing for an hour now.

The reason I want to cancel it is that the backup is not continuing and the CPU just sits at 50%.

On the old Duplicati, you could cancel a backup when it starts, and it would cancel quickly. On Duplicati 2 it seems to not cancel.

This is a Windows Server 2019 PC. The backup set is large, about 50 GB, but the first time it ran it at least started quickly; I eventually cancelled that backup after a few hours. Now the backup tries to run again (it is currently queued) and I can’t find a way to stop it. I’ve killed the Duplicati process, but when I open it again the backup is still in the queue.

I’ve enabled logging in the hope of seeing why it locks up at 50% CPU, so I can maybe get it to run normally after all. But the log is not showing much more than a listing of the files it has found to back up.

So I managed to solve this and thought I would post the solution here.

Basically, the solution involves altering the backup set before Duplicati is “Resumed”. Then, when you do resume it, it uses the then-current settings, so the queued backup has almost nothing to do.

So my Duplicati 2 was running at 50% CPU. To stop this, I had to kill the process in Task Manager. When you start Duplicati again, it starts in a “Paused” state, but as soon as you resume it, it again starts the previous backup that has not completed yet (the status box shows that it will start it when you resume, so nothing unexpected here).
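For reference, killing the process in Task Manager amounts to roughly the following sketch, written with psutil; the process-name match is an assumption, since the exact names depend on whether you run the tray icon, the server, or the Windows service:

```python
# Rough equivalent of killing Duplicati in Task Manager.
# Assumes psutil is installed and the process names contain "Duplicati";
# adjust the match for your install (tray icon, server, or Windows service).
import psutil

for proc in psutil.process_iter(["pid", "name"]):
    name = proc.info["name"] or ""
    if "duplicati" in name.lower():
        print(f"Terminating {name} (pid {proc.info['pid']})")
        proc.terminate()        # ask it to exit first
        try:
            proc.wait(timeout=10)
        except psutil.TimeoutExpired:
            proc.kill()         # force-kill if it does not exit
```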

So the way I got around it was to edit the config and remove my backup sources, keeping only a small one. Then I renamed my storage location. Then I ran a repair on the DB. Only then did I “Resume” Duplicati, which then only backed up the few odd files I left in the sources. This finished quickly.

Then I renamed the storage location back and moved the recently created backup files into that location. I did a repair on the DB again and ran the backup again. Only then did I add my original sources back.
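In case it helps anyone repeating these steps: I did the repairs through the web UI, but a database repair can also be driven from a script via Duplicati’s command-line tool. A minimal sketch, where the install path and the destination URL are placeholders for your own setup:

```python
# Hypothetical example of running Duplicati's "repair" command from a script.
# The install path and destination URL below are placeholders; use the same
# storage URL (and any advanced options) as the affected backup job.
import subprocess

duplicati_cli = r"C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe"
storage_url = r"file://D:\Backups\MyJob"  # placeholder destination

subprocess.run([duplicati_cli, "repair", storage_url], check=True)
```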

After this, I could get Duplicati to run normally again.

You want to avoid killing the Duplicati process - it can possibly foul up your backups. The best option is to press the stop button and click “After current file.” What version of Duplicati are you using? If you’re on an older version sometimes the Stop command is not reliable. A lot of improvement has been made in recent versions which should make it into the next beta release.

Hi @drwtsn32, thanks for the reply!

So I had no option but to kill the process, although what you say about potentially corrupting the SQLite DBs is true.

I’m running: Duplicati - 2.0.5.1_beta_2020-01-18

It was a fresh install a few days back, on 2 VMs. The same problem occurs on both systems. Pressing the stop button and asking for it to Stop Now or After the current file does not stop anything, even after an hour. Duplicati goes into an infinite loop (50% CPU on a dual core, and no progress in the live logs). So the only way to stop it is by killing the process.

I found a simpler solution to get Duplicati not to run the backup: simply uncheck the checkbox on the “Schedule” page. Then you “Quit” the process via the taskbar icon and start Duplicati again. Now it will not try to start the backup when you “Resume” it, and you can then execute a DB “repair”. Only then do you re-enable the schedule and, if preferred, manually start the backup again to run through.


I’ve made a few screenshots to show the behaviour; here you can see it kind of “finished”. But now it just doesn’t continue. I’m not sure if that last file is the culprit or if it is just residual information, since the next step may be the problematic one.

This is the same way/spot it got stuck previously.

Here are the last log entries (the last line was 10 minutes before the screenshot time):

Hi guys, ok, some more feedback. After leaving the backup at that spot for another 60 minutes, the backup actually continued. So now the biggest question I have is what it did for 60 minutes that it could not report on. But anyway, here is the “live” log list after the long delay…

Do you remember the steps that you took? Currently, only “Stop after current file” is advised.
The Canary release should be safe for “Stop now”. Anything harder than that isn’t currently advisable;
however, sometimes it’s necessary because nothing else works, as was seen earlier in this thread.

What level of logging? The screenshot of the live log above shows Verbose, but Profiling is more sensitive.
Still, sometimes even that isn’t enough, but it’s worth trying to see if it can add any clue to the location.
There are some more sensitive methods, e.g. Sysinternals Process Monitor captures disk activity.
It’s pretty technical, but even Task Manager can show whether the disk is being accessed (maybe DB activity).
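If Process Monitor feels like overkill, another quick way to see whether a stuck Duplicati process is still touching the disk is to sample its I/O counters. A rough sketch, again assuming psutil and a process whose name contains “Duplicati”:

```python
# Sample Duplicati's disk I/O counters twice to see whether it is still
# reading/writing (e.g. local SQLite database activity). Assumes psutil
# and a process whose name contains "Duplicati".
import time
import psutil

def duplicati_procs():
    return [p for p in psutil.process_iter(["name"])
            if "duplicati" in (p.info["name"] or "").lower()]

before = {p.pid: p.io_counters() for p in duplicati_procs()}
time.sleep(10)
after = {p.pid: p.io_counters() for p in duplicati_procs()}

for pid, io_after in after.items():
    if pid in before:
        io_before = before[pid]
        read = io_after.read_bytes - io_before.read_bytes
        written = io_after.write_bytes - io_before.write_bytes
        print(f"pid {pid}: read {read} bytes, wrote {written} bytes in 10 s")
```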

Do you mean Paused as when you click the Pause icon at home page top, or right-click Tray Icon?

image

You can have it pause in Settings, but other than that it shouldn’t. It can delay on a wake from sleep.
There are a few reports of odd issues with Pause and Resume, but right now I’d just like clarification.

Sometimes SQL queries get slow, and you can see this in the log (but only at Profiling level).
Generally this is only an issue with big backups or ones with lots of versions; 50 GB isn’t very huge.
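If you do switch to Profiling, one way to pick slow statements out of a large log file is to scan for long durations. A rough sketch; the “took d:hh:mm:ss”-style suffix is an assumption about the log format, so adjust the pattern and path to what your log actually shows:

```python
# Scan a Duplicati profiling-level log for slow entries. The "took <duration>"
# pattern is an assumption about the log format; tweak the regex to match the
# lines in your own log file.
import re

LOG_PATH = r"C:\logs\duplicati-profiling.log"  # placeholder path
THRESHOLD_SECONDS = 5.0

pattern = re.compile(r"took (\d+):(\d+):(\d+):([\d.]+)")  # d:hh:mm:ss.fff (assumed)

with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        match = pattern.search(line)
        if match:
            days, hours, minutes, seconds = match.groups()
            total = (int(days) * 86400 + int(hours) * 3600
                     + int(minutes) * 60 + float(seconds))
            if total >= THRESHOLD_SECONDS:
                print(f"{total:8.1f}s  {line.strip()[:160]}")
```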

Fiddling with sources is a good way to see if an issue is sensitive to sources. Interestingly, this one
both was (because it worked with few sources) and wasn’t (because the original sources worked later).

There are enough steps to this that more test effort would probably be necessary to see what’s up.
The ideal test starts with nothing (also safer than testing a running backup) and has simple steps to reproduce the failure.

Unfortunately, this means that I killed the process and ran Duplicati 2 again! :laughing:

Thanks for the advice, but at the point this backup was sitting at, I had no option left but to resort to killing the process in Task Manager. (Both “stop after current file” and “stop now” just didn’t do anything for a long time, though at that point a long time meant maybe 2 minutes.)

Thanks for this, I wasn’t sure what the highest level of logging was. When I track the “Live” logs, I use Verbose. I’d only heard of “Verbose”, so I suspected the other options were special cases. I’ll update my logging to “Profiling” then. I do have Process Monitor installed and use it a lot, but I can’t recall checking disk usage in it. Task Manager does not show disk usage, as this is in a VM.

I do have this setting set, so it pauses for 10 minutes on startup before running backups. It ensures that when the server restarts, it doesn’t immediately start with backups. So the reason it is in a paused state is because of that setting. I had actually forgotten about it, but once I saw on the About page that it was counting down, I remembered and checked the setting.

This backup is 335.89 MB in size, so not that big then.

I’ve enabled the advanced setting to send logs to a file in “Verbose” mode, and will see if I can see anything else there. It is a 1.4 GB file, which is a bit slow to work through. I need to figure out how to tell Duplicati to create a new file after, say, 100 MB, or one each day.

Edit: Also check my post just before yours to see the final live logs screenshot. The backup eventually completed. I also just realised that the backup has a very large number of small text-based files, so everything “fits” inside a few backup “blocks”, or whatever they are called, uploading only at 100 MB size (I’ve set it so). So the backup process may be hard at work on a huge number of small files while not yet ready to break off and “upload”. It also seems that the process “hangs” at the end of the last file: AFTER it has iterated all the files in the sources, it takes very long to start uploading the changed files (only a 3.02 MB file).

Edit: Ok, this log file shows the same things as the “live” log, since both were in Verbose mode.

The VM should be able to report on its virtualized disk load, e.g. in Task Manager or Resource Monitor.

That’s a better way than live log which is OK for a glance (like to see if Profiling is doing its big output).

A Verbose log for a 335 MB backup is already 1.4 GB after a fairly recent setup? I’m surprised at its size.
I’ve got some 15 GB logs, but they took a while at Profiling level to get. I use glogg to look through them.

You can probably use Scripting options to move the old log to some other file name, to start a fresh log.
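A minimal sketch of that idea, assuming the log lives at a fixed path and that the script is wired in via the run-script-before scripting option (the path and the exact option usage are assumptions, so check the scripting documentation for your version):

```python
# Rotate the Duplicati log so each run starts with a fresh file.
# Intended to be called from Duplicati's "run-script-before" scripting option;
# the path below is a placeholder for wherever --log-file points.
# Note: if the server still holds the log open, the rename may fail on Windows,
# so this may need to run while Duplicati is stopped.
import os
from datetime import datetime

LOG_PATH = r"C:\logs\duplicati.log"  # placeholder, match your --log-file setting

if os.path.exists(LOG_PATH):
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    os.rename(LOG_PATH, f"{LOG_PATH}.{stamp}")
```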