Need advice for currently running backup

The following question is not related to my question from Sunday: it concerns a separate Duplicati installation on another machine (also Win11 x64) with a different job configuration.

I have a job that has been running for about 22 hours (since ~1:30 pm yesterday) and has been stuck at the status "Completing upload" (new UI) / "0 Files (0 bytes) to go" (old UI) for about 19.5 hours (since 3:59 pm).

The job has a verbose log configured; the last entry is [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed (3:59 pm; the mentioned file is present on the destination).

In the web interface, the last log entry related to the job is a database query: Feb 2, 2026 15:59: ExecuteReader: SELECT f."Path", fs."FileID", fs."Lastmodified", COALESCE(bs."Length", -1) FROM (SELECT DISTINCT "FileID", "Lastmodified" FROM "FilesetEntry-49DF88F72BFF2F499F348CC43567B8DA") AS fs LEFT JOIN "File" AS f ON fs."FileID" = f."ID" LEFT JOIN "Blockset" AS bs ON f."BlocksetID" = bs."ID"; took 0:00:00:50.790

According to Windows Resource Monitor, Duplicati.GUI.TrayIcon.exe seems to be reading from the job's SQLite database file (at a low rate of 400-900 bytes/s), but I see no writes to the destination or to the database file (last change: 3:57 pm for both). Other reads include, from time to time, a file ("etilqs…") in the %TEMP% directory, and occasionally reads/writes to the server's SQLite database file. CPU usage according to Task Manager is between 4.5 and 9%.

I am unsure whether this indicates legitimate processing by Duplicati or whether the job might be stuck in some kind of deadlock or endless loop. The job usually finishes in 30 minutes to 3 hours (don't ask me what causes the volatility), so it is definitely taking far too long.

What might have led to the increased processing time is a folder rename affecting a high number of files (no actual content changes): before the job ran, I renamed folder A (containing 1.3 million files in 65k subfolders, about 100 GB) to C, and folder B (containing 600 files in 9 subfolders, about 12 MB) to A. Even though all of these folders are within the job's source directories and thus no data change should have occurred, a huge number of file paths changed.

Given that I do not know what is currently happening, or whether the job will eventually finish, I am now wondering whether it makes sense to 1) abort the backup job, 2) delete the data of the latest backup run using the web interface, and 3) undo the folder renames. My hope is that the next run would then complete smoothly and in a timely manner. Does that sound reasonable? Or will aborting the backup and/or undoing the renames mean a long processing time for Duplicati anyway, because it would have to process the renames once again? Or maybe I am completely on the wrong track and there is another reason why the job is still running?

If you need more information or further extracts from the verbose log (the complete log file would probably be too large to process), let me know!

I'm no longer able to edit, so here is the update: I had initiated a "Terminate after current file" after the job had been running for about 35 h (00:38 am/00:45 am), and after 36 h (01:33 am) it finally finished. In the verbose job log I only see a few new entries that were added after the entry from 2nd February:

2026-02-02 15:57:00 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-i17a5d153d7ed44fbb5ddb8c73e8a6085.dindex.zip.aes (282.950 KiB)
2026-02-04 01:29:35 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-bffc4e375176646aaa103a78d31845555.dblock.zip.aes (13.700 MiB)
2026-02-04 01:29:38 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-bffc4e375176646aaa103a78d31845555.dblock.zip.aes (13.700 MiB)
2026-02-04 01:29:38 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-i4c5edb16acce45e2ab797dfe9365afd8.dindex.zip.aes (53.669 KiB)
2026-02-04 01:29:38 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-i4c5edb16acce45e2ab797dfe9365afd8.dindex.zip.aes (53.669 KiB)
2026-02-04 01:33:53 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-20260202T123924Z.dlist.zip.aes (204.862 MiB)
2026-02-04 01:33:56 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-20260202T123924Z.dlist.zip.aes (204.862 MiB)
2026-02-04 01:33:57 +01 - [Verbose-Duplicati.Library.SQLiteHelper.SQLiteLoader-CustomSQLiteOption]: Setting custom SQLite option 'cache_size=-667648'.
2026-02-04 01:33:57 +01 - [Verbose-Duplicati.Library.SQLiteHelper.SQLiteLoader-CustomSQLiteOption]: Setting custom SQLite option 'cache_size=-20000'.
2026-02-04 01:33:57 +01 - [Verbose-Duplicati.Library.SQLiteHelper.SQLiteLoader-CustomSQLiteOption]: Setting custom SQLite option 'cache_size=-667648'.
2026-02-04 01:33:57 +01 - [Verbose-Duplicati.Library.SQLiteHelper.SQLiteLoader-CustomSQLiteOption]: Setting custom SQLite option 'cache_size=-20000'.
2026-02-04 01:33:57 +01 - [Information-Duplicati.Library.Main.Controller-CompletedOperation]: The operation Backup has completed

For the sake of completeness, here is also the live log from the server (Verbose):

Feb 4, 2026 1:35 AM: A Task's exception(s) were not observed either by Waiting on the Task or accessing its Exception property. As a result, the unobserved exception was rethrown by the finalizer thread. (The CancellationTokenSource has been disposed.)
System.AggregateException: A Task's exception(s) were not observed either by Waiting on the Task or accessing its Exception property. As a result, the unobserved exception was rethrown by the finalizer thread. (The CancellationTokenSource has been disposed.)
---> System.ObjectDisposedException: The CancellationTokenSource has been disposed.
at Duplicati.Server.Runner.<>c__DisplayClass16_2.<b__3>d.MoveNext()
--- End of inner exception stack trace ---
Feb 4, 2026 1:33 AM: The operation Backup has completed
Feb 4, 2026 1:33 AM: Setting custom SQLite option 'cache_size=-20000'.
Feb 4, 2026 1:33 AM: Setting custom SQLite option 'cache_size=-667648'.
Feb 4, 2026 1:33 AM: Setting custom SQLite option 'cache_size=-20000'.
Feb 4, 2026 1:33 AM: Setting custom SQLite option 'cache_size=-667648'.
Feb 4, 2026 1:33 AM: Backend event: Put - Completed: duplicati-20260202T123924Z.dlist.zip.aes (204.862 MiB)
Feb 4, 2026 1:33 AM: Backend event: Put - Started: duplicati-20260202T123924Z.dlist.zip.aes (204.862 MiB)
Feb 4, 2026 1:29 AM: Backend event: Put - Completed: duplicati-i4c5edb16acce45e2ab797dfe9365afd8.dindex.zip.aes (53.669 KiB)
Feb 4, 2026 1:29 AM: Backend event: Put - Started: duplicati-i4c5edb16acce45e2ab797dfe9365afd8.dindex.zip.aes (53.669 KiB)
Feb 4, 2026 1:29 AM: Backend event: Put - Completed: duplicati-bffc4e375176646aaa103a78d31845555.dblock.zip.aes (13.700 MiB)
Feb 4, 2026 1:29 AM: Backend event: Put - Started: duplicati-bffc4e375176646aaa103a78d31845555.dblock.zip.aes (13.700 MiB)
Feb 4, 2026 1:17 AM: Failed to refresh token
Microsoft.IdentityModel.Tokens.SecurityTokenValidationException: Refresh nonce does not match the expected value
at Duplicati.WebserverCore.Middlewares.JWTTokenProvider.ReadRefreshToken(String token, String nonce)
at Duplicati.WebserverCore.Services.LoginProvider.PerformLoginWithRefreshToken(String refreshTokenString, String nonce, CancellationToken ct)
at Duplicati.WebserverCore.Endpoints.V1.Auth.<>c.<b__3_0>d.MoveNext()
Feb 4, 2026 1:17 AM: Failed to refresh token
[the same "Failed to refresh token" entry repeats 27 more times between 1:16 and 1:17 AM]

I see no indication that the backup might be incomplete, but I still don't understand what happened during the ~33.5 h with no log messages. I usually don't have that much time for a backup, so it would still be good to know the cause (if, e.g., the folder rename was the cause, I would like to roll the backup back to the state before the rename and make sure the folder names in the source are not changed before the next backup run).

Is the folder rename even plausible as the cause?

When deleting the last backup version, does Duplicati only delete files on the destination, thanks to the incremental backup approach? That should then be fast no matter how many changes were detected in the last backup run, right?

For anyone who might have come across similar problems:
I did what I suspected would help the most (undid the folder renames, deleted the last backup version, and re-ran the backup job). That run only took 40 minutes, so it looks like all is fine again. Regarding the cause, I will keep my eyes open for similar behavior in the next backup runs and hope the situation does not occur again.

Duplicati tracks files by their absolute paths and keeps a last-modified timestamp (and some metadata) for each path. When you rename a folder, the paths change, which causes Duplicati to see the files as "new". For these "new" files, Duplicati has no matching last-modified timestamp and therefore scans them to figure out which blocks are modified. Since the files are not actually modified, this scanning just takes longer but does not otherwise change anything.
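To illustrate the effect described above, here is a minimal sketch of path-keyed change detection. The function and variable names are purely illustrative and do not correspond to Duplicati's internal API; the point is only that a folder rename invalidates every stored path, forcing a full rescan of unchanged content:

```python
# Toy model of path-based change detection (illustrative only;
# names do not match Duplicati's actual internals).
import os
import tempfile

def find_candidates(source_paths, known):
    """known maps absolute path -> last-modified timestamp from the
    previous run. Any path missing from 'known' must be fully scanned,
    even if only a parent folder was renamed and the content is unchanged."""
    to_scan, unchanged = [], []
    for path in source_paths:
        mtime = known.get(path)
        if mtime is None or mtime != os.path.getmtime(path):
            to_scan.append(path)      # treated as new/changed: read and hash blocks
        else:
            unchanged.append(path)    # timestamp matches: skip content scan
    return to_scan, unchanged

# Demo: "renaming" a folder changes every path, so every file is rescanned.
base = tempfile.mkdtemp()
old = os.path.join(base, "A")
os.mkdir(old)
for i in range(3):
    with open(os.path.join(old, f"f{i}.txt"), "w") as f:
        f.write("data")
known = {p: os.path.getmtime(p)
         for p in (os.path.join(old, f"f{i}.txt") for i in range(3))}
os.rename(old, os.path.join(base, "C"))   # folder rename, content untouched
paths = [os.path.join(base, "C", f"f{i}.txt") for i in range(3)]
to_scan, unchanged = find_candidates(paths, known)
print(len(to_scan), len(unchanged))  # all 3 files are rescanned, none skipped
```

In this toy model the rescan only re-reads and re-hashes data; it produces no new blocks, matching the "takes longer but does not change anything" behavior described above.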

This should cause a slowdown while the backup is running, but it should not affect the "Completing upload" step, which just waits for uploads to complete and then performs some minor database maintenance.

It is possible that the rename causes the file table to expand significantly (as both the old and new paths exist) and that causes one of the consistency checks or cleanups to take significantly longer.
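A rough way to see why such an expansion hurts: if both the old and the new paths stay in the path table, any query that joins or scans it touches twice as many rows. The snippet below uses a deliberately simplified toy schema (a single File table with only ID and Path columns; this is not Duplicati's actual schema) to show the row-count doubling:

```python
# Toy illustration of database bloat after a mass rename: old paths
# remain (still referenced by earlier backup versions) while new paths
# are added. Schema is a simplified stand-in, not Duplicati's real one.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE File (ID INTEGER PRIMARY KEY, Path TEXT UNIQUE)")

# Backup 1: folder A with many files.
old_paths = [f"C:\\data\\A\\file{i}.txt" for i in range(1000)]
con.executemany("INSERT INTO File (Path) VALUES (?)",
                [(p,) for p in old_paths])

# Rename A -> C, then backup 2: the new paths are inserted, while the
# old rows stay because the first backup version still references them.
new_paths = [p.replace("\\A\\", "\\C\\") for p in old_paths]
con.executemany("INSERT INTO File (Path) VALUES (?)",
                [(p,) for p in new_paths])

rows = con.execute("SELECT COUNT(*) FROM File").fetchone()[0]
print(rows)  # table doubled: scans and joins now cover twice the rows
```

With 1.3 million renamed files, a doubling like this could plausibly make path-joining consistency checks (such as the 50-second ExecuteReader query quoted earlier) far more expensive.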

Thanks for your reply!

So that means the GUI status was inaccurate?

I don't know how atomic the operations during the run are, but would it be possible to add verbose logging in between (or even an indicator in the status bar) to show that something is actually happening?

2026-02-02 15:57:00 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-i17a5d153d7ed44fbb5ddb8c73e8a6085.dindex.zip.aes (282.950 KiB)
2026-02-04 01:29:35 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-bffc4e375176646aaa103a78d31845555.dblock.zip.aes (13.700 MiB)

The long logging "pause" falls after a completed put and before the start of the next one. Does that mean the mentioned consistency checks ran in between and blocked the next put until they finished?