Backup hangs at the end of Backup_ProcessingFiles

Hey guys,

I’ve been trying to make a 63GB backup to Google Drive using Duplicati (the current latest beta version).
Near the end of the backup (around 2000 files and 2MB remaining) it gets stuck on a random small text file in the Backup_ProcessingFiles step and does not continue. Hours and hours later it times out and cancels the backup.

I’m running the Docker image on Linux, and the USN journal option is off (it’s Linux anyway, so it shouldn’t matter).

Does anyone have any insight into why it’s not working? Or maybe how I can get a logfile showing why it times out? I’m pretty new at this.

Welcome to the forum!

Did this ever get “unstuck”?

To see what’s going on behind the scenes when Duplicati appears stuck, go to About → Show log → Live and set the dropdown to Verbose. You should see new entries appear fairly often. If not, you can set the dropdown to Profiling instead.
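If you would rather have a logfile you can attach here, you can also add log-file options to the job’s advanced options. A minimal sketch, assuming your config volume is mounted at /data inside the container (adjust the path to your setup, and double-check the option names on the Options screen):

--log-file=/data/duplicati-profiling.log
--log-file-log-level=Profiling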

Let us know what you’re seeing there.

I got this error as well. I turned on Profiling, and it prints a bunch of messages that start like this: “Starting - ExecuteNonQuery: UPDATE "Block" SET "VolumeID" = 7 WHERE "Hash" = …”
I think blocks are being processed, but the main progress bar is not being updated to reflect that?

Hi everyone, I’m new to Duplicati and I’m facing the same issue!
I’m running the linuxserver Docker image on top of a headless Debian server.
I created my first backup job, choosing a 43 GB / 30,000-file directory as the source and a Nextcloud server as the destination. After about 5 minutes, the job gets stuck at “7339 files (0 bytes) to go”. Every run hangs on the same file (a 100 MB zip file).
Enabling the Profiling log level shows me the same traces as @winni2k:

ExecuteNonQuery: UPDATE "Block" SET "VolumeID" = 4 WHERE "Hash" = "aAmPYihRaXzdrFXw9HT3JA8b3bmnE5Mliz/9axxJRnU=" AND "Size" = 102400 AND "VolumeID" = 3 took 0:00:00:00.000

Lines like this are printed many times on every live log refresh. Two hours later, the job simply ends successfully.
Any idea?

I’m not really a fan of this Docker image, as it runs Duplicati non-root by default and people easily run into file access problems when backing up data. I know it is technically a better security posture, but it takes a bit of effort to make sure the Duplicati user inside the container has access to all your source files.

Can you confirm that the UID/GID Duplicati runs as within the Docker container has access to the source files on the HOST machine?
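One quick way to check from the host (container name and path below are placeholders for your setup):

docker top duplicati
ls -ln /path/to/source

docker top shows which UID the Duplicati process actually runs as, and ls -ln shows the numeric owner, group, and permissions of the source files, so you can compare the two.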

Alternatively, if you are willing, you can try the official Duplicati Docker image, which runs as root by default.
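For reference, the linuxserver image lets you pick the UID/GID via PUID/PGID environment variables, so matching them to the owner of your source files is one way out. A rough sketch (image name, port, and paths are examples from memory, so check them against the image documentation):

docker run -d --name duplicati \
  -e PUID=1000 -e PGID=1000 \
  -v /path/to/appdata:/config \
  -v /path/to/source:/source \
  -p 8200:8200 \
  lscr.io/linuxserver/duplicati:latest

The official duplicati/duplicati image would be roughly the same run command, just without the PUID/PGID variables.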

Isn’t there a timestamp and some other similar lines near it? Does the timestamp change? Is the next line just like this one, except with a different Hash? You may be seeing a particular spot in the code, but one line doesn’t prove it.

Success is good. I have no feel for what the times should be for 43 GB / 30,000 files, given your hardware, network connection, backup history, and amount of change between backups. What does the job log show?

<job> → Show log can give a summary of the Source Files. The Modified line might be the most relevant.
The Complete log’s BackendStatistics section gives BytesUploaded and FilesUploaded. Is it 2 hours’ worth?
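In case it helps to know what to look for, the BackendStatistics part of the Complete log is a JSON section along these lines (the values here are invented, and the real section contains many more counters):

"BackendStatistics": {
    "BytesUploaded": 1234567890,
    "FilesUploaded": 57
}

If those numbers are tiny for a 2-hour run, the time went somewhere other than uploading.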

If you watch About → Show log → Live → Retry, you can see the uploads happening and judge the speed. Especially on a non-initial backup, it can take a little time to gather enough changes to start uploading data. After that, it becomes a question of whether Duplicati can prepare data faster than it can upload it, or vice versa.

An upload that stalls for some reason can also stop preparation once the asynchronous-upload-limit queue fills up.
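For what it’s worth, that queue depth is controlled by an advanced option on the job (the default shown below is from memory, so verify it on the Options screen):

--asynchronous-upload-limit=4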