Stuck on "Waiting for upload to finish"

I know this has been asked a hundred times here, but I’ve read all the threads I could find on the issue and none seem to be related. I’m not trying to send emails when the upload finishes, so email errors probably aren’t causing this. I’m not uploading to Mega, so that’s not it either.

I’m running Duplicati 2.0.6.3_beta_2021-06-17 on Windows 10 and attempting to back up to Backblaze B2. My full config is below, but perhaps relevant: my blocksize is 5MB and my dblock-size (volume size?) is 100MB.
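
In case it matters, my understanding of the distinction: blocksize is the deduplication chunk size, while dblock-size is the size of each uploaded volume, so at these settings each ~100MB dblock archive should hold on the order of 100MB / 5MB = 20 blocks (more if they compress well).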

I’ve had this issue with two backups so far: one at 17GB, another at around 200GB.

The 200GB backup is the one I’m stuck waiting on right now. Overnight it seemed to upload a great many volumes; I can see them in Backblaze, and I see lots of successful PUT messages in my logs. However, it’s been “Waiting for upload to finish” for, I believe, 2 hours now. The last log entry says:

Feb 16, 2022 8:11 AM: Backend event: Put - Completed: duplicati-i4026597c5ec241fd97c154707c4bc57d.dindex.zip.aes (4.22 KB)

at log level Verbose. It’s 10:00 am here now, for reference.

The previous backup this happened on, I let sit in “Waiting for upload to finish” mode for 8 hours on a 17GB backup, based on threads here suggesting it might just take a while if there are lots of files to index (though that backup was made up of relatively few, fairly large audio files). I eventually tried to stop it, but the UI was unresponsive, so I force-killed the Duplicati processes (I can see two running in Task Manager), which resulted in the Duplicati UI reporting it as a “successful backup”, though the source and backup sizes are different…

Last successful backup: Yesterday at 4:31 PM (took 00:00:31)
Next scheduled run: Tuesday at 1:00 PM
Source: 16.98 GB
Backup: 15.96 GB / 2 Versions

I somehow doubt I could restore from that.

Anyway, what can I do? What further debugging can I attempt? I really like this tool and I’d love to get it working.

Here’s my config, minus passwords:

{
  "CreatedByVersion": "2.0.6.3",
  "Schedule": {
    "ID": 2,
    "Tags": [
      "ID=5"
    ],
    "Time": "2022-02-15T05:00:00Z",
    "Repeat": "1W",
    "LastRun": "0001-01-01T00:00:00Z",
    "Rule": "AllowedWeekDays=Monday,Tuesday,Wednesday,Thursday,Friday,Saturday,Sunday",
    "AllowedDays": [
      "mon",
      "tue",
      "wed",
      "thu",
      "fri",
      "sat",
      "sun"
    ]
  },
  "Backup": {
    "ID": "5",
    "Name": "caleb-finished",
    "Description": "",
    "Tags": [],
    "TargetURL": "b2://xxx-xxx?auth-username=xxxxxx",
    "DBPath": "C:\\Users\\calebjay\\AppData\\Local\\Duplicati\\LAQIYFIORO.sqlite",
    "Sources": [
      "E:\\Pictures\\finished\\"
    ],
    "Settings": [
      {
        "Filter": "",
        "Name": "encryption-module",
        "Value": "aes",
        "Argument": null
      },
      {
        "Filter": "",
        "Name": "compression-module",
        "Value": "zip",
        "Argument": null
      },
      {
        "Filter": "",
        "Name": "dblock-size",
        "Value": "100MB",
        "Argument": null
      },
      {
        "Filter": "",
        "Name": "keep-versions",
        "Value": "2",
        "Argument": null
      },
      {
        "Filter": "",
        "Name": "--blocksize",
        "Value": "5MB",
        "Argument": null
      },
      {
        "Filter": "",
        "Name": "--exclude-files-attributes",
        "Value": "hidden,temporary,system",
        "Argument": null
      }
    ],
    "Filters": [],
    "Metadata": {},
    "IsTemporary": false
  },
  "DisplayNames": {
    "E:\\Pictures\\finished\\": "finished"
  }
}

EDIT: I’m noticing the config doesn’t mention my temporary directory (where I guess Duplicati stores the zipped-up volumes before sending them off and deleting them locally). That directory is on a massive hard drive with terabytes of space remaining, so I don’t think it’s getting stuck there. I am noticing, however, that the DBPath is on my C:\ drive, which has limited space; it really only holds my OS and has about 10GB free. I wonder if that’s an issue? I’ll try clearing some space there.
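
For reference, if I wanted to pin the temp location per job rather than relying on the system temp folder, my understanding is it would just be one more entry in the export’s Settings array, something like this (the path here is only an example):

      {
        "Filter": "",
        "Name": "--tempdir",
        "Value": "E:\\DuplicatiTemp",
        "Argument": null
      }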

EDIT: Yeah, nearing 12 hours now and it hasn’t moved :confused: No new logs either.

I think it won’t let me edit the above post anymore, but I force-restarted Duplicati after 24 hours and it discovered more files in the backup to work on. I came back to check a couple of hours later, and it says it finished and has started on another backup.

Hello and welcome to the forum!

How is this going? Are the backups completing properly now?

Thanks for the welcome, it’s good to be here :slight_smile:

Hard to say until I test, which I hope to do this weekend. I’m still getting this issue at the end of each of the following backups, which range in size from 16GB to the terabyte range (I broke my backups into more bite-sized pieces). The “solution” has been the same: force-kill Duplicati, watch it find more files, and then come back the next morning to find it has started another backup. Here’s the state of my UI. As you can see, there are sometimes significant differences between backup and source size, though if compression is happening it makes sense that documents compress dramatically better than the audio files (the “audio” backup) and video files (the “finished” backup):

Force killing Duplicati shouldn’t be necessary, and is usually a bad idea.

When it appears to be stuck, can you check Resource Monitor (accessible via Task Manager) and see if Duplicati shows any network activity of significance?

That backup you now have in progress is a pretty good size. How fast is your Internet service’s upload speed?

When it’s stuck, Resource Monitor shows no CPU, memory, or network usage. I usually watch it for about 5 minutes with a YouTube video playing, just to be sure I’m not missing some super-low usage or something.

The current backup is uploading at 1.04MB/s. Our connection is 1000Mbps down / 60Mbps up, realistically usually more like 600 down / 40 up.
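
As a rough sanity check (my own back-of-the-envelope, assuming the upload really does sustain about 1 MB/s):

200 GB ≈ 200,000 MB → about 200,000 s ≈ 2.3 days
1 TB ≈ 1,000,000 MB → about 1,000,000 s ≈ 11.6 days

So even when nothing is wrong, the bigger backups should take days to finish uploading.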

When it’s stuck, the whole app seems unresponsive. Pushing the pause or stop button just makes the “trying to pause” (or similar) message display in the main UI, but the logs don’t change, nothing changes in Resource Monitor, and if I leave and come back an hour later the situation is identical to the moment I pushed the button.

Yeah it really does sound like it’s stuck. Odd.

In your first message you were kind enough to post the last entry of the log, but I wonder what the last entry would show if you had About → Show Log → Live Log set to Verbose or maybe even Profiling. Leave a web browser tab open on this live log while the backup is running, and see what the last entry is when it hangs.


I’m doing so now; it’s still working through a 700GB backup. Is there a way to write the logs to a file? I’m concerned I might accidentally close the tab, or that my browser will kill the tab’s memory when I haven’t visited it in a while and it refreshes.
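
For what it’s worth, it looks like the log-file advanced options would do this; a sketch in the same Settings format as my exported config above (the path is just an example, and the option names should be double-checked against the docs):

      {
        "Filter": "",
        "Name": "--log-file",
        "Value": "E:\\DuplicatiLogs\\backup.log",
        "Argument": null
      },
      {
        "Filter": "",
        "Name": "--log-file-log-level",
        "Value": "Profiling",
        "Argument": null
      }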

I’m realizing it’s actually stuck right now and has been for ~18 hours.

No CPU or network usage.

It’s 6pm now. Verbose logs:


Feb 19, 2022 1:50 AM: Backend event: Put - Started: duplicati-b7927afcda09f49cc837c14163aecb774.dblock.zip.aes (99.03 MB)
Feb 19, 2022 1:50 AM: Backend event: Put - Completed: duplicati-ia120db74c247414d8f7b7b767a63289e.dindex.zip.aes (4.17 KB)
Feb 19, 2022 1:50 AM: Backend event: Put - Started: duplicati-ia120db74c247414d8f7b7b767a63289e.dindex.zip.aes (4.17 KB)
Feb 19, 2022 1:50 AM: Backend event: Put - Completed: duplicati-b89aa079acc784ea4a06174d36c6fea13.dblock.zip.aes (99.03 MB)
Feb 19, 2022 1:48 AM: Backend event: Put - Started: duplicati-b89aa079acc784ea4a06174d36c6fea13.dblock.zip.aes (99.03 MB)
Feb 19, 2022 1:48 AM: Renaming "duplicati-b92f29fea4ac1455cb47d52670314d75f.dblock.zip.aes" to "duplicati-b89aa079acc784ea4a06174d36c6fea13.dblock.zip.aes"
Feb 19, 2022 1:48 AM: Backend event: Put - Rename: duplicati-b89aa079acc784ea4a06174d36c6fea13.dblock.zip.aes (99.03 MB)
Feb 19, 2022 1:48 AM: Backend event: Put - Rename: duplicati-b92f29fea4ac1455cb47d52670314d75f.dblock.zip.aes (99.03 MB)
Feb 19, 2022 1:48 AM: Backend event: Put - Retrying: duplicati-b92f29fea4ac1455cb47d52670314d75f.dblock.zip.aes (99.03 MB)
Feb 19, 2022 1:48 AM: Operation Put with file duplicati-b92f29fea4ac1455cb47d52670314d75f.dblock.zip.aes attempt 3 of 6 failed with message: Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
Feb 19, 2022 1:46 AM: Backend event: Put - Started: duplicati-b92f29fea4ac1455cb47d52670314d75f.dblock.zip.aes (99.03 MB) 

Profiling logs:


Feb 19, 2022 1:50 AM: CommitAddBlockToOutputFlush took 0:00:00:00.015
Feb 19, 2022 1:50 AM: Starting - CommitAddBlockToOutputFlush
Feb 19, 2022 1:50 AM: ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (1, "duplicati-b40643368b1934e75b4c67400bc8e76f9.dblock.zip.aes", "Blocks", "Temporary", -1, 0, 0); SELECT last_insert_rowid(); took 0:00:00:00.000
Feb 19, 2022 1:50 AM: Starting - ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (1, "duplicati-b40643368b1934e75b4c67400bc8e76f9.dblock.zip.aes", "Blocks", "Temporary", -1, 0, 0); SELECT last_insert_rowid();
Feb 19, 2022 1:50 AM: Backend event: Put - Started: duplicati-b7927afcda09f49cc837c14163aecb774.dblock.zip.aes (99.03 MB)
Feb 19, 2022 1:50 AM: CommitAfterUpload took 0:00:00:00.000
Feb 19, 2022 1:50 AM: Starting - CommitAfterUpload
Feb 19, 2022 1:50 AM: Backend event: Put - Completed: duplicati-ia120db74c247414d8f7b7b767a63289e.dindex.zip.aes (4.17 KB)
Feb 19, 2022 1:50 AM: Uploaded 4.17 KB in 00:00:00.9711950, 4.29 KB/s
Feb 19, 2022 1:50 AM: Backend event: Put - Started: duplicati-ia120db74c247414d8f7b7b767a63289e.dindex.zip.aes (4.17 KB)
Feb 19, 2022 1:50 AM: ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (1, "duplicati-ia120db74c247414d8f7b7b767a63289e.dindex.zip.aes", "Index", "Temporary", -1, 0, 0); SELECT last_insert_rowid(); took 0:00:00:00.000
Feb 19, 2022 1:50 AM: Starting - ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (1, "duplicati-ia120db74c247414d8f7b7b767a63289e.dindex.zip.aes", "Index", "Temporary", -1, 0, 0); SELECT last_insert_rowid();
Feb 19, 2022 1:50 AM: CommitAfterUpload took 0:00:00:00.000
Feb 19, 2022 1:50 AM: Starting - CommitAfterUpload
Feb 19, 2022 1:50 AM: Backend event: Put - Completed: duplicati-b89aa079acc784ea4a06174d36c6fea13.dblock.zip.aes (99.03 MB)
Feb 19, 2022 1:50 AM: Uploaded 99.03 MB in 00:02:14.2841593, 755.17 KB/s
Feb 19, 2022 1:48 AM: Backend event: Put - Started: duplicati-b89aa079acc784ea4a06174d36c6fea13.dblock.zip.aes (99.03 MB)
Feb 19, 2022 1:48 AM: ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (1, "duplicati-b92f29fea4ac1455cb47d52670314d75f.dblock.zip.aes", "Blocks", "Deleting", -1, 0, 0); SELECT last_insert_rowid(); took 0:00:00:00.000
Feb 19, 2022 1:48 AM: Starting - ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (1, "duplicati-b92f29fea4ac1455cb47d52670314d75f.dblock.zip.aes", "Blocks", "Deleting", -1, 0, 0); SELECT last_insert_rowid();
Feb 19, 2022 1:48 AM: ExecuteScalar: SELECT "Type" FROM "Remotevolume" WHERE "Name" = "duplicati-b89aa079acc784ea4a06174d36c6fea13.dblock.zip.aes" took 0:00:00:00.000
Feb 19, 2022 1:48 AM: Starting - ExecuteScalar: SELECT "Type" FROM "Remotevolume" WHERE "Name" = "duplicati-b89aa079acc784ea4a06174d36c6fea13.dblock.zip.aes"
Feb 19, 2022 1:48 AM: ExecuteNonQuery: UPDATE "Remotevolume" SET "Name" = "duplicati-b89aa079acc784ea4a06174d36c6fea13.dblock.zip.aes" WHERE "Name" = "duplicati-b92f29fea4ac1455cb47d52670314d75f.dblock.zip.aes" took 0:00:00:00.000
Feb 19, 2022 1:48 AM: Starting - ExecuteNonQuery: UPDATE "Remotevolume" SET "Name" = "duplicati-b89aa079acc784ea4a06174d36c6fea13.dblock.zip.aes" WHERE "Name" = "duplicati-b92f29fea4ac1455cb47d52670314d75f.dblock.zip.aes"
Feb 19, 2022 1:48 AM: Renaming "duplicati-b92f29fea4ac1455cb47d52670314d75f.dblock.zip.aes" to "duplicati-b89aa079acc784ea4a06174d36c6fea13.dblock.zip.aes"
Feb 19, 2022 1:48 AM: Backend event: Put - Rename: duplicati-b89aa079acc784ea4a06174d36c6fea13.dblock.zip.aes (99.03 MB)
Feb 19, 2022 1:48 AM: Backend event: Put - Rename: duplicati-b92f29fea4ac1455cb47d52670314d75f.dblock.zip.aes (99.03 MB)
Feb 19, 2022 1:48 AM: Backend event: Put - Retrying: duplicati-b92f29fea4ac1455cb47d52670314d75f.dblock.zip.aes (99.03 MB)
Feb 19, 2022 1:48 AM: Operation Put with file duplicati-b92f29fea4ac1455cb47d52670314d75f.dblock.zip.aes attempt 3 of 6 failed with message: Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
Feb 19, 2022 1:46 AM: Backend event: Put - Started: duplicati-b92f29fea4ac1455cb47d52670314d75f.dblock.zip.aes (99.03 MB)
Feb 19, 2022 1:46 AM: ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (1, "duplicati-b5040f89cf84e455c8f47b37dca8a1b1f.dblock.zip.aes", "Blocks", "Deleting", -1, 0, 0); SELECT last_insert_rowid(); took 0:00:00:00.000 
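
One thing I notice in there is the “attempt 3 of 6” retry on that dblock after the connection was forcibly closed. As I understand it, the retry behaviour is governed by a couple of advanced options along these lines (same Settings format as my config; the values shown are only illustrative, I haven’t set them myself):

      {
        "Filter": "",
        "Name": "--number-of-retries",
        "Value": "5",
        "Argument": null
      },
      {
        "Filter": "",
        "Name": "--retry-delay",
        "Value": "10s",
        "Argument": null
      }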

This thread discusses the same (or similar) issue:

One poster said the problem was triggered by including Duplicati’s own databases in the backup. Do you have the Duplicati sqlite files included in your backup selection? (By default they are stored in C:\Users\<username>\AppData\Local\Duplicati.)
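
If they did turn out to sit inside one of your source folders, an exclude filter along these lines should keep them out of the backup (sketched in the same export format as your config’s “Filters” array; adjust the path to your username):

    "Filters": [
      {
        "Order": 0,
        "Include": false,
        "Expression": "C:\\Users\\<username>\\AppData\\Local\\Duplicati\\"
      }
    ]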

It appears not, in any of the backups. Interestingly, I do have a couple of .sqlite files there, and one of the oddly named ones has a last-modified time that matches the last log timestamp from Duplicati exactly.

Thanks for sticking with me on this issue; I really would like to get Duplicati working. If we can get some kind of error out of it, I’d be happy to take a crack at fixing the issue in the code, though I’m just a web developer.

8 hours ago I tried “stopping” it from the UI.

My computer rebooted for an update, so it’s now working on the “caleb-dslr” backup again. Looks like it’s just going back through the paths and has picked up from somewhere in the middle.

According to discussion in the thread I linked (specifically this message), there are ‘certain types of files’ that trigger this issue, Duplicati’s own databases being only one possibility. I’m not sure what other types of files the author meant.

Are you using VSS snapshots? I wonder if the problem would go away if you enabled snapshot-policy.
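
For reference, enabling it is just one more entry in the job’s Settings, something like this (as I recall, “auto” silently skips the snapshot if it can’t be created, while “required” aborts the backup instead):

      {
        "Filter": "",
        "Name": "--snapshot-policy",
        "Value": "auto",
        "Argument": null
      }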

I wasn’t using VSS snapshots, so I’ve just turned them on for all backups and will see what happens! I switched it on for the backup that’s currently running and didn’t get any complaints, though I’m not sure whether that means it can actually use it mid-run. If VSS snapshots mean Duplicati can access locked files, that could be a big help; I also saw mention of issues with files that are open in other programs, which is a strong possibility here.

It’s still running a fairly large backup. For testing, I’m thinking I should probably create a smaller one so I can debug more quickly…

By the way, I’m regularly noticing complaints about MS SQL Server, but from what I read in other threads they don’t seem to matter?


* Feb 22, 2022 9:06 AM: Cannot open WMI provider \\localhost\root\virtualization\v2. Hyper-V is probably not installed.

* Feb 22, 2022 9:06 AM: Cannot find any MS SQL Server instance. MS SQL Server is probably not installed.

* Feb 22, 2022 9:06 AM: Using WMI provider \\localhost\root\virtualization\v2

Yep those warnings can be ignored. They are only worth worrying about if you actually need to back up Hyper-V or SQL workloads.

Hah! Stuck again, for about 5 hours now. This time I had the verbose and profiling logs open! I copied all of them, so it’s a lot; I put it in the pastebin below. For reference, it looks like it got stuck at 6:16pm and it’s 11:49pm here now.

I came home after a vacation and found it had tried to re-run a previous backup and gotten stuck on “Waiting for upload to finish” again. The backup before that one apparently “succeeded”, insofar as it at least failed without hanging the entire app, so I came home to the warning below from that run while the other backup was stuck:


I force-closed Duplicati and reopened it; the stuck backup also managed to finish, with an error, and a new backup is running now.