"Waiting for upload to finish ..."

Hi,

I’ve created and run my first backup with Duplicati on a Synology NAS (using the official Duplicati Docker image). The job backs up a little over 0.5 TB to my OneDrive for Business account. It takes a while (600-700 KB/s), and everything seemed fine until the last phase: the job has now been stuck on the message “Waiting for upload to finish …” for about 18 hours.

The live log at level Profiling shows the following (no new entries in the last 18 hours…):

  • 22 feb. 2022 15:26: CommitUpdateRemoteVolume took 0:00:00:00.000
  • 22 feb. 2022 15:26: Starting - CommitUpdateRemoteVolume
  • 22 feb. 2022 15:26: Uploading a new fileset took 0:00:02:06.806
  • 22 feb. 2022 15:26: CommitUpdateRemoteVolume took 0:00:00:00.203
  • 22 feb. 2022 15:26: Starting - CommitUpdateRemoteVolume
  • 22 feb. 2022 15:25: CommitAfterUpload took 0:00:00:00.079
  • 22 feb. 2022 15:25: Starting - CommitAfterUpload
  • 22 feb. 2022 15:25: Backend event: Put - Completed: duplicati-i926fa3a993ec4c7c904330dd01929cda.dindex.zip.aes (19.37 KB)
  • 22 feb. 2022 15:25: Backend event: Put - Started: duplicati-i926fa3a993ec4c7c904330dd01929cda.dindex.zip.aes (19.37 KB)
  • 22 feb. 2022 15:25: Backend event: Put - Completed: duplicati-icd61ce8152ab4cbcbc5ec08fd6b54b12.dindex.zip.aes (12.79 KB)
  • 22 feb. 2022 15:25: Backend event: Put - Completed: duplicati-id12235b293b74bd9ab127e44fbc64949.dindex.zip.aes (2.53 KB)
  • 22 feb. 2022 15:25: Backend event: Put - Started: duplicati-icd61ce8152ab4cbcbc5ec08fd6b54b12.dindex.zip.aes (12.79 KB)
  • 22 feb. 2022 15:25: Backend event: Put - Started: duplicati-id12235b293b74bd9ab127e44fbc64949.dindex.zip.aes (2.53 KB)
  • 22 feb. 2022 15:25: Backend event: Put - Completed: duplicati-b309c4f35930a43b887d28d8c60a575f3.dblock.zip.aes (49.36 MB)

I found a few threads describing the same problem, but the situations are somewhat different: for example, other backup types, mail notifications being the culprit (I don’t use those), other operating systems, and so on.

I cannot click “Run now” to run another (2nd, incremental) backup. I haven’t rebooted my NAS or Duplicati yet, as I’m not sure this is a good idea in my situation.

Does anyone have an idea what caused this and how to solve it?
Thanks!

PS: the remote volume size is 50 MB and the block size is 1 MB.
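
For context, a quick back-of-the-envelope calculation (a minimal Python sketch, using the figures above as rough assumptions and ignoring compression and deduplication) shows why the initial upload takes days and how many remote volumes and blocks are involved:

    # Rough estimate only; figures taken from the post above (assumptions, not measurements).
    source_bytes = 0.5 * 1024**4   # a little over 0.5 TB of source data
    upload_rate  = 650 * 1024      # ~650 KB/s upload speed
    dblock_size  = 50 * 1024**2    # 50 MB remote volume (dblock) size
    block_size   = 1 * 1024**2     # 1 MB block size

    days    = source_bytes / upload_rate / 86400
    dblocks = source_bytes / dblock_size
    blocks  = source_bytes / block_size

    print(f"~{days:.0f} days of uploading at this rate")
    print(f"~{dblocks:,.0f} dblock volumes and ~{blocks:,.0f} blocks to track")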

PPS: I found this error in the stored log:
22 feb. 2022 20:38: Error in updater
System.Net.WebException: Error: TrustFailure (Authentication failed, see inner exception.) ---> System.Security.Authentication.AuthenticationException: Authentication failed, see inner exception. ---> Mono.Btls.MonoBtlsException: Ssl error:1000007d:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED
at /build/mono-6.12.0.107/external/boringssl/ssl/handshake_client.c:1132
at Mono.Btls.MonoBtlsContext.ProcessHandshake () [0x00048] in <6bc04dcac0a443ee834a449c98b8ed9d>:0
at Mono.Net.Security.MobileAuthenticatedStream.ProcessHandshake (Mono.Net.Security.AsyncOperationStatus status, System.Boolean renegotiate) [0x000da] in <6bc04dcac0a443ee834a449c98b8ed9d>:0
at (wrapper remoting-invoke-with-check) Mono.Net.Security.MobileAuthenticatedStream.ProcessHandshake(Mono.Net.Security.AsyncOperationStatus,bool)
at Mono.Net.Security.AsyncHandshakeRequest.Run (Mono.Net.Security.AsyncOperationStatus status) [0x00006] in <6bc04dcac0a443ee834a449c98b8ed9d>:0
at Mono.Net.Security.AsyncProtocolRequest.ProcessOperation (System.Threading.CancellationToken cancellationToken) [0x000fc] in <6bc04dcac0a443ee834a449c98b8ed9d>:0
--- End of inner exception stack trace ---
at Mono.Net.Security.MobileAuthenticatedStream.ProcessAuthentication (System.Boolean runSynchronously, Mono.Net.Security.MonoSslAuthenticationOptions options, System.Threading.CancellationToken cancellationToken) [0x00262] in <6bc04dcac0a443ee834a449c98b8ed9d>:0
at Mono.Net.Security.MonoTlsStream.CreateStream (System.Net.WebConnectionTunnel tunnel, System.Threading.CancellationToken cancellationToken) [0x0016a] in <6bc04dcac0a443ee834a449c98b8ed9d>:0
at System.Net.WebConnection.CreateStream (System.Net.WebOperation operation, System.Boolean reused, System.Threading.CancellationToken cancellationToken) [0x001ba] in <6bc04dcac0a443ee834a449c98b8ed9d>:0
--- End of inner exception stack trace ---
at System.Net.WebConnection.CreateStream (System.Net.WebOperation operation, System.Boolean reused, System.Threading.CancellationToken cancellationToken) [0x0021a] in <6bc04dcac0a443ee834a449c98b8ed9d>:0
at System.Net.WebConnection.InitConnection (System.Net.WebOperation operation, System.Threading.CancellationToken cancellationToken) [0x00141] in <6bc04dcac0a443ee834a449c98b8ed9d>:0
at System.Net.WebOperation.Run () [0x0009a] in <6bc04dcac0a443ee834a449c98b8ed9d>:0
at System.Net.WebCompletionSource`1[T].WaitForCompletion () [0x00094] in <6bc04dcac0a443ee834a449c98b8ed9d>:0
at System.Net.HttpWebRequest.RunWithTimeoutWorker[T] (System.Threading.Tasks.Task`1[TResult] workerTask, System.Int32 timeout, System.Action abort, System.Func`1[TResult] aborted, System.Threading.CancellationTokenSource cts) [0x000f8] in <6bc04dcac0a443ee834a449c98b8ed9d>:0
at System.Net.HttpWebRequest.GetResponse () [0x00016] in <6bc04dcac0a443ee834a449c98b8ed9d>:0
at System.Net.WebClient.GetWebResponse (System.Net.WebRequest request) [0x00000] in <6bc04dcac0a443ee834a449c98b8ed9d>:0
at System.Net.WebClient.DownloadBits (System.Net.WebRequest request, System.IO.Stream writeStream) [0x000e6] in <6bc04dcac0a443ee834a449c98b8ed9d>:0
at System.Net.WebClient.DownloadFile (System.Uri address, System.String fileName) [0x00088] in <6bc04dcac0a443ee834a449c98b8ed9d>:0
at System.Net.WebClient.DownloadFile (System.String address, System.String fileName) [0x00008] in <6bc04dcac0a443ee834a449c98b8ed9d>:0
at (wrapper remoting-invoke-with-check) System.Net.WebClient.DownloadFile(string,string)
at Duplicati.Library.AutoUpdater.UpdaterManager.CheckForUpdate (Duplicati.Library.AutoUpdater.ReleaseType channel) [0x000ee] in <8d4cb1693e00483189d3952c3f0ed20f>:0
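
(Side note on the stack trace above: the last frame shows this error comes from the auto-updater’s version check, not from the backup upload itself. A minimal sketch to see whether HTTPS certificate verification works from inside the container at all could look like the following; the host name is an assumption, and Python uses its own CA store rather than Mono’s, so treat the result only as an indication.)

    # Minimal TLS check from inside the container; the host is an assumption
    # (swap in any HTTPS endpoint). Python's CA store is used here, not Mono's.
    import socket, ssl

    host = "updates.duplicati.com"  # assumed host, for illustration only
    ctx = ssl.create_default_context()
    try:
        with socket.create_connection((host, 443), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                print("TLS handshake OK:", tls.version())
    except ssl.SSLCertVerificationError as exc:
        print("Certificate verification failed:", exc)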

Regards,
Pedro

I couldn’t wait… I’ve just restarted the Docker container and the message is now gone. But my backup job now says there are 0 backups for it, so I guess Duplicati considers my first backup unfinished :frowning:

I’ve started “Verify files” and the job status now shows “Starting backup…”, which is unexpected: is Duplicati just verifying files, or is it really running a new backup? Based on the remote log it is verifying files (so the ‘Starting backup’ status message seems misleading, right?). But then, if Duplicati says the job has never run, how does it know which files to verify? What is the purpose of this verification?

I’m a bit confused right now…

Pedro

Update: a database repair hasn’t solved anything, and a delete+repair (recreate) gave the error “No filelists found on the remote destination”. I guess the backup data itself was uploaded, but the file list (dlist) never was.
Is there a way to solve this? Or is the only solution to delete everything and start over completely? Will I get the same situation again, where the uploading status sits there for more than 18 hours…?
I’m afraid I will lose a lot of time again (it takes days for this backup to complete), with the same unsuccessful result…

Again, any help is appreciated!

Regards,
Pedro

Yeah, you can’t do a database repair when no backups were successful yet. (There won’t be any dlist files.)

You might want to start completely over by deleting all the duplicati files in the “back end”: the dblock, dindex, and dlist files. Delete the local database by clicking your backup job, clicking “Database…”, and then clicking Delete. Then start a new backup job - it will behave as if it’s the first backup job.
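
If the destination happens to be reachable as a normal folder (for example a mounted share; with a cloud backend you can also just delete the files through the provider’s web interface), the cleanup step could be sketched roughly like this; the path is hypothetical and only Duplicati’s own dblock/dindex/dlist files are touched:

    # Rough sketch (hypothetical path): remove only Duplicati's own volume files
    # (dblock/dindex/dlist) from the backup destination before starting over.
    import glob, os

    backend_path = "/path/to/backup/destination"  # hypothetical; adjust to your setup

    for pattern in ("duplicati-*.dblock.*", "duplicati-*.dindex.*", "duplicati-*.dlist.*"):
        for f in glob.glob(os.path.join(backend_path, pattern)):
            print("deleting", f)
            os.remove(f)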

Let’s see if this hangs in the same way.

Hi,

Thanks for the tip! I’ve restarted the backup completely, but this time with just a small subset (a little over 16 GB). After a few hours the backup was almost finished, but then it got stuck (by now for more than 10 hours!) with the status message “0 files (0 bytes) to go at xxx.yy KB/s”. It looks like the same problem, just with a different status message. The transfer rate (xxx.yy) does keep changing, but is it really doing anything? 10+ hours for 0 files (0 bytes) is weird… Perhaps it’s the file list again, but this time the file list should be quite small, so how can this take more than 10 hours?

These are the last entries in my live log with level Profiling:

  • 23 feb. 2022 23:10: ExecuteScalarInt64: INSERT INTO “Remotevolume” (“OperationID”, “Name”, “Type”, “State”, “Size”, “VerificationCount”, “DeleteGraceTime”) VALUES (2, “duplicati-i5aeb64b8e38641b1b5a78d326e00f0a8.dindex.zip.aes”, “Index”, “Temporary”, -1, 0, 0); SELECT last_insert_rowid(); took 0:00:00:00.012
  • 23 feb. 2022 23:10: Starting - Uploading a new fileset
  • 23 feb. 2022 23:10: Starting - ExecuteScalarInt64: INSERT INTO “Remotevolume” (“OperationID”, “Name”, “Type”, “State”, “Size”, “VerificationCount”, “DeleteGraceTime”) VALUES (2, “duplicati-i5aeb64b8e38641b1b5a78d326e00f0a8.dindex.zip.aes”, “Index”, “Temporary”, -1, 0, 0); SELECT last_insert_rowid();
  • 23 feb. 2022 23:10: UpdateChangeStatistics took 0:00:02:13.459
  • 23 feb. 2022 23:10: ExecuteNonQuery: DROP TABLE IF EXISTS “TmpFileList-B00818CD3DB85444A2BF459B9A8038D9”; took 0:00:00:00.017
  • 23 feb. 2022 23:10: Starting - ExecuteNonQuery: DROP TABLE IF EXISTS “TmpFileList-B00818CD3DB85444A2BF459B9A8038D9”;
  • 23 feb. 2022 23:10: ExecuteNonQuery: DROP TABLE IF EXISTS “TmpFileList-203E687B931250439DA72D5476C6C6F2”; took 0:00:00:00.000
  • 23 feb. 2022 23:10: Starting - ExecuteNonQuery: DROP TABLE IF EXISTS “TmpFileList-203E687B931250439DA72D5476C6C6F2”;

I’m writing this at local time 24 feb. 2022 09:59…

The “current file” is …/@eaDir/storting 2.PDF@SynoEAStream
Perhaps this type of file is causing issues? I can’t see this @eaDir folder from Windows; normally these folders are created by Synology applications like Photo Station and Media Server to store thumbnails. I had already deleted these folders in the past, but apparently I missed some. I’ve now deleted this one over SCP (via WinSCP). Perhaps the backup will continue now? Let’s wait a bit. If it doesn’t move on within a few hours, I’m going to start over again (hoping no other @eaDir folders are around to get in the way :-)).
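
A quick way to check whether any @eaDir folders are still hiding in the source is something like this (a minimal Python sketch; the source path is hypothetical):

    # Minimal sketch (hypothetical source path): list any remaining Synology @eaDir folders.
    import os

    source_path = "/volume1/your-share"  # hypothetical; adjust to your source folder

    for root, dirs, files in os.walk(source_path):
        if "@eaDir" in dirs:
            print(os.path.join(root, "@eaDir"))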

In the meantime, again, any help or information is welcome! :slight_smile:

Thanks!

Regards,
Pedro

Well, I’ll be damned! I found older files in my target folder. Perhaps I hadn’t cleaned that folder up completely before running the backup job again, and perhaps that’s the reason for my last failure (see above). If so, the @eaDir folder has nothing to do with it, which would make sense: I’ve just noticed I have a LOT of such folders (related to Synology DSM in my case), and it would be strange if they were the culprit, especially since others on the net seem to be able to back up those folders with Duplicati just fine.

I’m going to start over now, once again :slight_smile: Fingers crossed!

Regards,
Pedro

I would not back up those @eaDir folders. From what I recall, the @ folders are system-level. All your user files should be elsewhere, probably in the root of your volume as regular files (e.g. /volume1/ShareName).

I’ve read that the @eaDir folders aren’t considered “real” system-level folders, but in a way they are, of course. And I don’t need them in my backup anyway! So I’ve added an exclude filter. A new attempt to back up the 16+ GB has now succeeded (probably because this time I first cleaned out my target folder completely :-)). Remarkably, my transfer speed increased by about 150 KB/s after applying the exclude filter (600-650 => 750-800+ KB/s)! Perhaps it’s a coincidence, but after so many tests I found it striking.
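
For the record, the idea of the filter is simply to skip anything under an @eaDir folder. As a rough illustration of what such an exclude would match (this is not Duplicati’s exact filter syntax, which I configured in the job’s source settings; it is just a local glob check):

    # Illustration only: which paths would a glob like "*/@eaDir/*" catch?
    # The real exclude filter lives in the Duplicati job settings.
    from fnmatch import fnmatch

    pattern = "*/@eaDir/*"
    examples = [
        "/volume1/docs/@eaDir/thumb.jpg",               # excluded
        "/volume1/docs/report.pdf",                     # kept
        "/volume1/photos/2021/@eaDir/SYNO_THUMB.jpg",   # excluded
    ]
    for path in examples:
        print(path, "->", "excluded" if fnmatch(path, pattern) else "kept")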

Tonight I’m going to add a second (bigger) piece to my backup, so fingers crossed again!

Grtz,
Pedro


Hi,

As I said above, I’ve restarted my backup completely, but this time in pieces. The first piece of 16+ GB was successful. Then I started a second (incremental) backup with 20 GB of extra data (so 36+ GB in total). For almost 4 hours now I’ve been getting the same “Waiting for upload to finish …” message again… I honestly have no idea why: AFAIK this is about uploading the file list, but is it normal for that to take 4 hours or more for only 36+ GB (with a normal number of files)? If so, does this only happen the first time new data is added to the job? If I get this with every backup, I can’t use Duplicati at all :frowning: Also, why does this message stay up so long for an extra 20 GB, when the first 16+ GB didn’t produce it (or at least not for nearly as long)?
I’m, well, baffled… :-/

Any help is welcome!

Regards,
Pedro

What is the number of files you tried to back up in the most recent test?

Agreed, but this isn’t normal. Hopefully we can resolve it!

Hi,

I’m sorry for my late reply. I’ve recreated the backup job, started with a small piece (a few gigabytes) and have been adding extra gigabytes step by step since. The job seems to fail (see above) when there is too much new data in the source, so I have to work towards the final situation in smaller steps, and that does seem to work, at least so far (I’m at 350 GB right now). This will probably never be a big problem for me, as I will (almost) never have to back up a LOT of new data in one daily job run (and if I do, I can still intervene and split it into two or more smaller pieces). So it looks like I’ll end up with an acceptable situation, at least in my particular case.

When I talk about “smaller pieces that seem okay”, I mean about 10 GB; adding something like 40 GB in one go just doesn’t work (a red error that can be dismissed). The “Waiting for upload to finish …” message hasn’t popped up anymore, and I have no idea why (because I started the backup job with smaller pieces? does it only appear on initial job runs?). I also haven’t the faintest idea why larger pieces cause errors: is it Duplicati’s fault, or is it OneDrive for Business acting up?

So, the bottom line in my case: start small and add small pieces afterwards until everything is covered by the backup job. That’s OK in my situation, but if this had to happen quickly and/or for very large amounts (> 1 TB), it just wouldn’t be feasible. I’m working cumulatively in 10 GB steps.
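
For what it’s worth, the batching itself could be sketched roughly like this (a minimal Python sketch with a hypothetical source path; it just groups top-level folders into batches of about 10 GB, so they can be added to the job’s source selection one run at a time):

    # Rough sketch (hypothetical path): group top-level folders into ~10 GB batches
    # that can be added to the backup job's source selection one run at a time.
    import os

    source_path = "/volume1/your-share"   # hypothetical; adjust to your data
    batch_limit = 10 * 1024**3            # ~10 GB per batch

    def folder_size(path):
        total = 0
        for root, _, files in os.walk(path):
            for name in files:
                try:
                    total += os.path.getsize(os.path.join(root, name))
                except OSError:
                    pass
        return total

    batches, current, current_size = [], [], 0
    for entry in sorted(os.listdir(source_path)):
        full = os.path.join(source_path, entry)
        if not os.path.isdir(full):
            continue
        size = folder_size(full)
        # a folder larger than the limit still ends up in its own (oversized) batch
        if current and current_size + size > batch_limit:
            batches.append(current)
            current, current_size = [], 0
        current.append(entry)
        current_size += size
    if current:
        batches.append(current)

    for i, batch in enumerate(batches, 1):
        print(f"Batch {i}: {batch}")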

PS: I must say I’ve had to restart the Duplicati container twice since starting this process a few weeks ago, because it overloaded my NAS and the Duplicati web UI stopped responding. It’s possible this happened while adding a larger new data set (yes, I have tried adding larger sets from time to time…), but I’m not sure.

If I have further news, I’ll post it here.

Regards,
Pedro

Aargl… Unbelievable, but true: on the first job run after posting my previous message, I got the “Waiting for upload to finish …” message again! Perhaps the new data set I added was a bit too large, but it shows how sensitive this all is… I’m going to remove the latest addition and add a smaller new data set instead. Fingers crossed…

Pedro

Yup, I’ve checked: the newly added data set was 40+ GB, and that was too large for one backup job run. In the meantime I’ve removed that set and split it up into smaller pieces; adding those smaller pieces doesn’t produce any error messages.

It’s a long process to get the backup job I want, but I’ll get there eventually :slight_smile: Or at least, I hope so :smiley:

Pedro

I have the same problem, on Linux Mint 20.3, with an FTP backup to my NAS.
It worked until it was almost finished, then I could not stop the procedure, so I restarted my PC. I deleted my (not properly finished) backup and modified my backup settings. But now it only ever shows that it is about to start, and nothing happens.
I also use Duplicati on my Win10 PC, and I never had such problems there.
Any idea what I can do?

Hello,

Same issue with my NAS (Asustor): when the upload is almost finished, Duplicati gets stuck at “Waiting for upload to finish”.

I’m backing up my files to 1fichier.com over FTP.

Duplicati version: Duplicati - 2.0.6.3_beta_2021-06-17

Any idea how to solve this?

Hi @uniketou, I was able to avoid this by starting with a small dataset and incrementally adding more with every backup job run (pieces of about 10 GB seem to be okay, but perhaps 20 GB works as well, just try it out). This way I was able to back up more than 0.5 TB to the cloud, and every backup job run since has completed successfully. Drawback: it takes a lot of time to build up the whole large dataset this way… In my case it was still feasible (nothing urgent and I had the time), but in many other scenarios I guess this is not an option…

Regards,
Pedro