OneDrive (personal) back-ups suddenly started failing with "Channel is retired" and "A task has been cancelled"


I ran it with --log-file set and the log level at Profiling, and I am still lost. Here is a portion of the log file that shows the failure happening. It fails on the first attempt and never retries. The failure messages do not provide much insight… I also included the first and last lines of the log file. They show that the whole run took less than 5 minutes - hardly enough for Duplicati to even start processing the large file (isoimages.vhdx).

Any ideas?

2019-01-21 06:59:45 -05 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Backup has started
2019-01-21 07:00:44 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-IncludingPath]: Including path as no filters matched: E:\isoimages.vhdx
2019-01-21 07:00:45 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.MetadataPreProcess.FileEntry-AddDirectory]: Adding directory E:\
2019-01-21 07:00:45 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (3, "", "Blocks", "Temporary", -1, 0, 0); SELECT last_insert_rowid();
2019-01-21 07:00:45 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (3, "", "Blocks", "Temporary", -1, 0, 0); SELECT last_insert_rowid(); took 0:00:00:00.000
2019-01-21 07:00:45 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.MetadataPreProcess.FileEntry-AddDirectory]: Adding directory E:\$RECYCLE.BIN\
2019-01-21 07:00:45 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-IncludingPath]: Including path as no filters matched: E:\$RECYCLE.BIN\S-1-5-21-**********-**********-*********-****\
2019-01-21 07:00:45 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-IncludingPath]: Including path as no filters matched: E:\$RECYCLE.BIN\S-1-5-21-**********-**********-*********-****\
2019-01-21 07:00:45 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-IncludingPath]: Including path as no filters matched: E:\$RECYCLE.BIN\S-1-5-21-**********-**********-**********-****\
2019-01-21 07:00:45 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.MetadataPreProcess.FileEntry-AddDirectory]: Adding directory E:\$RECYCLE.BIN\S-1-5-21-**********-**********-*********-****\
2019-01-21 07:00:45 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-IncludingPath]: Including path as no filters matched: E:\$RECYCLE.BIN\S-1-5-21-**********-**********-*********-****\desktop.ini
2019-01-21 07:00:46 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.FilePreFilterProcess.FileEntry-CheckFileForChanges]: Checking file for changes E:\isoimages.vhdx, new: True, timestamp changed: True, size changed: True, metadatachanged: True, 11/25/2018 2:11:35 PM vs 1/1/0001 12:00:00 AM
2019-01-21 07:00:46 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.FilePreFilterProcess.FileEntry-CheckFileForChanges]: Checking file for changes E:\$RECYCLE.BIN\S-1-5-21-**********-**********-*********-****\desktop.ini, new: True, timestamp changed: True, size changed: True, metadatachanged: True, 11/15/2014 8:11:26 PM vs 1/1/0001 12:00:00 AM
2019-01-21 07:00:46 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.MetadataPreProcess.FileEntry-AddDirectory]: Adding directory E:\$RECYCLE.BIN\S-1-5-21-**********-**********-*********-****\
2019-01-21 07:00:46 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-IncludingPath]: Including path as no filters matched: E:\$RECYCLE.BIN\S-1-5-21-**********-**********-*********-****\desktop.ini
2019-01-21 07:01:57 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.Common.DatabaseCommon-CommitTransactionAsync]: Starting - CommitAddBlockToOutputFlush
2019-01-21 07:02:00 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.Common.DatabaseCommon-CommitTransactionAsync]: CommitAddBlockToOutputFlush took 0:00:00:02.328
2019-01-21 07:02:00 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (3, "", "Blocks", "Temporary", -1, 0, 0); SELECT last_insert_rowid();
2019-01-21 07:02:00 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (3, "", "Blocks", "Temporary", -1, 0, 0); SELECT last_insert_rowid(); took 0:00:00:00.000
2019-01-21 07:02:13 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.Common.DatabaseCommon-CommitTransactionAsync]: Starting - CommitAddBlockToOutputFlush
2019-01-21 07:02:13 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.Common.DatabaseCommon-CommitTransactionAsync]: CommitAddBlockToOutputFlush took 0:00:00:00.140
2019-01-21 07:02:14 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (3, "", "Blocks", "Temporary", -1, 0, 0); SELECT last_insert_rowid();
2019-01-21 07:02:14 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (3, "", "Blocks", "Temporary", -1, 0, 0); SELECT last_insert_rowid(); took 0:00:00:00.312
2019-01-21 07:02:28 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.Common.DatabaseCommon-CommitTransactionAsync]: Starting - CommitAddBlockToOutputFlush
2019-01-21 07:02:30 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.Common.DatabaseCommon-CommitTransactionAsync]: CommitAddBlockToOutputFlush took 0:00:00:02.266
2019-01-21 07:02:41 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: (49.94 MB)
2019-01-21 07:04:26 -05 - [Retry-Duplicati.Library.Main.Operation.Common.BackendHandler-RetryPut]: Operation Put with file attempt 1 of 5 failed with message: A task was canceled.
System.Threading.Tasks.TaskCanceledException: A task was canceled.
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Duplicati.Library.OAuthHttpClient.<SendAsync>d__4.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
2019-01-21 07:04:30 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Controller-RunBackup]: Running Backup took 0:00:04:44.762


Is this the new start-from-scratch job? Does it always fail on the first Put attempt, or does the failure move around? Checking the file times in the OneDrive web UI might help. What sort of files (if any) have been added there? Looking at Job --> Show log --> Remote may also show what has been working, e.g. put, get, list, delete. Summarized information is also at Job --> Show log --> General, in the Backend statistics section.

A first retry would be expected where the posted log has only the ..., I guess. Can you post more log lines?

As for the initial failure, it’s at 105 seconds, which might be the previously cited 100 seconds plus setup.
How we get along with OAuth shows some of what probably has to happen for each OneDrive v2 start, although I’d have thought the setup would have been done already for the initial verify (unless that’s off).

The Destination screen “Test connection” button usually does a List operation, I think. Does that work? Starting from scratch sometimes means a new folder, and there’s an option to create it there, if needed.

It’s probably worth testing after explicitly setting the job’s --number-of-retries and --http-operation-timeout options. Possibly that can influence the result some, and at least we’ll know which track to go down to find this…
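As a sketch only, overriding those two options on a command-line run might look like the following. The URL and values are placeholders, not from this thread, and the exact option value syntax should be checked against your installed version (e.g. with Duplicati.CommandLine.exe help):

```
REM Sketch with placeholder URL/values; verify option names and
REM timespan syntax against your Duplicati version before relying on it.
Duplicati.CommandLine.exe backup ^
  "onedrivev2://backup-folder?authid=PLACEHOLDER" ^
  "E:\" ^
  --number-of-retries=8 ^
  --http-operation-timeout=5m
```

The same options can also be added in the web UI under the job’s Options --> Advanced options.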

Command line tools to investigate destination issues include Duplicati.CommandLine.BackendTool.exe, where you can test various operations manually (but please use a testing area, if changing things), and Duplicati.CommandLine.BackendTester.exe, which will test for you in an empty area. Both require a URL specifying the target destination. The URL can be obtained from a job’s Commandline or Export actions.
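For instance (a sketch only - the URLs here are made-up placeholders; use the one exported from your job, and double-check the command syntax with each tool’s built-in help):

```
REM Automated round-trip test against an EMPTY folder:
Duplicati.CommandLine.BackendTester.exe "onedrivev2://backend-test?authid=PLACEHOLDER"

REM Or poke at individual operations by hand, e.g. listing remote files:
Duplicati.CommandLine.BackendTool.exe list "onedrivev2://backup-folder?authid=PLACEHOLDER"
```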


Yes, this is the brand new job. It seems to always fail on this particular file (isoimages.vhdx), which I believe just happens to be the first thing Duplicati is processing.

What I have posted is the first line from the log file (so we can see when it started), then I put the ..., then I added the line that I believe pertains to the failure and finally another ... and the last line from the log file so we can see the end timestamp.

If I read the log file correctly, the first (and only) failure happens close to the end of this fragment (where it says 2019-01-21 07:04:26 -05 - [Retry-Duplicati.Library.Main.Operation.Common.BackendHandler-RetryPut]: Operation Put with file attempt 1 of 5 failed with message: A task was canceled.) and that particular dblock is the first piece of isoimages.vhdx. I didn’t include lines from earlier in the log because they didn’t seem relevant; I didn’t include lines from later because they seem to be dealing with the failure by passing it back from one calling function to another until the whole job finally exits.

I am not going to bore you with any additional logs for now as you have provided me with some good pointers, so I will go look at all of those places for additional clues. If I come back empty handed, I will probably post a complete log.

Thanks once again!!!


@ts678, I was going through your last post trying everything one recommendation at a time, and when I got to explicitly setting the HTTP operation timeout I ran into this weirdness. Do you know why I have two of every option and if there is a difference between the two?


The issue with duplicate http options should have been fixed in a newer version. What version are you running?


The TaskCanceledException means that “something” has requested that the uploader process stop and not attempt to upload any more files, which is why it fails to retry.

The Channel is retired is essentially the same message; “something” has stopped working and is shutting everything down.

I am not sure why it does not work, but in a case like this there should be multiple error messages that explain the root cause of the failure, instead of just the unhelpful messages you see.

Could it be something with your quota on OneDrive being exceeded suddenly?


I am running what is, as far as I am aware, the most recent version on the beta update channel. Considering that this is the most stable channel, and given the warnings on the other two channels, I would be reluctant to update.
Do you know if that issue is just a UI defect that does not affect functionality (i.e. both options work)?


I wouldn’t say I have multiple error messages, but based on your statement, I will go back to the log file and look again - maybe I missed it.

Not very likely - my quota is 1 TB and it is less than 50% full, so it should be able to take at least the first few dblocks without any issues, but it fails instead. Also, the overall size of the upload should be way less than the free space, because it is essentially only the delta since the last back-up - unless Duplicati is checking the total size of the back-up against the available free space (even though it will only need to upload a fraction of that). Also, the quota theory doesn’t explain why it would fail on my test back-up - its total size is around 100 MiB and I still have hundreds of gigs available in OneDrive.

I still haven’t had a chance to test this all the ways suggested by ts678, but I think I should be able to do it later today or tomorrow.

Thanks for all the replies and suggestions!!!


The duplicated http options were just a bug in the UI. Both options should work.


Thanks for clarifying!


@ts678 and everyone else who might know - I have been experimenting with these options for the last couple of days, and while --number-of-retries did not seem to have any effect, I was able to make some progress with --http-operation-timeout. Considering that I am getting various failures mixed with some successes, I would like to ask:

  • What is a reasonable value for the http timeout?
  • Does the value depend on my connection speed and bandwidth (i.e. a poor connection works better with a higher value, and vice versa)? Any real-world examples?


The value you need depends entirely on how long your network needs to transfer your file. Slow networks need longer timeouts. You can see how long your files are actually taking by watching About --> Show log --> Live --> Profiling and looking at the Uploading and Downloading lines for size and time (more size --> more time). Adding to the job the options --log-file and --log-file-log-level=Profiling will work too. Examples:

2018-11-28 08:22:42 -05 - [Profiling-Duplicati.Library.Main.BackendManager-UploadSpeed]: Uploaded 50.02 MB in 00:01:42.3183110, 500.63 KB/s

2018-11-26 15:57:53 -05 - [Profiling-Duplicati.Library.Main.BackendManager-DownloadSpeed]: Downloaded and decrypted 49.99 MB in 00:00:12.2495337, 4.08 MB/s

Upload speed is lower than download on many Internet connections. You can speed test your connection. Units may use multiples of bits (lowercase b), or bytes (uppercase B, where 1 byte = 8 bits). Division will give the theoretical minimum time, but other factors (such as CPU, disk, Duplicati, etc.) may slow things.

If you have set your job screen 5 Remote volume size larger than 50MB default, that will also take longer. Basically you want to leave enough time for your slowest transfer on slowest network with some margin…
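Putting the above into numbers, a rough sketch of the “division plus margin” estimate might look like this (the function name and the 2x safety margin are illustrative choices, not anything built into Duplicati; the 50 MB / 500 KB/s figures come from the upload log line above):

```python
# Rough sketch: estimate a safe --http-operation-timeout from the remote
# volume size and a measured upload speed, with a safety margin for
# slow periods. Names and margin are illustrative, not from Duplicati.

def estimate_timeout_seconds(volume_mb, speed_kb_per_s, margin=2.0):
    """Theoretical minimum transfer time times a safety margin."""
    transfer_s = (volume_mb * 1024) / speed_kb_per_s
    return transfer_s * margin

# A 50 MB volume at the ~500 KB/s seen in the log line above:
print(round(estimate_timeout_seconds(50, 500)))  # ~205 seconds
```

Any timeout comfortably above that estimate for your slowest observed speed should be safe; shrinking the remote volume size lowers the required timeout proportionally.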

Here are notes on --http-transfer-time, and OneDrive success using smaller volumes and larger timeouts.


This is AWESOME!!!
Thank you very much for the detailed explanation!
I guess this is my problem… It still remains a mystery why this is NOT an issue with AWS S3 (I still have a couple of back-ups on S3) and why it worked fine the first few days after I switched to OneDrive.
I will try tweaking the volume size and the timeout further and hopefully that will resolve the issue.

On another note, there is clearly an opportunity to improve Duplicati’s error logging and reporting facilities - a problem like this is rather trivial and should be clearly indicated in the error messages on screen and in the log. Duplicati already does this in some of the other error messages that not only state exactly what the problem is, but also have a recommendation about solving it.


AWS S3 uses Amazon’s AWSSDK*.dll instead of direct .NET Framework access. You can see the .NET timeout config reflected in theirs. Their default looks undocumented, but code might set it extremely high, which can prevent prompt enough recognition of problems, resulting in apparent hangs. Discussion here.

No argument on the general statement. It frustrates users and also those who have to point to all the logs.

However, I’m not sure I agree with this. It took a lot of tracking down and experimentation by us to resolve it (and we’re not even sure yet that it is resolved). As said earlier, “Channel is retired” is a very generic error.

There’s a statement from the lead developer somewhere hoping for pretty much what you suggest at end, however there are many competing priorities. There are also statements that logging is messy – and hard.


I ran into the same issue recently and thus second that very question.


That was answered earlier:


I am no developer, and I apologize if I have underestimated the complexity there. My point was that a connection timeout should be pretty straightforward to detect and report to the user, but to your point the underlying .NET may not necessarily make that easy.

Not sure if you mean our discussion above until we sighted in on the timeout, but if you do - I agree and I appreciate it very much.

Correct. I seem to be making progress, however - it is a huge back-up (280 GiB + whatever has changed since it broke) and was running quite well until some time today. I was monitoring it and I saw that with a remote volume size of 25 MiB and a 6 min. timeout it was barely making it (dblock files would finish uploading just before hitting the 6 min. mark). Eventually it failed.
I am retrying with a 20 MiB remote volume size and a 7 min. timeout, but I haven’t been able to gauge how well it’s going.
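As a sanity check on those numbers, the minimum sustained speed needed to fit one volume inside the timeout is simple arithmetic (a sketch using the 20 MiB / 7 min combination just mentioned; real transfers also carry protocol overhead, so the true requirement is a bit higher):

```python
# If a 20 MiB dblock must finish within the 7-minute timeout, the link
# has to sustain at least this much (ignoring protocol overhead):
size_kib = 20 * 1024      # volume size in KiB
timeout_s = 7 * 60        # timeout in seconds
kib_per_s = size_kib / timeout_s
print(round(kib_per_s), "KiB/s =", round(kib_per_s * 8), "Kibit/s")
# -> 49 KiB/s = 390 Kibit/s
```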


You’re getting perhaps 350 kilobits/second. Can you find an Internet speed test to see if it matches, and then see what your ISP is promising, and whether they can help? In my home, I sometimes have to reconnect my PC to Wi-Fi or restart my router or (less often) restart cable modem, and occasionally get ISP service visits.

Sometimes speeds vary with time of day (slow at busy times), packet loss slows things (try a ping test), etc. Gauging approximate network performance for the connection can be done on Windows with Task Manager.


I wish! I use a relatively cheap service and my upload speed is lower, both as promised by the ISP and in reality.
That is not the problem though - Duplicati was perfectly happy with that before switching from AWS to OneDrive and even on OneDrive before upgrading and switching to OneDrive v2. But we already discussed some of that. Plus my set-up is far from simple, so there are other factors at play as well :slight_smile:
Anyway! I am happy to report that with an 8 min. timeout and a remote volume size of 20 MiB, I finally have my first successful real back-up for more than a month!

Thank you to everyone who responded and especially to @ts678!


The second question (the one you quoted) was answered indeed, but not the first one.
To make this clear: in order to get my Azure backup working again, I need to switch to an unstable update channel or wait until the fix is available on the beta channel?