OneDrive (personal) back-ups suddenly started failing with "Channel is retired" and "A task has been cancelled"

Hello!
I hope someone here can help as I have been struggling with this for a couple of weeks now with no hope of solving it on my own…

Background:
I had trouble backing up to OneDrive personal because they retired some old API, and I found that I had to update Duplicati to gain access to OneDrive v2. Said and done - the update went pretty smoothly and back-ups started working just fine after that.

The issue:
After a few successful runs, the two back-up jobs I have on OneDrive started failing with a weird error - “The channel is retired”. A quick search here revealed that this is only a symptom of other issues rather than a problem in its own right, so I enabled verbose logging. I found the error “A task was canceled.” in the logs, but it was equally puzzling as I was not canceling anything - I just ran the job.
I tried a full repair - no luck. I decided to forget about the existing backup and start from scratch, so I re-created the back-up job manually, carefully filling in and reviewing all details. To my horror - I was facing the exact same issue!
Meanwhile AWS S3 back-ups on the same Duplicati instance are running just fine.

Here is the last portion of the log where the failure is visible. Please help!

* Jan 20, 2019 9:40 AM: The operation Backup has failed with error: The channel is retired
* Jan 20, 2019 9:40 AM: Fatal error
* Jan 20, 2019 9:40 AM: removing temp files, as no data needs to be uploaded
* Jan 20, 2019 9:40 AM: Backend event: Put - Started: duplicati-bf73a0a69cd8440a09c9fc8854556cda7.dblock.zip.aes (50.00 MB)
* Jan 20, 2019 9:40 AM: Operation Put with file duplicati-b3d9cb32fea364c8f9110296d0e94b46b.dblock.zip.aes attempt 1 of 5 failed with message: A task was canceled.
* Jan 20, 2019 9:39 AM: Backend event: Put - Started: duplicati-b3d9cb32fea364c8f9110296d0e94b46b.dblock.zip.aes (49.94 MB)

Hello @rumenavramov and welcome to the forum!

FailedToProcessFile Error is possibly a similar problem (though the details vary). You could use it as a guide.

@ts678, thanks for the warm welcome, as well as for pointing me to that thread!

I agree that there is some similarity, but there is also one very significant difference - I never get “2 of 5”, “3 of 5”, etc. Mine fails after the first attempt without even hinting as to what the problem may be.
I am currently logging this as “verbose” - is there a more suitable log-level that may provide more details?

If this is the live log (as it appears to be from the reverse-chronological view and the lack of detail), please try --log-file to see if you can get something better than these one-liners. For --log-file-log-level you can stick with Verbose, if you like. Profiling is as high as you can go.
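
For example, as advanced options on the job (the log path is just a placeholder - use whatever location the account running Duplicati can write to):

--log-file=D:\DuplicatiLogs\onedrive-job.log
--log-file-log-level=Profiling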

Yes, it was the live log. I will run it with a log file and the profiling level and we’ll see where that takes us.

Thanks!

I ran it with log-file set to profiling and I am still lost. Here is a portion of the log file that shows the failure happening. It fails on the first attempt and never retries. The failure messages do not provide much insight… I also included the first and the last line of the log file. They show that the whole run took less than 5 minutes - hardly enough for Duplicati to even start processing the large file (isoimages.vhdx).

Any ideas?

2019-01-21 06:59:45 -05 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Backup has started
...
2019-01-21 07:00:44 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-IncludingPath]: Including path as no filters matched: E:\isoimages.vhdx
2019-01-21 07:00:45 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.MetadataPreProcess.FileEntry-AddDirectory]: Adding directory E:\
2019-01-21 07:00:45 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (3, "duplicati-bb68efe1533fb4c5ea893385dd7d3e36d.dblock.zip.aes", "Blocks", "Temporary", -1, 0, 0); SELECT last_insert_rowid();
2019-01-21 07:00:45 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (3, "duplicati-bb68efe1533fb4c5ea893385dd7d3e36d.dblock.zip.aes", "Blocks", "Temporary", -1, 0, 0); SELECT last_insert_rowid(); took 0:00:00:00.000
2019-01-21 07:00:45 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.MetadataPreProcess.FileEntry-AddDirectory]: Adding directory E:\$RECYCLE.BIN\
2019-01-21 07:00:45 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-IncludingPath]: Including path as no filters matched: E:\$RECYCLE.BIN\S-1-5-21-**********-**********-*********-****\
2019-01-21 07:00:45 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-IncludingPath]: Including path as no filters matched: E:\$RECYCLE.BIN\S-1-5-21-**********-**********-*********-****\
2019-01-21 07:00:45 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-IncludingPath]: Including path as no filters matched: E:\$RECYCLE.BIN\S-1-5-21-**********-**********-**********-****\
2019-01-21 07:00:45 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.MetadataPreProcess.FileEntry-AddDirectory]: Adding directory E:\$RECYCLE.BIN\S-1-5-21-**********-**********-*********-****\
2019-01-21 07:00:45 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-IncludingPath]: Including path as no filters matched: E:\$RECYCLE.BIN\S-1-5-21-**********-**********-*********-****\desktop.ini
2019-01-21 07:00:46 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.FilePreFilterProcess.FileEntry-CheckFileForChanges]: Checking file for changes E:\isoimages.vhdx, new: True, timestamp changed: True, size changed: True, metadatachanged: True, 11/25/2018 2:11:35 PM vs 1/1/0001 12:00:00 AM
2019-01-21 07:00:46 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.FilePreFilterProcess.FileEntry-CheckFileForChanges]: Checking file for changes E:\$RECYCLE.BIN\S-1-5-21-**********-**********-*********-****\desktop.ini, new: True, timestamp changed: True, size changed: True, metadatachanged: True, 11/15/2014 8:11:26 PM vs 1/1/0001 12:00:00 AM
2019-01-21 07:00:46 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.MetadataPreProcess.FileEntry-AddDirectory]: Adding directory E:\$RECYCLE.BIN\S-1-5-21-**********-**********-*********-****\
2019-01-21 07:00:46 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-IncludingPath]: Including path as no filters matched: E:\$RECYCLE.BIN\S-1-5-21-**********-**********-*********-****\desktop.ini
2019-01-21 07:01:57 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.Common.DatabaseCommon-CommitTransactionAsync]: Starting - CommitAddBlockToOutputFlush
2019-01-21 07:02:00 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.Common.DatabaseCommon-CommitTransactionAsync]: CommitAddBlockToOutputFlush took 0:00:00:02.328
2019-01-21 07:02:00 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (3, "duplicati-b5e9da7272be24a749082c797fd14f1fc.dblock.zip.aes", "Blocks", "Temporary", -1, 0, 0); SELECT last_insert_rowid();
2019-01-21 07:02:00 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (3, "duplicati-b5e9da7272be24a749082c797fd14f1fc.dblock.zip.aes", "Blocks", "Temporary", -1, 0, 0); SELECT last_insert_rowid(); took 0:00:00:00.000
2019-01-21 07:02:13 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.Common.DatabaseCommon-CommitTransactionAsync]: Starting - CommitAddBlockToOutputFlush
2019-01-21 07:02:13 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.Common.DatabaseCommon-CommitTransactionAsync]: CommitAddBlockToOutputFlush took 0:00:00:00.140
2019-01-21 07:02:14 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (3, "duplicati-b86cb98f0b1f34dae8024e7a139531259.dblock.zip.aes", "Blocks", "Temporary", -1, 0, 0); SELECT last_insert_rowid();
2019-01-21 07:02:14 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (3, "duplicati-b86cb98f0b1f34dae8024e7a139531259.dblock.zip.aes", "Blocks", "Temporary", -1, 0, 0); SELECT last_insert_rowid(); took 0:00:00:00.312
2019-01-21 07:02:28 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.Common.DatabaseCommon-CommitTransactionAsync]: Starting - CommitAddBlockToOutputFlush
2019-01-21 07:02:30 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.Common.DatabaseCommon-CommitTransactionAsync]: CommitAddBlockToOutputFlush took 0:00:00:02.266
2019-01-21 07:02:41 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-bb68efe1533fb4c5ea893385dd7d3e36d.dblock.zip.aes (49.94 MB)
2019-01-21 07:04:26 -05 - [Retry-Duplicati.Library.Main.Operation.Common.BackendHandler-RetryPut]: Operation Put with file duplicati-bb68efe1533fb4c5ea893385dd7d3e36d.dblock.zip.aes attempt 1 of 5 failed with message: A task was canceled.
System.Threading.Tasks.TaskCanceledException: A task was canceled.
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Duplicati.Library.OAuthHttpClient.<SendAsync>d__4.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
...
2019-01-21 07:04:30 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Controller-RunBackup]: Running Backup took 0:00:04:44.762

Is this the new start-from-scratch job? Does it generally fail on the first Put attempt, or does the failure move around? Checking the file times in the OneDrive web UI might help. What sort of files (if any) have been added? Looking at Job → Show log → Remote may also show what’s been working, e.g. put, get, list, delete. Summarized information is also at Job → Show log → General, in the Backend statistics section.

The first retry would be expected somewhere in the part the posted log shows only as ..., I guess. Can you post more log lines?

As for the initial failure, it’s at about 105 seconds, which might be the previously cited 100 seconds plus setup.
How we get along with OAuth shows some of what probably has to happen for each OneDrive v2 start, although I’d have thought the setup would have been done already for the initial verify (unless that’s off).

The Destination screen “Test connection” button usually does a List operation, I think. Does that work? Starting from scratch sometimes means a new folder, and there’s an option to create it here, if needed.

It’s probably worth testing after explicitly setting the job’s --number-of-retries and --http-operation-timeout. Possibly that can influence the result some, and at least we’ll know which track to go down to find this…
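
For example (the values here are only illustrative starting points to experiment with, not recommendations):

--number-of-retries=5
--http-operation-timeout=10m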

Command line tools to investigate destination issues include Duplicati.CommandLine.BackendTool.exe, where you can test various operations manually (but please use a testing area if changing things), and Duplicati.CommandLine.BackendTester.exe, which will run the tests for you in an empty area. Both require a URL specifying the target destination. The URL can be obtained from a job’s Commandline or Export actions.
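
Roughly along these lines (the URLs are placeholders - substitute the one exported from your job, and point the tester at an empty test folder):

Duplicati.CommandLine.BackendTester.exe "<destination URL from Export>"
Duplicati.CommandLine.BackendTool.exe list "<destination URL from Export>"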

Yes, this is the brand new job. It seems to always fail on this particular file (isoimages.vhdx), which I believe just happens to be the first thing Duplicati is processing.

What I have posted is the first line from the log file (so we can see when it started), then I put the ..., then I added the line that I believe pertains to the failure and finally another ... and the last line from the log file so we can see the end timestamp.

If I read the log file correctly, the first (and only) failure happens close to the end of this fragment (where it says 2019-01-21 07:04:26 -05 - [Retry-Duplicati.Library.Main.Operation.Common.BackendHandler-RetryPut]: Operation Put with file duplicati-bb68efe1533fb4c5ea893385dd7d3e36d.dblock.zip.aes attempt 1 of 5 failed with message: A task was canceled.) and that particular dblock is the first piece of isoimages.vhdx. I didn’t include lines from earlier in the log because they didn’t seem relevant; I didn’t include lines from later because they seem to be dealing with the failure by passing it back from one calling function to another until the whole job finally exits.

I am not going to bore you with any additional logs for now as you have provided me with some good pointers, so I will go look at all of those places for additional clues. If I come back empty-handed, I will probably post a complete log.

Thanks once again!!!

@ts678, I was going through your last post trying everything one recommendation at a time, and when I got to explicitly setting the HTTP operation timeout I ran into this weirdness. Do you know why I have two of every option and if there is a difference between the two?

The issue with duplicate http options should have been fixed in version 2.0.4.6. What version are you running?

The TaskCancelled means that “something” has requested that the uploader process should stop, and not attempt to upload any more files, which is why it fails to retry.

The “Channel is retired” error is essentially the same message; “something” has stopped working and is shutting everything down.

I am not sure why it does not work, but in this case there should be multiple error messages that explain the root cause of the failure, instead of just the useless error messages you see.

Could it be something with your quota on OneDrive being exceeded suddenly?

I am running 2.0.4.5_beta_2018-11-28, which as far as I am aware is the most recent version on the beta update channel. Considering that this is the most stable channel and the warnings on the other two channels, I would be reluctant to update.
Do you know if that issue is just a UI defect that does not affect functionality (i.e. both options work)?

I wouldn’t say I have multiple error messages, but based on your statement, I will go back to the log file and look again - maybe I missed it.

Not very likely - my quota is 1TB and it is less than 50% full, so it should be able to take at least the first few dblocks without any issues, but it fails instead. Also, the overall size of the upload should be way less than the free space because it is essentially only the delta since the last back-up - unless Duplicati is checking the total size of the back-up against the available free space (even though it will only need to upload a fraction of that). Also, the quota theory doesn’t explain why it would fail on my test back-up - its total size is around 100 MiB and I still have hundreds of gigs available in OneDrive.

I still haven’t had a chance to test this in all the ways suggested by ts678, but I think I should be able to do it later today or tomorrow.

Thanks for all the replies and suggestions!!!

The duplicated http options were just a bug in the UI. Both options should work.

Thanks for clarifying!

@ts678 and everyone else who might know - I have been experimenting with these options for the last couple of days, and while --number-of-retries did not seem to have any effect, I was able to make some progress with --http-operation-timeout. Considering that I am getting various failures mixed with some successes, I would like to ask:

  • What is a reasonable value for the http timeout?
  • Does the value depend on my connection speed and bandwidth (i.e. do poor connections work better with a higher value and vice versa)? Any real world examples?

The value you need depends entirely on how long your network needs to transfer your file. Slow networks need longer timeouts. You can see how long your files are actually taking by watching About → Show log → Live → Profiling and looking at the Uploading and Downloading lines for size and time (more size → more time). Adding the options --log-file and --log-file-log-level=Profiling to the job will work too. Examples:

2018-11-28 08:22:42 -05 - [Profiling-Duplicati.Library.Main.BackendManager-UploadSpeed]: Uploaded 50.02 MB in 00:01:42.3183110, 500.63 KB/s

2018-11-26 15:57:53 -05 - [Profiling-Duplicati.Library.Main.BackendManager-DownloadSpeed]: Downloaded and decrypted 49.99 MB in 00:00:12.2495337, 4.08 MB/s

Upload speed is lower than download speed on many Internet connections. You can speed test your connection. Units may be multiples of bits (lowercase b) or bytes (uppercase B, where 1 byte = 8 bits). Dividing the file size by the upload speed gives the theoretical minimum time, but other factors (such as CPU, disk, Duplicati, etc.) may slow things down.

If you have set the screen 5 Remote volume size larger than the 50 MB default, that will also take longer. Basically you want to leave enough time for your slowest transfer on your slowest network, with some margin…
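
As a rough worked example (the numbers are made up for illustration, not measured from your connection): suppose a speed test shows 2 Mbit/s upload, which is about 0.25 MB/s. Then:

50 MB ÷ 0.25 MB/s = 200 seconds minimum for a default-size volume
200 MB ÷ 0.25 MB/s = 800 seconds minimum for a larger volume

so the 100-second figure cited earlier can never cover either transfer, and something like --http-operation-timeout=10m (or more for larger volumes) leaves the needed margin.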

Here are notes on --http-transfer-time, and OneDrive success using smaller volumes and larger timeouts.

This is AWESOME!!!
Thank you very much for the detailed explanation!
I guess this is my problem… It still remains a mystery why this is NOT an issue with AWS S3 (I still have a couple of back-ups on S3) and why it worked fine the first few days after I switched to OneDrive.
I will try tweaking the volume size and the timeout further and hopefully that will resolve the issue.

On another note, there is clearly an opportunity to improve Duplicati’s error logging and reporting facilities - a problem like this is rather trivial and should be clearly indicated in the error messages on screen and in the log. Duplicati already does this for some other errors, whose messages not only state exactly what the problem is but also include a recommendation about solving it.

AWS S3 uses Amazon’s AWSSDK*.dll instead of direct .NET Framework access. You can see the .NET timeout config reflected in theirs. Their default looks undocumented, but the code might set it extremely high, which can prevent problems from being recognized promptly enough, resulting in apparent hangs. Discussion here.

No argument on the general statement. It frustrates users and also those who have to point to all the logs.

However, I’m not sure I agree with this. It took a lot of tracking down and experimentation by us to resolve it (and we’re not even sure yet that it is resolved). As said earlier, “Channel is retired” is a very generic error.

There’s a statement from the lead developer somewhere hoping for pretty much what you suggest at the end, but there are many competing priorities. There are also statements that logging is messy - and hard.

I ran into the same issue recently and thus second that very question.