OneDrive issue with too many files?

Hi guys,
I’ve been using Duplicati to back up to my OneDrive for a while now and I’m very happy with it!

After I added a ~280GB file to the backup, I waited patiently until the upload finished… which it did. But then I encountered the error below.
A second run produced the same issue…

I’m guessing OneDrive has an issue when too many files are in one folder. I checked the folder I back up to online, and it contains ~86,000 files. I can open the folder on the OneDrive website without an error…

So is it an API issue on OneDrive’s side?

Any suggestions on how to continue from here?

Thanks a lot for your input.
Soko

Here’s the error:

    Failed: InternalServerError: Internal Server Error error from request https://graph.microsoft.com/v1.0/me/drive/root:/Duplicati/CLOUD:/children?$skiptoken=NjAx
Method: GET, RequestUri: 'https://graph.microsoft.com/v1.0/me/drive/root:/Duplicati/CLOUD:/children?$skiptoken=NjAx', Version: 1.1, Content: <null>, Headers:
{
  User-Agent: Duplicati/2.0.5.1
  Authorization: Bearer ABC...XYZ
}
StatusCode: 500, ReasonPhrase: 'Internal Server Error', Version: 1.1, Content: System.Net.Http.StreamContent, Headers:
{
  request-id: 03b806a9-8b5e-4c6a-8b7d-e018dcce480d
  client-request-id: 03b806a9-8b5e-4c6a-8b7d-e018dcce480d
  x-ms-ags-diagnostic: {"ServerInfo":{"DataCenter":"North Europe","Slice":"SliceC","Ring":"4","ScaleUnit":"002","RoleInstance":"AGSFE_IN_41"}}
  Strict-Transport-Security: max-age=31536000
  Cache-Control: private
  Date: Mon, 16 Mar 2020 15:03:35 GMT
  Content-Length: 238
  Content-Type: application/json
}
{
  "error": {
    "code": "generalException",
    "message": "An error occurred in the data store.",
    "innerError": {
      "request-id": "03b806a9-8b5e-4c6a-8b7d-e018dcce480d",
      "date": "2020-03-16T15:03:35"
    }
  }
}
Details: Duplicati.Library.Backend.MicrosoftGraph.MicrosoftGraphException: InternalServerError: Internal Server Error error from request https://graph.microsoft.com/v1.0/me/drive/root:/Duplicati/CLOUD:/children?$skiptoken=NjAx
Method: GET, RequestUri: 'https://graph.microsoft.com/v1.0/me/drive/root:/Duplicati/CLOUD:/children?$skiptoken=NjAx', Version: 1.1, Content: <null>, Headers:
{
  User-Agent: Duplicati/2.0.5.1
  Authorization: Bearer ABC...XYZ
}
StatusCode: 500, ReasonPhrase: 'Internal Server Error', Version: 1.1, Content: System.Net.Http.StreamContent, Headers:
{
  request-id: 03b806a9-8b5e-4c6a-8b7d-e018dcce480d
  client-request-id: 03b806a9-8b5e-4c6a-8b7d-e018dcce480d
  x-ms-ags-diagnostic: {"ServerInfo":{"DataCenter":"North Europe","Slice":"SliceC","Ring":"4","ScaleUnit":"002","RoleInstance":"AGSFE_IN_41"}}
  Strict-Transport-Security: max-age=31536000
  Cache-Control: private
  Date: Mon, 16 Mar 2020 15:03:35 GMT
  Content-Length: 238
  Content-Type: application/json
}
{
  "error": {
    "code": "generalException",
    "message": "An error occurred in the data store.",
    "innerError": {
      "request-id": "03b806a9-8b5e-4c6a-8b7d-e018dcce480d",
      "date": "2020-03-16T15:03:35"
    }
  }
}
   at Duplicati.Library.Main.BackendManager.List()
   at Duplicati.Library.Main.Operation.FilelistProcessor.RemoteListAnalysis(BackendManager backend, Options options, LocalDatabase database, IBackendWriter log, String protectedfile)
   at Duplicati.Library.Main.Operation.FilelistProcessor.VerifyRemoteList(BackendManager backend, Options options, LocalDatabase database, IBackendWriter log, String protectedfile)
   at Duplicati.Library.Main.Operation.BackupHandler.PreBackupVerify(BackendManager backend, String protectedfile)
   at Duplicati.Library.Main.Operation.BackupHandler.<RunAsync>d__20.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at CoCoL.ChannelExtensions.WaitForTaskOrThrow(Task task)
   at Duplicati.Library.Main.Controller.<>c__DisplayClass14_0.<Backup>b__0(BackupResults result)
   at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Action`1 method)

Log data:
2020-03-16 16:03:32 +01 - [Error-Duplicati.Library.Main.Operation.BackupHandler-FatalError]: Fatal error
Duplicati.Library.Backend.MicrosoftGraph.MicrosoftGraphException: InternalServerError: Internal Server Error error from request https://graph.microsoft.com/v1.0/me/drive/root:/Duplicati/CLOUD:/children?$skiptoken=NjAx
Method: GET, RequestUri: 'https://graph.microsoft.com/v1.0/me/drive/root:/Duplicati/CLOUD:/children?$skiptoken=NjAx', Version: 1.1, Content: <null>, Headers:
{
  User-Agent: Duplicati/2.0.5.1
  Authorization: Bearer ABC...XYZ
}
StatusCode: 500, ReasonPhrase: 'Internal Server Error', Version: 1.1, Content: System.Net.Http.StreamContent, Headers:
{
  request-id: 03b806a9-8b5e-4c6a-8b7d-e018dcce480d
  client-request-id: 03b806a9-8b5e-4c6a-8b7d-e018dcce480d
  x-ms-ags-diagnostic: {"ServerInfo":{"DataCenter":"North Europe","Slice":"SliceC","Ring":"4","ScaleUnit":"002","RoleInstance":"AGSFE_IN_41"}}
  Strict-Transport-Security: max-age=31536000
  Cache-Control: private
  Date: Mon, 16 Mar 2020 15:03:35 GMT
  Content-Length: 238
  Content-Type: application/json
}
{
  "error": {
    "code": "generalException",
    "message": "An error occurred in the data store.",
    "innerError": {
      "request-id": "03b806a9-8b5e-4c6a-8b7d-e018dcce480d",
      "date": "2020-03-16T15:03:35"
    }
  }
}
   at Duplicati.Library.Main.BackendManager.List()
   at Duplicati.Library.Main.Operation.FilelistProcessor.RemoteListAnalysis(BackendManager backend, Options options, LocalDatabase database, IBackendWriter log, String protectedfile)
   at Duplicati.Library.Main.Operation.FilelistProcessor.VerifyRemoteList(BackendManager backend, Options options, LocalDatabase database, IBackendWriter log, String protectedfile)
   at Duplicati.Library.Main.Operation.BackupHandler.PreBackupVerify(BackendManager backend, String protectedfile)
   at Duplicati.Library.Main.Operation.BackupHandler.<RunAsync>d__20.MoveNext()
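
Side note, in case it helps anyone debugging: the $skiptoken in the failing URL is plain base64, and NjAx decodes to 601, so the listing apparently breaks while paging through the folder rather than right at the start. A quick check (standard base64, nothing Duplicati-specific):

    using System;
    using System.Text;

    class SkipTokenPeek
    {
        static void Main()
        {
            // $skiptoken value taken from the failing Graph request URL
            string token = "NjAx";
            byte[] raw = Convert.FromBase64String(token);
            Console.WriteLine(Encoding.UTF8.GetString(raw)); // prints "601"
        }
    }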

I’ve just gotten the very same error message (see below). I was in the process of backing up parts of my NAS (660 GB). The backup stopped after 204 GB and 8,352 files (on OneDrive). The error below can be reliably reproduced (with 2-hour breaks in between). Any help is greatly appreciated.

@soko I am wondering about the large number of files you get on OneDrive. My backup failed at about half your data volume, and my file count is roughly 10% of yours.

(Stack trace translated from German; “bei” = “at”.)

Duplicati.Library.Backend.MicrosoftGraph.MicrosoftGraphException: InternalServerError: Internal Server Error error from request https://graph.microsoft.com/v1.0/me/drive/root:/NAS-Musik:/children?$skiptoken=NDAx
Method: GET, RequestUri: 'https://graph.microsoft.com/v1.0/me/drive/root:/NAS-Musik:/children?$skiptoken=NDAx', Version: 1.1, Content: <null>, Headers:
{
  User-Agent: Duplicati/2.0.5.1
  Authorization: Bearer ABC...XYZ
}
StatusCode: 500, ReasonPhrase: 'Internal Server Error', Version: 1.1, Content: System.Net.Http.StreamContent, Headers:
{
  request-id: fc2ec5e5-22e5-4e9b-a826-77c9b3c67fb0
  client-request-id: fc2ec5e5-22e5-4e9b-a826-77c9b3c67fb0
  x-ms-ags-diagnostic: {"ServerInfo":{"DataCenter":"North Europe","Slice":"SliceC","Ring":"4","ScaleUnit":"002","RoleInstance":"AGSFE_IN_59"}}
  Strict-Transport-Security: max-age=31536000
  Cache-Control: private
  Date: Wed, 18 Mar 2020 16:26:45 GMT
  Content-Length: 238
  Content-Type: application/json
}
{
  "error": {
    "code": "generalException",
    "message": "An error occurred in the data store.",
    "innerError": {
      "request-id": "fc2ec5e5-22e5-4e9b-a826-77c9b3c67fb0",
      "date": "2020-03-18T16:26:45"
    }
  }
}
   at Duplicati.Library.Main.BackendManager.List()
   at Duplicati.Library.Main.Operation.FilelistProcessor.RemoteListAnalysis(BackendManager backend, Options options, LocalDatabase database, IBackendWriter log, String protectedfile)
   at Duplicati.Library.Main.Operation.FilelistProcessor.VerifyRemoteList(BackendManager backend, Options options, LocalDatabase database, IBackendWriter log, String protectedfile)
   at Duplicati.Library.Main.Operation.BackupHandler.PreBackupVerify(BackendManager backend, String protectedfile)
   at Duplicati.Library.Main.Operation.BackupHandler.<RunAsync>d__20.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at CoCoL.ChannelExtensions.WaitForTaskOrThrow(Task task)
   at Duplicati.Library.Main.Controller.<>c__DisplayClass14_0.<Backup>b__0(BackupResults result)
   at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Action`1 method)
   at Duplicati.Library.Main.Controller.Backup(String[] inputsources, IFilter filter)
   at Duplicati.Server.Runner.Run(IRunnerData data, Boolean fromQueue)

One addition: I cannot list the backup directory on OneDrive; it looks like a timeout when listing 8,000+ files.

@mkymu8f4: It’s weird indeed that you hit a different limit. I have another folder with ~20,000 files which works OK (so far).

I’ve also run into the problem of not even being able to create a new folder on my OneDrive! I got an “Unknown Error”. But it worked again the following day.

I am using O365 Family (best price to get 6 TB of storage!) and sometimes the accounts behave differently. I just had to unlock one of them, seemingly because of too much traffic :slight_smile:

Adjusting –http-operation-timeout up from its default of 100 seconds might help OneDrive here.
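
For reference, this is roughly how it could look on the command line; the same options can be set per job under Advanced options in the web UI (the destination URL and source path below are placeholders):

    Duplicati.CommandLine.exe backup "onedrivev2://Duplicati/CLOUD?authid=..." "C:\Data" ^
      --http-operation-timeout=5m ^
      --http-readwrite-timeout=5m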

Thanks @ts678. I had thought about such a timeout and looked for some OneDrive-specific setting, which I didn’t find.
Then I dropped that idea, as I get an “Error 6” when listing the folder directly at www.onedrive.com as well. I’ve opened a ticket with OneDrive support, which just got escalated.

I will give your setting a try once my other bigger upload is finished.

Too short a timeout might be the cause of the problem. For me, it worked later that day, only to fail again today. But this time I got a gateway timeout, so there is less guesswork. The default timeout seems to be just too close to the limit for my connection. I will try increasing it.

@ts678: I’ve set the operation timeout and the read/write timeout to 5m and tried again.
Now I get a different error (copied from the live log set to Verbose) after the log entry Backend event: List - Started: ():

Operation List with file attempt 5 of 5 failed with message: A task was canceled.
{"ClassName":"System.Threading.Tasks.TaskCanceledException","Message":"A task was canceled.","Data":null,"InnerException":null,"HelpURL":null,"StackTraceString":"
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Duplicati.Library.Backend.MicrosoftGraphBackend.SendRequest[T](HttpRequestMessage request)
   at Duplicati.Library.Backend.MicrosoftGraphBackend.SendRequest[T](HttpMethod method, String url)
   at Duplicati.Library.Backend.MicrosoftGraphBackend.<Enumerate>d__65`1.MoveNext()
   at Duplicati.Library.Backend.MicrosoftGraphBackend.<List>d__49.MoveNext()
   at System.Collections.Generic.List`1..ctor(IEnumerable`1 collection)
   at System.Linq.Enumerable.ToList[TSource](IEnumerable`1 source)
   at Duplicati.Library.Main.BackendManager.DoList(FileEntryItem item)
   at Duplicati.Library.Main.BackendManager.ThreadRun()","RemoteStackTraceString":null,"RemoteStackIndex":0,"ExceptionMethod":"8
ThrowForNonSuccess
mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089
System.Runtime.CompilerServices.TaskAwaiter
Void ThrowForNonSuccess(System.Threading.Tasks.Task)","HResult":-2146233029,"Source":"mscorlib","WatsonBuckets":null} 

Any idea?
What’s weird is that there are 9 minutes between List - Started and the error, not just the 5 I set as the timeout.

I suspect the final error might not be the first issue, and that could also explain the time difference.
The briefest log that shows the destination activity is About --> Show log --> Live --> Information;
however, About --> Show log --> Live --> Retry is also good, and shows the intermediate difficulties.
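
One more thought on “A task was canceled”: in .NET, HttpClient reports its own timeout as a cancellation, so that message often just means a request timed out, not that anything was actively canceled. A minimal illustration of the behavior (not a Duplicati repro; the absurdly small timeout is only there to force the exception):

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class TimeoutDemo
    {
        static async Task Main()
        {
            // HttpClient surfaces an elapsed Timeout as a cancellation
            using var client = new HttpClient { Timeout = TimeSpan.FromMilliseconds(1) };
            try
            {
                await client.GetAsync("https://graph.microsoft.com/v1.0/");
            }
            catch (TaskCanceledException ex)
            {
                Console.WriteLine(ex.Message); // "A task was canceled."
            }
        }
    }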

@ts678: I did a rerun and the error above didn’t show up anymore. It got to the stage where Duplicati tries to repack all files, as I had changed the remote volume size from 10MB to 50MB. That took quite a while for 450GB, but in the end it started uploading to & deleting from OneDrive. After 237 50MB files had been uploaded, I encountered another error, which might be a “real” bug in Duplicati (FYI: I’m a programmer myself):

{"ClassName":"System.ArgumentNullException","Message":"Path cannot be null.","Data":null,"InnerException":null,"HelpURL":null,"StackTraceString":"
   at Duplicati.Library.Main.BackendManager.WaitForEmpty(LocalDatabase db, IDbTransaction transation)
   at Duplicati.Library.Main.Operation.CompactHandler.DoDelete(LocalDeleteDatabase db, BackendManager backend, IEnumerable`1 deleteableVolumes, IDbTransaction& transaction)
   at Duplicati.Library.Main.Operation.CompactHandler.DoCompact(LocalDeleteDatabase db, Boolean hasVerifiedBackend, IDbTransaction& transaction, BackendManager sharedBackend)
   at Duplicati.Library.Main.Operation.DeleteHandler.DoRun(LocalDeleteDatabase db, IDbTransaction& transaction, Boolean hasVerifiedBacked, Boolean forceCompact, BackendManager sharedManager)
   at Duplicati.Library.Main.Operation.BackupHandler.CompactIfRequired(BackendManager backend, Int64 lastVolumeSize)
   at Duplicati.Library.Main.Operation.BackupHandler.<RunAsync>d__20.MoveNext()","RemoteStackTraceString":null,"RemoteStackIndex":0,"ExceptionMethod":"8
WaitForEmpty
Duplicati.Library.Main, Version=2.0.5.1, Culture=neutral, PublicKeyToken=null
Duplicati.Library.Main.BackendManager
Void WaitForEmpty(Duplicati.Library.Main.Database.LocalDatabase, System.Data.IDbTransaction)","HResult":-2147467261,"Source":"Duplicati.Library.Main","WatsonBuckets":null,"ParamName":"path"}

I’ll try again and see if the issue persists.

EDIT: Ohh, I see now: when changing the remote volume size (as I did, from 10MB to 50MB), Duplicati repacks all files on OneDrive. In my case it downloads ten 10MB files, repacks them into one 50MB file, and uploads that one. Then it takes the next ten files. That is generally quite clever, but it sends 450GB from OneDrive down to my PC and 450GB up again.
So it would be better if I just deleted all files from OneDrive and uploaded them again. I would lose the version history of my backup, but that’s not important to me…
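
If I understand the mechanics right, a toy simulation of the idea looks like this (my own sketch, not Duplicati’s actual code; sizes and counts are illustrative):

    using System;
    using System.Linq;

    class RepackSketch
    {
        static void Main()
        {
            const long oldVolMB = 10, newVolMB = 50;
            // Pretend the backend holds ~450GB split into 10MB volumes
            var oldVolumes = Enumerable.Repeat(oldVolMB, 45000);

            long pending = 0, downloaded = 0, uploaded = 0;
            foreach (var vol in oldVolumes)
            {
                downloaded += vol;        // each old volume is fetched...
                pending += vol;
                if (pending >= newVolMB)  // ...until a new 50MB volume is full...
                {
                    uploaded += pending;  // ...which then goes back up repacked
                    pending = 0;
                }
            }
            if (pending > 0) uploaded += pending;

            // Roughly the whole data set travels down and up again
            Console.WriteLine($"Down ~{downloaded / 1024} GB, up ~{uploaded / 1024} GB");
        }
    }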

Ideas about “System.ArgumentNullException: Value cannot be null.”, “Parameter name: path”
has a similar stack trace, and a recent comment there speculates that a fix for its cause is underway.
The exact reporting of the final error is slightly different, but I’m not expert enough to say why:

Are you good at C# stack traces? The above have different formats but appear similar beyond that.
If you follow last posts of the other forum topic, you’ll see the issue, the test case, and proposed fix.

My test case basically involved an upload failure during compact that exhausted the default 5 tries.
What I got, though, was Duplicati dying through an unhandled exception. Still, I mention the theory.

Unfortunately, even when Duplicati doesn’t crash, it tends not to create a job log when a job errors out. Sometimes emailed logs have more details. Ideally one has an added log file, but few people set one up.

So: do you have anything else that might show what sort of activity led up to the final failure you got?
Ideally this would be a learning opportunity to study this problem, but often there’s not enough data.

Yeah, I hate randomly reproducible errors in my own programs as well :frowning:

I’m with you: The stack traces look the same to me!

I was going to delete all files on OneDrive and upload them again to (hopefully) solve my issue. But if I can be of help in finding this issue, I’m willing to help.

I could try to run the backup again and hopefully the NullException gets thrown again.

How can I make sure I capture all possible logs and the steps that lead up to this error?
I’ve had a look at the “send mail” advanced options of a job but couldn’t find a way to enable log-file attachments. How do I do that exactly?
Can I somehow make Duplicati write a verbose log to a file, even when it tends not to log everything?
Finally: what about the live log? Is there any way to store it in a file?

Backup failing: “Path cannot be null” and “TLS warning: SSL3 alert write: fatal: bad record mac”
has a more extended log that shows an error similar to yours, and just before it another error that wouldn’t ordinarily be logged, because it’s at Retry level. The poster probably had a –log-file configured with –log-file-log-level=Retry or higher (Profiling is the highest level; I use it, but the output is huge). Verbose is in between. The main problem (for privacy) is that the log can contain source path names in places.

Logging to a file is probably the easiest way to get a big log; the live log is best for a single screenful. If the file gets too big for your usual editor/viewer, glogg, EditPad Lite, or similar tools can handle it.
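
As a concrete starting point, something like this as advanced options on the job should do it (the path is a placeholder; pick any location Duplicati can write to):

    --log-file=C:\Duplicati\logs\job.log
    --log-file-log-level=Retry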

You can see whether you tend to get retries (e.g. for network errors) from the job log’s RetryAttempts.
Setting –number-of-retries low on a job you care less about may be a good way to unearth bugs…
That’s what I’ve been doing lately, on redundant backups, so I can afford to break and debug.

2.0.4.5 - The process cannot access the file because it is being used by another process
is my quote of the interesting double stack trace with commentary. I wonder if one led to the other?

The emails don’t attach full logs, but seem to include some relevant sections determined somehow.
2.0.4.5 - The process cannot access the file because it is being used by another process
looks like it might be from an emailed report, due to its Log data section. Instead of the usual log, which looks similar to the ones in the web UI with a bunch of stats, a failure email has more failure data.

OK, I will try with:

  • –log-file
  • –log-file-log-level=profiling
  • –number-of-retries=1 (to survive a short reconnect of my connection)
  • –retry-delay=3m

I should be able to remove private info from the log. I know my way around Notepad++ and UltraEdit and have handled gigabyte-sized log files before.

Update guys:

I haven’t been able to reproduce the ArgumentNullException so far, but I’m planning to try again.

BUT:
In the meantime the original error is back. The OneDrive support team wanted me to trace the issue with their own backend tracker and a network trace from the Google Chrome browser.
I was able to reproduce it, and I’ve sent the trace to them… now waiting for a reply.

Funny thing though:
It seems the error only occurs after heavy read/write activity on OneDrive AND is fixed the next day. In detail: all my Duplicati jobs (even the smallest ones) failed with errors thrown on the OneDrive side.
They were all working again the next day.

My guess:
OneDrive has some issues when many files/folders are accessed heavily in a “short” period of time.

So we probably have different problems. I seem to be stuck with this problem on one of my backups, and it shows up consistently every time. Since all your jobs either run or fail together, I assume they all run under the same O365 account?

Yepp, all backups go to the same OneDrive.

I also configured my backup for extensive logging. Let’s see.

Hi @soko, anything new on your side?

I put a lengthy log in my thread but could not find anything interesting. It seems like a problem on MS’ side. Did you get any feedback?