Backup fails with "stream was too long"

I recently moved my Duplicati instance from my QNAP NAS to Docker on the same NAS.
The old Duplicati instance was running directly on the QNAP OS. The version was 2.0.8.3.
In Docker I installed 2.1.0.3, mounted the NAS directories read-only into Docker, moved the old databases into Docker, restored my configs, and corrected the database paths.
I have several configs and all went well, except the really large ones (4 TB+). They now fail constantly with “stream was too long”.
They worked before with no problem. I did a last backup before migrating to Docker.
I already tried a database repair, which was successful but did not help with my error.
What I see while running the backup:

  1. It is really slow checking files. Even though the files are untouched (I only mounted the old directories into Docker, as mentioned), Duplicati seems to treat every file as unscanned. I set the option “check-filetime-only”, which did not help.
  2. Duplicati tries to upload new archives, which include genuinely new files, but it fails.

I am archiving to Jottacloud with infinite storage, so space shouldn’t be a problem.

{
  "DeletedFiles": 25934,
  "DeletedFolders": 1477,
  "ModifiedFiles": 0,
  "ExaminedFiles": 7164,
  "OpenedFiles": 7162,
  "AddedFiles": 7162,
  "SizeOfModifiedFiles": 0,
  "SizeOfAddedFiles": 2647466055772,
  "SizeOfExaminedFiles": 2648812252876,
  "SizeOfOpenedFiles": 2647466055772,
  "NotProcessedFiles": 0,
  "AddedFolders": 712,
  "TooLargeFiles": 0,
  "FilesWithError": 0,
  "ModifiedFolders": 0,
  "ModifiedSymlinks": 0,
  "AddedSymlinks": 20,
  "DeletedSymlinks": 934,
  "PartialBackup": false,
  "Dryrun": false,
  "MainOperation": "Backup",
  "CompactResults": null,
  "VacuumResults": null,
  "DeleteResults": null,
  "RepairResults": null,
  "TestResults": null,
  "ParsedResult": "Fatal",
  "Interrupted": false,
  "Version": "2.1.0.103 (2.1.0.103_canary_2024-12-21)",
  "EndTime": "2025-01-04T17:54:38.7432575Z",
  "BeginTime": "2025-01-04T08:29:23.1736362Z",
  "Duration": "09:25:15.5696213",
  "MessagesActualLength": 86,
  "WarningsActualLength": 0,
  "ErrorsActualLength": 2,
  "Messages": [
    "2025-01-04 09:29:23 +01 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: Die Operation Backup wurde gestartet",
    "2025-01-04 09:30:13 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started:  ()",
    "2025-01-04 09:30:15 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Completed:  (4,182 KB)",
    "2025-01-04 13:53:09 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-b2b948135458c45bdad7a515893a2997f.dblock.zip.aes (3,999 GB)",
    "2025-01-04 13:54:09 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Retrying: duplicati-b2b948135458c45bdad7a515893a2997f.dblock.zip.aes (3,999 GB)",
    "2025-01-04 13:54:20 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Rename: duplicati-b2b948135458c45bdad7a515893a2997f.dblock.zip.aes (3,999 GB)",
    "2025-01-04 13:54:20 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Rename: duplicati-bf0af55715310472ba96e43b994ce57b6.dblock.zip.aes (3,999 GB)",
    "2025-01-04 13:54:20 +01 - [Information-Duplicati.Library.Main.Operation.Backup.BackendUploader-RenameRemoteTargetFile]: Renaming \"duplicati-b2b948135458c45bdad7a515893a2997f.dblock.zip.aes\" to \"duplicati-bf0af55715310472ba96e43b994ce57b6.dblock.zip.aes\"",
    "2025-01-04 13:54:20 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-bf0af55715310472ba96e43b994ce57b6.dblock.zip.aes (3,999 GB)",
    "2025-01-04 13:55:48 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Retrying: duplicati-bf0af55715310472ba96e43b994ce57b6.dblock.zip.aes (3,999 GB)",
    "2025-01-04 13:55:59 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Rename: duplicati-bf0af55715310472ba96e43b994ce57b6.dblock.zip.aes (3,999 GB)",
    "2025-01-04 13:55:59 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Rename: duplicati-bd08ddd62d784409297ffb44e5dbad26a.dblock.zip.aes (3,999 GB)",
    "2025-01-04 13:55:59 +01 - [Information-Duplicati.Library.Main.Operation.Backup.BackendUploader-RenameRemoteTargetFile]: Renaming \"duplicati-bf0af55715310472ba96e43b994ce57b6.dblock.zip.aes\" to \"duplicati-bd08ddd62d784409297ffb44e5dbad26a.dblock.zip.aes\"",
    "2025-01-04 13:55:59 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-bd08ddd62d784409297ffb44e5dbad26a.dblock.zip.aes (3,999 GB)",
    "2025-01-04 13:56:55 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Retrying: duplicati-bd08ddd62d784409297ffb44e5dbad26a.dblock.zip.aes (3,999 GB)",
    "2025-01-04 13:57:06 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Rename: duplicati-bd08ddd62d784409297ffb44e5dbad26a.dblock.zip.aes (3,999 GB)",
    "2025-01-04 13:57:06 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Rename: duplicati-b042e2b308842401b8bc2f101f8f0afea.dblock.zip.aes (3,999 GB)",
    "2025-01-04 13:57:06 +01 - [Information-Duplicati.Library.Main.Operation.Backup.BackendUploader-RenameRemoteTargetFile]: Renaming \"duplicati-bd08ddd62d784409297ffb44e5dbad26a.dblock.zip.aes\" to \"duplicati-b042e2b308842401b8bc2f101f8f0afea.dblock.zip.aes\"",
    "2025-01-04 13:57:06 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-b042e2b308842401b8bc2f101f8f0afea.dblock.zip.aes (3,999 GB)",
    "2025-01-04 13:58:10 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Retrying: duplicati-b042e2b308842401b8bc2f101f8f0afea.dblock.zip.aes (3,999 GB)"
  ],
  "Warnings": [],
  "Errors": [
    "2025-01-04 18:54:38 +01 - [Error-Duplicati.Library.Main.Operation.BackupHandler-FatalError]: Fatal error\nAggregateException: One or more errors occurred. (Stream was too long.) (Stream was too long.) (Stream was too long.)",
    "2025-01-04 18:54:38 +01 - [Error-Duplicati.Library.Main.Controller-FailedOperation]: Die Operation Backup ist mit folgenden Fehler fehlgeschlagen: One or more errors occurred. (Stream was too long.) (Stream was too long.) (Stream was too long.) (One or more errors occurred. (Stream was too long.) (Stream was too long.) (Stream was too long.)) (One or more errors occurred. (One or more errors occurred. (Stream was too long.) (Stream was too long.) (Stream was too long.)))\nAggregateException: One or more errors occurred. (Stream was too long.) (Stream was too long.) (Stream was too long.) (One or more errors occurred. (Stream was too long.) (Stream was too long.) (Stream was too long.)) (One or more errors occurred. (One or more errors occurred. (Stream was too long.) (Stream was too long.) (Stream was too long.)))"
  ],
  "TaskControl": {
    "ProgressToken": {
      "IsCancellationRequested": false,
      "CanBeCanceled": true,
      "WaitHandle": {
        "Handle": {
          "value": 2984
        },
        "SafeWaitHandle": {
          "IsInvalid": false,
          "IsClosed": false
        }
      }
    },
    "TransferToken": {
      "IsCancellationRequested": false,
      "CanBeCanceled": true,
      "WaitHandle": {
        "Handle": {
          "value": 5584
        },
        "SafeWaitHandle": {
          "IsInvalid": false,
          "IsClosed": false
        }
      }
    }
  },
  "BackendStatistics": {
    "RemoteCalls": 20,
    "BytesUploaded": 0,
    "BytesDownloaded": 0,
    "FilesUploaded": 0,
    "FilesDownloaded": 0,
    "FilesDeleted": 0,
    "FoldersCreated": 0,
    "RetryAttempts": 16,
    "UnknownFileSize": 0,
    "UnknownFileCount": 0,
    "KnownFileCount": 4282,
    "KnownFileSize": 8920750402322,
    "LastBackupDate": "2024-12-25T12:09:25+01:00",
    "BackupListCount": 12,
    "TotalQuotaSpace": 0,
    "FreeQuotaSpace": 0,
    "AssignedQuotaSpace": -1,
    "ReportedQuotaError": false,
    "ReportedQuotaWarning": false,
    "MainOperation": "Backup",
    "ParsedResult": "Success",
    "Interrupted": false,
    "Version": "2.1.0.103 (2.1.0.103_canary_2024-12-21)",
    "EndTime": "0001-01-01T00:00:00",
    "BeginTime": "2025-01-04T08:29:23.1736389Z",
    "Duration": "00:00:00",
    "MessagesActualLength": 0,
    "WarningsActualLength": 0,
    "ErrorsActualLength": 0,
    "Messages": null,
    "Warnings": null,
    "Errors": null,
    "TaskControl": {
      "ProgressToken": {
        "IsCancellationRequested": false,
        "CanBeCanceled": true,
        "WaitHandle": {
          "Handle": {
            "value": 2984
          },
          "SafeWaitHandle": {
            "IsInvalid": false,
            "IsClosed": false
          }
        }
      },
      "TransferToken": {
        "IsCancellationRequested": false,
        "CanBeCanceled": true,
        "WaitHandle": {
          "Handle": {
            "value": 5584
          },
          "SafeWaitHandle": {
            "IsInvalid": false,
            "IsClosed": false
          }
        }
      }
    }
  }
}

If a Beta, that would be 2.0.8.1. There were also early .NET 8 previews at 2.0.8.100, etc.

Assuming this means 2.1.0.103 (per the posted output), which would be almost the latest Canary.

Whose Docker image? LinuxServer is popular. Duplicati has one too. I’m not using either.

Were you able to keep exactly the same paths? If not, that would cause the file re-reading.

You could also look at the job log for one of those backups and see what the Source section says.

If the Source file stats show huge numbers of Deleted and New, that could be a file path change.
More obscure reasons could be timestamp precision changes or metadata changes.
The Duplicati log at Verbose level can detail this, e.g. via About → Show log → Live → Verbose.

I wonder if size is the issue (though if so, I don’t know why it shows up now and not before). What’s your Remote volume size set to on the Options screen? Some things dislike sizes over 2 ** 32 bytes (4,294,967,296, about 4.3 GB). Searching Google for that error indicates other things dislike half that (2 ** 31, about 2.1 GB).

Yes. I got confused with all the numbering. The version was 2.0.8.1_beta_2024-05-07.

Again, confusion on my part: it is 2.1.0.103_canary_2024-12-21.

Duplicati’s image.

Yes, you are right. The paths obviously changed. I thought of that after putting out my post.

dblock = 4 GB
block = 1 MB

I never had problems with this sizing before.

Does the typical upload for the “really large ones” actually produce files that size? The easiest way to check (if Jottacloud can do it) is to sort by size and see how often large files go up. Going by the average size in the stats, there are lots…

There were lots of changes during the move to .NET 8, so it’s possible one of them is causing this. Until someone familiar with the details can comment, you could try just reducing that size to see if it helps. This might cost a compact if you set it back up later, but it’s one way to test the theory.

It would also be helpful if you can get a better error message. About → Show log → Stored might have one. If not, you can maybe have it fail while running with About → Show log → Live → Error. Clicking on the error should expand it with a stack trace that might help locate the path to a failure.

Sure.

System.AggregateException: One or more errors occurred. (Stream was too long.) (Stream was too long.) (Stream was too long.) (One or more errors occurred. (Stream was too long.) (Stream was too long.) (Stream was too long.)) (One or more errors occurred. (One or more errors occurred. (Stream was too long.) (Stream was too long.) (Stream was too long.)))
 ---> System.AggregateException: One or more errors occurred. (Stream was too long.) (Stream was too long.) (Stream was too long.)
 ---> System.IO.IOException: Stream was too long.
   at System.IO.MemoryStream.WriteAsync(Byte[] buffer, Int32 offset, Int32 count, CancellationToken cancellationToken)
--- End of stack trace from previous location ---
   at Duplicati.Library.Utility.Utility.CopyStreamAsync(Stream source, Stream target, Boolean tryRewindSource, CancellationToken cancelToken, Byte[] buf)
   at Duplicati.Library.Backend.Jottacloud.PutAsync(String remotename, Stream stream, CancellationToken cancelToken)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.DoPut(FileEntryItem item, IBackend backend, CancellationToken cancelToken)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.<>c__DisplayClass20_0.<<UploadFileAsync>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.DoWithRetry(Func`1 method, FileEntryItem item, Worker worker, CancellationToken cancelToken)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.DoWithRetry(Func`1 method, FileEntryItem item, Worker worker, CancellationToken cancelToken)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.UploadFileAsync(FileEntryItem item, Worker worker, CancellationToken cancelToken)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.UploadBlockAndIndexAsync(VolumeUploadRequest upload, Worker worker, CancellationToken cancelToken)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.<Run>b__13_0(<>f__AnonymousType2`1 self)
   --- End of inner exception stack trace ---
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.<Run>b__13_0(<>f__AnonymousType2`1 self)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.<Run>b__13_0(<>f__AnonymousType2`1 self)
   at CoCoL.AutomationExtensions.RunTask[T](T channels, Func`2 method, Boolean catchRetiredExceptions)
   at Duplicati.Library.Main.Operation.BackupHandler.FlushBackend(BackupResults result, IWriteChannel`1 uploadtarget, Task uploader)
   at Duplicati.Library.Main.Operation.BackupHandler.RunAsync(String[] sources, IFilter filter)
 ---> (Inner Exception #1) System.IO.IOException: Stream was too long.
   at System.IO.MemoryStream.WriteAsync(Byte[] buffer, Int32 offset, Int32 count, CancellationToken cancellationToken)
--- End of stack trace from previous location ---
   at Duplicati.Library.Utility.Utility.CopyStreamAsync(Stream source, Stream target, Boolean tryRewindSource, CancellationToken cancelToken, Byte[] buf)
   at Duplicati.Library.Backend.Jottacloud.PutAsync(String remotename, Stream stream, CancellationToken cancelToken)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.DoPut(FileEntryItem item, IBackend backend, CancellationToken cancelToken)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.<>c__DisplayClass20_0.<<UploadFileAsync>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.DoWithRetry(Func`1 method, FileEntryItem item, Worker worker, CancellationToken cancelToken)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.DoWithRetry(Func`1 method, FileEntryItem item, Worker worker, CancellationToken cancelToken)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.UploadFileAsync(FileEntryItem item, Worker worker, CancellationToken cancelToken)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.UploadBlockAndIndexAsync(VolumeUploadRequest upload, Worker worker, CancellationToken cancelToken)<---

 ---> (Inner Exception #2) System.IO.IOException: Stream was too long.
   at System.IO.MemoryStream.WriteAsync(Byte[] buffer, Int32 offset, Int32 count, CancellationToken cancellationToken)
--- End of stack trace from previous location ---
   at Duplicati.Library.Utility.Utility.CopyStreamAsync(Stream source, Stream target, Boolean tryRewindSource, CancellationToken cancelToken, Byte[] buf)
   at Duplicati.Library.Backend.Jottacloud.PutAsync(String remotename, Stream stream, CancellationToken cancelToken)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.DoPut(FileEntryItem item, IBackend backend, CancellationToken cancelToken)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.<>c__DisplayClass20_0.<<UploadFileAsync>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.DoWithRetry(Func`1 method, FileEntryItem item, Worker worker, CancellationToken cancelToken)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.DoWithRetry(Func`1 method, FileEntryItem item, Worker worker, CancellationToken cancelToken)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.UploadFileAsync(FileEntryItem item, Worker worker, CancellationToken cancelToken)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.UploadBlockAndIndexAsync(VolumeUploadRequest upload, Worker worker, CancellationToken cancelToken)<---

   --- End of inner exception stack trace ---
   at Duplicati.Library.Main.Operation.BackupHandler.RunAsync(String[] sources, IFilter filter)
   at Duplicati.Library.Utility.Utility.Await(Task task)
   at Duplicati.Library.Main.Controller.<>c__DisplayClass22_0.<Backup>b__0(BackupResults result)
   at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Action`1 method)
   at Duplicati.Library.Main.Controller.Backup(String[] inputsources, IFilter filter)
   at Duplicati.Server.Runner.Run(IRunnerData data, Boolean fromQueue)
 ---> (Inner Exception #1) System.AggregateException: One or more errors occurred. (One or more errors occurred. (Stream was too long.) (Stream was too long.) (Stream was too long.))
 ---> System.AggregateException: One or more errors occurred. (Stream was too long.) (Stream was too long.) (Stream was too long.)
 ---> System.IO.IOException: Stream was too long.
   at System.IO.MemoryStream.WriteAsync(Byte[] buffer, Int32 offset, Int32 count, CancellationToken cancellationToken)
--- End of stack trace from previous location ---
   at Duplicati.Library.Utility.Utility.CopyStreamAsync(Stream source, Stream target, Boolean tryRewindSource, CancellationToken cancelToken, Byte[] buf)
   at Duplicati.Library.Backend.Jottacloud.PutAsync(String remotename, Stream stream, CancellationToken cancelToken)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.DoPut(FileEntryItem item, IBackend backend, CancellationToken cancelToken)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.<>c__DisplayClass20_0.<<UploadFileAsync>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.DoWithRetry(Func`1 method, FileEntryItem item, Worker worker, CancellationToken cancelToken)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.DoWithRetry(Func`1 method, FileEntryItem item, Worker worker, CancellationToken cancelToken)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.UploadFileAsync(FileEntryItem item, Worker worker, CancellationToken cancelToken)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.UploadBlockAndIndexAsync(VolumeUploadRequest upload, Worker worker, CancellationToken cancelToken)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.<Run>b__13_0(<>f__AnonymousType2`1 self)
   --- End of inner exception stack trace ---
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.<Run>b__13_0(<>f__AnonymousType2`1 self)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.<Run>b__13_0(<>f__AnonymousType2`1 self)
   at CoCoL.AutomationExtensions.RunTask[T](T channels, Func`2 method, Boolean catchRetiredExceptions)
   at Duplicati.Library.Main.Operation.BackupHandler.FlushBackend(BackupResults result, IWriteChannel`1 uploadtarget, Task uploader)
   at Duplicati.Library.Main.Operation.BackupHandler.RunAsync(String[] sources, IFilter filter)
 ---> (Inner Exception #1) System.IO.IOException: Stream was too long.
   at System.IO.MemoryStream.WriteAsync(Byte[] buffer, Int32 offset, Int32 count, CancellationToken cancellationToken)
--- End of stack trace from previous location ---
   at Duplicati.Library.Utility.Utility.CopyStreamAsync(Stream source, Stream target, Boolean tryRewindSource, CancellationToken cancelToken, Byte[] buf)
   at Duplicati.Library.Backend.Jottacloud.PutAsync(String remotename, Stream stream, CancellationToken cancelToken)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.DoPut(FileEntryItem item, IBackend backend, CancellationToken cancelToken)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.<>c__DisplayClass20_0.<<UploadFileAsync>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.DoWithRetry(Func`1 method, FileEntryItem item, Worker worker, CancellationToken cancelToken)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.DoWithRetry(Func`1 method, FileEntryItem item, Worker worker, CancellationToken cancelToken)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.UploadFileAsync(FileEntryItem item, Worker worker, CancellationToken cancelToken)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.UploadBlockAndIndexAsync(VolumeUploadRequest upload, Worker worker, CancellationToken cancelToken)<---

 ---> (Inner Exception #2) System.IO.IOException: Stream was too long.
   at System.IO.MemoryStream.WriteAsync(Byte[] buffer, Int32 offset, Int32 count, CancellationToken cancellationToken)
--- End of stack trace from previous location ---
   at Duplicati.Library.Utility.Utility.CopyStreamAsync(Stream source, Stream target, Boolean tryRewindSource, CancellationToken cancelToken, Byte[] buf)
   at Duplicati.Library.Backend.Jottacloud.PutAsync(String remotename, Stream stream, CancellationToken cancelToken)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.DoPut(FileEntryItem item, IBackend backend, CancellationToken cancelToken)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.<>c__DisplayClass20_0.<<UploadFileAsync>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.DoWithRetry(Func`1 method, FileEntryItem item, Worker worker, CancellationToken cancelToken)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.DoWithRetry(Func`1 method, FileEntryItem item, Worker worker, CancellationToken cancelToken)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.UploadFileAsync(FileEntryItem item, Worker worker, CancellationToken cancelToken)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.UploadBlockAndIndexAsync(VolumeUploadRequest upload, Worker worker, CancellationToken cancelToken)<---

   --- End of inner exception stack trace ---<---

Right now I am testing another backup with similar size and the same settings. We’ll see if it fails too.

This indicates that you have a volume size greater than 2 GiB?
It should be possible to work around it by reducing the volume size to less than 2 GiB, e.g. 1900 MiB or so.

It is a problem with WebRequest in .NET, which was deprecated some time ago. Since the Jottacloud backend is not yet fully updated, it still uses WebRequest, which annoyingly buffers the entire request in memory before sending it. This was not a problem with 2.0.8.1 because it used Mono, which handled WebRequest correctly.
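
Here is a minimal repro sketch (plain .NET, not Duplicati’s code) of why that buffering fails at this exact size: MemoryStream tracks its position as a signed Int32, so any write that would push it past Int32.MaxValue throws the same IOException seen in the stack trace.

using System;
using System.IO;

class MemoryStreamLimitDemo
{
    static void Main()
    {
        using var ms = new MemoryStream();

        // Seeking past the end is allowed and allocates nothing, so the
        // overflow can be shown without actually buffering ~2 GiB of data.
        ms.Position = int.MaxValue - 16;

        var chunk = new byte[64];
        try
        {
            // position + count overflows the signed Int32 position counter.
            ms.Write(chunk, 0, chunk.Length);
        }
        catch (IOException ex)
        {
            Console.WriteLine(ex.Message); // prints: Stream was too long.
        }
    }
}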

You mean dblock size?
Yes it is.

But this means I need to start the backup from scratch, because changing the dblock size of an existing backup is not possible.

Just to clarify: Is this an issue on Jottacloud’s side or on Duplicati’s?

No 🙂
dblock is usually either 100KiB or 1MiB. If you have 4GiB dblock, it means you will look for deduplications on blocks larger than 4GiB? That requires volumes larger than 4GiB then…

True, if it really is dblock and not volume size, then it cannot be changed.

It is Duplicati doing the wrong thing here, buffering the entire request in memory.

Now I’m confused. That statement refers to the old and new blocksize, doesn’t it?

Remote volume size and dblock size are roughly equivalent, and they relate to the upload error.

This also refers to blocksize, doesn’t it?

EDIT to add help text:

  --dblock-size (Size): Limit the size of the volumes
    This option can change the maximum size of dblock files. Changing the size can be useful if the backend has a limit on the size of each individual file.
    * default value: 50mb

  --blocksize (Size): Block size used in hashing
    The block size determines how files are fragmented. Choosing a large value will cause a larger overhead on file changes, choosing a small value will cause a large overhead on storage of file lists. Note
    that the value cannot be changed after remote files are created.
    * default value: 1mb

I am confused too. In the Duplicati GUI it shows


[Screenshot of the Options screen, 2025-01-06]

which translates to

--dblock-size=4GB
--blocksize=1MB

in the CLI.

So it will be fixed in future versions? Not asking for an ETA. 😉

It should be possible to change this for an existing backup; you can try it. It will only affect new uploads. Unless you use a backend that has trouble with many files per directory, I would recommend 200 MB instead. Depending on your network, even smaller might be better. Having volumes too big means a partial restore has to download useless data, and more unused parts of files can stick around.

The Backup size parameters section of the manual explains the tradeoffs; it suggests 50-500 MiB.
For testing, you just need to stay under the seeming 2 GB limit on MemoryStream (per citations found by web search).

Beware that Duplicati tends to use GiB (bigger than GB), but I don’t know what MS uses.
The Duplicati limit might also not be a perfectly guaranteed maximum, so leave some slack if you push it.

The “deprecated some time ago” WebRequest doesn’t sound like something we should expect a fix for.
Whether a Duplicati rewrite to avoid it gets attempted is usually a matter of need plus resources.

Sorry, I confused things. It is labelled “volume size” in the UI, and that maps to --dblock-size.
@ts678 and @Jojo-1000 explain it correctly: you can change --dblock-size without issues, but not --blocksize.

Yes. We are in the process of removing WebRequest and adding CI testing to all backends. It is a big task, so it will take a while before we are through.

MemoryStream is capped at Int32.MaxValue, a 32-bit signed integer: 2,147,483,647 bytes, one byte short of 2 GiB (2,147,483,648 bytes), so the limit is closer to GiB than GB. A 2 GB (decimal) volume is 2,000,000,000 bytes and fits under it.

“Some time ago” refers to .NET 6 deprecating WebRequest in 2021.

I don’t know when or if MS will actually remove it, as that would break a lot of existing .NET code. But having it deprecated is the source of many of the current build warnings, so it needs to be fixed.

We have started the process with WebDAV and MSGraph, but will fix more.
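
For anyone curious what the rewrite direction looks like, here is a minimal sketch (assumptions: a generic HTTP PUT endpoint with a hypothetical URL and path, not Duplicati’s actual backend code). HttpClient with StreamContent reads the volume from disk as it sends, so the upload is never buffered into a single Int32-limited MemoryStream:

using System;
using System.IO;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

class StreamingUploadSketch
{
    // Streams the file at localPath to the given URL without buffering
    // it in memory first.
    static async Task PutAsync(string url, string localPath, CancellationToken cancelToken)
    {
        using var client = new HttpClient();
        await using var file = File.OpenRead(localPath);

        // StreamContent pulls from the FileStream as the request is sent,
        // so the whole volume never has to fit in one in-memory buffer.
        using var content = new StreamContent(file);
        content.Headers.ContentLength = file.Length;

        using var response = await client.PutAsync(url, content, cancelToken);
        response.EnsureSuccessStatusCode();
    }
}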


I will change the volume size and report back.

I changed the volume size to 2 GB and the backup worked flawlessly.
Good to know that the volume size can be changed as often as one likes.

Thanks for the help.
