Release: 2.1.2.0 (Beta) 2025-08-20

2.1.2.0_beta_2025-08-20

This release is a beta release intended for testing.

If no major issues are found with this release, it will be used to create a new stable release.

About this release

This is the first beta release since the stable release went out, and we are super excited to reach this milestone!

The most visible change in this version is the new user interface, but there is also a long list of fixes and improvements. Below is a summary of some of the larger changes.

If you have already been using the experimental release, this release is the same but with some additional bug fixes.

New user interface

The new user interface is rewritten from scratch and has the same general structure as the previous one, but we made some things more user friendly. The UI is fully functional, but we continue to improve on it.

Should you find a function that is missing, we have included buttons to switch back and forth between the two user interfaces.

New backends

We added support for using the cloud services pCloud, Filen and Filejump.

We also added support for connections with SMB.
The new SMB backend can connect directly to a Windows share without needing to mount the folder or install SMB support, and it works on Windows, Linux and macOS.

New restore flow

The new restore flow is enabled by default and you should not notice anything other than faster restores. In case there is an issue with this, it is possible to set the option --restore-legacy=true to fall back to the previous restore flow.
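As a sketch of how the fallback could be applied on the command line (the storage URL and file pattern below are placeholders, not values from this release note):

```shell
# Hypothetical invocation: fall back to the previous restore flow.
# Replace the storage URL and file pattern with your own.
duplicati-cli restore "s3://example-bucket/backup?auth-username=..." "*" \
  --restore-legacy=true
```

The same option can be added under Advanced options in the UI.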

New signing keys

The packages are now signed by Duplicati Inc, and the Windows packages are signed with EV certificates.

Remote source support

With this version it is now possible to back up local data as well as some types of remote data.

In this version, S3, IDrive, SSH and CIFS sources are supported.
The UI does not yet support editing this nicely, but you can enter a path in the special format to "mount" the remote source.

For the commandline (and manual text entry in the UI) enter sources such as:

// Linux/MacOS
@/mnt/s3-data|s3://example?auth-username=...

// Windows
@X:\server1|smb://server/share?auth-username=...

This will cause the backups to fetch data from the remote sources.
We will add an editor to the UI to allow browsing the remote sources, similar to the local files.
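Putting the pieces together, a backup command mixing a local folder and a mounted remote source could look something like this (the destination URL, local path, and credentials are placeholders):

```shell
# Sketch: back up a local folder plus a "mounted" S3 source.
# The @<local-mount-path>|<remote-url> entry uses the special
# source format described above.
duplicati-cli backup "ssh://backupserver/target?auth-username=..." \
  /home/user/documents \
  "@/mnt/s3-data|s3://example?auth-username=..."
```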

Archive attributes support

For AWS S3 and Azure Blob Storage, Duplicati will now respect the archive attributes and not attempt to read and verify files that have been moved to cold storage.

Database updates

This version updates the format of the local database to version 17.

To assist in downgrades there is now a bundled CommandLine.DatabaseTool.exe / duplicati-database-tool that can downgrade databases with minimal data loss. For a downgrade from this version to 2.1.0.5 this will only drop a few indexes and not cause any data loss. Be sure to run the database tool before downgrading the install.
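The exact subcommands of the database tool are not listed here; as a starting point, its built-in help should show what is available (the `--help` flag is an assumption, though it is the common convention):

```shell
# Run the bundled database tool before downgrading the install.
# On Windows the binary is CommandLine.DatabaseTool.exe instead.
duplicati-database-tool --help
```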

Throttle updated

For backups that throttle the transfer speeds, the new throttle logic uses a shared limit for the backup, where previous versions would apply the throttle for each individual stream.

Removed backends

The Sia backend has been removed due to an incompatible hardfork.

The Mega backend has been marked as unmaintained due to lack of a supported library.
For now, the Mega library still works, but you should migrate away from it. The new Mega S4 storage might be an option.

Updates to all backends

All backends are updated to handle timeouts in a granular manner.

This means the option --http-operations-timeout is no longer present; instead there are now --read-write-timeout, --list-timeout, and --short-timeout. These have sensible defaults but can be tweaked if needed.
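For example, the new timeouts could be adjusted per backup like this (the duration values are illustrative, not recommendations, and the destination URL is a placeholder):

```shell
# Sketch: tweaking the per-category timeouts on a backup run.
duplicati-cli backup "ssh://backupserver/target?auth-username=..." \
  /home/user/documents \
  --read-write-timeout=10m \
  --list-timeout=15m \
  --short-timeout=1m
```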

The option --allowed-ssl-versions is now only present for the FTP backend; all other backends rely on the operating system to determine which TLS version to use.

New datafolder default location

For Duplicati running as a service, the default data folder location has changed.
If you are not running Duplicati as a service/daemon, this change has no effect.

Windows: Avoid storing data in C:\Windows\System32\config\systemprofile\AppData\Local\Duplicati and prefer {CommonProgramData}\Duplicati, usually resolving to C:\ProgramData\Duplicati.

This change counters an issue where Windows wipes the C:\Windows folder on major updates, destroying the backup configuration in the process. If your service stores data under C:\Windows you will see a warning in the user interface on startup.

Linux: Avoid storing data in /Duplicati and prefer /var/lib/Duplicati.
This was caused by the update to .NET 8, where the data folder was not resolved correctly and returned /, which is not an ideal place for storing data.

If you are using --server-datafolder or DUPLICATI_HOME, this has no effect on the database, but may cause your machineid and installid to change.

The machineid.txt and installid.txt would previously be stored in the local app data folder, even when using portable mode or choosing a specific data folder.

This has been fixed, so the files will now follow the database.
If you are using the Duplicati console or otherwise depend on these values, you need to move them into the folder where the database is stored.

This update also sets permissions on the data folder and the databases to prevent unauthorized access from local accounts.
To opt out of setting permissions on each startup, place a file named insecure-permissions.txt inside the data folder.
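For a service install on Linux using the new default location, the opt-out could look like this (adjust the path to wherever your data folder actually lives):

```shell
# Opt out of permission tightening on each startup by placing the
# marker file inside the data folder.
sudo touch /var/lib/Duplicati/insecure-permissions.txt
```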

Other large changes

  • New file and folder enumeration logic
  • Timeout logic on all backend operations
  • Improved database validation and repair logic
  • ServerUtil can output JSON for script integration
  • Improved support for having Duplicati behind a proxy
  • Updated throttle logic, all streams share the throttle
  • Improved repair logic
  • VSS is automatically on if running on Windows with sufficient privileges
  • Improved backend test function
  • Ability to suppress warnings

I have updated one of my Hyper-V hosts to the latest beta. For the most part it works as expected, but I am getting intermittent errors on some of my backups that use OneDrive as the backend. Here is a sample from one of them:

    "2025-08-29 22:07:17 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-b383a88deb48747358fb80d408cc8d15d.dblock.zip.aes (49.99 MiB)",
    "2025-08-29 22:07:17 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-b5885db4981c343a4855bf4531f56f9d4.dblock.zip.aes (49.65 MiB)",
    "2025-08-29 22:07:18 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Rename: duplicati-bf0d48028f9bb4388871b71ec9288a441.dblock.zip.aes (49.72 MiB)"
  ],
  "Warnings": [
    "2025-08-29 22:10:34 -04 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerHandlerFailure]: Error in handler: The stream was already consumed. It cannot be read again.\r\nInvalidOperationException: The stream was already consumed. It cannot be read again.",
    "2025-08-29 22:10:34 -04 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeWhileActive]: Terminating 3 active uploads",
    "2025-08-29 22:10:34 -04 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeError]: Error in active upload: Cancelled",
    "2025-08-29 22:10:34 -04 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeError]: Terminating, but 2 active upload(s) are still active",
    "2025-08-29 22:10:34 -04 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeError]: Error in active upload: Cancelled",
    "2025-08-29 22:10:34 -04 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeError]: Terminating, but 1 active upload(s) are still active",
    "2025-08-29 22:10:34 -04 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeError]: Error in active upload: Cancelled",
    "2025-08-29 22:11:51 -04 - [Warning-Duplicati.Library.Main.Backend.BackendManager-BackendManagerShutdown]: Backend manager queue runner crashed\r\nAggregateException: One or more errors occurred. (The stream was already consumed. It cannot be read again.)"
  ],
  "Errors": [
    "2025-08-29 22:11:51 -04 - [Error-Duplicati.Library.Main.Operation.BackupHandler-FatalError]: Fatal error\r\nInvalidOperationException: The stream was already consumed. It cannot be read again.",
    "2025-08-29 22:11:51 -04 - [Error-Duplicati.Library.Main.Controller-FailedOperation]: The operation Backup has failed\r\nInvalidOperationException: The stream was already consumed. It cannot be read again."
  ],


I’ve been getting this error pretty much every time the backup runs; the number of faulty files varies a bit between runs. Is this normal, and will it resolve itself over time as old backups are purged?

[Warning-Duplicati.Library.Main.Operation.TestHandler-FaultyIndexFiles]: Found 1 faulty index files, repairing now

Is this only for OneDrive? It looks like an error that would affect all backends?

Yes. This is a fix that repairs index files when they are tested.
We discovered a subtle issue that would leave the index files partially updated in some cases when reclaiming unused space.

The partial index files do not affect the ability to restore files, but they make rebuilding the database significantly slower. The fix repairs the index files as it encounters them, so a database recreate should be smooth if it is ever needed.


Yes, the only backups I have experienced this with so far are the OneDrive ones.

In my collection of backups I have several that use WebDAV, several that use OneDrive, and one that uses local storage.

It seems to be the same error as here:

I’m still having intermittent errors with my OneDrive backups. The "Stream was already consumed" error doesn’t happen on all of them, but the chances seem to be tied to the size of the backup: the more uploads it needs to do, the greater the likelihood that the error will occur… or at least that’s my non-scientific view of it.

Since OneDrive is basically unusable for my larger jobs on the current beta I decided to switch to the Rclone backend, which can then communicate with OneDrive. This had problems of its own though.

I had to use the old GUI to set up the job. For some reason, the Advanced options tab does not render correctly all the time for the Rclone backend. It’s supposed to look like this, with the Advanced options available at the bottom:

Instead though, it sometimes looks like this, with no Advanced options available at all:

I think that this has something to do with how the settings window is accessed. If you are on an existing backup and switch from another backend to Rclone, it will render the Advanced options. If your backup job is already set to the Rclone backend, the Advanced options render blank.

Also, it seems to always set the Remote repository to all lowercase. Remotes in Rclone are case-sensitive, and in my case it’s "OneDrive", not "onedrive".

I was able to configure the jobs in the old GUI, but if I then opened it up and re-saved it under the new one I’m not sure all my settings would survive intact.

Once I got it all dialed in though, the Rclone backend seems to be working great … the job has been running for hours (10 Mbps upload :/), and has not encountered any errors yet.

Trying to see possible patterns is exactly what I hoped for. For example OneDrive can throttle if it feels the need. Duplicati responds, but it’s in code that isn’t otherwise used.

If slow-but-works is better, you could try throttling upload speed to less than it usually is. Another troubleshooting test is reducing number of concurrent uploads. The options are throttle-upload and asynchronous-concurrent-upload-limit.
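As a troubleshooting sketch, the two options mentioned above could be combined like this (the destination URL and the throttle value are placeholders, not suggestions from this thread):

```shell
# Limit upload bandwidth and drop to a single concurrent upload
# to see whether the "stream was already consumed" error goes away.
duplicati-cli backup "onedrivev2://backup?authid=..." /source/path \
  --throttle-upload=4MB \
  --asynchronous-concurrent-upload-limit=1
```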

Good for your backup, bad for incentives to chase the OneDrive problem. Is there nothing in the server log (under About), as opposed to the job log? Another logging technique would be the new log-http-requests option along with a verbose log of some sort to watch the status codes.

OneDrive API file upload and download limits

explains how 429: Too Many Requests can come back. Even old Duplicati could show problems like this in a log file. Log level Retry makes it more obvious, since each attempt is shown; otherwise I think you only see the details at the final failure, after number-of-retries attempts with a retry-delay in between. One can also ask for retry-with-exponential-backoff, which is what many providers prefer over being hit again after a short fixed delay.
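Those retry options could be set together with a retry-level log file, roughly like this (the destination URL, log path, and values are placeholders):

```shell
# Make retries visible and back off exponentially between attempts.
duplicati-cli backup "onedrivev2://backup?authid=..." /source/path \
  --number-of-retries=8 \
  --retry-delay=10s \
  --retry-with-exponential-backoff=true \
  --log-file=/tmp/duplicati.log \
  --log-file-log-level=Retry
```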

If you have some backups that are making job logs, check RetryAttempts in Complete log. Maybe sometimes when some backup completes, it only does so after doing retries.

I’m only commenting on this aspect (not GUI) because I know the history leading to work, however I think the devs have changed design since then. Maybe some code got broken?

In the spirit of this, I did attempt to set synchronous-upload=True to try to reduce the number of concurrent uploads to one, but it seemingly had no effect. The error persisted.

On another note, I have discovered an issue with the Rclone backend. File deletions are extremely slow, with the slowdowns being directly proportional to the number of files stored. On one of my backups, file deletions take several minutes each.

It’s only anecdotal, but I found several reports around the internet about this issue, and to be fair it’s an issue with Rclone, not with Duplicati. Apparently each time a file is deleted, Rclone generates a list of the entire contents of the remote directory, which, depending on how many files are there, can take quite a while.

For mitigations it seems like there might be some command-line switches that can be passed to Rclone, such as --no-traverse and --disable ListR. I’ll be testing this on backups tonight.

I looked at the Duplicati code for the Rclone backend and it appears to use the Rclone "delete" operation … apparently there is a "deletefile" operation that does not trigger a re-listing, which can be much faster. I’m not familiar enough with the code to know if it’s appropriate to use though.
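For what it's worth, rclone itself distinguishes the two operations; the remote name and paths below are placeholders:

```shell
# "delete" removes files matching filters under a path (it lists
# the directory first), while "deletefile" removes a single named
# file without a re-listing.
rclone delete OneDrive:backup --include "duplicati-*.dblock.zip.aes"
rclone deletefile OneDrive:backup/duplicati-example.dblock.zip.aes
```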

Thanks for catching that, I had not picked up on it.

My guess is that this is something related to retries. When the upload is retried, it needs to read the file again, and if the stream is closed, it will give this error.

Does anyone have a full stack trace from the logs?

Will these help, or do you need something more detailed?

Here is the log entry from the last attempt before I switched the backend to Rclone. I don’t see any uploads being retried, but it has "RetryAttempts" listed at 17.

 {
  "DeletedFiles": 0,
  "DeletedFolders": 0,
  "ModifiedFiles": 2,
  "ExaminedFiles": 5,
  "OpenedFiles": 3,
  "AddedFiles": 0,
  "SizeOfModifiedFiles": 4276736,
  "SizeOfAddedFiles": 0,
  "SizeOfExaminedFiles": 430583190016,
  "SizeOfOpenedFiles": 4329984,
  "NotProcessedFiles": 0,
  "AddedFolders": 0,
  "TooLargeFiles": 0,
  "FilesWithError": 0,
  "TimestampChangedFiles": 1,
  "ModifiedFolders": 0,
  "ModifiedSymlinks": 0,
  "AddedSymlinks": 0,
  "DeletedSymlinks": 0,
  "PartialBackup": false,
  "Dryrun": false,
  "MainOperation": "Backup",
  "CompactResults": null,
  "VacuumResults": null,
  "DeleteResults": null,
  "RepairResults": null,
  "TestResults": null,
  "ParsedResult": "Fatal",
  "Interrupted": false,
  "Version": "2.1.2.0 (2.1.2.0_beta_2025-08-20)",
  "EndTime": "2025-09-07T02:09:13.5907935Z",
  "BeginTime": "2025-09-07T02:00:00.1342082Z",
  "Duration": "00:09:13.4565853",
  "MessagesActualLength": 128,
  "WarningsActualLength": 8,
  "ErrorsActualLength": 2,
  "Messages": [
    "2025-09-06 22:00:00 -04 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Backup has started",
    "2025-09-06 22:01:27 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started:  ()",
    "2025-09-06 22:03:36 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Completed:  (27.81 KiB)",
    "2025-09-06 22:03:36 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: QuotaInfo - Started:  ()",
    "2025-09-06 22:03:36 -04 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: Removing file listed as Deleting: duplicati-20250905T020000Z.dlist.zip.aes",
    "2025-09-06 22:03:36 -04 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-KeepIncompleteFile]: Keeping protected incomplete remote file listed as Temporary: duplicati-20250906T020000Z.dlist.zip.aes",
    "2025-09-06 22:03:36 -04 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-SchedulingMissingFileForDelete]: Scheduling missing file for deletion, currently listed as Uploading: duplicati-b63ac863978f64275a13e562827be43ce.dblock.zip.aes",
    "2025-09-06 22:03:36 -04 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-SchedulingMissingFileForDelete]: Scheduling missing file for deletion, currently listed as Uploading: duplicati-b08595cb34d0f47a09f181bc5abb3aabe.dblock.zip.aes",
    "2025-09-06 22:03:36 -04 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-SchedulingMissingFileForDelete]: Scheduling missing file for deletion, currently listed as Uploading: duplicati-i940dc4763dbb4638b2ec6dc72f4c4227.dindex.zip.aes",
    "2025-09-06 22:03:36 -04 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-SchedulingMissingFileForDelete]: Scheduling missing file for deletion, currently listed as Uploading: duplicati-bdf5e31a3acb249e19d4f9e88d5ee550e.dblock.zip.aes",
    "2025-09-06 22:03:36 -04 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-SchedulingMissingFileForDelete]: Scheduling missing file for deletion, currently listed as Uploading: duplicati-i9b96e0448f9e402b8142fb84fce4e2ab.dindex.zip.aes",
    "2025-09-06 22:03:36 -04 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-SchedulingMissingFileForDelete]: Scheduling missing file for deletion, currently listed as Uploading: duplicati-i7882df0fc3164925991f913c91e378e5.dindex.zip.aes",
    "2025-09-06 22:03:36 -04 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-SchedulingMissingFileForDelete]: Scheduling missing file for deletion, currently listed as Uploading: duplicati-b48601a4d761f4f68b9d473ad7a7dcbcd.dblock.zip.aes",
    "2025-09-06 22:03:36 -04 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-SchedulingMissingFileForDelete]: Scheduling missing file for deletion, currently listed as Uploading: duplicati-b16194a744c99485eb3c2852e7cd67bc8.dblock.zip.aes",
    "2025-09-06 22:03:36 -04 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-SchedulingMissingFileForDelete]: Scheduling missing file for deletion, currently listed as Uploading: duplicati-i48a2440988fa4ba380377c16c1149aaa.dindex.zip.aes",
    "2025-09-06 22:03:36 -04 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: Removing file listed as Temporary: duplicati-idf25f49f340e4229b8f2c456a3a003a2.dindex.zip.aes",
    "2025-09-06 22:03:36 -04 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: Removing file listed as Deleting: duplicati-b473446a90178455f932034859c0f6510.dblock.zip.aes",
    "2025-09-06 22:03:36 -04 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: Removing file listed as Deleting: duplicati-bca5318b3b2dc4fa081a405ab56faff7b.dblock.zip.aes",
    "2025-09-06 22:03:36 -04 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: Removing file listed as Deleting: duplicati-b65c48e54a9c642d0b4666dbb487cc30d.dblock.zip.aes",
    "2025-09-06 22:03:36 -04 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: Removing file listed as Deleting: duplicati-b309a2f0a2e094f05849916604b3da65d.dblock.zip.aes"
  ],
  "Warnings": [
    "2025-09-06 22:09:02 -04 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerHandlerFailure]: Error in handler: The stream was already consumed. It cannot be read again.\r\nInvalidOperationException: The stream was already consumed. It cannot be read again.",
    "2025-09-06 22:09:02 -04 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeWhileActive]: Terminating 3 active uploads",
    "2025-09-06 22:09:02 -04 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeError]: Error in active upload: Cancelled",
    "2025-09-06 22:09:02 -04 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeError]: Terminating, but 2 active upload(s) are still active",
    "2025-09-06 22:09:02 -04 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeError]: Error in active upload: Cancelled",
    "2025-09-06 22:09:02 -04 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeError]: Terminating, but 1 active upload(s) are still active",
    "2025-09-06 22:09:02 -04 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeError]: Error in active upload: Cancelled",
    "2025-09-06 22:09:13 -04 - [Warning-Duplicati.Library.Main.Backend.BackendManager-BackendManagerShutdown]: Backend manager queue runner crashed\r\nAggregateException: One or more errors occurred. (The stream was already consumed. It cannot be read again.)"
  ],
  "Errors": [
    "2025-09-06 22:09:13 -04 - [Error-Duplicati.Library.Main.Operation.BackupHandler-FatalError]: Fatal error\r\nInvalidOperationException: The stream was already consumed. It cannot be read again.",
    "2025-09-06 22:09:13 -04 - [Error-Duplicati.Library.Main.Controller-FailedOperation]: The operation Backup has failed\r\nInvalidOperationException: The stream was already consumed. It cannot be read again."
  ],
  "BackendStatistics": {
    "RemoteCalls": 25,
    "BytesUploaded": 52286503,
    "BytesDownloaded": 0,
    "FilesUploaded": 3,
    "FilesDownloaded": 0,
    "FilesDeleted": 0,
    "FoldersCreated": 0,
    "RetryAttempts": 17,
    "UnknownFileSize": 0,
    "UnknownFileCount": 0,
    "KnownFileCount": 28475,
    "KnownFileSize": 809395176255,
    "KnownFilesets": 37,
    "LastBackupDate": "2025-09-04T22:00:01-04:00",
    "BackupListCount": 39,
    "TotalQuotaSpace": 1099511627776,
    "FreeQuotaSpace": 87005151495,
    "AssignedQuotaSpace": -1,
    "ReportedQuotaError": false,
    "ReportedQuotaWarning": false,
    "MainOperation": "Backup",
    "ParsedResult": "Success",
    "Interrupted": false,
    "Version": "2.1.2.0 (2.1.2.0_beta_2025-08-20)",
    "EndTime": "0001-01-01T00:00:00",
    "BeginTime": "2025-09-07T02:00:00.13421Z",
    "Duration": "00:00:00",
    "MessagesActualLength": 0,
    "WarningsActualLength": 0,
    "ErrorsActualLength": 0,
    "Messages": null,
    "Warnings": null,
    "Errors": null
  }
} 

… and here is the oldest log I have from when the problems first started occurring. This one does have a mention of Retries in it:

 {
  "DeletedFiles": 0,
  "DeletedFolders": 0,
  "ModifiedFiles": 0,
  "ExaminedFiles": 5,
  "OpenedFiles": 3,
  "AddedFiles": 0,
  "SizeOfModifiedFiles": 0,
  "SizeOfAddedFiles": 0,
  "SizeOfExaminedFiles": 430214091264,
  "SizeOfOpenedFiles": 4329984,
  "NotProcessedFiles": 0,
  "AddedFolders": 0,
  "TooLargeFiles": 0,
  "FilesWithError": 0,
  "TimestampChangedFiles": 3,
  "ModifiedFolders": 0,
  "ModifiedSymlinks": 0,
  "AddedSymlinks": 0,
  "DeletedSymlinks": 0,
  "PartialBackup": false,
  "Dryrun": false,
  "MainOperation": "Backup",
  "CompactResults": null,
  "VacuumResults": null,
  "DeleteResults": null,
  "RepairResults": null,
  "TestResults": null,
  "ParsedResult": "Fatal",
  "Interrupted": false,
  "Version": "2.1.2.0 (2.1.2.0_beta_2025-08-20)",
  "EndTime": "2025-08-31T02:09:05.6968171Z",
  "BeginTime": "2025-08-31T02:00:00.124997Z",
  "Duration": "00:09:05.5718201",
  "MessagesActualLength": 111,
  "WarningsActualLength": 8,
  "ErrorsActualLength": 2,
  "Messages": [
    "2025-08-30 22:00:00 -04 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Backup has started",
    "2025-08-30 22:00:55 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started:  ()",
    "2025-08-30 22:02:58 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Completed:  (27.79 KiB)",
    "2025-08-30 22:02:58 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: QuotaInfo - Started:  ()",
    "2025-08-30 22:04:02 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-b80a2d17251cd4adba22e61e2f5563eaa.dblock.zip.aes (49.62 MiB)",
    "2025-08-30 22:04:07 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-b6e341793296a4ffbb81b0e5683432d6b.dblock.zip.aes (49.81 MiB)",
    "2025-08-30 22:04:58 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-b317bddc129e3417da6eb60f82803387f.dblock.zip.aes (49.81 MiB)",
    "2025-08-30 22:04:58 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-b897b50695eff41e9965aaf95c41134f5.dblock.zip.aes (49.73 MiB)",
    "2025-08-30 22:05:29 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Retrying: duplicati-b317bddc129e3417da6eb60f82803387f.dblock.zip.aes ()",
    "2025-08-30 22:05:30 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Retrying: duplicati-b897b50695eff41e9965aaf95c41134f5.dblock.zip.aes ()",
    "2025-08-30 22:05:31 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Retrying: duplicati-b6e341793296a4ffbb81b0e5683432d6b.dblock.zip.aes ()",
    "2025-08-30 22:05:38 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-b80a2d17251cd4adba22e61e2f5563eaa.dblock.zip.aes (49.62 MiB)",
    "2025-08-30 22:05:38 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-i212cedfb34b54450bbd026947fc22d92.dindex.zip.aes (13.09 KiB)",
    "2025-08-30 22:05:39 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Rename: duplicati-b317bddc129e3417da6eb60f82803387f.dblock.zip.aes (49.81 MiB)",
    "2025-08-30 22:05:39 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Rename: duplicati-bc9ec39a0f26c40d5bca3daf05dd4a769.dblock.zip.aes (49.81 MiB)",
    "2025-08-30 22:05:39 -04 - [Information-Duplicati.Library.Main.Backend.PutOperation-RenameRemoteTargetFile]: Renaming \"duplicati-b317bddc129e3417da6eb60f82803387f.dblock.zip.aes\" to \"duplicati-bc9ec39a0f26c40d5bca3daf05dd4a769.dblock.zip.aes\"",
    "2025-08-30 22:05:39 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-bc9ec39a0f26c40d5bca3daf05dd4a769.dblock.zip.aes (49.81 MiB)",
    "2025-08-30 22:05:39 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-i212cedfb34b54450bbd026947fc22d92.dindex.zip.aes (13.09 KiB)",
    "2025-08-30 22:05:39 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-b3d68eb26cf0b4978982d2c059c9c5f60.dblock.zip.aes (49.55 MiB)",
    "2025-08-30 22:05:40 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Rename: duplicati-b897b50695eff41e9965aaf95c41134f5.dblock.zip.aes (49.73 MiB)"
  ],
  "Warnings": [
    "2025-08-30 22:09:01 -04 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerHandlerFailure]: Error in handler: The stream was already consumed. It cannot be read again.\r\nInvalidOperationException: The stream was already consumed. It cannot be read again.",
    "2025-08-30 22:09:01 -04 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeWhileActive]: Terminating 3 active uploads",
    "2025-08-30 22:09:01 -04 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeError]: Error in active upload: Cancelled",
    "2025-08-30 22:09:01 -04 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeError]: Terminating, but 2 active upload(s) are still active",
    "2025-08-30 22:09:01 -04 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeError]: Error in active upload: Cancelled",
    "2025-08-30 22:09:01 -04 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeError]: Terminating, but 1 active upload(s) are still active",
    "2025-08-30 22:09:01 -04 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeError]: Error in active upload: Cancelled",
    "2025-08-30 22:09:05 -04 - [Warning-Duplicati.Library.Main.Backend.BackendManager-BackendManagerShutdown]: Backend manager queue runner crashed\r\nAggregateException: One or more errors occurred. (The stream was already consumed. It cannot be read again.)"
  ],
  "Errors": [
    "2025-08-30 22:09:05 -04 - [Error-Duplicati.Library.Main.Operation.BackupHandler-FatalError]: Fatal error\r\nInvalidOperationException: The stream was already consumed. It cannot be read again.",
    "2025-08-30 22:09:05 -04 - [Error-Duplicati.Library.Main.Controller-FailedOperation]: The operation Backup has failed\r\nInvalidOperationException: The stream was already consumed. It cannot be read again."
  ],
  "BackendStatistics": {
    "RemoteCalls": 27,
    "BytesUploaded": 52047594,
    "BytesDownloaded": 0,
    "FilesUploaded": 2,
    "FilesDownloaded": 0,
    "FilesDeleted": 0,
    "FoldersCreated": 0,
    "RetryAttempts": 20,
    "UnknownFileSize": 0,
    "UnknownFileCount": 0,
    "KnownFileCount": 28453,
    "KnownFileSize": 808978545697,
    "KnownFilesets": 31,
    "LastBackupDate": "2025-08-29T22:00:39-04:00",
    "BackupListCount": 31,
    "TotalQuotaSpace": 1099511627776,
    "FreeQuotaSpace": 89105898065,
    "AssignedQuotaSpace": -1,
    "ReportedQuotaError": false,
    "ReportedQuotaWarning": false,
    "MainOperation": "Backup",
    "ParsedResult": "Success",
    "Interrupted": false,
    "Version": "2.1.2.0 (2.1.2.0_beta_2025-08-20)",
    "EndTime": "0001-01-01T00:00:00",
    "BeginTime": "2025-08-31T02:00:00.1249998Z",
    "Duration": "00:00:00",
    "MessagesActualLength": 0,
    "WarningsActualLength": 0,
    "ErrorsActualLength": 0,
    "Messages": null,
    "Warnings": null,
    "Errors": null
  }
} 

Please let me know if I can gather any more information.

Here, I also dug this out of my Zabbix server … I had forgotten I store the full JSON result reports in there as well.

This seems more like the stack trace you were looking for.

{"Data":{"DeletedFiles":0,"DeletedFolders":0,"ModifiedFiles":0,"ExaminedFiles":5,"OpenedFiles":0,"AddedFiles":0,"SizeOfModifiedFiles":0,"SizeOfAddedFiles":0,"SizeOfExaminedFiles":430214091264,"SizeOfOpenedFiles":0,"NotProcessedFiles":0,"AddedFolders":0,"TooLargeFiles":0,"FilesWithError":0,"TimestampChangedFiles":0,"ModifiedFolders":0,"ModifiedSymlinks":0,"AddedSymlinks":0,"DeletedSymlinks":0,"PartialBackup":false,"Dryrun":false,"MainOperation":"Backup","CompactResults":null,"VacuumResults":null,"DeleteResults":null,"RepairResults":null,"TestResults":null,"ParsedResult":"Fatal","Interrupted":false,"Version":"2.1.2.0 (2.1.2.0_beta_2025-08-20)","EndTime":"2025-08-30T02:11:51.6985776Z","BeginTime":"2025-08-30T02:00:00.1000445Z","Duration":"00:11:51.5985331","MessagesActualLength":100,"WarningsActualLength":8,"ErrorsActualLength":2,"BackendStatistics":{"RemoteCalls":26,"BytesUploaded":104122100,"BytesDownloaded":0,"FilesUploaded":4,"FilesDownloaded":0,"FilesDeleted":0,"FoldersCreated":0,"RetryAttempts":17,"UnknownFileSize":0,"UnknownFileCount":0,"KnownFileCount":28442,"KnownFileSize":808718276770,"KnownFilesets":30,"LastBackupDate":"2025-08-28T22:00:00-04:00","BackupListCount":30,"TotalQuotaSpace":1099511627776,"FreeQuotaSpace":85388367678,"AssignedQuotaSpace":-1,"ReportedQuotaError":false,"ReportedQuotaWarning":false,"MainOperation":"Backup","ParsedResult":"Success","Interrupted":false,"Version":"2.1.2.0 (2.1.2.0_beta_2025-08-20)","EndTime":"0001-01-01T00:00:00","BeginTime":"2025-08-30T02:00:00.1000524Z","Duration":"00:00:00","MessagesActualLength":0,"WarningsActualLength":0,"ErrorsActualLength":0}},"Extra":null,"LogLines":["2025-08-29 22:10:34 -04 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerHandlerFailure]: Error in handler: The stream was already consumed. It cannot be read again.
System.InvalidOperationException: The stream was already consumed. It cannot be read again.
   at Duplicati.Library.Main.Backend.BackendManager.Handler.ExecuteWithRetry(PendingOperationBase op, CancellationToken cancellationToken)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.ReclaimCompletedTasks(List`1 tasks)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.EnsureAtMostNActiveTasks(Int32 n, List`1 tasks)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.EnsureAtMostNActiveTasks(Int32 uploads, Int32 downloads)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.Run(IReadChannel`1 requestChannel)","2025-08-29 22:10:34 -04 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeWhileActive]: Terminating 3 active uploads","2025-08-29 22:10:34 -04 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeError]: Error in active upload: Cancelled","2025-08-29 22:10:34 -04 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeError]: Terminating, but 2 active upload(s) are still active","2025-08-29 22:10:34 -04 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeError]: Error in active upload: Cancelled","2025-08-29 22:10:34 -04 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeError]: Terminating, but 1 active upload(s) are still active","2025-08-29 22:10:34 -04 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeError]: Error in active upload: Cancelled","2025-08-29 22:11:51 -04 - [Error-Duplicati.Library.Main.Operation.BackupHandler-FatalError]: Fatal error
System.InvalidOperationException: The stream was already consumed. It cannot be read again.
   at Duplicati.Library.Main.Backend.BackendManager.Handler.ExecuteWithRetry(PendingOperationBase op, CancellationToken cancellationToken)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.ReclaimCompletedTasks(List`1 tasks)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.EnsureAtMostNActiveTasks(Int32 n, List`1 tasks)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.EnsureAtMostNActiveTasks(Int32 uploads, Int32 downloads)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.Run(IReadChannel`1 requestChannel)
   at Duplicati.Library.Main.Backend.BackendManager.PutAsync(VolumeWriterBase volume, IndexVolumeWriter indexVolume, Action indexVolumeFinished, Boolean waitForComplete, Func`1 onDbUpdate, CancellationToken cancelToken)
   at Duplicati.Library.Main.Operation.Backup.DataBlockProcessor.<>c__DisplayClass0_0.<<Run>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at Duplicati.Library.Main.Operation.Backup.DataBlockProcessor.<>c__DisplayClass0_0.<<Run>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at CoCoL.AutomationExtensions.RunTask[T](T channels, Func`2 method, Boolean catchRetiredExceptions)
   at Duplicati.Library.Main.Operation.BackupHandler.RunMainOperation(Channels channels, ISourceProvider source, UsnJournalService journalService, BackupDatabase database, IBackendManager backendManager, BackupStatsCollector stats, Options options, IFilter filter, BackupResults result, ITaskReader taskreader, Int64 filesetid, Int64 lastfilesetid)
   at Duplicati.Library.Main.Operation.BackupHandler.RunAsync(String[] sources, IBackendManager backendManager, IFilter filter)","2025-08-29 22:11:51 -04 - [Warning-Duplicati.Library.Main.Backend.BackendManager-BackendManagerShutdown]: Backend manager queue runner crashed
System.AggregateException: One or more errors occurred. (The stream was already consumed. It cannot be read again.)
 ---> System.InvalidOperationException: The stream was already consumed. It cannot be read again.
   at Duplicati.Library.Main.Backend.BackendManager.Handler.ExecuteWithRetry(PendingOperationBase op, CancellationToken cancellationToken)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.ReclaimCompletedTasks(List`1 tasks)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.EnsureAtMostNActiveTasks(Int32 n, List`1 tasks)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.EnsureAtMostNActiveTasks(Int32 uploads, Int32 downloads)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.Run(IReadChannel`1 requestChannel)
   at Duplicati.Library.Main.Backend.BackendManager.PutAsync(VolumeWriterBase volume, IndexVolumeWriter indexVolume, Action indexVolumeFinished, Boolean waitForComplete, Func`1 onDbUpdate, CancellationToken cancelToken)
   at Duplicati.Library.Main.Operation.Backup.DataBlockProcessor.<>c__DisplayClass0_0.<<Run>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at Duplicati.Library.Main.Operation.Backup.DataBlockProcessor.<>c__DisplayClass0_0.<<Run>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at CoCoL.AutomationExtensions.RunTask[T](T channels, Func`2 method, Boolean catchRetiredExceptions)
   at Duplicati.Library.Main.Operation.BackupHandler.RunMainOperation(Channels channels, ISourceProvider source, UsnJournalService journalService, BackupDatabase database, IBackendManager backendManager, BackupStatsCollector stats, Options options, IFilter filter, BackupResults result, ITaskReader taskreader, Int64 filesetid, Int64 lastfilesetid)
   at Duplicati.Library.Main.Operation.BackupHandler.RunAsync(String[] sources, IBackendManager backendManager, IFilter filter)
   at Duplicati.Library.Main.Controller.<>c__DisplayClass22_0.<<Backup>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at Duplicati.Library.Utility.Utility.Await(Task task)
   at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Func`3 method)
   --- End of inner exception stack trace ---","2025-08-29 22:11:51 -04 - [Error-Duplicati.Library.Main.Controller-FailedOperation]: The operation Backup has failed
System.InvalidOperationException: The stream was already consumed. It cannot be read again.
   at Duplicati.Library.Main.Backend.BackendManager.Handler.ExecuteWithRetry(PendingOperationBase op, CancellationToken cancellationToken)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.ReclaimCompletedTasks(List`1 tasks)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.EnsureAtMostNActiveTasks(Int32 n, List`1 tasks)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.EnsureAtMostNActiveTasks(Int32 uploads, Int32 downloads)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.Run(IReadChannel`1 requestChannel)
   at Duplicati.Library.Main.Backend.BackendManager.PutAsync(VolumeWriterBase volume, IndexVolumeWriter indexVolume, Action indexVolumeFinished, Boolean waitForComplete, Func`1 onDbUpdate, CancellationToken cancelToken)
   at Duplicati.Library.Main.Operation.Backup.DataBlockProcessor.<>c__DisplayClass0_0.<<Run>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at Duplicati.Library.Main.Operation.Backup.DataBlockProcessor.<>c__DisplayClass0_0.<<Run>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at CoCoL.AutomationExtensions.RunTask[T](T channels, Func`2 method, Boolean catchRetiredExceptions)
   at Duplicati.Library.Main.Operation.BackupHandler.RunMainOperation(Channels channels, ISourceProvider source, UsnJournalService journalService, BackupDatabase database, IBackendManager backendManager, BackupStatsCollector stats, Options options, IFilter filter, BackupResults result, ITaskReader taskreader, Int64 filesetid, Int64 lastfilesetid)
   at Duplicati.Library.Main.Operation.BackupHandler.RunAsync(String[] sources, IBackendManager backendManager, IFilter filter)
   at Duplicati.Library.Main.Controller.<>c__DisplayClass22_0.<<Backup>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at Duplicati.Library.Utility.Utility.Await(Task task)
   at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Func`3 method)"],"Exception":"System.InvalidOperationException: The stream was already consumed. It cannot be read again.
   at Duplicati.Library.Main.Backend.BackendManager.Handler.ExecuteWithRetry(PendingOperationBase op, CancellationToken cancellationToken)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.ReclaimCompletedTasks(List`1 tasks)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.EnsureAtMostNActiveTasks(Int32 n, List`1 tasks)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.EnsureAtMostNActiveTasks(Int32 uploads, Int32 downloads)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.Run(IReadChannel`1 requestChannel)
   at Duplicati.Library.Main.Backend.BackendManager.PutAsync(VolumeWriterBase volume, IndexVolumeWriter indexVolume, Action indexVolumeFinished, Boolean waitForComplete, Func`1 onDbUpdate, CancellationToken cancelToken)
   at Duplicati.Library.Main.Operation.Backup.DataBlockProcessor.<>c__DisplayClass0_0.<<Run>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at Duplicati.Library.Main.Operation.Backup.DataBlockProcessor.<>c__DisplayClass0_0.<<Run>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at CoCoL.AutomationExtensions.RunTask[T](T channels, Func`2 method, Boolean catchRetiredExceptions)
   at Duplicati.Library.Main.Operation.BackupHandler.RunMainOperation(Channels channels, ISourceProvider source, UsnJournalService journalService, BackupDatabase database, IBackendManager backendManager, BackupStatsCollector stats, Options options, IFilter filter, BackupResults result, ITaskReader taskreader, Int64 filesetid, Int64 lastfilesetid)
   at Duplicati.Library.Main.Operation.BackupHandler.RunAsync(String[] sources, IBackendManager backendManager, IFilter filter)
   at Duplicati.Library.Main.Controller.<>c__DisplayClass22_0.<<Backup>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at Duplicati.Library.Utility.Utility.Await(Task task)
   at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Func`3 method)"}

Edit: I apologize for the formatting; I just realized how terrible it is.

First attempt OLD UI:

System.InvalidOperationException: The stream was already consumed. It cannot be read again.
   at System.Net.Http.HttpConnectionResponseContent.<SerializeToStreamAsync>g__Impl|6_0(Stream stream, CancellationToken cancellationToken)
   at System.Net.Http.HttpContent.LoadIntoBufferAsyncCore(Task serializeToStreamTask, MemoryStream tempBuffer)
   at System.Net.Http.HttpContent.WaitAndReturnAsync[TState,TResult](Task waitTask, TState state, Func`2 returnFunc)
   at Duplicati.Library.Utility.Utility.Await[T](Task`1 task)
   at Duplicati.Library.Backend.MicrosoftGraph.MicrosoftGraphException.ResponseToString(HttpResponseMessage response)
   at Duplicati.Library.Backend.MicrosoftGraph.MicrosoftGraphException.BuildFullMessage(String message, HttpResponseMessage response)
   at Duplicati.Library.Backend.MicrosoftGraph.MicrosoftGraphException..ctor(String message, HttpResponseMessage response, Exception innerException)
   at Duplicati.Library.Backend.MicrosoftGraph.MicrosoftGraphException..ctor(String message, HttpResponseMessage response)
   at Duplicati.Library.Backend.MicrosoftGraphBackend.ParseResponseAsync[T](HttpResponseMessage response, CancellationToken cancelToken)
   at Duplicati.Library.Backend.MicrosoftGraphBackend.ThrowUploadSessionException(UploadSession uploadSession, HttpResponseMessage createSessionResponse, Int32 fragment, Int32 fragmentCount, Exception ex, CancellationToken cancelToken)
   at Duplicati.Library.Backend.MicrosoftGraphBackend.PutAsync(String remotename, Stream stream, CancellationToken cancelToken)
   at Duplicati.Library.Main.Backend.BackendManager.PutOperation.PerformUpload(IBackend backend, String hash, Int64 size, CancellationToken cancelToken)
   at Duplicati.Library.Main.Backend.BackendManager.PutOperation.ExecuteAsync(IBackend backend, CancellationToken cancelToken)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.Execute[TResult](PendingOperation`1 op, CancellationToken cancellationToken)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.Execute(PendingOperationBase op, CancellationToken cancellationToken)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.ExecuteWithRetry(PendingOperationBase op, CancellationToken cancellationToken)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.ExecuteWithRetry(PendingOperationBase op, CancellationToken cancellationToken)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.ReclaimCompletedTasks(List`1 tasks)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.EnsureAtMostNActiveTasks(Int32 n, List`1 tasks)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.EnsureAtMostNActiveTasks(Int32 uploads, Int32 downloads)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.Run(IReadChannel`1 requestChannel)
   at Duplicati.Library.Main.Backend.BackendManager.PutAsync(VolumeWriterBase volume, IndexVolumeWriter indexVolume, Func`1 indexVolumeFinished, Boolean waitForComplete, Func`1 onDbUpdate, CancellationToken cancelToken)
   at Duplicati.Library.Main.Operation.Backup.DataBlockProcessor.<>c__DisplayClass0_0.<<Run>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at Duplicati.Library.Main.Operation.Backup.DataBlockProcessor.<>c__DisplayClass0_0.<<Run>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at CoCoL.AutomationExtensions.RunTask[T](T channels, Func`2 method, Boolean catchRetiredExceptions)
   at Duplicati.Library.Main.Operation.BackupHandler.RunMainOperation(Channels channels, ISourceProvider source, UsnJournalService journalService, BackupDatabase database, IBackendManager backendManager, BackupStatsCollector stats, Options options, IFilter filter, BackupResults result, ITaskReader taskreader, Int64 filesetid, Int64 lastfilesetid)
   at Duplicati.Library.Main.Operation.BackupHandler.RunAsync(String[] sources, IBackendManager backendManager, IFilter filter)
   at Duplicati.Library.Main.Operation.BackupHandler.RunAsync(String[] sources, IBackendManager backendManager, IFilter filter)
   at Duplicati.Library.Main.Controller.<>c__DisplayClass22_0.<<Backup>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at Duplicati.Library.Utility.Utility.Await(Task task)
   at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Func`3 method)
   at Duplicati.Library.Main.Controller.Backup(String[] inputsources, IFilter filter)
   at Duplicati.Server.Runner.RunInternal(Connection databaseConnection, EventPollNotify eventPollNotify, INotificationUpdateService notificationUpdateService, IProgressStateProviderService progressStateProviderService, IApplicationSettings applicationSettings, IRunnerData data, Boolean fromQueue)
   at Duplicati.Server.Runner.Run(Connection databaseConnection, EventPollNotify eventPollNotify, INotificationUpdateService notificationUpdateService, IProgressStateProviderService progressStateProviderService, IApplicationSettings applicationSettings, IQueuedTask data, Boolean fromQueue)
   at Duplicati.WebserverCore.Services.QueueRunnerService.RunTask(IQueuedTask task)
System.InvalidOperationException: The stream was already consumed. It cannot be read again.
   at System.Net.Http.HttpConnectionResponseContent.<SerializeToStreamAsync>g__Impl|6_0(Stream stream, CancellationToken cancellationToken)
   at System.Net.Http.HttpContent.LoadIntoBufferAsyncCore(Task serializeToStreamTask, MemoryStream tempBuffer)
   at System.Net.Http.HttpContent.WaitAndReturnAsync[TState,TResult](Task waitTask, TState state, Func`2 returnFunc)
   at Duplicati.Library.Utility.Utility.Await[T](Task`1 task)
   at Duplicati.Library.Backend.MicrosoftGraph.MicrosoftGraphException.ResponseToString(HttpResponseMessage response)
   at Duplicati.Library.Backend.MicrosoftGraph.MicrosoftGraphException.BuildFullMessage(String message, HttpResponseMessage response)
   at Duplicati.Library.Backend.MicrosoftGraph.MicrosoftGraphException..ctor(String message, HttpResponseMessage response, Exception innerException)
   at Duplicati.Library.Backend.MicrosoftGraph.MicrosoftGraphException..ctor(String message, HttpResponseMessage response)
   at Duplicati.Library.Backend.MicrosoftGraphBackend.ParseResponseAsync[T](HttpResponseMessage response, CancellationToken cancelToken)
   at Duplicati.Library.Backend.MicrosoftGraphBackend.ThrowUploadSessionException(UploadSession uploadSession, HttpResponseMessage createSessionResponse, Int32 fragment, Int32 fragmentCount, Exception ex, CancellationToken cancelToken)
   at Duplicati.Library.Backend.MicrosoftGraphBackend.PutAsync(String remotename, Stream stream, CancellationToken cancelToken)
   at Duplicati.Library.Main.Backend.BackendManager.PutOperation.PerformUpload(IBackend backend, String hash, Int64 size, CancellationToken cancelToken)
   at Duplicati.Library.Main.Backend.BackendManager.PutOperation.ExecuteAsync(IBackend backend, CancellationToken cancelToken)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.Execute[TResult](PendingOperation`1 op, CancellationToken cancellationToken)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.Execute(PendingOperationBase op, CancellationToken cancellationToken)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.ExecuteWithRetry(PendingOperationBase op, CancellationToken cancellationToken)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.ExecuteWithRetry(PendingOperationBase op, CancellationToken cancellationToken)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.ReclaimCompletedTasks(List`1 tasks)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.EnsureAtMostNActiveTasks(Int32 n, List`1 tasks)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.EnsureAtMostNActiveTasks(Int32 uploads, Int32 downloads)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.Run(IReadChannel`1 requestChannel)
   at Duplicati.Library.Main.Backend.BackendManager.PutAsync(VolumeWriterBase volume, IndexVolumeWriter indexVolume, Func`1 indexVolumeFinished, Boolean waitForComplete, Func`1 onDbUpdate, CancellationToken cancelToken)
   at Duplicati.Library.Main.Operation.Backup.DataBlockProcessor.<>c__DisplayClass0_0.<<Run>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at Duplicati.Library.Main.Operation.Backup.DataBlockProcessor.<>c__DisplayClass0_0.<<Run>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at CoCoL.AutomationExtensions.RunTask[T](T channels, Func`2 method, Boolean catchRetiredExceptions)
   at Duplicati.Library.Main.Operation.BackupHandler.RunMainOperation(Channels channels, ISourceProvider source, UsnJournalService journalService, BackupDatabase database, IBackendManager backendManager, BackupStatsCollector stats, Options options, IFilter filter, BackupResults result, ITaskReader taskreader, Int64 filesetid, Int64 lastfilesetid)
   at Duplicati.Library.Main.Operation.BackupHandler.RunAsync(String[] sources, IBackendManager backendManager, IFilter filter)
   at Duplicati.Library.Main.Operation.BackupHandler.RunAsync(String[] sources, IBackendManager backendManager, IFilter filter)
   at Duplicati.Library.Main.Controller.<>c__DisplayClass22_0.<<Backup>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at Duplicati.Library.Utility.Utility.Await(Task task)
   at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Func`3 method)
   at Duplicati.Library.Main.Controller.Backup(String[] inputsources, IFilter filter)
   at Duplicati.Server.Runner.RunInternal(Connection databaseConnection, EventPollNotify eventPollNotify, INotificationUpdateService notificationUpdateService, IProgressStateProviderService progressStateProviderService, IApplicationSettings applicationSettings, IRunnerData data, Boolean fromQueue)

Second attempt OLD UI:

System.InvalidOperationException: The stream was already consumed. It cannot be read again.
   at System.Net.Http.HttpConnectionResponseContent.<SerializeToStreamAsync>g__Impl|6_0(Stream stream, CancellationToken cancellationToken)
   at System.Net.Http.HttpContent.LoadIntoBufferAsyncCore(Task serializeToStreamTask, MemoryStream tempBuffer)
   at System.Net.Http.HttpContent.WaitAndReturnAsync[TState,TResult](Task waitTask, TState state, Func`2 returnFunc)
   at Duplicati.Library.Utility.Utility.Await[T](Task`1 task)
   at Duplicati.Library.Backend.MicrosoftGraph.MicrosoftGraphException.ResponseToString(HttpResponseMessage response)
   at Duplicati.Library.Backend.MicrosoftGraph.MicrosoftGraphException.BuildFullMessage(String message, HttpResponseMessage response)
   at Duplicati.Library.Backend.MicrosoftGraph.MicrosoftGraphException..ctor(String message, HttpResponseMessage response, Exception innerException)
   at Duplicati.Library.Backend.MicrosoftGraph.MicrosoftGraphException..ctor(String message, HttpResponseMessage response)
   at Duplicati.Library.Backend.MicrosoftGraphBackend.ParseResponseAsync[T](HttpResponseMessage response, CancellationToken cancelToken)
   at Duplicati.Library.Backend.MicrosoftGraphBackend.ThrowUploadSessionException(UploadSession uploadSession, HttpResponseMessage createSessionResponse, Int32 fragment, Int32 fragmentCount, Exception ex, CancellationToken cancelToken)
   at Duplicati.Library.Backend.MicrosoftGraphBackend.PutAsync(String remotename, Stream stream, CancellationToken cancelToken)
   at Duplicati.Library.Main.Backend.BackendManager.PutOperation.PerformUpload(IBackend backend, String hash, Int64 size, CancellationToken cancelToken)
   at Duplicati.Library.Main.Backend.BackendManager.PutOperation.ExecuteAsync(IBackend backend, CancellationToken cancelToken)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.Execute[TResult](PendingOperation`1 op, CancellationToken cancellationToken)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.Execute(PendingOperationBase op, CancellationToken cancellationToken)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.ExecuteWithRetry(PendingOperationBase op, CancellationToken cancellationToken)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.ExecuteWithRetry(PendingOperationBase op, CancellationToken cancellationToken)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.ReclaimCompletedTasks(List`1 tasks)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.EnsureAtMostNActiveTasks(Int32 n, List`1 tasks)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.EnsureAtMostNActiveTasks(Int32 uploads, Int32 downloads)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.Run(IReadChannel`1 requestChannel)
   at Duplicati.Library.Main.Backend.BackendManager.PutAsync(VolumeWriterBase volume, IndexVolumeWriter indexVolume, Func`1 indexVolumeFinished, Boolean waitForComplete, Func`1 onDbUpdate, CancellationToken cancelToken)
   at Duplicati.Library.Main.Operation.Backup.DataBlockProcessor.<>c__DisplayClass0_0.<<Run>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at Duplicati.Library.Main.Operation.Backup.DataBlockProcessor.<>c__DisplayClass0_0.<<Run>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at CoCoL.AutomationExtensions.RunTask[T](T channels, Func`2 method, Boolean catchRetiredExceptions)
   at Duplicati.Library.Main.Operation.BackupHandler.RunMainOperation(Channels channels, ISourceProvider source, UsnJournalService journalService, BackupDatabase database, IBackendManager backendManager, BackupStatsCollector stats, Options options, IFilter filter, BackupResults result, ITaskReader taskreader, Int64 filesetid, Int64 lastfilesetid)
   at Duplicati.Library.Main.Operation.BackupHandler.RunAsync(String[] sources, IBackendManager backendManager, IFilter filter)
   at Duplicati.Library.Main.Operation.BackupHandler.RunAsync(String[] sources, IBackendManager backendManager, IFilter filter)
   at Duplicati.Library.Main.Controller.<>c__DisplayClass22_0.<<Backup>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at Duplicati.Library.Utility.Utility.Await(Task task)
   at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Func`3 method)
   at Duplicati.Library.Main.Controller.Backup(String[] inputsources, IFilter filter)
   at Duplicati.Server.Runner.RunInternal(Connection databaseConnection, EventPollNotify eventPollNotify, INotificationUpdateService notificationUpdateService, IProgressStateProviderService progressStateProviderService, IApplicationSettings applicationSettings, IRunnerData data, Boolean fromQueue)
   at Duplicati.Server.Runner.Run(Connection databaseConnection, EventPollNotify eventPollNotify, INotificationUpdateService notificationUpdateService, IProgressStateProviderService progressStateProviderService, IApplicationSettings applicationSettings, IQueuedTask data, Boolean fromQueue)
   at Duplicati.WebserverCore.Services.QueueRunnerService.RunTask(IQueuedTask task)
System.InvalidOperationException: The stream was already consumed. It cannot be read again.
   at System.Net.Http.HttpConnectionResponseContent.<SerializeToStreamAsync>g__Impl|6_0(Stream stream, CancellationToken cancellationToken)
   at System.Net.Http.HttpContent.LoadIntoBufferAsyncCore(Task serializeToStreamTask, MemoryStream tempBuffer)
   at System.Net.Http.HttpContent.WaitAndReturnAsync[TState,TResult](Task waitTask, TState state, Func`2 returnFunc)
   at Duplicati.Library.Utility.Utility.Await[T](Task`1 task)
   at Duplicati.Library.Backend.MicrosoftGraph.MicrosoftGraphException.ResponseToString(HttpResponseMessage response)
   at Duplicati.Library.Backend.MicrosoftGraph.MicrosoftGraphException.BuildFullMessage(String message, HttpResponseMessage response)
   at Duplicati.Library.Backend.MicrosoftGraph.MicrosoftGraphException..ctor(String message, HttpResponseMessage response, Exception innerException)
   at Duplicati.Library.Backend.MicrosoftGraph.MicrosoftGraphException..ctor(String message, HttpResponseMessage response)
   at Duplicati.Library.Backend.MicrosoftGraphBackend.ParseResponseAsync[T](HttpResponseMessage response, CancellationToken cancelToken)
   at Duplicati.Library.Backend.MicrosoftGraphBackend.ThrowUploadSessionException(UploadSession uploadSession, HttpResponseMessage createSessionResponse, Int32 fragment, Int32 fragmentCount, Exception ex, CancellationToken cancelToken)
   at Duplicati.Library.Backend.MicrosoftGraphBackend.PutAsync(String remotename, Stream stream, CancellationToken cancelToken)
   at Duplicati.Library.Main.Backend.BackendManager.PutOperation.PerformUpload(IBackend backend, String hash, Int64 size, CancellationToken cancelToken)
   at Duplicati.Library.Main.Backend.BackendManager.PutOperation.ExecuteAsync(IBackend backend, CancellationToken cancelToken)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.Execute[TResult](PendingOperation`1 op, CancellationToken cancellationToken)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.Execute(PendingOperationBase op, CancellationToken cancellationToken)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.ExecuteWithRetry(PendingOperationBase op, CancellationToken cancellationToken)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.ExecuteWithRetry(PendingOperationBase op, CancellationToken cancellationToken)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.ReclaimCompletedTasks(List`1 tasks)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.EnsureAtMostNActiveTasks(Int32 n, List`1 tasks)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.EnsureAtMostNActiveTasks(Int32 uploads, Int32 downloads)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.Run(IReadChannel`1 requestChannel)
   at Duplicati.Library.Main.Backend.BackendManager.PutAsync(VolumeWriterBase volume, IndexVolumeWriter indexVolume, Func`1 indexVolumeFinished, Boolean waitForComplete, Func`1 onDbUpdate, CancellationToken cancelToken)
   at Duplicati.Library.Main.Operation.Backup.DataBlockProcessor.<>c__DisplayClass0_0.<<Run>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at Duplicati.Library.Main.Operation.Backup.DataBlockProcessor.<>c__DisplayClass0_0.<<Run>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at CoCoL.AutomationExtensions.RunTask[T](T channels, Func`2 method, Boolean catchRetiredExceptions)
   at Duplicati.Library.Main.Operation.BackupHandler.RunMainOperation(Channels channels, ISourceProvider source, UsnJournalService journalService, BackupDatabase database, IBackendManager backendManager, BackupStatsCollector stats, Options options, IFilter filter, BackupResults result, ITaskReader taskreader, Int64 filesetid, Int64 lastfilesetid)
   at Duplicati.Library.Main.Operation.BackupHandler.RunAsync(String[] sources, IBackendManager backendManager, IFilter filter)
   at Duplicati.Library.Main.Operation.BackupHandler.RunAsync(String[] sources, IBackendManager backendManager, IFilter filter)
   at Duplicati.Library.Main.Controller.<>c__DisplayClass22_0.<<Backup>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at Duplicati.Library.Utility.Utility.Await(Task task)
   at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Func`3 method)
   at Duplicati.Library.Main.Controller.Backup(String[] inputsources, IFilter filter)
   at Duplicati.Server.Runner.RunInternal(Connection databaseConnection, EventPollNotify eventPollNotify, INotificationUpdateService notificationUpdateService, IProgressStateProviderService progressStateProviderService, IApplicationSettings applicationSettings, IRunnerData data, Boolean fromQueue)

Third attempt NEW UI:

System.InvalidOperationException: The stream was already consumed. It cannot be read again.
   at System.Net.Http.HttpConnectionResponseContent.<SerializeToStreamAsync>g__Impl|6_0(Stream stream, CancellationToken cancellationToken)
   at System.Net.Http.HttpContent.LoadIntoBufferAsyncCore(Task serializeToStreamTask, MemoryStream tempBuffer)
   at System.Net.Http.HttpContent.WaitAndReturnAsync[TState,TResult](Task waitTask, TState state, Func`2 returnFunc)
   at Duplicati.Library.Utility.Utility.Await[T](Task`1 task)
   at Duplicati.Library.Backend.MicrosoftGraph.MicrosoftGraphException.ResponseToString(HttpResponseMessage response)
   at Duplicati.Library.Backend.MicrosoftGraph.MicrosoftGraphException.BuildFullMessage(String message, HttpResponseMessage response)
   at Duplicati.Library.Backend.MicrosoftGraph.MicrosoftGraphException..ctor(String message, HttpResponseMessage response, Exception innerException)
   at Duplicati.Library.Backend.MicrosoftGraph.MicrosoftGraphException..ctor(String message, HttpResponseMessage response)
   at Duplicati.Library.Backend.MicrosoftGraphBackend.ParseResponseAsync[T](HttpResponseMessage response, CancellationToken cancelToken)
   at Duplicati.Library.Backend.MicrosoftGraphBackend.ThrowUploadSessionException(UploadSession uploadSession, HttpResponseMessage createSessionResponse, Int32 fragment, Int32 fragmentCount, Exception ex, CancellationToken cancelToken)
   at Duplicati.Library.Backend.MicrosoftGraphBackend.PutAsync(String remotename, Stream stream, CancellationToken cancelToken)
   at Duplicati.Library.Main.Backend.BackendManager.PutOperation.PerformUpload(IBackend backend, String hash, Int64 size, CancellationToken cancelToken)
   at Duplicati.Library.Main.Backend.BackendManager.PutOperation.ExecuteAsync(IBackend backend, CancellationToken cancelToken)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.Execute[TResult](PendingOperation`1 op, CancellationToken cancellationToken)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.Execute(PendingOperationBase op, CancellationToken cancellationToken)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.ExecuteWithRetry(PendingOperationBase op, CancellationToken cancellationToken)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.ExecuteWithRetry(PendingOperationBase op, CancellationToken cancellationToken)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.ReclaimCompletedTasks(List`1 tasks)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.EnsureAtMostNActiveTasks(Int32 n, List`1 tasks)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.EnsureAtMostNActiveTasks(Int32 uploads, Int32 downloads)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.Run(IReadChannel`1 requestChannel)
   at Duplicati.Library.Main.Backend.BackendManager.PutAsync(VolumeWriterBase volume, IndexVolumeWriter indexVolume, Func`1 indexVolumeFinished, Boolean waitForComplete, Func`1 onDbUpdate, CancellationToken cancelToken)
   at Duplicati.Library.Main.Operation.Backup.DataBlockProcessor.<>c__DisplayClass0_0.<<Run>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at Duplicati.Library.Main.Operation.Backup.DataBlockProcessor.<>c__DisplayClass0_0.<<Run>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at CoCoL.AutomationExtensions.RunTask[T](T channels, Func`2 method, Boolean catchRetiredExceptions)
   at Duplicati.Library.Main.Operation.BackupHandler.RunMainOperation(Channels channels, ISourceProvider source, UsnJournalService journalService, BackupDatabase database, IBackendManager backendManager, BackupStatsCollector stats, Options options, IFilter filter, BackupResults result, ITaskReader taskreader, Int64 filesetid, Int64 lastfilesetid)
   at Duplicati.Library.Main.Operation.BackupHandler.RunAsync(String[] sources, IBackendManager backendManager, IFilter filter)
   at Duplicati.Library.Main.Operation.BackupHandler.RunAsync(String[] sources, IBackendManager backendManager, IFilter filter)
   at Duplicati.Library.Main.Controller.<>c__DisplayClass22_0.<<Backup>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at Duplicati.Library.Utility.Utility.Await(Task task)
   at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Func`3 method)
   at Duplicati.Library.Main.Controller.Backup(String[] inputsources, IFilter filter)
   at Duplicati.Server.Runner.RunInternal(Connection databaseConnection, EventPollNotify eventPollNotify, INotificationUpdateService notificationUpdateService, IProgressStateProviderService progressStateProviderService, IApplicationSettings applicationSettings, IRunnerData data, Boolean fromQueue)
   at Duplicati.Server.Runner.Run(Connection databaseConnection, EventPollNotify eventPollNotify, INotificationUpdateService notificationUpdateService, IProgressStateProviderService progressStateProviderService, IApplicationSettings applicationSettings, IQueuedTask data, Boolean fromQueue)
   at Duplicati.WebserverCore.Services.QueueRunnerService.RunTask(IQueuedTask task)

Timestamp: 14 Sep 2025, 17:29:46
Backup ID: 12
Message: Failed while executing Backup "BK Casa OneDrive" (id: 12)
System.InvalidOperationException: The stream was already consumed. It cannot be read again.

Another one of my OneDrive backups failed. I had not converted this one to Rclone because it had not failed yet under 2.1.2.0.

Here is the backtrace (once again, my apologies for the formatting).

{"Data":{"DeletedFiles":0,"DeletedFolders":0,"ModifiedFiles":1,"ExaminedFiles":4,"OpenedFiles":1,"AddedFiles":0,"SizeOfModifiedFiles":13962838016,"SizeOfAddedFiles":0,"SizeOfExaminedFiles":13967163904,"SizeOfOpenedFiles":13962838016,"NotProcessedFiles":0,"AddedFolders":0,"TooLargeFiles":0,"FilesWithError":0,"TimestampChangedFiles":0,"ModifiedFolders":0,"ModifiedSymlinks":0,"AddedSymlinks":0,"DeletedSymlinks":0,"PartialBackup":false,"Dryrun":false,"MainOperation":"Backup","CompactResults":null,"VacuumResults":null,"DeleteResults":null,"RepairResults":null,"TestResults":null,"ParsedResult":"Fatal","Interrupted":false,"Version":"2.1.2.0 (2.1.2.0_beta_2025-08-20)","EndTime":"2025-09-18T05:21:35.8666683Z","BeginTime":"2025-09-18T05:15:26.7805703Z","Duration":"00:06:09.0860980","MessagesActualLength":90,"WarningsActualLength":8,"ErrorsActualLength":2,"BackendStatistics":{"RemoteCalls":24,"BytesUploaded":105239876,"BytesDownloaded":0,"FilesUploaded":4,"FilesDownloaded":0,"FilesDeleted":0,"FoldersCreated":0,"RetryAttempts":16,"UnknownFileSize":0,"UnknownFileCount":0,"KnownFileCount":535,"KnownFileSize":12164310315,"KnownFilesets":31,"LastBackupDate":"2025-09-17T00:34:09-04:00","BackupListCount":31,"TotalQuotaSpace":1099511627776,"FreeQuotaSpace":35601472545,"AssignedQuotaSpace":-1,"ReportedQuotaError":false,"ReportedQuotaWarning":false,"MainOperation":"Backup","ParsedResult":"Success","Interrupted":false,"Version":"2.1.2.0 (2.1.2.0_beta_2025-08-20)","EndTime":"0001-01-01T00:00:00","BeginTime":"2025-09-18T05:15:26.7805732Z","Duration":"00:00:00","MessagesActualLength":0,"WarningsActualLength":0,"ErrorsActualLength":0}},"Extra":null,"LogLines":["2025-09-18 01:21:35 -04 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerHandlerFailure]: Error in handler: The stream was already consumed. It cannot be read again.
System.InvalidOperationException: The stream was already consumed. It cannot be read again.
   at Duplicati.Library.Main.Backend.BackendManager.Handler.ExecuteWithRetry(PendingOperationBase op, CancellationToken cancellationToken)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.ReclaimCompletedTasks(List`1 tasks)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.EnsureAtMostNActiveTasks(Int32 n, List`1 tasks)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.EnsureAtMostNActiveTasks(Int32 uploads, Int32 downloads)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.Run(IReadChannel`1 requestChannel)","2025-09-18 01:21:35 -04 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeWhileActive]: Terminating 3 active uploads","2025-09-18 01:21:35 -04 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeError]: Error in active upload: Cancelled","2025-09-18 01:21:35 -04 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeError]: Terminating, but 2 active upload(s) are still active","2025-09-18 01:21:35 -04 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeError]: Error in active upload: Cancelled","2025-09-18 01:21:35 -04 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeError]: Terminating, but 1 active upload(s) are still active","2025-09-18 01:21:35 -04 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeError]: Error in active upload: Cancelled","2025-09-18 01:21:35 -04 - [Error-Duplicati.Library.Main.Operation.BackupHandler-FatalError]: Fatal error
System.InvalidOperationException: The stream was already consumed. It cannot be read again.
   at Duplicati.Library.Main.Backend.BackendManager.Handler.ExecuteWithRetry(PendingOperationBase op, CancellationToken cancellationToken)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.ReclaimCompletedTasks(List`1 tasks)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.EnsureAtMostNActiveTasks(Int32 n, List`1 tasks)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.EnsureAtMostNActiveTasks(Int32 uploads, Int32 downloads)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.Run(IReadChannel`1 requestChannel)
   at Duplicati.Library.Main.Backend.BackendManager.WaitForEmptyAsync(CancellationToken cancellationToken)
   at Duplicati.Library.Main.Operation.BackupHandler.FlushBackend(BackupDatabase database, BackupResults result, IBackendManager backendManager)
   at Duplicati.Library.Main.Operation.BackupHandler.RunAsync(String[] sources, IBackendManager backendManager, IFilter filter)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.Run(IReadChannel`1 requestChannel)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.<>c__DisplayClass14_0.<<RunHandlerAsync>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at CoCoL.AutomationExtensions.RunTask[T](T channels, Func`2 method, Boolean catchRetiredExceptions)","2025-09-18 01:21:35 -04 - [Warning-Duplicati.Library.Main.Backend.BackendManager-BackendManagerShutdown]: Backend manager queue runner crashed
System.AggregateException: One or more errors occurred. (The stream was already consumed. It cannot be read again.)
 ---> System.InvalidOperationException: The stream was already consumed. It cannot be read again.
   at Duplicati.Library.Main.Backend.BackendManager.Handler.ExecuteWithRetry(PendingOperationBase op, CancellationToken cancellationToken)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.ReclaimCompletedTasks(List`1 tasks)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.EnsureAtMostNActiveTasks(Int32 n, List`1 tasks)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.EnsureAtMostNActiveTasks(Int32 uploads, Int32 downloads)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.Run(IReadChannel`1 requestChannel)
   at Duplicati.Library.Main.Backend.BackendManager.WaitForEmptyAsync(CancellationToken cancellationToken)
   at Duplicati.Library.Main.Operation.BackupHandler.FlushBackend(BackupDatabase database, BackupResults result, IBackendManager backendManager)
   at Duplicati.Library.Main.Operation.BackupHandler.RunAsync(String[] sources, IBackendManager backendManager, IFilter filter)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.Run(IReadChannel`1 requestChannel)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.<>c__DisplayClass14_0.<<RunHandlerAsync>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at CoCoL.AutomationExtensions.RunTask[T](T channels, Func`2 method, Boolean catchRetiredExceptions)
   at Duplicati.Library.Main.Controller.<>c__DisplayClass22_0.<<Backup>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at Duplicati.Library.Utility.Utility.Await(Task task)
   at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Func`3 method)
   --- End of inner exception stack trace ---","2025-09-18 01:21:35 -04 - [Error-Duplicati.Library.Main.Controller-FailedOperation]: The operation Backup has failed
System.InvalidOperationException: The stream was already consumed. It cannot be read again.
   at Duplicati.Library.Main.Backend.BackendManager.Handler.ExecuteWithRetry(PendingOperationBase op, CancellationToken cancellationToken)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.ReclaimCompletedTasks(List`1 tasks)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.EnsureAtMostNActiveTasks(Int32 n, List`1 tasks)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.EnsureAtMostNActiveTasks(Int32 uploads, Int32 downloads)
   at Duplicati.Library.Main.Backend.BackendManager.Handler.Run(IReadChannel`1 requestChannel)
   at Duplicati.Library.Main.Backend.BackendManager.WaitForEmptyAsync(CancellationToken cancellationToken)
   at Duplicati.Library.Main.Operation.BackupHandler.FlushBackend(BackupDatabase database, BackupResults result, IBackendManager backendManager)
   at Duplicati.Library.Main.Operation.BackupHandler.RunAsync(String[] sources, IBackendManager backendManager, IFilter filter)
   at Duplicati.Library

Thanks for that trace, that made it much simpler to track down.

The error happens while the error message is being built: the code tries to read response data that has already been consumed, and that second read is what throws.

I have a fix ready, but it only fixes the error-message handling, so with this fix in place the real underlying error should surface.
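For readers curious about the failure mode: this is the classic one-shot stream pitfall. Below is a minimal Python sketch (a toy stand-in, not Duplicati's actual C# code) showing both the trap and the buffering-style fix the developer describes, where the body is cached on first read so diagnostic code can safely re-read it:

```python
import io

class Response:
    """Toy stand-in for an HTTP response whose body is a one-shot stream."""
    def __init__(self, body: bytes):
        self._stream = io.BytesIO(body)
        self._cached = None

    def read_body(self) -> bytes:
        # Naive version: a second call returns b"" because the stream
        # position is already at the end -- the "already consumed" trap.
        return self._stream.read()

    def read_body_cached(self) -> bytes:
        # Fix: buffer the body on the first read and serve the cache
        # afterwards, so error-message builders can safely re-read it.
        if self._cached is None:
            self._cached = self._stream.read()
        return self._cached

resp = Response(b'{"error": "quota exceeded"}')
assert resp.read_body() == b'{"error": "quota exceeded"}'
assert resp.read_body() == b""  # stream exhausted: the data is gone

resp2 = Response(b'{"error": "quota exceeded"}')
assert resp2.read_body_cached() == resp2.read_body_cached()  # safe to re-read
```

The same idea in .NET terms would be buffering the response content once before any code path (success handling or exception formatting) reads it.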

Great, implement it and we will test. :grin:

Hello,

I installed the beta version. It seems that --server-datafolder with the new recommended path is not working.
I run Duplicati as a service. I can’t even set a password.

& "C:\Program Files\Duplicati 2\Duplicati.CommandLine.ServerUtil.exe" --server-datafolder "C:\ProgramData\Duplicati" change-password
Connecting to http://127.0.0.1:8200/…
No database found in C:\ProgramData\Duplicati\

It reports this error:

Access to the path 'C:\ProgramData\Duplicati\Duplicati-server.sqlite' is denied.

Creating the file insecure-permissions.txt in C:\ProgramData\Duplicati did not help.

I did manage to change the password, but only by running the tool as the SYSTEM user. That does not seem very secure.

& C:\util\PStools\PsExec.exe -i -s cmd.exe

This is a change from version 2.1.0.5.

If you are running Duplicati as a service, it will (usually) run under the SYSTEM account, so your local user may not have access to the data folder.

Creating the file only prevents Duplicati from resetting the permissions when restarting. You still need to manually adjust the permissions on the folder and database.

Well… it requires you to elevate to the SYSTEM account before you can access it; I would argue that is about as secure as it gets?

Yes, this is done to reduce the risk of exposing credentials and backup keys to unknowing users.

We should update the documentation when the stable release is out, but here is a way to get what you want without using PsExec:

  1. Right-click the data folder (e.g. C:\ProgramData\Duplicati)
  2. Adjust its permissions to include the Administrators group (or your local user)
  3. Run Duplicati.CommandLine.ServerUtil.exe from an account with that access
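For those who prefer the command line, the same adjustment can be sketched with icacls from an elevated prompt. This is Windows-only and shown as an illustrative transcript; the path and group are the ones discussed in this thread, so adjust them to your setup:

```powershell
# Run from an elevated (Administrator) PowerShell prompt.
# Grant the Administrators group full control of the data folder,
# inherited by subfolders (CI) and new files (OI):
icacls "C:\ProgramData\Duplicati" /grant "BUILTIN\Administrators:(OI)(CI)F"

# Inspect the resulting ACL to confirm the change:
icacls "C:\ProgramData\Duplicati"
```

Note that Duplicati will reset these permissions when it restarts unless the insecure-permissions.txt marker file is present in the folder.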

If you remove the file insecure-permissions.txt, Duplicati will reset the folder permissions when it restarts; alternatively, you can change the permissions back manually.

If you have an idea for how we can make the process smoother, please let me know.

This is an interesting phrasing. Is it like the asterisks in the GUI, there to avoid accidental viewing?
I’m not sure how much it increases security, given the two posted ways to bypass.

There are others, easy to find, easy to use. System attackers likely know them all.
Expert IT admins may too, so is this blocking casual admins? Is that a good idea?

I’m developing a stronger opinion that system admins should be allowed to admin.
Could we take the usual path, and let elevated administrators in? I asked recently.
The current status is an unhappy mix: asking users to do something that is hard for most, without detailing how.

Impact of that is probably felt most by Windows service users, so that limits a little.
Non-SYSTEM users seeing things is not stopped, since the ACL lets them right in.

In techno-nitty-gritty, beyond the ACL, there’s also an "owner" who can change perms.
For BUILTIN\Administrators members (including SYSTEM), owner is the group.
This is thought to be partly to help administrative practicalities, e.g. for admin team.
If there’s any idea to tweak owner, why? Next, study SeTakeOwnershipPrivilege.