Release: 2.2.0.0 (Stable) 2025-10-23


About this release

This is the next stable release, building on the 2.1 line with more features and stability fixes. We are excited to share this version!

A big thanks to the supportive Duplicati user base, who continue to contribute fixes, issue reports, and feature requests.

The most visible change in this version is the new user interface, but there is also a massive list of fixes and improvements. Below is a summary of some of the larger changes.

If you have already been using the beta release, note that this release is identical to v2.1.2.3.

New user interface

The new user interface has been rewritten from scratch and keeps the same general structure as the previous one, but we have made some things more user friendly and more visually appealing.

Should you find a function that is missing, we have included buttons to switch back and forth between the two user interfaces.

New backends

We added support for using the cloud services pCloud, Filen and Filejump.

We also added support for connections with SMB.
The new SMB backend can connect directly to a Windows share without needing to mount the folder or install SMB support, and it works on Windows, Linux and macOS.

New restore flow

The new restore flow is enabled by default, and you should not notice anything other than faster restores!
If you run into an issue with it, you can set the option --restore-legacy=true to fall back to the previous restore flow.

New signing keys

The packages are now signed by Duplicati Inc, and the Windows packages are signed with EV certificates.

Remote source support

With this version, it is now possible to back up local data as well as some types of remote data.

In this version, S3, IDrive, SSH and CIFS sources are supported.
The UI does not yet support editing this nicely, but you can enter a path in the special format to “mount” the remote source.

For the commandline (and manual text entry in the UI) enter sources such as:

// Linux/macOS
@/mnt/s3-data|s3://example?auth-username=...

// Windows
@X:\server1|smb://server/share?auth-username=...

This will cause the backups to fetch data from the remote sources.
We will add an editor to the UI to allow browsing the remote sources, similar to the local files.
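The entry format is simple to work with from scripts as well. Here is a minimal sketch, assuming only the layout shown above (a `@` prefix, the local mount path, a `|` separator, and the storage URL); the `auth-username` value here is a hypothetical placeholder, not a real credential:

```shell
# Split a remote-source entry into its mount path and storage URL using
# plain POSIX parameter expansion. The entry mirrors the Linux example
# above; 'user' is a placeholder value.
entry='@/mnt/s3-data|s3://example?auth-username=user'

mount="${entry%%|*}"   # everything before the '|' ...
mount="${mount#@}"     # ... with the leading '@' removed
url="${entry#*|}"      # everything after the '|'

echo "$mount"   # → /mnt/s3-data
echo "$url"     # → s3://example?auth-username=user
```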

Archive attributes support

For AWS S3 and Azure Blob Storage, Duplicati will now respect the archive attributes and not attempt to read and verify files that have been moved to cold storage.

Database updates

This version updates the format of the local database to version 17.
It also updates the format of the settings database to version 9.

To assist in downgrades, there is now a bundled CommandLine.DatabaseTool.exe / duplicati-database-tool that can downgrade databases with minimal data loss.
For a downgrade from this version to 2.1.0.5, this will only drop a few indexes and not cause any data loss.
Be sure to run the database tool before downgrading the installation, as the downgrade needs to be performed with the latest version of the tool.

Throttle updated

For backups that throttle transfer speeds, the new throttle logic uses a shared limit for the whole backup, where previous versions applied the throttle to each individual stream.

Removed backends

The Sia backend has been removed due to an incompatible hard fork.

The Mega backend has been marked as unmaintained due to lack of a supported library.
For now, the Mega library still works, but you should migrate away from it. The new Mega S4 storage might be an option.

Updates to all backends

All backends are updated to handle timeouts in a granular manner.

This means the option --http-operations-timeout is no longer present; instead there are now --read-write-timeout, --list-timeout, and --short-timeout. These have sensible defaults but are open for tweaking.

The option --allowed-ssl-versions is now only present for the FTP backend; all other backends rely on the operating system to determine which version to use.

New datafolder default location

For Duplicati running as a service, the default data folder location has changed.
If you are not running Duplicati as a service/daemon, this change has no effect.

Windows: Avoid storing data in C:\Windows\System32\config\systemprofile\AppData\Local\Duplicati and prefer {CommonProgramData}\Duplicati, usually resolving to C:\ProgramData\Duplicati.

This change is to counter an issue where Windows wipes the C:\Windows folder on major updates, destroying the backup configuration in the process. If your service stores data under C:\Windows, you will see a warning in the user interface on startup.

Linux: Avoid storing data in /Duplicati and prefer /var/lib/Duplicati.
This was caused by the update to .NET 8, where the data folder was not resolved correctly and returned /, which is not an ideal place for storing data.

If you are using --server-datafolder or DUPLICATI_HOME, this has no effect on the database, but may cause your machineid and installid to change.

The machineid.txt and installid.txt would previously be stored in the local app data folder, even when using portable mode or choosing a specific data folder.

This has been fixed, so the files will now follow the database.
If you are using the Duplicati console or otherwise depend on these values, you need to move them into the folder where the database is stored.

This update also sets permissions on the data folder and the databases to prevent unauthorized access from local accounts.
To opt out of setting permissions on each startup, place a file named insecure-permissions.txt inside the data folder.
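As a sketch, opting out on a Linux service install could look like the following; the folder path is an assumption (use your actual data folder), and only the marker file name is taken from the text above:

```shell
# Create the opt-out marker in the Duplicati data folder. The path defaults
# to the Linux service location mentioned above; override it via DATAFOLDER.
DATAFOLDER="${DATAFOLDER:-/var/lib/Duplicati}"
mkdir -p "$DATAFOLDER"                         # may require root for /var/lib
touch "$DATAFOLDER/insecure-permissions.txt"   # the marker file
```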

Other large changes

  • New file and folder enumeration logic
  • Timeout logic on all backend operations
  • Improved database validation and repair logic
  • ServerUtil can output JSON for script integration
  • Improved support for having Duplicati behind a proxy
  • Updated throttle logic, all streams share the throttle
  • Improved repair logic
  • VSS is automatically on if running on Windows with sufficient privileges
  • Improved backend test function
  • Ability to suppress warnings
  • Support for remotely provided reporting url and remotely managed backup configs
  • Added support for Google IAM on Google Drive

There is an issue in the UI wherein a very verbose error message stretches way off the screen. So far off, in fact, that you can't see the button to dismiss it without zooming way out.

The error shown below happened in an older version of Duplicati, not the current stable version, but I’m unable to clear the error dialog in the current stable version without zooming way out.

Even at 30% zoom, I can only barely see the “Show Log” and “Dismiss” buttons.

In case anyone is curious, the message shown related to the old “The stream was already consumed. It cannot be read again.” error, which I am not sure has been fixed in this stable version.

Edit:

The error still occurs even with the current stable version, using the OneDrive backend. Here is the sequence of events in the log:

"2025-10-23 13:15:24 -04 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerHandlerFailure]: Error in handler: Failed to parse response\nMethod: DELETE, RequestUri: 'https://xxxxxxxxxxxxxxx-my.sharepoint.com/personal/xxxxxxx_xxxxxxxxxxxxxxx_com/_api/v2.0/drive/items/01O3D4LCZ24WXHEWPNOFEJPXKUQSCC2OQ3/uploadSession?guid='2745ad63-eba2-47e2-92ec-22df83434421'&overwrite=True&rename=False&dc=0&tempauth=REDACTED', Version: 1.1, Content: <null>, Headers:\r\n{\r\n  User-Agent: Duplicati/2.2.0.0\r\n  traceparent: 00-95ccb87524fe7109505eda16388a775f-ea47ac4ff99ee722-00\r\n}\nStatusCode: 204, ReasonPhrase: 'No Content', Version: 1.1, Content: System.Net.Http.HttpConnectionResponseContent, Headers:\r\n{\r\n  Cache-Control: no-cache, no-store\r\n  Pragma: no-cache\r\n  P3P: CP=\"ALL IND DSP COR ADM CONo CUR CUSo IVAo IVDo PSA PSD TAI TELo OUR SAMo CNT COM INT NAV ONL PHY PRE PUR UNI\"\r\n  X-NetworkStatistics: 0,4194720,0,2,587463,525568,525568,6797\r\n  X-SharePointHealthScore: 1\r\n  X-SP-SERVERSTATE: ReadOnly=0\r\n  ODATA-VERSION: 4.0\r\n  SPClientServiceRequestDuration: 208\r\n  X-AspNet-Version: 4.0.30319\r\n  IsOCDI: 0\r\n  X-DataBoundary: NONE\r\n  X-1DSCollectorUrl: https://mobile.events.data.microsoft.com/OneCollector/1.0/\r\n  X-AriaCollectorURL: https://browser.pipe.aria.microsoft.com/Collector/3.0/\r\n  SPRequestGuid: e443d2a1-0092-a000-8a40-80ea31826dd3\r\n  request-id: e443d2a1-0092-a000-8a40-80ea31826dd3\r\n  MS-CV: odJD5JIAAKCKQIDqMYJt0w.0\r\n  SPLogId: e443d2a1-0092-a000-8a40-80ea31826dd3\r\n  SPRequestDuration: 234\r\n  Strict-Transport-Security: max-age=31536000\r\n  X-Frame-Options: SAMEORIGIN\r\n  Content-Security-Policy: frame-ancestors 'self' teams.microsoft.com *.teams.microsoft.com *.skype.com *.teams.microsoft.us local.teams.office.com teams.cloud.microsoft *.office365.com goals.cloud.microsoft *.powerapps.com *.powerbi.com *.yammer.com engage.cloud.microsoft word.cloud.microsoft excel.cloud.microsoft powerpoint.cloud.microsoft 
*.officeapps.live.com *.office.com *.microsoft365.com m365.cloud.microsoft *.cloud.microsoft *.stream.azure-test.net *.dynamics.com *.microsoft.com onedrive.live.com *.onedrive.live.com securebroker.sharepointonline.com;\r\n  X-Powered-By: ASP.NET\r\n  MicrosoftSharePointTeamServices: 16.0.0.26608\r\n  X-Content-Type-Options: nosniff\r\n  X-MS-InvokeApp: 1; RequireReadOnly\r\n  X-Cache: CONFIG_NOCACHE\r\n  X-MSEdge-Ref: Ref A: 230CD7ED1DD742ECAD071A6D7A545AD5 Ref B: EWR311000108051 Ref C: 2025-10-23T17:15:24Z\r\n  Date: Thu, 23 Oct 2025 17:15:23 GMT\r\n  Expires: -1\r\n}\n<error reading body>: The stream was already consumed. It cannot be read again.\r\nMicrosoftGraphException: Failed to parse response\nMethod: DELETE, RequestUri: 'https://xxxxxxxxxxxxxxx-my.sharepoint.com/personal/xxxxxxx_xxxxxxxxxxxxxxx_com/_api/v2.0/drive/items/01O3D4LCZ24WXHEWPNOFEJPXKUQSCC2OQ3/uploadSession?guid='2745ad63-eba2-47e2-92ec-22df83434421'&overwrite=True&rename=False&dc=0&tempauth=REDACTED', Version: 1.1, Content: <null>, Headers:\r\n{\r\n  User-Agent: Duplicati/2.2.0.0\r\n  traceparent: 00-95ccb87524fe7109505eda16388a775f-ea47ac4ff99ee722-00\r\n}\nStatusCode: 204, ReasonPhrase: 'No Content', Version: 1.1, Content: System.Net.Http.HttpConnectionResponseContent, Headers:\r\n{\r\n  Cache-Control: no-cache, no-store\r\n  Pragma: no-cache\r\n  P3P: CP=\"ALL IND DSP COR ADM CONo CUR CUSo IVAo IVDo PSA PSD TAI TELo OUR SAMo CNT COM INT NAV ONL PHY PRE PUR UNI\"\r\n  X-NetworkStatistics: 0,4194720,0,2,587463,525568,525568,6797\r\n  X-SharePointHealthScore: 1\r\n  X-SP-SERVERSTATE: ReadOnly=0\r\n  ODATA-VERSION: 4.0\r\n  SPClientServiceRequestDuration: 208\r\n  X-AspNet-Version: 4.0.30319\r\n  IsOCDI: 0\r\n  X-DataBoundary: NONE\r\n  X-1DSCollectorUrl: https://mobile.events.data.microsoft.com/OneCollector/1.0/\r\n  X-AriaCollectorURL: https://browser.pipe.aria.microsoft.com/Collector/3.0/\r\n  SPRequestGuid: e443d2a1-0092-a000-8a40-80ea31826dd3\r\n  request-id: 
e443d2a1-0092-a000-8a40-80ea31826dd3\r\n  MS-CV: odJD5JIAAKCKQIDqMYJt0w.0\r\n  SPLogId: e443d2a1-0092-a000-8a40-80ea31826dd3\r\n  SPRequestDuration: 234\r\n  Strict-Transport-Security: max-age=31536000\r\n  X-Frame-Options: SAMEORIGIN\r\n  Content-Security-Policy: frame-ancestors 'self' teams.microsoft.com *.teams.microsoft.com *.skype.com *.teams.microsoft.us local.teams.office.com teams.cloud.microsoft *.office365.com goals.cloud.microsoft *.powerapps.com *.powerbi.com *.yammer.com engage.cloud.microsoft word.cloud.microsoft excel.cloud.microsoft powerpoint.cloud.microsoft *.officeapps.live.com *.office.com *.microsoft365.com m365.cloud.microsoft *.cloud.microsoft *.stream.azure-test.net *.dynamics.com *.microsoft.com onedrive.live.com *.onedrive.live.com securebroker.sharepointonline.com;\r\n  X-Powered-By: ASP.NET\r\n  MicrosoftSharePointTeamServices: 16.0.0.26608\r\n  X-Content-Type-Options: nosniff\r\n  X-MS-InvokeApp: 1; RequireReadOnly\r\n  X-Cache: CONFIG_NOCACHE\r\n  X-MSEdge-Ref: Ref A: 230CD7ED1DD742ECAD071A6D7A545AD5 Ref B: EWR311000108051 Ref C: 2025-10-23T17:15:24Z\r\n  Date: Thu, 23 Oct 2025 17:15:23 GMT\r\n  Expires: -1\r\n}\n<error reading body>: The stream was already consumed. It cannot be read again."
"2025-10-23 13:15:24 -04 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeWhileActive]: Terminating 3 active uploads"
"2025-10-23 13:15:24 -04 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeError]: Error in active upload: Cancelled"
"2025-10-23 13:15:24 -04 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeError]: Terminating, but 2 active upload(s) are still active"
"2025-10-23 13:15:24 -04 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeError]: Error in active upload: Cancelled"
"2025-10-23 13:15:24 -04 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeError]: Terminating, but 1 active upload(s) are still active"
"2025-10-23 13:15:24 -04 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeError]: Error in active upload: Cancelled"
"2025-10-23 13:15:26 -04 - [Warning-Duplicati.Library.Main.Backend.BackendManager-BackendManagerShutdown]: Backend manager queue runner crashed\r\nAggregateException: One or more errors occurred. (Failed to parse response\nMethod: DELETE, RequestUri: 'https://xxxxxxxxxxxxxxx-my.sharepoint.com/personal/xxxxxxx_xxxxxxxxxxxxxxx_com/_api/v2.0/drive/items/01O3D4LCZ24WXHEWPNOFEJPXKUQSCC2OQ3/uploadSession?guid='2745ad63-eba2-47e2-92ec-22df83434421'&overwrite=True&rename=False&dc=0&tempauth=REDACTED', Version: 1.1, Content: <null>, Headers:\r\n{\r\n  User-Agent: Duplicati/2.2.0.0\r\n  traceparent: 00-95ccb87524fe7109505eda16388a775f-ea47ac4ff99ee722-00\r\n}\nStatusCode: 204, ReasonPhrase: 'No Content', Version: 1.1, Content: System.Net.Http.HttpConnectionResponseContent, Headers:\r\n{\r\n  Cache-Control: no-cache, no-store\r\n  Pragma: no-cache\r\n  P3P: CP=\"ALL IND DSP COR ADM CONo CUR CUSo IVAo IVDo PSA PSD TAI TELo OUR SAMo CNT COM INT NAV ONL PHY PRE PUR UNI\"\r\n  X-NetworkStatistics: 0,4194720,0,2,587463,525568,525568,6797\r\n  X-SharePointHealthScore: 1\r\n  X-SP-SERVERSTATE: ReadOnly=0\r\n  ODATA-VERSION: 4.0\r\n  SPClientServiceRequestDuration: 208\r\n  X-AspNet-Version: 4.0.30319\r\n  IsOCDI: 0\r\n  X-DataBoundary: NONE\r\n  X-1DSCollectorUrl: https://mobile.events.data.microsoft.com/OneCollector/1.0/\r\n  X-AriaCollectorURL: https://browser.pipe.aria.microsoft.com/Collector/3.0/\r\n  SPRequestGuid: e443d2a1-0092-a000-8a40-80ea31826dd3\r\n  request-id: e443d2a1-0092-a000-8a40-80ea31826dd3\r\n  MS-CV: odJD5JIAAKCKQIDqMYJt0w.0\r\n  SPLogId: e443d2a1-0092-a000-8a40-80ea31826dd3\r\n  SPRequestDuration: 234\r\n  Strict-Transport-Security: max-age=31536000\r\n  X-Frame-Options: SAMEORIGIN\r\n  Content-Security-Policy: frame-ancestors 'self' teams.microsoft.com *.teams.microsoft.com *.skype.com *.teams.microsoft.us local.teams.office.com teams.cloud.microsoft *.office365.com goals.cloud.microsoft *.powerapps.com *.powerbi.com *.yammer.com engage.cloud.microsoft 
word.cloud.microsoft excel.cloud.microsoft powerpoint.cloud.microsoft *.officeapps.live.com *.office.com *.microsoft365.com m365.cloud.microsoft *.cloud.microsoft *.stream.azure-test.net *.dynamics.com *.microsoft.com onedrive.live.com *.onedrive.live.com securebroker.sharepointonline.com;\r\n  X-Powered-By: ASP.NET\r\n  MicrosoftSharePointTeamServices: 16.0.0.26608\r\n  X-Content-Type-Options: nosniff\r\n  X-MS-InvokeApp: 1; RequireReadOnly\r\n  X-Cache: CONFIG_NOCACHE\r\n  X-MSEdge-Ref: Ref A: 230CD7ED1DD742ECAD071A6D7A545AD5 Ref B: EWR311000108051 Ref C: 2025-10-23T17:15:24Z\r\n  Date: Thu, 23 Oct 2025 17:15:23 GMT\r\n  Expires: -1\r\n}\n<error reading body>: The stream was already consumed. It cannot be read again.)"

I have another question, not necessarily specific to this version of Duplicati but in general.

When using the “suppress specific warnings” option on a backup, it requests a comma-separated list of warning IDs that you would like it to suppress.

How does one find these? In particular, I would like to suppress this one:

Warning while running XXXXXXXXXXXXXXXXXXXX
2025-10-23 14:25:34 -04 - [Warning-Duplicati.Library.Modules.Builtin.HyperVOptions-HyperVOnServerOnly]: This is client version of Windows. Hyper-V VSS writer is present only on Server version. Backup will continue, but will be crash consistent only in opposite to application consistent in Server version

… but I’m unable to find the warning ID.

Hopefully it’s not somewhere right under my nose.

Edit: The question still stands, but in the meantime I found out that there is a specific setting toggle for this particular warning. Specifically, hyperv-ignore-client-warning=True will cause this specific warning to be ignored.

This reference shows the challenge I was fretting over in the new UI, which hides option names.
Looking up the option help from the short help message you mentioned, I found it at:

  --suppress-warnings (String): Suppress specific warnings
    Suppress warnings and log them as information instead. Use this if you need
    to silence specific warnings. This option accepts a comma separated list
    of warning IDs.

which could probably give better explanations of how to get the ID, but I had an idea already.

It’s probably a descriptive final word. Did you try HyperVOnServerOnly?
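That matches how the IDs appear in the log lines: the ID is the last dash-separated component inside the `[Warning-...]` tag. As a sketch, it can be pulled out of a log line with `sed` (the regex is my own; the log line is the one quoted above):

```shell
# Extract the warning ID from a Duplicati warning log line: it is the text
# between the last '-' and the closing ']' of the [Warning-...] tag.
line='2025-10-23 14:25:34 -04 - [Warning-Duplicati.Library.Modules.Builtin.HyperVOptions-HyperVOnServerOnly]: This is client version of Windows.'
echo "$line" | sed -E 's/.*\[Warning-[^]]*-([A-Za-z0-9]+)\].*/\1/'
# → HyperVOnServerOnly
```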

I had already reported these in Release: 2.1.1.102 (Canary) 2025-09-23 - #6 by Folgore101. It seems a fix could not be found before the stable release, which is a shame since it is a stable release.

Hello,

How exciting, to have a stable version!

I have so far been using the canary version (as a long-term user, I missed that there are beta and stable channels).

I was wondering: is it possible to update to this one from a canary version, without risking backup corruption?

If yes, from which canary version is it safe to switch to this one?

Thank you!

I have encountered a strange behavior with the scheduler.

On one of my PCs I have many Hyper-V virtual machines that are scheduled to back up daily. I have them all scheduled to go at the same times, 10:00 PM and 11:00 PM, and Duplicati usually runs them all sequentially.

However, since installing 2.2.0.0, something strange has happened. I have my backup jobs set up to phone home to my Zabbix monitoring system every time they run. Last night I got an alert that one of my jobs hadn't reported a run in a longer-than-expected length of time.

I checked my jobs, and several of them have “Next scheduled run” dates a day or two in the future, and for one job, an entire week into the future:

… I have others like this one, that I would have expected to run again last night, but for some reason it didn’t run last night, and will instead run tonight at its normal time.

Despite all of the above, I have several other backup jobs that ran just fine at their appointed times, and are scheduled to run again exactly as expected.

I’m going to keep an eye on these, hopefully it’s just a one-time thing.

The new UI looks really neat! Big improvement! Thank you! :flexed_biceps:

I’m running 2.1.0.5_stable_2025-03-04 on Linux (Ubuntu 24.04). What are the steps to upgrade to 2.2.0.0 stable? Is it any more than stopping Duplicati and running dpkg -i on duplicati-2.2.0.0_stable_2025-10-23-linux-x64-gui.deb? It does several 200 GB backups to Google Drive, so it would be a real problem if this update failed.

Install Duplicati on Linux shows dpkg -i although I’m not fully convinced it installs dependencies.
Generally I let the browser download the .deb, then open a file browser to install it using gdebi.

Running apt install works too, but you have to persuade it with something like a ./ prefix on the file name. Additionally, it can finish with a harmless but scary “Download is performed unsandboxed” message if the .deb is in a protected folder, such as your home directory. A root shell seems to avoid the issue.

Basically it’s a .deb file, and there are probably a lot of different tools available to do that install.

Back to dpkg -i: I stopped Duplicati with systemctl stop duplicati (because I had a service), ran apt remove duplicati, then used dpkg for 2.1.0.5 and then for 2.2.0.0. The output of that was:

Selecting previously unselected package duplicati.
(Reading database ... 613530 files and directories currently installed.)
Preparing to unpack duplicati-2.1.0.5_stable_2025-03-04-linux-x64-gui.deb ...
Unpacking duplicati (2.1.0.5) ...
Setting up duplicati (2.1.0.5) ...
Processing triggers for gnome-menus (3.36.0-1ubuntu3) ...
Processing triggers for desktop-file-utils (0.26+mint3+victoria) ...
Processing triggers for mailcap (3.70+nmu1ubuntu1) ...
(Reading database ... 614807 files and directories currently installed.)
Preparing to unpack duplicati-2.2.0.0_stable_2025-10-23-linux-x64-gui.deb ...
Unpacking duplicati (2.2.0.0) over (2.1.0.5) ...
Setting up duplicati (2.2.0.0) ...
Processing triggers for gnome-menus (3.36.0-1ubuntu3) ...
Processing triggers for desktop-file-utils (0.26+mint3+victoria) ...
Processing triggers for mailcap (3.70+nmu1ubuntu1) ...

I can’t think of any safety issues. It’s sometimes possible for Canary database to get ahead of Stable, but I don’t think it is now (as of 2.1.1.105_canary_2025-10-10). Database updates has information on what this version wants, and there’s Duplicati.CommandLine.DatabaseTool.exe (Windows, and others use duplicati-database-tool) that can downgrade. It gives help text.

Database version upgrades (say from older Canary or anything needing it) should be automatic. There’s a backup file made before that, so I’ll remind both of you to have some free space for it.

EDIT 1:

In the other direction (database version is newer than old Duplicati knows), it just refuses to run.
There’s no corruption risk. You just have to downgrade that database to what old version knows.
But this isn’t even an issue with current Canary and this Stable, as they are on the same version.
