Release: 2.1.1.100 (Canary) 2025-08-08


This release is a canary release intended to be used for testing.

Changes in this version

This build addresses an issue with timeouts on B2 uploads, along with a number of minor fixes.

There is a breaking change in that Duplicati will now consider an empty source folder as an error, unless you specify --allow-empty-source. This is done to avoid common mistakes where a folder was supposed to be mounted but was not.

The Docker images have been overhauled and now support running with a custom UID/GID combination.

The login flow has been secured slightly and now features an extra nonce value that is required on requests. If you have custom code authenticating with Duplicati this is a breaking change.

Finally, there is now also an option to log internal HTTP requests to the log for tracing and debugging.

Detailed list of changes:

  • Fixed commandline test command issuing a warning
  • Fixed erratic timeouts on B2
  • Corrected reported timeouts in BackendTester
  • Fixed some quoting of SQLite queries
  • Update Docker images with UID/GID support
  • Fixed CLI help not showing the datafolder environment variable name
  • Changed repair message to suggest purge-broken-files
  • Fixed a thread resource leak in TrayIcon
  • Fixed Vacuum failing to complete due to active transaction
  • Fixed an issue with missing hashes on remote volumes causing crashes
  • Updated localizations, thanks to all translators
  • Fixed a case where databases could be opened with pooling enabled
  • Remote Sync tool now restarts and retries on transfer failures
  • Fixed an issue that prevented setting a default theme
  • Added detailed output option to serverutil list-backups command
  • Added --allow-empty-source option
  • Added nonce to refresh tokens and improved non-persisted logins
  • Updated multiple packages to latest versions, including AWSSDK, Azure, SMB, FluentFTP, and SharpCompress
  • Added option to log http requests and socket data

ngclient changes:

  • Converted from Sparkle to ShipUI
  • Progress bar is now fixed length
  • Updated translations, thanks to all translators
  • Added support for a refresh nonce

I upgraded the Windows machine with the database issue and it was able to complete and also clean up backups after reducing. So that’s good.

However, it’s now broken my Wasabi S3 backup on another Windows server and I get this error when I test the connection:

The current storage type setting is “S3 Compatible” under the “Proprietary” section, but I noticed another one under “Standard protocols”; I tried that one and it was the same.

The setting for “Client library to use” was “Amazon AWS SDK”, so I changed that to “Minio SDK” and the connection test succeeded. I then tried a “Verify files” run for the backup job and it returned warnings:

I then tried a database repair

So I went to check the job to revert it back to “Amazon AWS SDK”, and it was already set to that. It had also reverted to “S3 Compatible” under “Proprietary”.

I really hope this hasn’t screwed it up.

Small update: I tried again through the new UI (sorry, still a bit stuck in my ways), and although I got the same error at first, I did see that the job had the library set as an advanced option of “Amazon AWS SDK”, plus “Specify S3 location constraints” and “Specify storage class” present but empty. So I removed those, changed the library to “Minio SDK” and tested it; it was OK. Re-ran the backup and it completed.

However, on the previous server that also has a separate S3 job, it too failed. Same error. I needed to run a Verify before it would start the backup. It then completed.

Oh, and I did notice that when clicking on the Server field, which for me is Wasabi Frankfurt, it would cause the entry to become invalid on a connection test and I have to choose it again:

Small visual issue with the new UI: on the alerts, when you hover the mouse over the X to close them, the cursor changes to “text selection” rather than a pointer. I cannot screenshot it, as taking the snapshot reverts it to the selection cursor of the snipping tool, even when I delay the snap.

I found another issue in the new UI. In the options section I can’t change the unit for the remote volume size. It is always set to MB.

By coincidence, I stumbled onto the same thing in throttle-upload, and filed an issue.

Options screen unit multiplier becomes MB regardless of what it says #337

I didn’t try Remote volume size, but you can read the description to see if it fits your case.

I’m not sure this is a good default for Linux. Many desktop environments will create “standard” folders that may or may not get used - e.g. Pictures, Documents, etc. Software packages may do likewise. The very first backup I tried complained about an empty /opt - it is empty because it is created as part of the OS install, but I have never installed any “optional” software that uses that directory - typically it would be used when building and installing from source rather than from an rpm or other software package. In other words, I am thinking it will be more common for there to be “normal” empty folders on Linux than un-mounted file systems.

I concur, nearly all my backups both Linux and Windows needed the override to start after failing last night. There are far more “empty” folders than you think.

One of the Wasabi S3 jobs that I needed to fix by changing the library failed when it detected:

LimitedWarnings: [
    2025-08-11 15:05:28 +02 - [Warning-Duplicati.Library.Main.Operation.FilelistProcessor-ExtraUnknownFile]: Extra unknown file: duplicati-ia732b8831a734598aeb9992821a57a52.dindex.zip.aes
]
LimitedErrors: [
    2025-08-11 15:05:28 +02 - [Error-Duplicati.Library.Main.Controller-FailedOperation]: The operation Backup has failed
RemoteListVerificationException: Found 1 remote files that are not recorded in local storage. This can be caused by having two backups sharing a destination folder which is not supported. It can also be caused by restoring an old database. If you are certain that only one backup uses the folder and you have the most updated version of the database, you can use repair to delete the unknown files.
]
Log data:
2025-08-11 15:05:28 +02 - [Warning-Duplicati.Library.Main.Operation.FilelistProcessor-ExtraUnknownFile]: Extra unknown file: duplicati-ia732b8831a734598aeb9992821a57a52.dindex.zip.aes
2025-08-11 15:05:28 +02 - [Error-Duplicati.Library.Main.Controller-FailedOperation]: The operation Backup has failed
Duplicati.Library.Interface.RemoteListVerificationException: Found 1 remote files that are not recorded in local storage. This can be caused by having two backups sharing a destination folder which is not supported. It can also be caused by restoring an old database. If you are certain that only one backup uses the folder and you have the most updated version of the database, you can use repair to delete the unknown files.
   at Duplicati.Library.Main.Operation.FilelistProcessor.VerifyRemoteList(IBackendManager backend, Options options, LocalDatabase database, IBackendWriter log, IEnumerable`1 protectedFiles, IEnumerable`1 strictExcemptFiles, Boolean logErrors, VerifyMode verifyMode, CancellationToken cancellationToken)
   at Duplicati.Library.Main.Operation.BackupHandler.PreBackupVerify(Options options, BackupResults result, IBackendManager backendManager)
   at Duplicati.Library.Main.Operation.BackupHandler.PreBackupVerify(Options options, BackupResults result, IBackendManager backendManager)
   at Duplicati.Library.Main.Operation.BackupHandler.RunAsync(String[] sources, IBackendManager backendManager, IFilter filter)
   at Duplicati.Library.Main.Controller.<>c__DisplayClass22_0.<<Backup>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at Duplicati.Library.Utility.Utility.Await(Task task)
   at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Func`3 method)

I ran the repair and then got

 "Messages": [
    "2025-08-11 15:29:38 +02 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Repair has started",
    "2025-08-11 15:29:45 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started:  ()",
    "2025-08-11 15:29:47 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Completed:  (19.04 KiB)",
    "2025-08-11 15:29:47 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-ia732b8831a734598aeb9992821a57a52.dindex.zip.aes (178.73 KiB)",
    "2025-08-11 15:29:47 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Completed: duplicati-ia732b8831a734598aeb9992821a57a52.dindex.zip.aes (178.73 KiB)",
    "2025-08-11 15:29:47 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Delete - Started: duplicati-ia732b8831a734598aeb9992821a57a52.dindex.zip.aes (178.73 KiB)",
    "2025-08-11 15:29:47 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Delete - Completed: duplicati-ia732b8831a734598aeb9992821a57a52.dindex.zip.aes (178.73 KiB)"
  ],
  "Warnings": [
    "2025-08-11 15:29:48 +02 - [Warning-Duplicati.Library.Main.Operation.RepairHandler-LargeEmptyIndexFile]: The empty index file duplicati-iaf3227b9a99c42d297f4f157c7000dfa.dindex.zip.aes is larger than expected (24653 bytes), choosing not to delete it",
    "2025-08-11 15:29:48 +02 - [Warning-Duplicati.Library.Main.Operation.RepairHandler-LargeEmptyIndexFile]: The empty index file duplicati-i100be42487b84900aa7a1a4dd8580bc5.dindex.zip.aes is larger than expected (138781 bytes), choosing not to delete it"
  ],
  "Errors": [
    "2025-08-11 15:29:47 +02 - [Error-Duplicati.Library.Main.Operation.RepairHandler-FailedNewIndexFile]: Failed to accept new index file: duplicati-ia732b8831a734598aeb9992821a57a52.dindex.zip.aes, message: Too many source blocklist entries in h6bNIJMkWpgE/4v/iNpFwggIybR2ijtJIGWoTRGhxHA=\r\nException: Too many source blocklist entries in h6bNIJMkWpgE/4v/iNpFwggIybR2ijtJIGWoTRGhxHA="
  ],

That is most likely due to the updated AWSSDK library. Based on this note, it is a problem with Wasabi S3 not correctly parsing the signature string.

You can set the option --s3-disable-chunk-encoding which should then not emit that header. I will try a new version that simply lowercases that header, and have fingers crossed that other services do not expect it to be upper case :crossed_fingers:
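For reference, a sketch of how that option could be applied from the command line, following the usage pattern shown elsewhere in this thread. The destination URL and source path here are hypothetical placeholders; only --s3-disable-chunk-encoding is taken from the post above. In the GUI it would go under the job's advanced options instead.

```shell
# Hypothetical job; the bucket name and source path are placeholders.
# --s3-disable-chunk-encoding avoids the chunked-transfer-encoding header
# that Wasabi appears to mishandle with the updated AWSSDK.
Duplicati.CommandLine.exe backup \
  "s3://example-bucket/backup-folder" \
  "C:\data" \
  --s3-disable-chunk-encoding=true
```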

The first warning is because the previous job crashed when it was uploading files.

The other issues are not related to the S3 connection, so nothing appears messed up.

The location constraints are only applied when creating a bucket (from Duplicati). The location constraints and storage class values are not supported by the MinioSDK (not sure if Wasabi supports them either).

That looks exactly like the message I found, saying that the signature was incorrect.

Hi @ghChrisHe, welcome to the forum :waving_hand:

I have reproduced it on the issue that @ts678 reported.

I was a bit on the fence about introducing the option in the current (opt-out) way, but decided that it would have little preventive effects if it was not enabled by default. I imagined that very few people would deliberately have empty folders as their sources, but maybe I am wrong there.

Just to understand your setup a little better: you have explicitly set up the backup to include /opt even though it is empty, because it may be filled at a later time? And you have not chosen /, but a few explicitly chosen folders (including /opt)?

Just to clarify, the check only checks the source folders (the top-level folders), there is no check on sub-folders that are empty.

Can you give me an idea of the thinking behind setting up backups that explicitly include empty folders in the backup? Are they for “future use”? Or “just in case”?

That sounds strange, because the logs show that it was just deleted? Does it work if you run repair again, given that the file was deleted?

Thanks for the various responses. I will retry, but I wanted to know first if I should switch away from Minio: does it make a difference for Wasabi, or should I go back to the AWS SDK with the switch you mentioned, to be safe?

The other way around - I back up everything using multiple jobs. So I have an “os” backup that backs up “/”, but excludes /home and /root/.config/Duplicati. Then a job that only does /home, and a job that does /root/.config/Duplicati. Where I have temporary mount points I have specific excludes in place - as an example, my home directory has a mount point for my cellphone, but I don’t want to back that tree up if I forget to unmount the cellphone.

I’ll see if there is a Q&D way to identify empty directories to see how “bad” the situation may be in my case.

EDIT 1

On my main laptop there are over 3000 empty directories in the “/” tree. This does not include anything under /home, /tmp, or other file systems that are separate from “/”.

There are 3225 empty directories in my home directory tree. Many are firefox, git, and wine.

On another system that is a relatively new build with less software, my home directory tree has 69 empty directories while the “/” tree has 1945.
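As a sketch of the Q&D approach mentioned above (assuming a POSIX find that supports -empty and -xdev), empty directories can be counted like this:

```shell
# Count empty directories on the root filesystem only; -xdev keeps find
# from descending into other mounted filesystems (matching the "/" tree
# counts above, which exclude /home, /tmp and other separate filesystems).
find / -xdev -type d -empty 2>/dev/null | wc -l

# Same count for a home directory tree:
find "$HOME" -type d -empty 2>/dev/null | wc -l
```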

I’m not clear you two are communicating, but I worry about the test, as it’s not meaningful. Descending into empty folders should be fine AFAIK, but explicitly configuring them is not.

Example test uses an “empty_folder” underneath “empty_folder parent” and another at the same level as the parent. The former works. The latter triggers the check and reports:

Error while running test 1

The source folder C:\backup source\empty_folder\ is empty, aborting backup. If this is expected, consider using the --allow-empty-source option.

You can see what you explicitly selected in new UI Source screen by looking on the right. Old ngax UI had it at the bottom of the tree. In either case, you can also look at an export.


It’s not just that the folder is empty (for any reason) that matters, but how it got into your config.

EDIT 1:

Let’s check the help text, which only indirectly talks about source paths, but it means the source path list as would show up in the GUI (see images) or as a backup command list.

  --allow-empty-source (Boolean): Allow backups to run if source folders are empty
    Use this option to allow backups to run if one or more source folders are empty.
    Usually an empty source folder is an indication that something like a USB drive
    is not mounted correctly. This check only works for filesystem source paths.
    * default value: false
Usage: Duplicati.CommandLine.exe backup <storage-URL> "<source-path>" [<options>]
...
Multiple source paths can be specified if they are separated by a space.
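Following that usage text, a minimal example invocation (the storage URL and source paths here are hypothetical) would be:

```shell
# Hypothetical destination and sources; with the option enabled, the backup
# proceeds even if an explicitly selected source like /opt is empty.
Duplicati.CommandLine.exe backup \
  "file:///mnt/backup-target" \
  "/opt" "/etc" \
  --allow-empty-source=true
```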

As indicated in another post - it is part of the “/” file system and not excluded. Linux has thousands of empty directories as part of the OS, and dozens to thousands of empty directories within a home directory tree. This is normal and IMHO it is a mistake for Duplicati to default to aborting a backup just because it encounters an empty directory.

To be clear, I am only referring to Linux - I make no representation as to how applicable this option is for Windows. I.e. for all I know it may make sense to keep this as a default for Windows and remove it as a default for Linux. (FWIW I have not had to enable this option on the one Windows system I have updated to 2.1.1.100 - only on the Linux systems. But I will have to enable it as a global option on all my Linux systems.)

It doesn’t do that, as explained and demonstrated – unless there’s a bug just on Linux.

OK - retracting what I said regarding my config in earlier posts.

I am explicitly selecting the directories under “/”, so that may be why I am falling into this.

At some point I must have decided it was less work/confusion/whatever to pick directories to back up rather than picking directories to exclude.

OK - So this is only triggered by an explicitly selected directory that happens to be empty at the time of the backup.

I can live with that :slightly_smiling_face:


It’s a mixture. I have one backup where a specific folder is sometimes used by another application, so most of the time it’s empty but can sometimes have files present that I’d want backed up. Another is a folder I use to back up application data before an upgrade, which I then delete once it’s not needed - having it backed up means I can still go back after a few months, when the files have been deleted, and grab any files.

I let the job run on its own last night and it was fine, so maybe it did clean up the file.

But as per my previous question, is there any reason to go back to the AWS SDK or should I stay with Minio?

My Windows 10 system running 2.1.1.100 reported several missing files on 2 backups this morning. This is after running at least 2 “good” backups for each.

I just installed this version on Debian Trixie as you suggested in a different thread and it worked as expected. I love the white space (very clean) in the new browser UI. :smiling_face_with_sunglasses: :smiling_face_with_sunglasses: :smiling_face_with_sunglasses: