Can you provide a screenshot of what you see? I have seen there are some button issues, but it looks translated to me.
Thanks! We are working on them.
I suspect that this is caused by a missing message somewhere.
Thanks for the issue.
The plan is to have less need for the command-line UI.
I would prefer that we have proper flows for each of the commands, so you do not need to deal with the command-line UI.
Makes a lot of sense. I have registered an issue for this.
Generally, the repair command should be able to fix it. But for dblock files it depends on having the data still available locally, which is not always the case.
There is a new option that can be used to suppress warnings such as this:
--suppress-warnings=CompressionReadErrorFallback
It will then convert those messages to information messages.
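For example, on a command-line backup it would look something like this (assuming the duplicati-cli entry point; the storage URL and source path are just placeholders):

```
duplicati-cli backup "ssh://example.com/backup" /home/user/data --suppress-warnings=CompressionReadErrorFallback
```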
That is an update to the way authentication is done. Initial implementation would send the token as a query string to the socket, but this could end up logging the token. It is short-lived, so it should be a minor issue, but I have updated it to use a real authentication handshake now.
The new code is much too verbose, so I have a PR that scales it down.
Not much to go on here. Could you try to add --console-log-level=verbose to see if we can zoom in on where it gets stuck?
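For example, something like this (the storage URL and source path are placeholders for your actual job settings):

```
duplicati-cli backup "ssh://example.com/backup" /home/user/data --console-log-level=verbose
```

The last verbose lines printed before it hangs should tell us where it gets stuck.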
Yes! It no longer buffers everything, but streams it from disk to Jottacloud.

The result, in addition to warnings, seems to be that automatic reconnect doesn’t work:
I have added an issue for that.

doesn’t look great, but ngax seemed to wind up as expected in the end, allowing my use.
I spent quite a lot of energy on getting ngax to be robust against various failures. I will apply the same handling to ngclient.

however the output is a bit confusing at first glance. It seems to end with failure, but that’s the old file which was found to have a problem, so was replaced (uploaded) then deleted.
Yes, that is quite confusing. The issue is that the verification has already completed by the time the repair kicks in. I guess we should remove any deleted files from the output.

Found 59 faulty index files, use the option --replace-faulty-index-files to repair them
This line indicates that the files are actually NOT being repaired.
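To actually have them repaired, the option from the message needs to be enabled; as a sketch (the storage URL and source path are placeholders):

```
duplicati-cli backup "ssh://example.com/backup" /home/user/data --replace-faulty-index-files=true
```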

duplicati-b4b09349f34e34766931ca770d5c16c94.dblock.zip.aes: 1 errors Extra: +0nAIzGYf+Dk4B5xdkLE+YwhcJKZTXeQ/edh+51aQ24=
This is not fixed by the repair command. It happens due to a bug that has since been fixed, where the same chunks (from a blocklist) could be written to multiple .dblock files. When this happens, the database will only track a single one of them and report all the others as “extra”.
Generally, you can ignore “extra”, as it means that there is more data in the file than what was expected.

The empty index file duplicati-i836147d0459948d4a5d7663528f26aab.dindex.zip.aes is larger than expected (54541 bytes), choosing not to delete it
This is a file that is most likely not needed. The database has no entries for the file, so it should be safe to delete. But because it is not an empty file (due to the size) it is not deleted.

the website service will not accept any PFX exported using AES256-SHA256 encryption; it starts, but the site does not respond. Re-export with the older TripleDES-SHA1 and it’s fine. Both have a password set.
That sounds odd. I have tested with some of the newer EC variants and they work, so something as standard as AES256 should certainly work. Would you be able to provide a certificate that does not work, so I can test with that? (obviously not a copy of actual certs).
I have created an issue to track it here.
If you can either attach to the issue or PM it to me, I will follow up.

2025-06-06 23:10:08 -03 - [Warning-Duplicati.Library.Main.Operation.Backup.MetadataGenerator.Metadata-MetadataProcessFailed]: Failed to process metadata on “/Volumes/exFAT_macOS/macOS/[ Whatsapp ]/._WhatsApp Image 2024-09-12 at 09.41.10.jpeg”, storing empty metadata
FileAccesException: Unable to access the file “/Volumes/exFAT_macOS/macOS/[ Whatsapp ]/._WhatsApp Image 2024-09-12 at 09.41.10.jpeg” with method listxattr, error: EPERM (1)
Generally, this warning means that Duplicati gets an EPERM (operation not permitted) error when trying to read the attributes from the file or directory. Duplicati will instead just store an empty set of metadata.
I can see that this case is not correctly re-classified as an access issue. I have a PR for that.

Using a command in macOS Terminal, I deleted the ._ metadata files.
Now, the backup is being performed without warnings.
However, before, in v.118, the warning did not appear; it only appears in v.119. This metadata is created when exFAT is accessed by other operating systems, for example Windows, but version v.118 did not include the ._ metadata when making backups.
Thank you for your attention. EDIT: Once the metadata is created again, the alerts will appear again.
Yes, macOS will create these files as needed. They are similar to Thumbs.db files on Windows.
There has been no change that I am aware of that should affect how attributes are read between the two versions. From what I can see in the code, the previous logic would also log this exact problem with a warning message.
My best guess is that “something” was filtering these errors before, but the update to classify the error messages is now letting them through.
If you want to just ignore the messages, you can add:
--suppress-warnings=MetadataProcessFailed
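If you also want to silence the compression fallback warning mentioned earlier, my understanding is that the option accepts a comma-separated list of warning IDs (worth verifying on your setup):

```
--suppress-warnings=MetadataProcessFailed,CompressionReadErrorFallback
```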

The TEST option or a backup run with full-remote-verification = ListAndIndexes/IndexesOnly/IndexOnly and with replace-faulty-index-files = true and finally with no-local-db always results in errors. One or two “.dlist.zip” files are checked, never more.
I don’t fully follow the issue here. The --replace-faulty-index-files option only affects index files.
If you set --full-remote-verification=indexonly it will not test any .dlist files. If you use ListAndIndexes, it may check multiple files, but it will only fix index files.
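As a sketch of how these options combine on the TEST command (the storage URL is a placeholder, and "all" asks it to sample every remote volume):

```
duplicati-cli test "ssh://example.com/backup" all --full-remote-verification=ListAndIndexes --replace-faulty-index-files=true
```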

I’m coming from Canary .118. With 119, the messages appeared (more or less frequently for most jobs):
[Warning-Duplicati.Library.Main.Operation.TestHandler-FaultyIndexFiles]: Found 1 faulty index files, repairing now
[Error-Duplicati.Library.Main.Operation.TestHandler-Test results]: Verified 2 remote files with 1 problem(s)
I think this is “as designed”. At the end of the backup it will verify the backup, and if it discovers problematic index files, it will replace them. But because it does not verify everything on each run, you will get some files fixed on each run.

backup-test-percentage = 100
Only two index files are checked.
Sounds odd (assuming you have more than 2).
What about --backup-test-samples=100?
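For comparison, the two options differ in how the sample size is chosen; a hedged example of both (URL and path are placeholders):

```
# Verify a fixed number of sample sets after each backup
duplicati-cli backup "ssh://example.com/backup" /home/user/data --backup-test-samples=100

# Verify a percentage of the remote volumes after each backup
duplicati-cli backup "ssh://example.com/backup" /home/user/data --backup-test-percentage=100
```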

Unfortunately I have several jobs where the errors are not fixed. So faulty index files are found and re-uploaded, but it stays that way, even after several runs.
Do you mean that the errors are detected but not fixed?

I haven’t tested it, but the same problem should occur with “.” hidden files.
I don’t think so. I think the problem is that macOS somehow prevents access to the attributes.

Again, since Duplicati v.118 does not display the warning messages, I conclude that Duplicati v.119 is handling the ._ metadata differently than Duplicati v.118.
If you can run with .118 and try to have verbose logging enabled, maybe you can see messages with the type FailedAttributeRead? If so, I think the problem is that .118 would log these as verbose errors, and for some reason .119 treats them as warnings.
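One way to check, assuming a POSIX shell and that the job can be run from the terminal (URL and path are placeholders):

```
duplicati-cli backup "ssh://example.com/backup" /home/user/data --console-log-level=verbose 2>&1 | grep FailedAttributeRead
```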

was listed under ngclient, however the bigger question is did options get moved or killed?
Options like http-operation-timeout and allowed-ssl-versions are gone in the help.
The mention under ngclient is because there was a section in the backends, specifically for the http options.
However, the idea was to make it easier to do system-wide configurations, but it caused some confusion, because you could apply --http-operation-timeout for all backends, including FTP and SSH, which just ignored the option.
Instead of this, the code is now using a common-options module that has a few options that are shared across modules, such as --read-write-timeout. The intention is that you can supply these in the general settings as advanced options, and they apply to all backups.
This is still a bit weird for cases like --accept-specified-ssl-hash, which only applies to HTTP-based backends. The backends are now updated so they expose all options they support, so you can look at the advanced options in the UI and know that the option has an effect on that backend. I sadly botched the --oauth-url by not following this standard, so that will be added in the next canary.

but if this was done, it seems to not be working. Any user impact should be clearly noted, however ideally it would still work without requiring any migration work for previous setup.
It should not have any impact on existing setups. It is just a move of where the options are defined, making it clearer going forward. The options --http-operation-timeout and --allowed-ssl-versions were only working for backends using the deprecated WebClient, which we have been removing. They most likely have not had an effect for a while.
For --http-operation-timeout, the replacement is the new --list-timeout, --read-write-timeout and --short-timeout options that now work on all backends, including the non-HTTP ones.
The --allowed-ssl-versions option is supported for FTP, but not for any HTTP-based backends, as those instead use the OS to determine what to support.
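As a hedged sketch of using the replacement options together (the values are arbitrary examples, not recommendations, and the URL and path are placeholders):

```
duplicati-cli backup "ssh://example.com/backup" /home/user/data --read-write-timeout=5m --list-timeout=10m --short-timeout=30s
```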

Some of this might be from the passage of time increasing the wasted-space issue; however, superficially and numerically, the problem in the first file grew far worse than before.
Are the errors “extra” or “missing”? The number of “missing” should certainly go down.

So the good news is it doesn’t seem to have slowed a DB recreate, but is this still a bug?
In addition to the new “Extra” case, there are also the “Missing” ones that were reported earlier.
I have created an issue. I will investigate.

On some backups, I get the following error:
2025-06-12 12:02:03 +02 - [Error-Duplicati.Library.Main.Operation.BackupHandler-FatalError]: Fatal error TaskCanceledException: The request was canceled due to the configured HttpClient.Timeout of 100 seconds elapsing.
That is a bug. I have created an issue for it. I think the other error is similar enough that I would guess it is the same.