Release: 2.1.0.119 (Canary) 2025-05-29

Can you provide a screenshot of what you see? I have seen there are some button issues, but it looks translated to me.

Thanks! We are working on them.

I suspect that this is caused by a missing message somewhere.

Thanks for the issue.

The plan is to have less need for the commandline UI.
I would prefer if we have proper flows for each of the commands, so you do not need to deal with the commandline UI.

Makes a lot of sense. I have registered an issue for this.

Generally, the repair command should be able to fix it. But for dblock files it depends on having the data still available locally, which is not always the case.
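The condition can be sketched as a simple check: a missing dblock file can only be recreated if every block it contained is still readable from the local source data. This is an illustrative sketch with hypothetical names, not Duplicati's actual repair code:

```python
# Illustrative: recreating a lost dblock file requires that every block it
# held can still be produced from the local source files. If the source data
# has changed, some block hashes are no longer available locally.

def can_recreate(missing_blocks: set[str], local_blocks: set[str]) -> bool:
    """True if all blocks of the lost dblock file exist locally."""
    return missing_blocks <= local_blocks

assert can_recreate({"h1", "h2"}, {"h1", "h2", "h3"}) is True
assert can_recreate({"h1", "h9"}, {"h1", "h2"}) is False  # data changed locally
```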

There is a new option that can be used to suppress warnings such as this:

--suppress-warnings=CompressionReadErrorFallback

It will then convert those messages to information messages.
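Conceptually, the suppression works like a log filter that downgrades matching warning IDs to information level rather than dropping them. A minimal sketch, with illustrative names that are not Duplicati's actual internals:

```python
# Illustrative sketch of how a suppressed warning ID is downgraded to an
# information message. The message ID mirrors the --suppress-warnings value;
# the function itself is hypothetical, not Duplicati's real code.

def classify(level: str, message_id: str, suppressed: set[str]) -> str:
    """Return the effective log level for a message."""
    if level == "Warning" and message_id in suppressed:
        return "Information"  # downgraded instead of dropped
    return level

suppressed = {"CompressionReadErrorFallback"}
assert classify("Warning", "CompressionReadErrorFallback", suppressed) == "Information"
assert classify("Warning", "SomeOtherWarning", suppressed) == "Warning"
```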

That is an update to the way authentication is done. Initial implementation would send the token as a query string to the socket, but this could end up logging the token. It is short-lived, so it should be a minor issue, but I have updated it to use a real authentication handshake now.
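The difference can be sketched as follows: with the old approach the token is embedded in the connection URL, where servers and proxies may log it; with the handshake, the socket is opened without credentials and the token travels as the first in-band message. This is a conceptual sketch, not the actual implementation:

```python
# Conceptual sketch: query-string auth vs. an in-band handshake.
# All names are illustrative; this is not Duplicati's actual code.

def connect_with_query_token(base_url: str, token: str) -> str:
    # Old style: the token ends up in the URL, which access logs may record.
    return f"{base_url}/notifications?token={token}"

def handshake(messages: list[str], valid_tokens: set[str]) -> bool:
    # New style: the socket is opened without credentials; the first message
    # carries the token, so it never appears in any request line or log.
    if not messages:
        return False
    return messages[0] in valid_tokens

assert "token=" in connect_with_query_token("wss://host", "abc")  # leaks into URL
assert handshake(["abc"], {"abc"}) is True
assert handshake(["wrong"], {"abc"}) is False
```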

The new code is much too verbose, so I have a PR that scales it down.

Not much to go on here. Could you try adding --console-log-level=verbose to see if we can zoom in on where it gets stuck?

Yes! It no longer buffers everything, but streams it from disk to Jottacloud.

I have added an issue for that.

I spent quite a lot of energy on getting ngax to be robust against various failures. I will apply the same handling to ngclient.

Yes, that is quite confusing. The issue is that the verification has been completed once the repair kicks in. I guess we should remove any deleted files from the output.

This line indicates that the files are actually NOT being repaired.

This is not fixed by the repair command. It happens due to a bug that has since been fixed, where the same chunks (from a blocklist) could be written to multiple .dblock files. When this happens, the database will only track a single one of them and report all the others as “extra”.

Generally, you can ignore “extra”, as it means that there is more data in the file than what was expected.
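The effect can be illustrated with a small sketch: when the same block hash lands in more than one volume, the database keeps only one mapping per hash, so the other copies show up as extra data during verification. Illustrative code, not the real schema:

```python
# Illustrative: a block-to-volume index that keeps only one location per
# block hash. Duplicate copies in other volumes then show up as "extra".

def index_blocks(volumes: dict[str, list[str]]):
    block_to_volume: dict[str, str] = {}
    extra: list[tuple[str, str]] = []
    for volume, blocks in volumes.items():
        for block_hash in blocks:
            if block_hash in block_to_volume:
                extra.append((volume, block_hash))  # already tracked elsewhere
            else:
                block_to_volume[block_hash] = volume
    return block_to_volume, extra

tracked, extra = index_blocks({
    "dblock1.zip": ["h1", "h2"],
    "dblock2.zip": ["h2", "h3"],  # h2 duplicated by the old bug
})
assert tracked["h2"] == "dblock1.zip"
assert extra == [("dblock2.zip", "h2")]  # reported as "extra" data
```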

This is a file that is most likely not needed. The database has no entries for it, so it should be safe to delete. But because it is not an empty file (judging by its size), it is not deleted automatically.

That sounds odd. I have tested with some of the newer EC variants and they work, so something as standard as AES256 should certainly work. Would you be able to provide a certificate that does not work, so I can test with that? (obviously not a copy of actual certs).

I have created an issue to track it here.

If you can either attach to the issue or PM it to me, I will follow up.

Generally, this warning means that Duplicati gets an EPERM (permission denied) error when trying to read the attributes from the file or directory. Duplicati will instead just store an empty set of metadata.
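The fallback can be sketched as: attempt to read the attributes, and on a permission error store an empty metadata set instead of failing the file. The function and field names below are illustrative, not Duplicati's actual code:

```python
import os
import stat

def read_metadata(path: str) -> dict:
    """Read basic attributes, falling back to an empty set on EPERM."""
    try:
        st = os.stat(path)
        return {"mode": stat.filemode(st.st_mode), "mtime": st.st_mtime}
    except PermissionError:
        # Mirrors the behavior described above: store empty metadata
        # (and warn) rather than failing the backup of the file itself.
        return {}
```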

I can see that this case is not correctly re-classified as an access issue. I have a PR for that.

Yes, macOS will create these files as needed. They are similar to Thumbs.db files on Windows.

There has been no change that I am aware of that should change how attributes are read between the two versions. From what I can see in the code, the previous logic would also log this exact problem with a warning message.

My best guess is that “something” was filtering these errors before, but the update to classify the error messages is now letting them through.

If you want to just ignore the messages, you can add:

--suppress-warnings=MetadataProcessFailed

I don’t fully follow the issue here. The --replace-faulty-index-files option only affects index files.

If you set --full-remote-verification=indexonly it will not test any .dlist files. If you use ListAndIndexes, it may check multiple files, but it will only fix index files.

I think this is “as designed”. At the end of the backup it will verify the backup, and if it discovers problematic index files, it will replace them. But because it does not verify everything on each run, some files will be fixed on each run.
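The incremental behavior can be sketched like this: each run verifies a sample of the remote files and replaces any faulty index files found in that sample, so full coverage accumulates over several runs. The round-robin sampling here is an assumption for the sketch, not Duplicati's actual sampling policy:

```python
# Illustrative: sampled verification repairs faulty index files a few at a
# time, so repeated backup runs converge on a fully repaired set.

def verify_sample(files: dict[str, bool], sample: list[str]) -> list[str]:
    """Mark sampled faulty files as repaired; return the names fixed."""
    fixed = []
    for name in sample:
        if not files[name]:      # False means the index file is faulty
            files[name] = True   # replaced with a regenerated index file
            fixed.append(name)
    return fixed

files = {"i1": False, "i2": True, "i3": False, "i4": False}
for sample in (["i1", "i2"], ["i3", "i4"]):  # two backup runs
    verify_sample(files, sample)
assert all(files.values())  # all faulty index files fixed after two runs
```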

Sounds odd (assuming you have more than 2).
What about --backup-test-samples=100 ?

Do you mean that the errors are detected but not fixed?

I don’t think so. I think the problem is that macOS somehow prevents access to the attributes.

If you can run with .118 and try to have verbose logging enabled, maybe you can see messages with the type FailedAttributeRead? If so, I think the problem is that .118 would log these as verbose errors, and for some reason .119 treats them as warnings.

The mention under ngclient is because there was a section in the backends, specifically for the HTTP options.

The idea was to make it easier to do system-wide configurations, but it caused some confusion, because you could apply --http-operation-timeout to all backends, including FTP and SSH, which simply ignored the option.

Instead of this, the code is now using a common-options module that has a few options that are shared across modules, such as --read-write-timeout. The intention is that you can supply these in the general settings as advanced options, and they apply to all backups.
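The layering can be sketched as a merge: shared options from the common module apply first, and any backend-specific values override them. Illustrative names only; the real module is C# inside Duplicati:

```python
# Illustrative: shared options (e.g. --read-write-timeout) from a common
# module merged with backend-specific options; the backend's values win.

def effective_options(common: dict[str, str], backend: dict[str, str]) -> dict[str, str]:
    merged = dict(common)
    merged.update(backend)  # backend-specific values take precedence
    return merged

opts = effective_options(
    {"read-write-timeout": "30s"},                       # general settings
    {"read-write-timeout": "120s", "ftp-passive": "true"},  # per-backup
)
assert opts["read-write-timeout"] == "120s"
assert opts["ftp-passive"] == "true"
```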

This is still a bit weird for cases like --accept-specified-ssl-hash which only applies to HTTP-based backends. The backends are now updated so they expose all options they support, so you can look at the advanced options in the UI and know that the option has an effect on that backend. I sadly botched the --oauth-url by not following this standard, so that will be added in the next canary.

It should not have any impact on existing setups. It is just a move of where the options are defined, making it clearer going forward. The options --http-operation-timeout and --allowed-ssl-versions were only working for backends using the deprecated WebClient which we have been removing. They most likely have not had an effect for a while.

For --http-operation-timeout, the replacement is the new --list-timeout, --read-write-timeout and --short-timeout options, which now work on all backends, including the non-HTTP ones.

The --allowed-ssl-versions option is supported for FTP, but not for any HTTP-based backends, as those instead use the OS to determine what to support.

Are the errors “extra” or “missing”? The number of “missing” should certainly go down.

I have created an issue. I will investigate.

That is a bug. I have created an issue for it. I think the other error is similar enough, that I would guess it is the same.
