Release: 2.1.0.119 (Canary) 2025-05-29

Assuming “lazy” means broken in some way that draws complaints, and that the complaints persist over several runs with “all”, the problem looks to me like --replace-faulty-index-files can produce faulty replacement index files that lack blocks from the DeletedBlock table (meaning blocks that still sit in the dblock file but became waste after their last use aged away).

It seems to pick up blocks only from the Block table, which is current; I’m not sure what to say about the DuplicateBlock table, which is even more obscure. This info is mainly for the developer.
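If this theory holds, a complete replacement dindex would have to cover the union of Block and DeletedBlock rows for the volume, not Block alone. A rough sketch of that query against the local database; table and column names are my assumption of the Duplicati local DB schema, so treat this as illustration only:

```python
import sqlite3

def expected_index_blocks(con: sqlite3.Connection, dblock_name: str):
    """All blocks that physically sit in the given dblock file, which a
    complete dindex should therefore list: live rows from Block plus
    waste rows from DeletedBlock. Schema names are assumed."""
    sql = """
        SELECT b."Hash", b."Size"
        FROM "Block" b
        JOIN "Remotevolume" rv ON rv."ID" = b."VolumeID"
        WHERE rv."Name" = :name
        UNION ALL
        SELECT d."Hash", d."Size"
        FROM "DeletedBlock" d
        JOIN "Remotevolume" rv ON rv."ID" = d."VolumeID"
        WHERE rv."Name" = :name
    """
    return con.execute(sql, {"name": dblock_name}).fetchall()
```

A repair that only runs the first half of this union would produce exactly the symptom described: an index missing every block that has moved to DeletedBlock.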

I tested with a backup having 7 known bad dindex files (no list folder). My test sample was less than “all”, so it claimed to fix 4 by uploading a new dindex and then deleting the old one:

Found 4 faulty index files, repairing now
  Uploading file duplicati-i63c3448043e54cf686629a88285d85e7.dindex.zip.aes (68.53 KiB) ...
  Deleting file duplicati-ibae40f55c39f4bf39688dd5a84431fce.dindex.zip.aes  (71.25 KiB) ...
  Uploading file duplicati-i8f5c3631ef834479bd40f18afb529c62.dindex.zip.aes (251.08 KiB) ...
  Deleting file duplicati-if295f13bb37847de88d2f823afc932b8.dindex.zip.aes  (230.09 KiB) ...
  Uploading file duplicati-idc1c75b308924e24937e2605aebee027.dindex.zip.aes (44.93 KiB) ...
  Deleting file duplicati-i2ea29e9ab8364952872e2eff6fa5c0af.dindex.zip.aes  (33.56 KiB) ...
  Uploading file duplicati-ic6a2f0e4dd264b239792ea8363bf671e.dindex.zip.aes (53.01 KiB) ...
  Deleting file duplicati-ia2516d45091e4471bd171f77bebee323.dindex.zip.aes  (22.98 KiB) ...

I then tested with “all”, expecting 7 - 4 = 3 bad dindex files, but got 5; 2 of the 4 replacements are still flagged:

duplicati-i63c3448043e54cf686629a88285d85e7.dindex.zip.aes: 849 errors
duplicati-i8f5c3631ef834479bd40f18afb529c62.dindex.zip.aes: 41 errors
duplicati-idc1c75b308924e24937e2605aebee027.dindex.zip.aes: No errors
duplicati-ic6a2f0e4dd264b239792ea8363bf671e.dindex.zip.aes: No errors

Looking at the files that were replaced by the 2 new bad ones, they had respectively looked like:

duplicati-ibae40f55c39f4bf39688dd5a84431fce.dindex.zip.aes: 21 errors
duplicati-if295f13bb37847de88d2f823afc932b8.dindex.zip.aes: 65 errors

Some of this might be from the passage of time increasing the wasted-space issue, but at least numerically, the problem in the first file grew far worse than before.

The actual impact might not be worse, as either version might require reading the associated dblock. Developer comment on the theory and its impact would help, but at least there is a proposed cause.

EDIT 1:

Rather than sampling more of the missing blocks to see where I could find them, I looked in the DeletedBlock table for the row count of the dblock associated with each new dindex. The result:

duplicati-b1d9a404a6302440d820be51e620975e0.dblock.zip.aes has 849
duplicati-b08fe3c79b7e54033bc0fab0225ae14ee.dblock.zip.aes has 41

thereby matching the missing-block counts in the dindex files and supporting the DeletedBlock theory.
I’m tempted to delete the bad dindex to see what Repair does when it replaces it. If it makes the same error, that would mean there’s another route to a bad dindex. I’ll wait a bit.
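For reference, the row-count check above can be scripted against the local database. Table and column names here are my reading of the Duplicati local DB schema, so this is a sketch rather than an official query:

```python
import sqlite3

def deleted_block_count(con: sqlite3.Connection, dblock_name: str) -> int:
    """Count DeletedBlock rows belonging to one dblock file. If the repair
    builds the replacement dindex from the Block table only, this count
    should equal the number of 'Missing' errors reported for that dindex.
    Schema names (DeletedBlock, Remotevolume) are assumptions."""
    return con.execute(
        'SELECT COUNT(*) FROM "DeletedBlock" d '
        'JOIN "Remotevolume" rv ON rv."ID" = d."VolumeID" '
        'WHERE rv."Name" = ?',
        (dblock_name,),
    ).fetchone()[0]
```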

EDIT 2:

While leaving my larger backup alone, I tested a small one, and unfortunately it looks like Repair makes the same error; even worse, it makes it on 2.1.0.5 Stable. Steps to reproduce:

  • Make unencrypted backup, keeping 1 version, of A.txt and B.txt each with that letter.
  • Deselect B.txt and backup again. Looking in the DB, this splits the original situation where A.txt and B.txt each had a data block and a metadata block in the Block table.
  • Delete or hide (e.g. change prefix) the dindex file, and Database Repair to replace it.
  • Extract the vol file from the new dindex and notice it only lists two blocks rather than four:

{"blocks":[{"hash":"VZrq0IJk1XldOQlxjN0Fq9SVcuhP5VWQ7vMaiKCP3/0=","size":1},{"hash":"UW7CCxtLGw1gqSW6qxiGU8Xd/arQWsoW55bRZeAeQug=","size":137}],"volumehash":"LrqTo3JmaqseCgJKz52Te2L5VJt++iJR0FPzmrVNJzQ=","volumesize":1128}

The dblock file has four blocks, so the dindex file associated with it should index all four.

EDIT 3:

To clarify “splits” and show what went where, here are the Block and DeletedBlock tables.

As had been suspected, blocks for that volume are lost from the dindex if they are in DeletedBlock; however, let’s see whether that actually causes a dblock read, as the dlist reference is gone too.

Database Recreate did not read the dblock, and DeletedBlock table was not populated.

Running “test all” with --full-remote-verification complains of extra dblock blocks:

duplicati-b065a67d0d8574c9587ced93727e109a4.dblock.zip: 2 errors
	Extra: 335w5QIVRPSDS77mSp43if68S+gUcN9inK1t2wMyClw=
	Extra: h90JSQPN+uqg6ows41QAIdBiEbLFXeNVDrx740zJLVE=

So the good news is it doesn’t seem to have slowed a DB recreate, but is this still a bug?
In addition to the new “Extra” case, there are also the “Missing” ones that were reported earlier.

On some backups I get the following error:

2025-06-12 12:02:03 +02 - [Error-Duplicati.Library.Main.Operation.BackupHandler-FatalError]: Fatal error
TaskCanceledException: The request was canceled due to the configured HttpClient.Timeout of 100 seconds elapsing.

Full log

{
  "DeletedFiles": 0,
  "DeletedFolders": 0,
  "ModifiedFiles": 0,
  "ExaminedFiles": 42337,
  "OpenedFiles": 311,
  "AddedFiles": 311,
  "SizeOfModifiedFiles": 0,
  "SizeOfAddedFiles": 9224469559,
  "SizeOfExaminedFiles": 449953090240,
  "SizeOfOpenedFiles": 9224469559,
  "NotProcessedFiles": 0,
  "AddedFolders": 0,
  "TooLargeFiles": 0,
  "FilesWithError": 0,
  "TimestampChangedFiles": 0,
  "ModifiedFolders": 0,
  "ModifiedSymlinks": 0,
  "AddedSymlinks": 0,
  "DeletedSymlinks": 0,
  "PartialBackup": false,
  "Dryrun": false,
  "MainOperation": "Backup",
  "CompactResults": null,
  "VacuumResults": null,
  "DeleteResults": null,
  "RepairResults": null,
  "TestResults": null,
  "ParsedResult": "Fatal",
  "Interrupted": false,
  "Version": "2.1.0.119 (2.1.0.119_canary_2025-05-29)",
  "EndTime": "2025-06-12T10:02:03.4431846Z",
  "BeginTime": "2025-06-12T09:50:01.1589564Z",
  "Duration": "00:12:02.2842282",
  "MessagesActualLength": 86,
  "WarningsActualLength": 7,
  "ErrorsActualLength": 2,
  "Messages": [
    "2025-06-12 11:50:01 +02 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: Die Operation Backup wurde gestartet",
    "2025-06-12 11:50:06 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started:  ()",
    "2025-06-12 11:50:08 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Completed:  (1,620 KiB)",
    "2025-06-12 11:50:55 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-b2d02f48ee0a54e20a9d251210c9f5c33.dblock.zip.aes (499,150 MiB)",
    "2025-06-12 11:51:54 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-bd9c2169db97d4a9a849eab9c0d288836.dblock.zip.aes (499,336 MiB)",
    "2025-06-12 11:52:37 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Retrying: duplicati-b2d02f48ee0a54e20a9d251210c9f5c33.dblock.zip.aes ()",
    "2025-06-12 11:52:47 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Rename: duplicati-b2d02f48ee0a54e20a9d251210c9f5c33.dblock.zip.aes (499,150 MiB)",
    "2025-06-12 11:52:47 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Rename: duplicati-bc2204b0f7a1f43a995f74a7f02a2e68a.dblock.zip.aes (499,150 MiB)",
    "2025-06-12 11:52:47 +02 - [Information-Duplicati.Library.Main.Backend.PutOperation-RenameRemoteTargetFile]: Renaming \"duplicati-b2d02f48ee0a54e20a9d251210c9f5c33.dblock.zip.aes\" to \"duplicati-bc2204b0f7a1f43a995f74a7f02a2e68a.dblock.zip.aes\"",
    "2025-06-12 11:52:48 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-bc2204b0f7a1f43a995f74a7f02a2e68a.dblock.zip.aes (499,150 MiB)",
    "2025-06-12 11:53:37 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Retrying: duplicati-bd9c2169db97d4a9a849eab9c0d288836.dblock.zip.aes ()",
    "2025-06-12 11:53:47 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Rename: duplicati-bd9c2169db97d4a9a849eab9c0d288836.dblock.zip.aes (499,336 MiB)",
    "2025-06-12 11:53:47 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Rename: duplicati-b90fe1f2086d9428588687acb25910140.dblock.zip.aes (499,336 MiB)",
    "2025-06-12 11:53:47 +02 - [Information-Duplicati.Library.Main.Backend.PutOperation-RenameRemoteTargetFile]: Renaming \"duplicati-bd9c2169db97d4a9a849eab9c0d288836.dblock.zip.aes\" to \"duplicati-b90fe1f2086d9428588687acb25910140.dblock.zip.aes\"",
    "2025-06-12 11:53:48 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-b90fe1f2086d9428588687acb25910140.dblock.zip.aes (499,336 MiB)",
    "2025-06-12 11:54:30 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Retrying: duplicati-bc2204b0f7a1f43a995f74a7f02a2e68a.dblock.zip.aes ()",
    "2025-06-12 11:54:39 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-b657c908f975147468fe537aa061fca7d.dblock.zip.aes (499,270 MiB)",
    "2025-06-12 11:54:40 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Rename: duplicati-bc2204b0f7a1f43a995f74a7f02a2e68a.dblock.zip.aes (499,150 MiB)",
    "2025-06-12 11:54:40 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Rename: duplicati-bd9965371bdff4ad8b930e6424198afc1.dblock.zip.aes (499,150 MiB)",
    "2025-06-12 11:54:40 +02 - [Information-Duplicati.Library.Main.Backend.PutOperation-RenameRemoteTargetFile]: Renaming \"duplicati-bc2204b0f7a1f43a995f74a7f02a2e68a.dblock.zip.aes\" to \"duplicati-bd9965371bdff4ad8b930e6424198afc1.dblock.zip.aes\""
  ],
  "Warnings": [
    "2025-06-12 12:02:00 +02 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerHandlerFailure]: Error in handler: The request was canceled due to the configured HttpClient.Timeout of 100 seconds elapsing.\nTaskCanceledException: The request was canceled due to the configured HttpClient.Timeout of 100 seconds elapsing.",
    "2025-06-12 12:02:00 +02 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeWhileActive]: Terminating 3 active uploads",
    "2025-06-12 12:02:00 +02 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeError]: Error in active upload: Cancelled",
    "2025-06-12 12:02:00 +02 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeError]: Terminating, but 2 active upload(s) are still active",
    "2025-06-12 12:02:00 +02 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeError]: Error in active upload: Cancelled",
    "2025-06-12 12:02:00 +02 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeError]: Terminating, but 1 active upload(s) are still active",
    "2025-06-12 12:02:00 +02 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeError]: Error in active upload: Cancelled"
  ],
  "Errors": [
    "2025-06-12 12:02:03 +02 - [Error-Duplicati.Library.Main.Operation.BackupHandler-FatalError]: Fatal error\nTaskCanceledException: The request was canceled due to the configured HttpClient.Timeout of 100 seconds elapsing.",
    "2025-06-12 12:02:03 +02 - [Error-Duplicati.Library.Main.Controller-FailedOperation]: The operation Backup has failed\nTaskCanceledException: The request was canceled due to the configured HttpClient.Timeout of 100 seconds elapsing."
  ],
  "BackendStatistics": {
    "RemoteCalls": 20,
    "BytesUploaded": 0,
    "BytesDownloaded": 0,
    "FilesUploaded": 0,
    "FilesDownloaded": 0,
    "FilesDeleted": 0,
    "FoldersCreated": 0,
    "RetryAttempts": 17,
    "UnknownFileSize": 0,
    "UnknownFileCount": 0,
    "KnownFileCount": 1659,
    "KnownFileSize": 426581533455,
    "KnownFilesets": 1,
    "LastBackupDate": "2025-04-22T02:25:11+02:00",
    "BackupListCount": 1,
    "TotalQuotaSpace": 0,
    "FreeQuotaSpace": 0,
    "AssignedQuotaSpace": -1,
    "ReportedQuotaError": false,
    "ReportedQuotaWarning": false,
    "MainOperation": "Backup",
    "ParsedResult": "Success",
    "Interrupted": false,
    "Version": "2.1.0.119 (2.1.0.119_canary_2025-05-29)",
    "EndTime": "0001-01-01T00:00:00",
    "BeginTime": "2025-06-12T09:50:01.1589607Z",
    "Duration": "00:00:00",
    "MessagesActualLength": 0,
    "WarningsActualLength": 0,
    "ErrorsActualLength": 0,
    "Messages": null,
    "Warnings": null,
    "Errors": null
  }
}
        

Another backup fails with

2025-06-12 12:12:42 +02 - [Error-Duplicati.Library.Main.Operation.BackupHandler-FatalError]: Fatal error
HttpRequestException: Error while copying content to a stream.
{
  "DeletedFiles": 0,
  "DeletedFolders": 0,
  "ModifiedFiles": 0,
  "ExaminedFiles": 5029,
  "OpenedFiles": 80,
  "AddedFiles": 80,
  "SizeOfModifiedFiles": 0,
  "SizeOfAddedFiles": 10155713166,
  "SizeOfExaminedFiles": 448218251710,
  "SizeOfOpenedFiles": 10155713166,
  "NotProcessedFiles": 0,
  "AddedFolders": 0,
  "TooLargeFiles": 0,
  "FilesWithError": 0,
  "TimestampChangedFiles": 0,
  "ModifiedFolders": 0,
  "ModifiedSymlinks": 0,
  "AddedSymlinks": 0,
  "DeletedSymlinks": 0,
  "PartialBackup": false,
  "Dryrun": false,
  "MainOperation": "Backup",
  "CompactResults": null,
  "VacuumResults": null,
  "DeleteResults": null,
  "RepairResults": null,
  "TestResults": null,
  "ParsedResult": "Fatal",
  "Interrupted": false,
  "Version": "2.1.0.119 (2.1.0.119_canary_2025-05-29)",
  "EndTime": "2025-06-12T10:12:42.6948891Z",
  "BeginTime": "2025-06-12T10:02:04.5741129Z",
  "Duration": "00:10:38.1207762",
  "MessagesActualLength": 146,
  "WarningsActualLength": 8,
  "ErrorsActualLength": 2,
  "Messages": [
    "2025-06-12 12:02:04 +02 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: Die Operation Backup wurde gestartet",
    "2025-06-12 12:02:13 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started:  ()",
    "2025-06-12 12:02:15 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Completed:  (7,321 KiB)",
    "2025-06-12 12:02:16 +02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: Removing file listed as Deleting: duplicati-20250612T074738Z.dlist.zip.aes",
    "2025-06-12 12:02:16 +02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-KeepIncompleteFile]: Keeping protected incomplete remote file listed as Temporary: duplicati-20250612T092148Z.dlist.zip.aes",
    "2025-06-12 12:02:16 +02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-SchedulingMissingFileForDelete]: Scheduling missing file for deletion, currently listed as Uploading: duplicati-ba54192e2b377400c8f3b91d8cf0c2924.dblock.zip.aes",
    "2025-06-12 12:02:16 +02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-SchedulingMissingFileForDelete]: Scheduling missing file for deletion, currently listed as Uploading: duplicati-i68091f33d59a4d93a922b0ab45601181.dindex.zip.aes",
    "2025-06-12 12:02:16 +02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-SchedulingMissingFileForDelete]: Scheduling missing file for deletion, currently listed as Uploading: duplicati-b1ec23ac59cd44ca3bbfc31a73ef9c7fb.dblock.zip.aes",
    "2025-06-12 12:02:17 +02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-SchedulingMissingFileForDelete]: Scheduling missing file for deletion, currently listed as Uploading: duplicati-ide269db720e9495aaba7c817d1943a59.dindex.zip.aes",
    "2025-06-12 12:02:17 +02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-SchedulingMissingFileForDelete]: Scheduling missing file for deletion, currently listed as Uploading: duplicati-b9a708ab70ed14c7d8151b8fe278e7eef.dblock.zip.aes",
    "2025-06-12 12:02:17 +02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-SchedulingMissingFileForDelete]: Scheduling missing file for deletion, currently listed as Uploading: duplicati-if4f3e8ec036346188208e54c96fd705e.dindex.zip.aes",
    "2025-06-12 12:02:18 +02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-SchedulingMissingFileForDelete]: Scheduling missing file for deletion, currently listed as Uploading: duplicati-b0e013ebb793d4525b0c3a7854d503575.dblock.zip.aes",
    "2025-06-12 12:02:18 +02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-SchedulingMissingFileForDelete]: Scheduling missing file for deletion, currently listed as Uploading: duplicati-ida3769eacdb347809cae64f3487e62ee.dindex.zip.aes",
    "2025-06-12 12:02:18 +02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-SchedulingMissingFileForDelete]: Scheduling missing file for deletion, currently listed as Uploading: duplicati-bdd0c222d2896436ea2c48db7bfe1dee0.dblock.zip.aes",
    "2025-06-12 12:02:18 +02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: Removing file listed as Temporary: duplicati-i9134e5e3c49b445a8ed86350639400f0.dindex.zip.aes",
    "2025-06-12 12:02:18 +02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: Removing file listed as Deleting: duplicati-b8c27744ace75451b8f8417d0dbd7cd3a.dblock.zip.aes",
    "2025-06-12 12:02:18 +02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: Removing file listed as Deleting: duplicati-bca9ac51402f5423395bf8e42d50763d7.dblock.zip.aes",
    "2025-06-12 12:02:18 +02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: Removing file listed as Deleting: duplicati-bb85baf55cf0548078303eaf068fc4f3e.dblock.zip.aes",
    "2025-06-12 12:02:18 +02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: Removing file listed as Deleting: duplicati-b0fc10b2b8d9b4f8d8566e04e31271aba.dblock.zip.aes",
    "2025-06-12 12:02:18 +02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: Removing file listed as Deleting: duplicati-b73b799a8bbe941219ab2410baea17365.dblock.zip.aes"
  ],
  "Warnings": [
    "2025-06-12 12:12:42 +02 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerHandlerFailure]: Error in handler: Error while copying content to a stream.\nHttpRequestException: Error while copying content to a stream.",
    "2025-06-12 12:12:42 +02 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeWhileActive]: Terminating 3 active uploads",
    "2025-06-12 12:12:42 +02 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeError]: Error in active upload: Cancelled",
    "2025-06-12 12:12:42 +02 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeError]: Terminating, but 2 active upload(s) are still active",
    "2025-06-12 12:12:42 +02 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeError]: Error in active upload: Cancelled",
    "2025-06-12 12:12:42 +02 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeError]: Terminating, but 1 active upload(s) are still active",
    "2025-06-12 12:12:42 +02 - [Warning-Duplicati.Library.Main.Backend.Handler-BackendManagerDisposeError]: Error in active upload: Cancelled",
    "2025-06-12 12:12:42 +02 - [Warning-Duplicati.Library.Main.Backend.BackendManager-BackendManagerShutdown]: Backend manager queue runner crashed\nAggregateException: One or more errors occurred. (Error while copying content to a stream.)"
  ],
  "Errors": [
    "2025-06-12 12:12:42 +02 - [Error-Duplicati.Library.Main.Operation.BackupHandler-FatalError]: Fatal error\nHttpRequestException: Error while copying content to a stream.",
    "2025-06-12 12:12:42 +02 - [Error-Duplicati.Library.Main.Controller-FailedOperation]: The operation Backup has failed\nHttpRequestException: Error while copying content to a stream."
  ],
  "BackendStatistics": {
    "RemoteCalls": 26,
    "BytesUploaded": 2330173,
    "BytesDownloaded": 0,
    "FilesUploaded": 1,
    "FilesDownloaded": 0,
    "FilesDeleted": 0,
    "FoldersCreated": 0,
    "RetryAttempts": 20,
    "UnknownFileSize": 0,
    "UnknownFileCount": 0,
    "KnownFileCount": 7497,
    "KnownFileSize": 967366953349,
    "KnownFilesets": 6,
    "LastBackupDate": "2025-06-12T09:47:39+02:00",
    "BackupListCount": 8,
    "TotalQuotaSpace": 0,
    "FreeQuotaSpace": 0,
    "AssignedQuotaSpace": -1,
    "ReportedQuotaError": false,
    "ReportedQuotaWarning": false,
    "MainOperation": "Backup",
    "ParsedResult": "Success",
    "Interrupted": false,
    "Version": "2.1.0.119 (2.1.0.119_canary_2025-05-29)",
    "EndTime": "0001-01-01T00:00:00",
    "BeginTime": "2025-06-12T10:02:04.5741176Z",
    "Duration": "00:00:00",
    "MessagesActualLength": 0,
    "WarningsActualLength": 0,
    "ErrorsActualLength": 0,
    "Messages": null,
    "Warnings": null,
    "Errors": null
  }
}
        

Can you provide a screenshot of what you see? I have seen there are some button issues, but it looks translated to me.

Thanks! We are working on them.

I suspect that this is caused by a missing message somewhere.

Thanks for the issue.

The plan is to have less need for the commandline UI.
I would prefer if we have proper flows for each of the commands, so you do not need to deal with the commandline UI.

Makes a lot of sense. I have registered an issue for this.

Generally, the repair command should be able to fix it. But for dblock files it depends on having the data still available locally, which is not always the case.

There is a new option that can be used to suppress warnings such as this:

--suppress-warnings=CompressionReadErrorFallback

It will then convert those messages to information messages.

That is an update to the way authentication is done. Initial implementation would send the token as a query string to the socket, but this could end up logging the token. It is short-lived, so it should be a minor issue, but I have updated it to use a real authentication handshake now.

The new code is much too verbose, so I have a PR that scales it down.

Not much to go on. Could you try adding --console-log-level=verbose to see if we can zoom in on where it gets stuck?

Yes! It no longer buffers everything, but streams it from disk to Jottacloud.

I have added an issue for that.

I spent quite a lot of energy on getting ngax to be robust against various failures. I will apply the same handling to ngclient.

Yes, that is quite confusing. The issue is that the verification has been completed once the repair kicks in. I guess we should remove any deleted files from the output.

This line indicates that the files are actually NOT being repaired.

This is not fixed by the repair command. It happens due to a bug that has been fixed, where the same chunks (from a blocklist) could be written to multiple .dblock files. When this happens, the database will only track a single one of them, and report all the others as “extra”.

Generally, you can ignore “extra”, as it means that there is more data in the file than what was expected.

This is a file that is most likely not needed. The database has no entries for the file, so it should be safe to delete. But because it is not an empty file (due to the size) it is not deleted.

That sounds odd. I have tested with some of the newer EC variants and they work, so something as standard as AES256 should certainly work. Would you be able to provide a certificate that does not work, so I can test with that? (obviously not a copy of actual certs).

I have created an issue to track it here.

If you can either attach to the issue or PM it to me, I will follow up.

Generally, this warning means that Duplicati gets an EPERM (permission denied) error when trying to read the attributes from the file or directory. Duplicati will instead just store an empty set of metadata.

I can see that this case is not correctly re-classified as an access issue. I have a PR for that.

Yes, macOS will create these files as needed. They are similar to Thumbs.db files on Windows.

There has been no change that I am aware of that should change how attributes are read between the two versions. From what I can see in the code, the previous logic would also log this exact problem with a warning message.

My best guess is that “something” was filtering these errors before, but the update to classify the error messages is now letting them through.

If you want to just ignore the messages, you can add:

--suppress-warnings=MetadataProcessFailed

I don’t fully follow the issue here. The --replace-faulty-index-files option only affects index files.

If you set --full-remote-verification=indexonly it will not test any .dlist files. If you use ListAndIndexes, it may check multiple files, but it will only fix index files.

I think this is “as designed”. At the end of the backup it will verify the backup, and if it discovers problematic index files, it will replace them. But because it does not verify everything on each run, you will get some files fixed on each run.

Sounds odd (assuming you have more than 2).
What about --backup-test-samples=100?

Do you mean that the errors are detected but not fixed?

I don’t think so. I think the problem is that macOS somehow prevents access to the attributes.

If you can run with .118 and have verbose logging enabled, maybe you can see messages with the type FailedAttributeRead? If so, I think the problem is that .118 would log these as verbose errors, and for some reason .119 treats them as warnings.

The mention under ngclient is because there was a section in the backends, specifically for the http options.

The idea was to make it easier to do system-wide configurations, but it caused some confusion, because you could apply --http-operation-timeout to all backends, including FTP and SSH, which just ignored the option.

Instead of this, the code is now using a common-options module that has a few options that are shared across modules, such as --read-write-timeout. The intention is that you can supply these in the general settings as advanced options, and they apply to all backups.

This is still a bit weird for cases like --accept-specified-ssl-hash which only applies to HTTP-based backends. The backends are now updated so they expose all options they support, so you can look at the advanced options in the UI and know that the option has an effect on that backend. I sadly botched the --oauth-url by not following this standard, so that will be added in the next canary.

It should not have any impact on existing setups. It is just a move of where the options are defined, making it clearer going forward. The options --http-operation-timeout and --allowed-ssl-versions were only working for backends using the deprecated WebClient which we have been removing. They most likely have not had an effect for a while.

For --http-operation-timeout, the replacement is the new --list-timeout, --read-write-timeout and --short-timeout options, which now work on all backends, including the non-HTTP ones.

The --allowed-ssl-versions is supported for FTP, but not for any HTTP-based backends, as it instead uses the OS to determine what to support.

Are the errors “extra” or “missing”? The number of “missing” should certainly go down.

I have created an issue. I will investigate.

That is a bug. I have created an issue for it. I think the other error is similar enough, that I would guess it is the same.


I’ve been seeing this general lack of translation for a while on Windows 10 with Edge.
I haven’t tested other combinations. ngax is translated on all the left-bar screens present.

ngclient was, I thought, totally untranslated, but yesterday I found a tiny amount of translation,
along with my further-down-screen jobs having their contents vanish, e.g. for French:

Going back to settings, here is the last one. FWIW, browser hard refresh doesn’t help.

I went to try Firefox, then remembered another oddity which is language starts empty:

(screenshot: the language selector starts empty)

The dropdown is available, so I can change it to Dutch. The screen updates, but is still English.
I was watching browser local storage in Edge, and noticed an immediate change there.
I’m not sure if it’s better or worse, but ngax switches to translations on the Settings screen OK.

Maybe the impact is a sudden warning (which did happen) for an apparently ignored option?

The migration information provided right here is another thing that should be publicized, because it will help avoid confused users and support requests. 2.1 had the same issue of pretending that little needed to be said, and the result (still happening) is as I state above.

Missing. Here is a sample from the dindex previously mentioned (I just tested again):

duplicati-i63c3448043e54cf686629a88285d85e7.dindex.zip.aes: 849 errors
	Missing: +0EjhgA6iYRosesHA26Jb1T5YFeHC0Bmf24f8fm4EaM=
	Missing: +2pAOihKec/irvcx7+3eUZz7iL4gqIQxBFcZr5uBs9s=

Could the broken reupload (issue link in your post), which loses all DeletedBlock blocks, be what raises it?

EDIT 1:

The predecessor dindex of the 849-error one began like this:

duplicati-ibae40f55c39f4bf39688dd5a84431fce.dindex.zip.aes: 21 errors
	Missing: /Xak0StzhVfhfNS+/D+CHUOOlREjv0DZW1CGB2NME1w=
	Missing: 1eihuW+X1B2RUjMMPBnOnFAbWRnOm3Sm9+C2pyAsT7E=

Those are both in the BlocklistHash table and might be related to the missing list folder.
These blocks are no longer missing after the repair, but the list of missing DeletedBlock blocks is larger.

On the Home screen some things are translated.

Thank you very much for your answers.

Please, in which filter category should I include the expression:
--suppress-warnings=MetadataProcessFailed

Thanks, that’s a relief and helps a lot

This was all that I got:

The operation PurgeBrokenFiles has started
Setting custom SQLite option 'cache_size=-251904'.
Found 11 broken filesets with 934 affected files, purging files
Purging 105 file(s) from fileset 28/04/2025 05:00:07
Starting purge operation
Processing filelist volume 1 of 1

It spends a minute or two heavily accessing the job database file, then that stops completely, then there is no file activity for long periods, then just a bunch of DLLs that come and go. But nothing else.

Update: I had a thought as I wrote this and checked the memory usage of the server. The standby cache was completely full, so I used Sysinternals RAMMap64.exe to free it; immediately the activity on the database came back, but with very low reads/writes and memory not really changing:

Still, nothing further got logged to the console.

Ignore the parts about memory: when I re-ran it with ExplicitOnly logging, it never changed. But the logging shows better where it’s stuck:

The operation PurgeBrokenFiles has started
Starting - Running PurgeBrokenFiles
Setting custom SQLite option 'cache_size=-251904'.
Starting - ExecuteScalarInt64: INSERT INTO "Operation" ("Description", "Timestamp") VALUES (@Description, @Timestamp); SELECT last_insert_rowid();
ExecuteScalarInt64: INSERT INTO "Operation" ("Description", "Timestamp") VALUES (@Description, @Timestamp); SELECT last_insert_rowid(); took 0:00:00:00.046
Starting - ExecuteReader: SELECT "Key", "Value" FROM "Configuration"
ExecuteReader: SELECT "Key", "Value" FROM "Configuration"  took 0:00:00:00.000
Starting - ExecuteReader: SELECT "Key", "Value" FROM "Configuration"
ExecuteReader: SELECT "Key", "Value" FROM "Configuration"  took 0:00:00:00.000
Starting - ExecuteReader: SELECT "Key", "Value" FROM "Configuration"
ExecuteReader: SELECT "Key", "Value" FROM "Configuration"  took 0:00:00:00.000
Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "Block" WHERE "Size" > @Size
ExecuteScalarInt64: SELECT COUNT(*) FROM "Block" WHERE "Size" > @Size took 0:00:00:00.000
Starting - ExecuteReader: SELECT "ID", "Timestamp" FROM "Fileset" ORDER BY "Timestamp" DESC
ExecuteReader: SELECT "ID", "Timestamp" FROM "Fileset" ORDER BY "Timestamp" DESC took 0:00:00:00.000
Found 11 broken filesets with 934 affected files, purging files
Starting - ExecuteReader: SELECT "ID", "Timestamp" FROM "Fileset" ORDER BY "Timestamp" DESC
ExecuteReader: SELECT "ID", "Timestamp" FROM "Fileset" ORDER BY "Timestamp" DESC took 0:00:00:00.000
Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "FilesetEntry" WHERE "FilesetID" = @FilesetId
ExecuteScalarInt64: SELECT COUNT(*) FROM "FilesetEntry" WHERE "FilesetID" = @FilesetId took 0:00:00:00.012
Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "FilesetEntry" WHERE "FilesetID" = @FilesetId
ExecuteScalarInt64: SELECT COUNT(*) FROM "FilesetEntry" WHERE "FilesetID" = @FilesetId took 0:00:00:00.009
Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "FilesetEntry" WHERE "FilesetID" = @FilesetId
ExecuteScalarInt64: SELECT COUNT(*) FROM "FilesetEntry" WHERE "FilesetID" = @FilesetId took 0:00:00:00.009
Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "FilesetEntry" WHERE "FilesetID" = @FilesetId
ExecuteScalarInt64: SELECT COUNT(*) FROM "FilesetEntry" WHERE "FilesetID" = @FilesetId took 0:00:00:00.011
Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "FilesetEntry" WHERE "FilesetID" = @FilesetId
ExecuteScalarInt64: SELECT COUNT(*) FROM "FilesetEntry" WHERE "FilesetID" = @FilesetId took 0:00:00:00.011
Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "FilesetEntry" WHERE "FilesetID" = @FilesetId
ExecuteScalarInt64: SELECT COUNT(*) FROM "FilesetEntry" WHERE "FilesetID" = @FilesetId took 0:00:00:00.012
Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "FilesetEntry" WHERE "FilesetID" = @FilesetId
ExecuteScalarInt64: SELECT COUNT(*) FROM "FilesetEntry" WHERE "FilesetID" = @FilesetId took 0:00:00:00.010
Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "FilesetEntry" WHERE "FilesetID" = @FilesetId
ExecuteScalarInt64: SELECT COUNT(*) FROM "FilesetEntry" WHERE "FilesetID" = @FilesetId took 0:00:00:00.010
Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "FilesetEntry" WHERE "FilesetID" = @FilesetId
ExecuteScalarInt64: SELECT COUNT(*) FROM "FilesetEntry" WHERE "FilesetID" = @FilesetId took 0:00:00:00.013
Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "FilesetEntry" WHERE "FilesetID" = @FilesetId
ExecuteScalarInt64: SELECT COUNT(*) FROM "FilesetEntry" WHERE "FilesetID" = @FilesetId took 0:00:00:00.011
Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "FilesetEntry" WHERE "FilesetID" = @FilesetId
ExecuteScalarInt64: SELECT COUNT(*) FROM "FilesetEntry" WHERE "FilesetID" = @FilesetId took 0:00:00:00.018
Starting - ExecuteScalarInt64: SELECT "ID" FROM "Blockset" WHERE "FullHash" = @EmptyHash AND "Length" == @EmptyHashSize AND "ID" NOT IN (SELECT "BlocksetID" FROM "BlocksetEntry", "Block" WHERE "BlocksetEntry"."BlockID" = "Block"."ID" AND "Block"."VolumeID" NOT IN ())
ExecuteScalarInt64: SELECT "ID" FROM "Blockset" WHERE "FullHash" = @EmptyHash AND "Length" == @EmptyHashSize AND "ID" NOT IN (SELECT "BlocksetID" FROM "BlocksetEntry", "Block" WHERE "BlocksetEntry"."BlockID" = "Block"."ID" AND "Block"."VolumeID" NOT IN ()) took 0:00:00:00.000
Starting - ExecuteScalarInt64: SELECT "ID" FROM "Blockset" WHERE "Length" == @EmptyHashSize AND "ID" NOT IN (SELECT "BlocksetID" FROM "BlocksetEntry", "Block" WHERE "BlocksetEntry"."BlockID" = "Block"."ID" AND "Block"."VolumeID" NOT IN ())
ExecuteScalarInt64: SELECT "ID" FROM "Blockset" WHERE "Length" == @EmptyHashSize AND "ID" NOT IN (SELECT "BlocksetID" FROM "BlocksetEntry", "Block" WHERE "BlocksetEntry"."BlockID" = "Block"."ID" AND "Block"."VolumeID" NOT IN ()) took 0:00:00:02.313
Starting - ExecuteReader: SELECT "ID", "Timestamp" FROM "Fileset" ORDER BY "Timestamp" DESC
ExecuteReader: SELECT "ID", "Timestamp" FROM "Fileset" ORDER BY "Timestamp" DESC took 0:00:00:00.000
Purging 105 file(s) from fileset 28/04/2025 05:00:07
Starting - ExecuteReader: SELECT "ID", "Timestamp" FROM "Fileset" ORDER BY "Timestamp" DESC
ExecuteReader: SELECT "ID", "Timestamp" FROM "Fileset" ORDER BY "Timestamp" DESC took 0:00:00:00.000
Starting purge operation
Starting - ExecuteReader: SELECT "Key", "Value" FROM "Configuration"
ExecuteReader: SELECT "Key", "Value" FROM "Configuration"  took 0:00:00:00.000
Starting - ExecuteReader: SELECT "Key", "Value" FROM "Configuration"
ExecuteReader: SELECT "Key", "Value" FROM "Configuration"  took 0:00:00:00.000
Starting - ExecuteReader: SELECT "ID", "Timestamp" FROM "Fileset" ORDER BY "Timestamp" DESC
ExecuteReader: SELECT "ID", "Timestamp" FROM "Fileset" ORDER BY "Timestamp" DESC took 0:00:00:00.000
Starting - ExecuteReader: SELECT "ID" FROM "Fileset"  WHERE  "ID" IN (@Fileset10) ORDER BY "Timestamp" DESC
ExecuteReader: SELECT "ID" FROM "Fileset"  WHERE  "ID" IN (@Fileset10) ORDER BY "Timestamp" DESC took 0:00:00:00.000
Starting - ExecuteReader: SELECT COUNT(*) FROM "FileLookup" WHERE "ID" NOT IN (SELECT DISTINCT "FileID" FROM "FilesetEntry")
ExecuteReader: SELECT COUNT(*) FROM "FileLookup" WHERE "ID" NOT IN (SELECT DISTINCT "FileID" FROM "FilesetEntry") took 0:00:00:00.551
Starting - ExecuteReader: SELECT "Key", "Value" FROM "Configuration"
ExecuteReader: SELECT "Key", "Value" FROM "Configuration"  took 0:00:00:00.000
Starting - ExecuteReader: SELECT "Key", "Value" FROM "Configuration"
ExecuteReader: SELECT "Key", "Value" FROM "Configuration"  took 0:00:00:00.000
Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "Block" WHERE "Size" > @Size
ExecuteScalarInt64: SELECT COUNT(*) FROM "Block" WHERE "Size" > @Size took 0:00:00:00.000
Starting - ExecuteReader: SELECT "ID", "Timestamp" FROM "Fileset" ORDER BY "Timestamp" DESC
ExecuteReader: SELECT "ID", "Timestamp" FROM "Fileset" ORDER BY "Timestamp" DESC took 0:00:00:00.000
Processing filelist volume 1 of 1
Starting - ExecuteScalarInt64: SELECT "ID" FROM "Remotevolume" WHERE "Name" = @Name
ExecuteScalarInt64: SELECT "ID" FROM "Remotevolume" WHERE "Name" = @Name took 0:00:00:00.000
Starting - ExecuteScalarInt64: SELECT "ID" FROM "Remotevolume" WHERE "Name" = @Name
ExecuteScalarInt64: SELECT "ID" FROM "Remotevolume" WHERE "Name" = @Name took 0:00:00:00.000
Starting - ExecuteNonQuery: CREATE TEMPORARY TABLE "TempDeletedFilesTable-CCCFAD47E18A2C4687243CFDC4C93804" ("FileID" INTEGER PRIMARY KEY)
ExecuteNonQuery: CREATE TEMPORARY TABLE "TempDeletedFilesTable-CCCFAD47E18A2C4687243CFDC4C93804" ("FileID" INTEGER PRIMARY KEY)  took 0:00:00:00.000
Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "TempDeletedFilesTable-CCCFAD47E18A2C4687243CFDC4C93804"
ExecuteScalarInt64: SELECT COUNT(*) FROM "TempDeletedFilesTable-CCCFAD47E18A2C4687243CFDC4C93804" took 0:00:00:00.000
Starting - ExecuteScalarInt64:
                        SELECT
                            SUM("C"."Length")
                        FROM
                            "TempDeletedFilesTable-CCCFAD47E18A2C4687243CFDC4C93804" A, "FileLookup" B, "Blockset" C, "Metadataset" D
                        WHERE
                            "A"."FileID" = "B"."ID"
                            AND("B"."BlocksetID" = "C"."ID" OR("B"."MetadataID" = "D"."ID" AND "D"."BlocksetID" = "C"."ID"))

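The statement it stalls on is that last SUM over the deleted files' lengths, whose WHERE clause ties four tables together through an OR across two different join paths. A minimal sketch (my reading only; the schema below is reduced to just the column names visible in the log, and the table names are simplified) of how SQLite's EXPLAIN QUERY PLAN can show why such a shape tends to be slow — the OR usually prevents index-only joins, so at least one table ends up fully scanned per outer row, which on a large database would match a stall like this:

```python
# Sketch: reproduce the shape of the stuck query on a toy schema and
# ask SQLite how it plans to execute it. Table/column names are taken
# from the log; everything else is a simplification, not Duplicati code.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE TempDeletedFiles ("FileID" INTEGER PRIMARY KEY);
CREATE TABLE FileLookup ("ID" INTEGER PRIMARY KEY,
                         "BlocksetID" INTEGER, "MetadataID" INTEGER);
CREATE TABLE Blockset ("ID" INTEGER PRIMARY KEY, "Length" INTEGER);
CREATE TABLE Metadataset ("ID" INTEGER PRIMARY KEY, "BlocksetID" INTEGER);
""")

query = """
SELECT SUM("C"."Length")
FROM TempDeletedFiles A, FileLookup B, Blockset C, Metadataset D
WHERE "A"."FileID" = "B"."ID"
  AND ("B"."BlocksetID" = "C"."ID"
       OR ("B"."MetadataID" = "D"."ID" AND "D"."BlocksetID" = "C"."ID"))
"""

# Each plan row's last column is a human-readable step such as
# "SCAN A" or "SEARCH B USING ...". A "SCAN" inside the join nest
# means a full pass over that table for every outer row.
plan = con.execute("EXPLAIN QUERY PLAN " + query).fetchall()
for row in plan:
    print(row[-1])
```

Running the real query manually with EXPLAIN QUERY PLAN against a copy of the job database would show whether this is actually where the time goes.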
Something to note: when I kill the command with Ctrl-C, I can see in the monitor that the process starts to access the database before eventually disappearing. Windows cleaning up, I suppose.

Please, where should I include it?
My question is in the previous post.
Thanks.

It’s an option, so add it on the Options screen, in Advanced options, using the dropdown.

You can add it as a main option for the whole server, in Settings, or as a per-job setting.
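For completeness, the same advanced option can also be supplied on the command line. A hedged sketch only: the executable name varies by platform (on Windows it may be Duplicati.CommandLine.exe), and the target URL and source path are placeholders, not values from this thread:

```shell
# Hypothetical invocation; <target-url> and <source-path> are placeholders.
duplicati-cli backup <target-url> <source-path> \
    --suppress-warnings=MetadataProcessFailed
```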


Thanks @ts678.
I can’t include it here. What should I do?

Thanks @Taomyn.
I managed to include it here and it worked!

Why not? For me, I can either use the Search box, or laboriously scroll and look for it.
I’m not sure if dropping the alphabetization ngax had was intentional, but ngax did have it, while lacking search.
I’m considering asking for it back, but nobody else seems to have complained - so far.

I scroll with my mouse wheel, however there’s also a scroll bar to use on the right side.

@ts678
There must be a bug here:
In Chrome, Firefox, Safari:
For me the dropdown list box does not have the scrolling option.
What version are you using?
oldUI or new UI?

If you have Safari, that means macOS? I’m on 2.1.0.119 on Windows 10.
Initially on Edge which “should” be similar to Chrome in terms of drawing.
Firefox draws the scroll bar differently. Here’s the one from Settings page:

This has all been in new UI. One can sort of tell from the way pages look.

How did you manage to get it set up from Settings page but not job page?
The dropdown presentations look basically identical for me in both places.

In the oldUI:

In the new UI:

@ts678

In the oldUI:

in the new UI:

Both without scroll.

And yet you show it included. Did you wind up adding it without scrolling, from ngax, or some other way? I was asking about your missing scrollbar anyway, and didn’t get an answer on that.

I did do a quick Google search that says scrollbar appearance can be controlled by the site, so Duplicati might be able to cause trouble if it does that wrong. I can’t reproduce the problem, though.
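For what it’s worth, the kind of site styling that hides a scrollbar looks like this. A hypothetical fragment only: the selector is made up, and I have not found anything like it in Duplicati’s actual stylesheets; it just illustrates the mechanism.

```css
/* Hides scrollbars entirely; if a rule like this applied to the
   options dropdown, the bar would vanish while scrolling kept working. */
.options-dropdown {            /* hypothetical selector */
  scrollbar-width: none;       /* Firefox */
}
.options-dropdown::-webkit-scrollbar {
  display: none;               /* Chrome, Safari, Edge */
}
```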

If it’s only on macOS, the dev can likely test.

Thanks for the follow-up post on scroll bars.

Edge, old UI, Options:

Edge, old UI, Settings:

Original question remains:

I think you showed Options screens without scrolling. Does Settings have scroll bar?

Yes.


You’re not showing scrollbar on the dropdown. That’s the one you lack on Options screen.