Release: 2.0.9.107 (Canary) 2024-09-11

This new log confirms that the DataErrorException you had before in compact is persistent:

  "Errors": [
    "2024-09-30 13:23:32 +02 - [Error-Duplicati.Library.Main.Operation.BackupHandler-FatalError]: Fatal error\r\nDataErrorException: Data Error",
    "2024-09-30 13:23:32 +02 - [Error-Duplicati.Library.Main.Operation.BackupHandler-RollbackError]: Rollback error: Cannot access a disposed object.\r\nObject name: 'SQLiteConnection'.\r\nObjectDisposedException: Cannot access a disposed object.\r\nObject name: 'SQLiteConnection'.",
    "2024-09-30 13:23:32 +02 - [Error-Duplicati.Library.Main.Controller-FailedOperation]: The operation Backup has failed with error: Data Error\r\nDataErrorException: Data Error"
  ],

The compact information is not available this time. I was hoping for more info rather than less.

One interesting thing is the Warnings. Are you using VSS shadow copy (snapshot-policy), or is

"2024-09-30 09:57:33 +02 - [Warning-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-FileAccessError]: Error reported while accessing file: C:\Users\administrator.MYDOMAIN\Desktop\duplicati-b0d7db3c279cf46c6a1bc4042bf347e0b.dblock\Xh_6jfv4J415W33JnRUHyj04MKaKlS0vX5Wt4EJAME0=\r\nIOException: A device which does not exist was specified. : '\\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy13\\Users\administrator.MYDOMAIN\Desktop\duplicati-b0d7db3c279cf46c6a1bc4042bf347e0b.dblock\Xh_6jfv4J415W33JnRUHyj04MKaKlS0vX5Wt4EJAME0='",
"2024-09-30 09:57:33 +02 - [Warning-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-FileAccessError]: Error reported while accessing file: C:\Users\administrator.MYDOMAIN\Desktop\duplicati-b0d7db3c279cf46c6a1bc4042bf347e0b.dblock\xi0RJGCWWeeqjb8ofJaaEBju1NUUMlBMvQ38n63IiRM=\r\nIOException: A device which does not exist was specified. : '\\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy13\\Users\administrator.MYDOMAIN\Desktop\duplicati-b0d7db3c279cf46c6a1bc4042bf347e0b.dblock\xi0RJGCWWeeqjb8ofJaaEBju1NUUMlBMvQ38n63IiRM='"

(and more, looking kind of like the block contents of a single dblock file) actually on a desktop?

Yes, snapshot-policy is set to "on", and those files are on my desktop - they're the copies of the "bad" dblock files I downloaded from Wasabi: the original file, plus the .zip and the extracted files in a sub-folder:

A Google search for the below finds only this topic, so I'm not sure what's happening.

"Duplicati" "A device which does not exist was specified" "GLOBALROOT Device"

The log says there were 348 warnings. Not all get kept, but all that were kept showed a similar error.

This is another place where an actual log-file would help to understand the failure.
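
Once such a log-file exists, its lines share one shape: a timestamp, a [Level-Component-Id] tag, and then the message. Here is a minimal sketch of pulling those fields apart for filtering; the regex is inferred from the log excerpts in this thread, not taken from Duplicati's source, so treat it as an assumption:

```python
import re

# Pattern assumed from log lines like:
#   2024-10-04 15:32:54 +02 - [Error-Duplicati...FailedOperation]: message
LOG_LINE = re.compile(
    r"^(?P<ts>[\d\- :+]+) - \[(?P<level>[^-\]]+)-(?P<tag>[^\]]+)\]: (?P<msg>.*)$"
)

def parse_line(line):
    """Split a Duplicati-style log line into timestamp, level, tag, message."""
    m = LOG_LINE.match(line)
    return m.groupdict() if m else None

sample = ("2024-10-04 15:32:54 +02 - "
          "[Error-Duplicati.Library.Main.Controller-FailedOperation]: "
          "The operation Compact has failed with error: Data Error")
rec = parse_line(sample)
print(rec["level"], "/", rec["tag"])
```

This makes it easy to, say, keep only the Error and Warning lines out of a long retry-level file.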

You could also try a small test backup of the known-complaining files, to try to repeat it.

As the second run was successful, maybe it's already known that the issue is intermittent.

Maybe a developer will have a better guess at how this error might be happening?

EDIT:

Are the files that were extracted from the dblock files still around? If deleted, when?

A VSS snapshot is intended to freeze the view of the filesystem, but it has its limits.

DOS device paths describes the \\?\ prefix that is used to avoid a Windows oddity.

Maybe some combination of deleting the files and VSS not working made the error?

I'm not sure how much of the full path Windows considers to be the "device" portion.

Regardless, more testing would show if thereā€™s some kind of persistent problem here.

No, they are still on my desktop, as shown.

I'm going to try with the newly released .108 and see what happens when I try to back up again, but with the normal job settings.

I suppose another option would be to see what 2.0.8.1 does. Was this known to work there? Apologies if you said so somewhere, but we're trying to let release topics focus on new issues.

Yes, it worked fine on 2.0.8.1. Then I tried upgrading a few times to 2.0.9.x and, after some stalls with the installer (not with actual backups, if I recall), once it was installed it worked fine for weeks, until I reported it here.

The first Wasabi backup since I installed 2.0.9.108 earlier is still running, so I will report on that once it's complete. I decided to keep the same options as the last test, with "retry" logging to a file and no auto-compact. If that works, I will try a manual compact.

Hitting another lack of progress bar updates on Verify files (after a backup failed on a cloud error, because I have mine set to fail rather than retry – basically inviting the failures, to see what happens).

image

and that's it, unless I peek at the log-file on disk to see it's running, or see the job log afterwards. Duplicating the tab for a repeat works and polls for progressstate. Tabs seem to fail eventually, because one that had been open used to work on this test and then stopped, so I duplicated it to test.

I'm not seeing output in the developer tools console or in the terminal of the TrayIcon to add clues.

EDIT:

Compact now has the same behavior, except for not making a job log (which might be normal though).

image

The backup with auto-compact off worked fine, but a manual compact just fails:

{
  "DeletedFileCount": 0,
  "DownloadedFileCount": 0,
  "UploadedFileCount": 0,
  "DeletedFileSize": 0,
  "DownloadedFileSize": 0,
  "UploadedFileSize": 0,
  "Dryrun": false,
  "VacuumResults": null,
  "MainOperation": "Compact",
  "ParsedResult": "Fatal",
  "Interrupted": false,
  "Version": "2.0.9.108 (2.0.9.108_canary_2024-10-03)",
  "EndTime": "2024-10-04T13:32:54.2559964Z",
  "BeginTime": "2024-10-04T13:29:21.3134095Z",
  "Duration": "00:03:32.9425869",
  "MessagesActualLength": 416,
  "WarningsActualLength": 0,
  "ErrorsActualLength": 1,
  "Messages": [
    "2024-10-04 15:29:21 +02 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Compact has started",
    "2024-10-04 15:29:46 +02 - [Information-Duplicati.Library.Main.Database.LocalDeleteDatabase-CompactReason]: Compacting because there are 99 fully deletable volume(s)",
    "2024-10-04 15:29:46 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started:  ()",
    "2024-10-04 15:29:48 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Completed:  (26.94 KB)",
    "2024-10-04 15:29:59 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Delete - Started: duplicati-b000b757393b146c594e93bc5319882f1.dblock.zip.aes (49.98 MB)",
    "2024-10-04 15:29:59 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Delete - Completed: duplicati-b000b757393b146c594e93bc5319882f1.dblock.zip.aes (49.98 MB)",
    "2024-10-04 15:29:59 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Delete - Started: duplicati-i930f7c7345584219bcda0ede2d2a6068.dindex.zip.aes (18.54 KB)",
    "2024-10-04 15:29:59 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Delete - Completed: duplicati-i930f7c7345584219bcda0ede2d2a6068.dindex.zip.aes (18.54 KB)",
    "2024-10-04 15:29:59 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Delete - Started: duplicati-b04b636e1ecf14c449f39c0863413844b.dblock.zip.aes (49.95 MB)",
    "2024-10-04 15:29:59 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Delete - Completed: duplicati-b04b636e1ecf14c449f39c0863413844b.dblock.zip.aes (49.95 MB)",
    "2024-10-04 15:29:59 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Delete - Started: duplicati-ie060342596694e6caf962c2f41d508bf.dindex.zip.aes (18.39 KB)",
    "2024-10-04 15:29:59 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Delete - Completed: duplicati-ie060342596694e6caf962c2f41d508bf.dindex.zip.aes (18.39 KB)",
    "2024-10-04 15:29:59 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Delete - Started: duplicati-b06d665d4f4aa4c7f926b3201774c21fe.dblock.zip.aes (49.96 MB)",
    "2024-10-04 15:29:59 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Delete - Completed: duplicati-b06d665d4f4aa4c7f926b3201774c21fe.dblock.zip.aes (49.96 MB)",
    "2024-10-04 15:29:59 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Delete - Started: duplicati-ibf658d036cda45d6a6371a5fa2944f38.dindex.zip.aes (18.36 KB)",
    "2024-10-04 15:29:59 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Delete - Completed: duplicati-ibf658d036cda45d6a6371a5fa2944f38.dindex.zip.aes (18.36 KB)",
    "2024-10-04 15:29:59 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Delete - Started: duplicati-b0a45230a148a41a89da64a2dcb1ab302.dblock.zip.aes (49.93 MB)",
    "2024-10-04 15:29:59 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Delete - Completed: duplicati-b0a45230a148a41a89da64a2dcb1ab302.dblock.zip.aes (49.93 MB)",
    "2024-10-04 15:29:59 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Delete - Started: duplicati-ibc3e510f43e44efabf830706cc602e64.dindex.zip.aes (18.36 KB)",
    "2024-10-04 15:29:59 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Delete - Completed: duplicati-ibc3e510f43e44efabf830706cc602e64.dindex.zip.aes (18.36 KB)"
  ],
  "Warnings": [],
  "Errors": [
    "2024-10-04 15:32:54 +02 - [Error-Duplicati.Library.Main.Controller-FailedOperation]: The operation Compact has failed with error: Data Error\r\nDataErrorException: Data Error"
  ],
  "BackendStatistics": {
    "RemoteCalls": 207,
    "BytesUploaded": 52523930,
    "BytesDownloaded": 157017447,
    "FilesUploaded": 2,
    "FilesDownloaded": 3,
    "FilesDeleted": 201,
    "FoldersCreated": 0,
    "RetryAttempts": 0,
    "UnknownFileSize": 0,
    "UnknownFileCount": 0,
    "KnownFileCount": 27577,
    "KnownFileSize": 719766933845,
    "LastBackupDate": "2024-10-04T11:34:07+02:00",
    "BackupListCount": 16,
    "TotalQuotaSpace": 0,
    "FreeQuotaSpace": 0,
    "AssignedQuotaSpace": -1,
    "ReportedQuotaError": false,
    "ReportedQuotaWarning": false,
    "MainOperation": "Compact",
    "ParsedResult": "Success",
    "Interrupted": false,
    "Version": "2.0.9.108 (2.0.9.108_canary_2024-10-03)",
    "EndTime": "0001-01-01T00:00:00",
    "BeginTime": "2024-10-04T13:29:21.3134166Z",
    "Duration": "00:00:00",
    "MessagesActualLength": 0,
    "WarningsActualLength": 0,
    "ErrorsActualLength": 0,
    "Messages": null,
    "Warnings": null,
    "Errors": null
  }
}     

This was the desired test, but it needs the below:

Ah, but that means using the worst part of the GUI, the command-line section - I'll do it soon, when I get home…

Not so. These are in the Advanced options section of the Options screen.

This is not a new problem in 2.0.9.107 (though it is new since 2.0.8.1), but here's some more info:

image

was on one http://localhost:8200/ngax/index.html tab before, but spread to 3 out of 4.

image

image

image

I have not used the Log out button, but the PC has been sleeping overnight. Local time GMT-4.

Nice catch! That should be easy to fix.

The issue is registered here: SQL logic error during database Repair · Issue #5489 · duplicati/duplicati · GitHub
Fix is here: Fixed an incorrect string quote by kenkendk · Pull Request #5583 · duplicati/duplicati · GitHub

It looks like the error message is attempting to include some information that cannot be serialized.
I have a fix here: Force serialize exception data before emitting JSON by kenkendk · Pull Request #5584 · duplicati/duplicati · GitHub

I think this may have been answered already, but this indicates that you are indeed running an old version of Duplicati. There is a max-db-version inside each binary and it matches the largest known database version. When it starts, it upgrades the database to that version, and a downgrade would then trigger this message.

I cannot fully follow how you got into this state, but at least be aware that you are running an old Duplicati, and at some point you have been running a newer one.
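
For anyone wanting to check what version their local database is actually at, the value lives in the database itself. A hedged sketch, assuming the `Version` table layout Duplicati's local SQLite databases use; the demo runs against a throwaway database built to match that assumption:

```python
import os
import sqlite3
import tempfile

def read_db_version(db_path):
    """Return the highest schema version recorded in the local database.
    Assumes Duplicati's 'Version' table layout; a sketch, not an official API."""
    con = sqlite3.connect(db_path)
    try:
        (version,) = con.execute("SELECT MAX(Version) FROM Version").fetchone()
        return version
    finally:
        con.close()

# Demo on a throwaway database mimicking that layout:
fd, path = tempfile.mkstemp(suffix=".sqlite")
os.close(fd)
con = sqlite3.connect(path)
con.execute("CREATE TABLE Version (ID INTEGER PRIMARY KEY, Version INTEGER)")
con.execute("INSERT INTO Version (Version) VALUES (12)")
con.commit()
con.close()
print(read_db_version(path))  # 12
os.remove(path)
```

If the number is higher than the max-db-version compiled into your binary, that binary is older than whatever last touched the database.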

That sounds like you have managed to connect with an older version, which (on Windows) would use RC4 encryption on the database. This feature has since been removed from SQLite, so canary builds use field-level encryption, as @ts678 mentions.

Ok, this sounds like a bug with Avalonia. Maybe I can try different ways of stopping/restarting the tray icon if we cannot figure out the real reason. I think something similar has been reported for Ubuntu.

Maybe it makes more sense to store the first + last? At least in this case it is not valuable to see the first ones.

I think the error message about the rollback can (mostly) be ignored, the data error is more worrying. It is a safeguard that tries to undo any pending database updates in case of an error. But the message you see is that the transaction has already been disposed (either committed or rolled back).

I am tracking it here: Backup can leave a transaction set after dispose · Issue #5585 · duplicati/duplicati · GitHub

I think LZMA is technically a better algorithm, but it is far less tested than Deflate. It does look like a genuine bug within the archive, because the decompressor should not run unless the hash matches. If the hash matches, it means the original upload and the current download are the same. However, since you did a repair (was it for this backup?), the hash values would have been lost. But since the decryption works, it is not possible for it to be corrupted by chance.
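
The "decompressor should not run unless the hash matches" check is simple to illustrate: Duplicati records a hash per remote volume (a base64-encoded SHA-256, to my understanding) and compares it against the downloaded bytes before decryption and decompression. A sketch of that comparison, as an assumption about the mechanism rather than Duplicati's actual code:

```python
import base64
import hashlib

def matches_recorded_hash(file_bytes, recorded_hash_b64):
    """Compare downloaded bytes against the recorded base64 SHA-256.
    Only if this passes would the volume be handed on for decrypt/decompress."""
    digest = hashlib.sha256(file_bytes).digest()
    return base64.b64encode(digest).decode("ascii") == recorded_hash_b64

data = b"example dblock contents"
recorded = base64.b64encode(hashlib.sha256(data).digest()).decode("ascii")
print(matches_recorded_hash(data, recorded))         # True
print(matches_recorded_hash(data + b"x", recorded))  # False
```

Which is why a download that passes this check yet fails to decompress points at the archive contents (or the decompressor), not at transfer corruption.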

Maybe a good idea would be to record some of this metadata in the dlist of dindex files as well.

It sounds like the problem appeared but then did not recur? If that is the case, we can hope that the error is in SharpCompress LZMA and is some kind of race condition on reading, instead of a faulty archive having been written.

I have traced this down to a conflict between two error handlers that both try to report the error. It happens if the hostname is rejected. You can either set --webservice-allowed-hostnames or use the IP address.

It is supposed to work. The token is not tied to the hostname, but the hostname needs to be allowed for it to work. You can try with 127.0.0.1 and that should work if localhost works.

Great find! I have a PR ready for that: Ensure that we report 404 when a file is not found by kenkendk · Pull Request #5587 · duplicati/duplicati · GitHub

SharpCompress was not updated, and I suspect that the error is in that library.

I think SharpCompress was updated after 2.0.8.1 (most libraries were). But if you can extract with non-SharpCompress, it is hopefully fixable in the decompressor code.

Can you create an issue with reproducible steps? Is it just any failure, and then the progress bar is stuck because it is waiting for "complete" but it never happens?

I agree! It was a way to get all the super advanced features in there without making a real UI. Not user friendly at all!

To reproduce this, would I need to be logged in, then "sleep 12h" and retry?

Repros for flakies in general haven't been readily findable; otherwise they would be given.
It's also possible that the 3 A.M. wake that Windows does to check for updates is part of it.

In general, it seems like having more tabs open in Duplicati makes it more prone to odd behavior.
When I was testing with just one, the GUI was less flaky. Recently I've had more than normal.

You can see above where I was leaving a trail of broken ones. The next bunch was due to upgrading to 2.0.9.108. TrayIcon doesn't seem to emit to stdout any URL for the initial connect, therefore I open yet more tabs through TrayIcon requests. Anyway, here's one specific test:

Make a backup of a short file. Duplicate the tab to make 8 total. Reload the tabs if you like.
Run Compact now maybe 10 times (no need to be super fast). Maybe a status bar will stop responding to Compact now, or will go into a progressstate loop even though all is done.

For a more vivid visual display, do the above with a Destination folder without write access.
Survey the tabs later to see which ones gave a red popup. No access at all gives a similar result.

In either case, wander around the tabs to look for status bars that are stuck or don't update.

EDIT 1:

The browser is Edge on Windows 10, but I don't know if it matters.

EDIT 2:

Refreshed all eight tabs, had the PC sleep (except for the Windows wake at around 3 local, 7 UTC).
Woke this morning with Connection lost popups on all eight, though the dev tools log varied.
FWIW, there's a 15 minute sleep on wake set, to delay a job that would otherwise raise load.

Six wound up like this, with a very dark Duplicati screen underneath (look hard):

image

with roughly this stop:

image

Two wound up with the home page visible below, less dimmed than the usual look:

image

with roughly this stop:

image

The 11:07 UTC entries would be around the PC wake time for me.

EDIT 3:

Since one can see the jobs in the background of the second image,
the first one began as a no-write test, allowing read and list access.
This took more advanced ACL work, but no access got the same result.
For a faster fail, set number-of-retries=0. Job test 1 was normal.
The main oddity with it was that it didn't show up on Save – until I refreshed.

EDIT 4:

The night after my eight tab Edge test, I did a four tab Chrome test.
All failed: half with the super dark background and the longer stop path,
half with the slightly greyed background and the shorter stop path.

One or two tabs seem to survive. All this may be system dependent.
Coming out of sleep is a super busy time for a system, and it's slow.

Last night I tried four Edge tabs without a past-due job at wake time.
The typical wake is around 7, and I moved the job from 5:20 to 7:20. Result:
all failed, just like the four tab Chrome test, with a 50/50 split failure mode.

I'll say again that this is now 2.0.9.108, but the history is here in 107.

Yeah, eventually it seemed to sort itself out; I just never worked out why.

[quote="kenkendk, post:74, topic:18983, full:true"]

No, the extraction was after having manually downloaded all the broken files and deleting them from the bucket; then, later, I tested the files on my desktop.

I hope so

I just added

--log-file=C:\duplicati_LISA.log
--log-file-log-level=retry

to the main settings of Duplicati, then ran the compact, but nothing is being logged - the filename was new and that's not even being created.

This was the job log:

{
  "DeletedFileCount": 0,
  "DownloadedFileCount": 0,
  "UploadedFileCount": 0,
  "DeletedFileSize": 0,
  "DownloadedFileSize": 0,
  "UploadedFileSize": 0,
  "Dryrun": false,
  "VacuumResults": null,
  "MainOperation": "Compact",
  "ParsedResult": "Fatal",
  "Interrupted": false,
  "Version": "2.0.9.108 (2.0.9.108_canary_2024-10-03)",
  "EndTime": "2024-10-07T10:01:08.7750811Z",
  "BeginTime": "2024-10-07T09:58:34.8196652Z",
  "Duration": "00:02:33.9554159",
  "MessagesActualLength": 12,
  "WarningsActualLength": 0,
  "ErrorsActualLength": 1,
  "Messages": [
    "2024-10-07 11:58:35 +02 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Compact has started",
    "2024-10-07 12:00:25 +02 - [Information-Duplicati.Library.Main.Database.LocalDeleteDatabase-CompactReason]: Compacting because there are 1 fully deletable volume(s)",
    "2024-10-07 12:00:26 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started:  ()",
    "2024-10-07 12:00:28 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Completed:  (26.74 KB)",
    "2024-10-07 12:00:28 +02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: removing file listed as Deleting: duplicati-b0d0fee3d1f784621834ad1e22e18ec47.dblock.zip.aes",
    "2024-10-07 12:00:28 +02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: removing file listed as Deleting: duplicati-icf5a4946c92e4b3eb1c05d80a433fefa.dindex.zip.aes",
    "2024-10-07 12:00:28 +02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: removing file listed as Temporary: duplicati-b229b1417061147a9adb5c5e966c6f3d8.dblock.zip.aes",
    "2024-10-07 12:00:28 +02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: removing file listed as Temporary: duplicati-i73071f0c719349c68de5436f2e2f8197.dindex.zip.aes",
    "2024-10-07 12:00:40 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-b0d7db3c279cf46c6a1bc4042bf347e0b.dblock.zip.aes (49.98 MB)",
    "2024-10-07 12:00:41 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Completed: duplicati-b0d7db3c279cf46c6a1bc4042bf347e0b.dblock.zip.aes (49.98 MB)",
    "2024-10-07 12:00:41 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-b0eea475a30f1400790621f5b8d96fd1d.dblock.zip.aes (49.85 MB)",
    "2024-10-07 12:00:42 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Completed: duplicati-b0eea475a30f1400790621f5b8d96fd1d.dblock.zip.aes (49.85 MB)"
  ],
  "Warnings": [],
  "Errors": [
    "2024-10-07 12:01:08 +02 - [Error-Duplicati.Library.Main.Controller-FailedOperation]: The operation Compact has failed with error: Data Error\r\nDataErrorException: Data Error"
  ],
  "BackendStatistics": {
    "RemoteCalls": 3,
    "BytesUploaded": 0,
    "BytesDownloaded": 104677594,
    "FilesUploaded": 0,
    "FilesDownloaded": 2,
    "FilesDeleted": 0,
    "FoldersCreated": 0,
    "RetryAttempts": 0,
    "UnknownFileSize": 0,
    "UnknownFileCount": 0,
    "KnownFileCount": 27378,
    "KnownFileSize": 714579567242,
    "LastBackupDate": "2024-10-04T11:34:07+02:00",
    "BackupListCount": 16,
    "TotalQuotaSpace": 0,
    "FreeQuotaSpace": 0,
    "AssignedQuotaSpace": -1,
    "ReportedQuotaError": false,
    "ReportedQuotaWarning": false,
    "MainOperation": "Compact",
    "ParsedResult": "Success",
    "Interrupted": false,
    "Version": "2.0.9.108 (2.0.9.108_canary_2024-10-03)",
    "EndTime": "0001-01-01T00:00:00",
    "BeginTime": "2024-10-07T09:58:34.8196687Z",
    "Duration": "00:00:00",
    "MessagesActualLength": 0,
    "WarningsActualLength": 0,
    "ErrorsActualLength": 0,
    "Messages": null,
    "Warnings": null,
    "Errors": null
  }
}
        

We've been talking about two different extractions, so the big question is why the WinRAR one worked, whereas the SharpCompress one inside Duplicati didn't. The new theory is a SharpCompress issue. The previous theory was that the live log missed output (as it does sometimes), so get a disk log-file.

A test in Advanced options on the Options screen:

2024-10-07 07:51:45 -04 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Compact has started
2024-10-07 07:51:45 -04 - [Information-Duplicati.Library.Main.Database.LocalDeleteDatabase-CompactReason]: Compacting not required

then in Default options on the Settings screen:

2024-10-07 08:04:04 -04 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Compact has started
2024-10-07 08:04:04 -04 - [Information-Duplicati.Library.Main.Database.LocalDeleteDatabase-CompactReason]: Compacting not required

Both used the job's Compact now button, and both created a new file. I don't know why yours didn't.

Regardless, of the three suspect files from the previous test, the new job log supports the prior result:

The timing seems a little off though, but there is some sort of asynchronous download at work.

So the suspect file list has dropped to two from three, due to something odd involving this one:

What does your "deleting from the bucket" mean? Was this manually deleted, with others left?
Manual dblock deletion causes loss of data, shouldn't be done casually, and causes problems.

Ignoring this one, two of the original three are still around, and the problem is still reproducible.

The question is how to reproduce it in a way that a developer can reproduce it, or at least outside Compact. Reading dblock files is a bit rare. Compact needs to. Restore might, but you'd need either a full restore (slow, costly) or to target a suspect dblock, e.g. consult list-broken-files. A database recreate could be forced to read dblock files, by depriving it of the dindex files it prefers.
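
Depriving a recreate of the dindex files means moving them aside first, and Duplicati's remote filenames make the three volume types easy to tell apart. The patterns below are taken from the filenames in the logs above; treat the helper as a convenience sketch, not an official tool:

```python
def classify_volume(name):
    """Classify a Duplicati remote filename by the volume type embedded in it."""
    for kind in ("dblock", "dindex", "dlist"):
        if f".{kind}." in name:
            return kind
    return "other"

names = [
    "duplicati-b000b757393b146c594e93bc5319882f1.dblock.zip.aes",
    "duplicati-i930f7c7345584219bcda0ede2d2a6068.dindex.zip.aes",
]
print([classify_volume(n) for n in names])  # ['dblock', 'dindex']
```

Run over a directory listing, this gives the set of dindex files to stash elsewhere before forcing the recreate to read dblocks.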

I don't constantly follow Wasabi policies, but don't they potentially get upset by high download?

Comments are welcome on the concerns of resources, privacy, and getting a better look at this.

Both were fine for a while, which could also just be due to not compacting some bug-causing file. Potentially there are other bug-causing files around, but finding them would mean a big download.

SharpCompress has indeed been updated since 2.0.8.1, but I couldn't see announced changes creating potential compatibility problems with the old version. You can tell when your suspect dblock files were made from the file date (maybe), or from the manifest inside each dblock file, which gives the Created time and Duplicati AppVersion.
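
Once a suspect volume is decrypted (the real files here are .zip.aes, so run them through AES Crypt or SharpAESCrypt first), the manifest can be read with any ZIP library. A hedged sketch, with the entry name and JSON field names assumed from Duplicati's manifest format, demonstrated on a throwaway archive built to match:

```python
import io
import json
import zipfile

def read_manifest(zip_path_or_file):
    """Read the 'manifest' entry Duplicati stores in each volume and
    return (Created, AppVersion). Field names are an assumption here."""
    with zipfile.ZipFile(zip_path_or_file) as zf:
        manifest = json.loads(zf.read("manifest"))
    return manifest.get("Created"), manifest.get("AppVersion")

# Demo on a throwaway archive mimicking that layout:
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("manifest", json.dumps(
        {"Created": "20241004T113407Z", "AppVersion": "2.0.9.108"}))
buf.seek(0)
print(read_manifest(buf))
```

That would answer whether the bad dblocks were written by 2.0.8.1 or by one of the 2.0.9.x canaries.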

A Google search for site:forum.duplicati.com "LZMA" "Data error" "zip.aes" led to two reports in 2018.

SharpCompress.Compressors.LZMA.DataErrorException
SharpCompress.Compressors.LZMA.DataErrorException: Data Error

It looks like the SharpCompress message comes from actual file extractions, so just reading the names of the files (which is possibly all some operations do) might not be enough. Another idea:

Duplicati.CommandLine.RecoveryTool.exe

you can use the Recovery Tool to convert your backup files to another compression type.

This would force reads of the LZMA files (so maybe fail) while changing compression to Deflate.

Duplicati.CommandLine.RecoveryTool.exe recompress zip "C:\tmp\recompress.pre" "C:\tmp\recompress.post"

is what I just tried, with the .pre folder having a dblock file from an LZMA backup. I can confirm the compression method in 7-Zip. The .post file got Deflate. You can test copies of the bad dblocks.

Unfortunately, the new ZIP code in 2.0.9.108 chokes, so you might need a 2.0.9.107 .zip install.

1/1: duplicati-b0a69695168d443dfa437b9d44b6de54e.dblock.zip - downloading (773 bytes)... recompressing ... error: System.IO.InvalidDataException: The archive entry was compressed using LZMA and is not supported.
   at System.IO.Compression.ZipArchiveEntry.Open()
   at Duplicati.Library.Compression.ZipCompression.BuiltinZipArchive.OpenRead(String file)
   at Duplicati.Library.Compression.ZipCompression.FileArchiveZip.OpenRead(String file)
   at Duplicati.CommandLine.RecoveryTool.Recompress.Run(List`1 args, Dictionary`2 options, IFilter filter)

Another technique, which I haven't tried yet, is an open source GUI tool using SharpCompress.

SimpleZIP (Microsoft page, and it's also in the Microsoft Store, if for some reason you prefer that).