Problem after update to 2.1.0.2 (Beta)

Will this help?

code = Constraint (19), message = System.Data.SQLite.SQLiteException (0x800027AF): constraint failed
UNIQUE constraint failed: DuplicateBlock.BlockID, DuplicateBlock.VolumeID
at System.Data.SQLite.SQLite3.Reset(SQLiteStatement stmt)
at System.Data.SQLite.SQLite3.Step(SQLiteStatement stmt)
at System.Data.SQLite.SQLiteDataReader.NextResult()
at System.Data.SQLite.SQLiteDataReader..ctor(SQLiteCommand cmd, CommandBehavior behave)
at System.Data.SQLite.SQLiteCommand.ExecuteNonQuery(CommandBehavior behavior)
at Duplicati.Library.Main.Database.ExtensionMethods.ExecuteNonQuery(IDbCommand self, Boolean writeLog, String cmd, Object values)
at Duplicati.Library.Main.Database.LocalRecreateDatabase.CleanupMissingVolumes()
at Duplicati.Library.Main.Operation.RecreateDatabaseHandler.DoRun(LocalDatabase dbparent, Boolean updating, IFilter filter, NumberedFilterFilelistDelegate filelistfilter, BlockVolumePostProcessor blockprocessor)
at Duplicati.Library.Main.Operation.RecreateDatabaseHandler.Run(String path, IFilter filter, NumberedFilterFilelistDelegate filelistfilter, BlockVolumePostProcessor blockprocessor)
at Duplicati.Library.Main.Operation.RepairHandler.RunRepairLocal(IFilter filter)
at Duplicati.Library.Main.Operation.RepairHandler.Run(IFilter filter)
at Duplicati.Library.Main.Controller.<>c__DisplayClass21_0.b__0(RepairResults result)
at Duplicati.Library.Main.Controller.RunAction[T](T result, String& paths, IFilter& filter, Action`1 method)
at Duplicati.Library.Main.Controller.RunAction[T](T result, IFilter& filter, Action`1 method)
at Duplicati.Library.Main.Controller.Repair(IFilter filter)
at Duplicati.Server.Runner.Run(IRunnerData data, Boolean fromQueue)

Another log report I get, when I try to delete and recreate the database, is this:

Duplicati.Library.Interface.UserInformationException: The database was attempted repaired, but the repair did not complete. This database may be incomplete and the backup process cannot continue. You may delete the local database and attempt to repair it again.
at Duplicati.Library.Main.Operation.BackupHandler.PreBackupVerify(String backendurl, Options options, BackupResults result)
at Duplicati.Library.Main.Operation.BackupHandler.RunAsync(String sources, IFilter filter, CancellationToken token)
at CoCoL.ChannelExtensions.WaitForTaskOrThrow(Task task)
at Duplicati.Library.Main.Operation.BackupHandler.Run(String sources, IFilter filter, CancellationToken token)
at Duplicati.Library.Main.Controller.<>c__DisplayClass17_0.b__0(BackupResults result)
at Duplicati.Library.Main.Controller.RunAction[T](T result, String& paths, IFilter& filter, Action`1 method)
at Duplicati.Library.Main.Controller.Backup(String inputsources, IFilter filter)
at Duplicati.Server.Runner.Run(IRunnerData data, Boolean fromQueue)

How can that happen? The hash is the filename, so the same block should not be able to exist twice in the same volume. If that can happen, it is surely the cause of the problem.
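For reference, assuming the defaults (SHA-256 block hashes, stored base64-encoded; the in-archive names may use the URL-safe base64 variant), the entry name inside a dblock volume is derived from the block hash roughly like this:

import base64
import hashlib

block = b"one 102400-byte block of file data"   # a deduplication block
digest = hashlib.sha256(block).digest()
entry_name = base64.b64encode(digest).decode()  # a 44-character name like the hashes quoted later
print(entry_name)

So a name can only repeat inside one volume if the very same block is written into that volume twice.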

Do you have a recipe for how to generate this situation?

Yes, thanks! It pinpoints the exact place that needs fixing. Now I just have to figure out how to trigger the error.

That one is “expected”. If the database is not fully reconstructed, any attempts to make a backup based on it will cause faulty backups, so this is prevented.

You can safely delete such a database, as it cannot be recovered with any tools.

Once I fix the above issue, you can recreate the database again. If you need the data recovered, there is a recovery tool that is a bit slower, but does not use a database.


7-Zip:

(screenshot: 7-Zip listing of the dblock archive, showing a duplicate entry name)

A Google search for “does zip format allow duplicate files” says yes. Regardless, the evidence is above.
Unfortunately the file is from long enough ago that I think the nice profiling log that went with it has been deleted.

EDIT 1:

… because I obtained the Duplicati version that created it from the manifest. Though the logs are deleted, the database rotation from 2.0.8.108_debug_2024-05-08 still has relevant databases left over from when I went to the next version. One bad/good thing about fast releases is that many leftovers remain.

More study could be conducted, but from importing the file list into a spreadsheet and sorting by date, the files in question probably came from a compact, as they follow a synthetic filelist and a regular dlist:

{"Name":"duplicati-b4f01f6883c914b9a93b638978182c464.dblock.zip.aes"    "LastModification":"2024-05-08T15:57:40.848Z"   "Size":52392941 "IsFolder":false}
{"Name":"duplicati-b3bd15b4662364717892b374dbde33b37.dblock.zip.aes"    "LastModification":"2024-05-08T15:57:45.875Z"   "Size":52345853 "IsFolder":false}
{"Name":"duplicati-ib0b80714015e4b13a3596e2f41a59fd4.dindex.zip.aes"    "LastModification":"2024-05-08T15:58:06.476Z"   "Size":61245    "IsFolder":false}
{"Name":"duplicati-i3f9a28eebb464a19849d928b7cc0375d.dindex.zip.aes"    "LastModification":"2024-05-08T15:58:06.587Z"   "Size":32797    "IsFolder":false}
{"Name":"duplicati-b981a9e277fef42c28a634997aef6e6c4.dblock.zip.aes"    "LastModification":"2024-05-08T15:58:35.021Z"   "Size":52333837 "IsFolder":false}
{"Name":"duplicati-b243b2a1dcf9644568e1422cdceaa78fe.dblock.zip.aes"    "LastModification":"2024-05-08T15:58:46.469Z"   "Size":52356973 "IsFolder":false}
{"Name":"duplicati-i1f120e05d29a41ff802d59a5834810a7.dindex.zip.aes"    "LastModification":"2024-05-08T15:59:28.339Z"   "Size":47453    "IsFolder":false}
{"Name":"duplicati-i9d9ca8bb70cd4eb7bd492ccac08e8dde.dindex.zip.aes"    "LastModification":"2024-05-08T15:59:28.502Z"   "Size":47517    "IsFolder":false}
{"Name":"duplicati-b07d977016a244138b10dfe0835c94a28.dblock.zip.aes"    "LastModification":"2024-05-08T16:00:42.239Z"   "Size":39128221 "IsFolder":false}
{"Name":"duplicati-if3faae1809a549e29ba3a3f5240eca31.dindex.zip.aes"    "LastModification":"2024-05-08T16:00:53.085Z"   "Size":250877   "IsFolder":false}
{"Name":"duplicati-20240425T200809Z.dlist.zip.aes"                      "LastModification":"2024-05-08T16:00:56.682Z"   "Size":962877   "IsFolder":false}
{"Name":"duplicati-20240508T155502Z.dlist.zip.aes"                      "LastModification":"2024-05-08T16:00:56.799Z"   "Size":963021   "IsFolder":false}
{"Name":"duplicati-b886a86d10c2d4468873a5ad2029a5233.dblock.zip.aes"    "LastModification":"2024-05-08T16:05:08.043Z"   "Size":52476621 "IsFolder":false}
{"Name":"duplicati-iab4699b0234344dfaa81184d3524cd4b.dindex.zip.aes"    "LastModification":"2024-05-08T16:05:55.448Z"   "Size":40077    "IsFolder":false}
{"Name":"duplicati-b6c6d2a45168b49ff9f88c0ad706e5a5c.dblock.zip.aes"    "LastModification":"2024-05-08T16:06:52.586Z"   "Size":52481021 "IsFolder":false}
{"Name":"duplicati-iab37ddcdb8ea4ad789d96bb7b28f2894.dindex.zip.aes"    "LastModification":"2024-05-08T16:07:04.154Z"   "Size":71501    "IsFolder":false}
{"Name":"duplicati-bd4cd9d9703dc4e2da48e3bef5daeb2ce.dblock.zip.aes"    "LastModification":"2024-05-08T16:08:10.355Z"   "Size":52471741 "IsFolder":false}
{"Name":"duplicati-i8fa2700e65de436885b69e5a12682ebc.dindex.zip.aes"    "LastModification":"2024-05-08T16:08:47.544Z"   "Size":26973    "IsFolder":false}
{"Name":"duplicati-b622d5d9245ab4496ba0ad8af504ac4bf.dblock.zip.aes"    "LastModification":"2024-05-08T16:09:27.557Z"   "Size":11854989 "IsFolder":false}
{"Name":"duplicati-id1ff6a0b518f47748a81b34f8c36aecf.dindex.zip.aes"    "LastModification":"2024-05-08T16:09:30.975Z"   "Size":33245    "IsFolder":false}
{"Name":"duplicati-verification.json"                                   "LastModification":"2024-05-08T16:09:41.411Z"   "Size":106399   "IsFolder":false}

Dups are in:

duplicati-bd4cd9d9703dc4e2da48e3bef5daeb2ce.dblock.zip.aes
duplicati-i8fa2700e65de436885b69e5a12682ebc.dindex.zip.aes

Are these files with duplicate hash entries? Or just duplicated files?
I tried creating some duplicated files, but that does not trigger the error.
Next attempt is to try to create duplicate entries in the files.

I found a messy workaround. If you add --unittest-mode=true when recreating, it should actually proceed to generate the database. It is a bit messy because the database is not properly cleaned with respect to duplicates, but for restoring, that does not matter.


Duplicate hash entries. I posted the 7-Zip image of the dblock file duplicate. Its dindex has:

{"hash":"Merv5GbRee6UJ8qz3Kgna79leGOcj9hxGDlKiJ+4U2I=","size":160},{"hash":"sPaznhDsoLc559OlMl3JyVe4Z98eKSTayhRPKq+VW10=","size":102400},
...
{"hash":"sPaznhDsoLc559OlMl3JyVe4Z98eKSTayhRPKq+VW10=","size":102400},{"hash":"FpRzPeoc5lD6UeYTcTbV9b+NqmMT+piMxrBxcPP6ZQg=","size":102400},

from two Notepad lines of duplicati-bd4cd9d9703dc4e2da48e3bef5daeb2ce.dblock.zip.aes

I tested my other Duplicati production backup (2.0.8.1 to OneDrive), and got some findings:

checker52.py finds
Duplicate block yPQ+yYSWbgWkym0AwcFeL7lG3ApFFUyGMzcVYnVIzBo=

Checker DB finds
duplicati-b76ad56f31c6c4631b0e18c05a552ba0a.dblock.zip.aes / duplicati-iacf2b22587fb4c85b62f4c59e7e6aaf2.dindex.zip.aes
duplicati-b8769cc358966495892974d30f087a561.dblock.zip.aes / duplicati-i00ccd5fe773346fd9b19dd1a8de28ae0.dindex.zip.aes

Regular DB finds
duplicati-b76ad56f31c6c4631b0e18c05a552ba0a.dblock.zip.aes / duplicati-iacf2b22587fb4c85b62f4c59e7e6aaf2.dindex.zip.aes

test with --full-remote-verification=True finds

duplicati-b8769cc358966495892974d30f087a561.dblock.zip.aes: 1 errors
	Extra: yPQ+yYSWbgWkym0AwcFeL7lG3ApFFUyGMzcVYnVIzBo=

duplicati-i00ccd5fe773346fd9b19dd1a8de28ae0.dindex.zip.aes: 1 errors
	Extra: yPQ+yYSWbgWkym0AwcFeL7lG3ApFFUyGMzcVYnVIzBo=

A theory is it starts with duplicate blocks in different volumes, then compact puts them in one.

The theory will be better once I look more at the B2 (the other) backup, but it’s currently in debug for a log error.

I also found that my old --full-remote-verification broke courtesy of a breaking change:

  --full-remote-verification (Enumeration): Activate in-depth verification of files
    After a backup is completed, some (dblock, dindex, dlist) files from the remote backend are selected for verification. Use this option to turn on full verification, which will decrypt the files and
    examine the insides of each volume, instead of simply verifying the external hash. If the option --no-backend-verification is set, no remote files are verified. This option is automatically set when
    then verification is performed directly. ListAndIndexes is like True but only dlist and index volumes are handled.
    * values: True, False, ListAndIndexes
    * default value: False

It used to be as below. I’m pretty sure I objected at the time, so I should have changed my own usages…

  --full-remote-verification (Boolean): Activates in-depth verification of
    files
    After a backup is completed, some (dblock, dindex, dlist) files from the
    remote backend are selected for verification. Use this option to turn on
    full verification, which will decrypt the files and examine the insides
    of each volume, instead of simply verifying the external hash, If the
    option --no-backend-verification is set, no remote files are verified.
    This option is automatically set when then verification is performed
    directly.
    * default value: false

EDIT 1:

OneDrive had only one duplicate, so B2 is more interesting. I can test in another Duplicati.

EDIT 2:

Tested with a brand-new 2.1.0.105, an rclone sync of the B2 destination, and the 2.1.0.104 DB.
Surprisingly, test all with --full-remote-verification=True was clean, if it actually worked…

A Recreate wound up with two rows in the DuplicateBlock table. Not surprising, because I had previously seen the same thing in a 2.1.0.104 post-backup Recreate test database, and had accounted for the other nine complaints from my checker as being duplicates all within a single dblock.

sPaznhDsoLc559OlMl3JyVe4Z98eKSTayhRPKq+VW10= 248
FpRzPeoc5lD6UeYTcTbV9b+NqmMT+piMxrBxcPP6ZQg= 248
EnVNI+7tspnQeKOQ6SIJ+QWn1aBEgFhuRMWnKwejjFg= 248
0BJxhj10ydSioTYLA9nnkp5kJ9V9pJnY5KhZzEtWlfY= 248
yn2HXIspg7W1nFb0WH7taNRhGEDUptulV4i8min380M= 248
ZnCqRTwwKH2II1eDmKWiexmf/qDcdIc8hbVeP7pknKQ= 248
aeGC3P769M8Asuz3hLWTDd5fPAweXbmXC2JP5406ths= 248
sV9XOv0yWiw4O2tb+kKdhL4y2bpSyr/UBDsrhdVDtbM= 248
nzyAT8GXhTupLyHnAvR8X6aEj+tlKy0S7XJ3dJ27Rcw= 248

VolumeID 248 is duplicati-bd4cd9d9703dc4e2da48e3bef5daeb2ce.dblock.zip.aes

0PCRrDi7fAYE/9iytkGtdW4uRFConEq5PlkSEmGGWTg= 139,272
FR0wRA4Rdy7oeUBNoK69bymiCu1e2cpVKqNB3p5X1sE= 272,322

VolumeID 139 is duplicati-b2f8a98d827fc4faf847f74db470e0fd6.dblock.zip.aes
VolumeID 272 is duplicati-bf66b5b966c0c40a88a933fc0367784c0.dblock.zip.aes
VolumeID 322 is duplicati-baea7fcd2e269420391ffa6f5e76d9f11.dblock.zip.aes

Above was the checker DB. Below is 2.1.0.105 Recreate DuplicateBlock table

0PCRrDi7fAYE/9iytkGtdW4uRFConEq5PlkSEmGGWTg= 80,238
FR0wRA4Rdy7oeUBNoK69bymiCu1e2cpVKqNB3p5X1sE= 185,238

VolumeID 80 is duplicati-b2f8a98d827fc4faf847f74db470e0fd6.dblock.zip.aes
VolumeID 185 is duplicati-baea7fcd2e269420391ffa6f5e76d9f11.dblock.zip.aes
VolumeID 238 is duplicati-bf66b5b966c0c40a88a933fc0367784c0.dblock.zip.aes

I forget (or don’t know) the mechanism behind the Extra block problem, but I think having runs of more than one Extra in a given dblock or dindex complaint was common. Maybe seeing a run of duplicates in a dblock coming out of compact could be due to a run in now-gone feeder volumes.

Looking at this another way, is there anything in compact that will weed out incoming duplicates?

I added a lot of data via edits to my post. Let me know any further questions.

An interesting test would be to remove it when someone has this recreate fail.
Possibly even in this case, if the pCloud backup hasn’t been deleted already?

EDIT:

The timing might be hard. Removal survives version update, but not Recreate.

Where should I add it? I haven’t used command lines, just the buttons (of which there are three: “Repair”, “Delete”, and “Recreate (delete and repair)”).

Should I use the command “Repair” under “Commandline” and add --unittest-mode=true in the “Commandline arguments” box?

Was it back in 2.0.7.100 that this change was introduced?

I can confirm that both SharpCompress and the .NET Zip library are able to create duplicate entries in a zip file. However, both implementations in Duplicati mask this problem and return a list that only has entries with unique names.

I have added a test to verify that this detail does not leak into Duplicati.

In other words, it is odd that the zip/7z archives contain duplicates, but this is not visible to Duplicati, so it is unlikely to be the cause of the reported constraint violation issue.
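For the format-level behaviour itself, a quick check with Python's zipfile (not the libraries named above) shows that the zip format will happily store the same entry name twice:

import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("sPaznhDsoLc559OlMl3JyVe4Z98eKSTayhRPKq+VW10=", b"first copy")
    # Writing the same name again succeeds; zipfile only emits a UserWarning.
    z.writestr("sPaznhDsoLc559OlMl3JyVe4Z98eKSTayhRPKq+VW10=", b"second copy")

with zipfile.ZipFile(buf) as z:
    print(z.namelist())  # the same name is listed twice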

I fixed an issue where this could happen if compact was interrupted twice, because the leftovers from the first compact would be included in the second compact. But even having the same block in multiple volumes would not trigger the reported constraint violation; it would just populate the DuplicateBlock table.

There is a bunch of filtering going on, but there is also a pending PR that will further reduce the number of blocks.

Looks like they are finding the same duplicated blocks?

Yes, but it is a bit hard to do for a non-programmer as it happens during the re-create. You would need to somehow pause the process, then alter the database and continue. The SQL command to do so is this:

DROP INDEX "UniqueBlockVolumeDuplicateBlock";
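If someone does manage to pause a recreate at the right point, applying it could look like this sketch (the .sqlite path is only an example; the real path is shown in the job's database settings):

import sqlite3

db_path = r"C:\Users\me\AppData\Local\Duplicati\XXXXXXXXXX.sqlite"  # example path only
with sqlite3.connect(db_path) as db:
    # Drop the unique index from the partially recreated database so the
    # cleanup step no longer hits the constraint.
    db.execute('DROP INDEX "UniqueBlockVolumeDuplicateBlock"')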

The workaround with --unittest-mode exploits a shortcut (which should itself be removed) where the cleanup function that fails here is not triggered during unit tests.

There is a comment that it is disabled because:

// In some cases we have a stale reference from an index file to a deleted block file

I removed the exclusion, and all tests pass, so I am not sure what it was originally guarding against.

Yes, that would be one way. You can also edit the backup and change the option there (last page, Advanced settings) and then save the job. When you run “Recreate” from the UI, it should then apply the option.

If you use the commandline, make sure you remove the sources from the text box, otherwise it will fail.

--full-remote-verification changes from a boolean option to a tri-valued one. Existing configurations should not be impacted.

was the release note, but the claim about “not be impacted” turns out not to be entirely true.
There was discussion and test on the PR that supported it, though my result is different.
I’m not sure if anyone else got hurt, and I didn’t see the error in the output file until just now.

Other theories are welcome. I could pursue mine further, but it might not be worthwhile…

Yes, and I was pleased to see that, because the checker with the DB is still very untested.
Enabling this sort of analysis easily (just look in its tables) is one of the goals of using a DB.
The other main goal was performance, playing with the speedup from batched (less chatty) SQL.

Success!! I managed to provoke the error with the following steps:

  • Create a backup with 1 index file and 1 block file
  • Create 2 copies of the block file
  • Create 2 index files that reference the 2 extra block files
  • Delete the 2 duplicated block files
  • Repair

So it looks like the problem is that there is (at least) one dblock file that is no longer present and (at least) two dindex files that point to it.
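In script form, the steps look roughly like this. It is only a sketch: it assumes an unencrypted test backup to a local folder (so the dblock/dindex files are plain zip archives) and assumes each dindex stores its dblock reference as a vol/<dblock-name> entry, as the JSON fragments quoted earlier suggest:

import shutil
import uuid
import zipfile
from pathlib import Path

dest = Path("/path/to/test/destination")              # example path to the backend folder
dblock = next(dest.glob("duplicati-b*.dblock.zip"))   # the single dblock of the test backup
dindex = next(dest.glob("duplicati-i*.dindex.zip"))

extra_dblocks = []
for _ in range(2):
    # Step 2: copy the dblock under two new, validly formatted names.
    copy = dest / f"duplicati-b{uuid.uuid4().hex}.dblock.zip"
    shutil.copy(dblock, copy)
    extra_dblocks.append(copy)

for copy in extra_dblocks:
    # Step 3: create a dindex copy whose vol/ entry points at the copied dblock.
    new_index = dest / f"duplicati-i{uuid.uuid4().hex}.dindex.zip"
    with zipfile.ZipFile(dindex) as src, zipfile.ZipFile(new_index, "w") as dst:
        for item in src.infolist():
            name = item.filename
            if name.startswith("vol/"):
                name = "vol/" + copy.name              # reference the copied dblock instead
            dst.writestr(name, src.read(item.filename))

for copy in extra_dblocks:
    copy.unlink()                                      # Step 4: delete the copied dblocks again

# Step 5: run Repair (recreating the database) and watch for the DuplicateBlock constraint error.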

Now I just need to fix it :)

This would be something like copying the dblock file under a new name, as an upload retry might do?

Repair

Without a database, so it recreates it? I’m surprised it didn’t complain about a bad dindex reference.

recreate “Remote file referenced as …” error after interrupted backup then compact #5023

The situation here may be different, though. My speculation about the above issue (which this reminds me of) was:

Possibly here it doesn’t know the name because IndexBlockLink didn’t get filled out. Regardless, the extra dindex file is harmless until dblock+new dindex from the fix get deleted by a compact. After that, there’s old dindex pointing to a dblock that’s not there. Before that, maybe recreate was happy because it found two dindex, but both pointed to the same dblock, and had the same information, so no harm done. Just a guess.

That might be a wrong conclusion, as what looked like it being off might simply be that there was no Extra, thanks to the fix.

The option --full-remote-verification does not support the value “”. Supported values are: True, False, ListAndIndexes

is what it said, though, when I attempted to set the formerly Boolean option to true without giving a specific value.

EDIT:

Running the CommandLine test in 2.0.8.1 on a backup with an Extra got the --full-remote-verification complaint at the top, yet also the Extra at the bottom, so it did seem to be interpreted as True.

If you can wait a few days, I will make a new canary build that fixes the problem. If you prefer, I can also make a custom debug build for Windows now (unsigned, only for testing use).

I think there are multiple ways to get there, but they are uncommon, most likely requiring some combination of compact and interruption.

Now that I managed to fix the issue, I know exactly what happens:

  • You need to have 2 dindex files that reference one or more dblock files that are missing
    Let’s call them d1 → b1 and d2 → b2 respectively, where both b1 and b2 are missing.

  • You need to have 1 dindex file that references one or more blocks that were also present in the missing dblock file(s)
    Let’s call that d3 → b3, where b3 is still there

  • One dindex file that points to a missing dblock file must be processed first

With this setup, the repair will assign all the blocks from d1 and d2 to missing files, which will not be restorable. Since d2 and d3 also contain at least one of the same blocks, the DuplicateBlock table will then have two entries:

hash1, d2 (missing)
hash1, d3 (existing)

The cleanup will then go in and figure out that hash1 is pointing to a missing file, and correct it by mapping hash1 → d3.

The error in this issue is that since there is another entry, it will incorrectly update the DuplicateBlock table so it looks like:

hash1, d1 (existing)
hash1, d1 (existing)

And this causes the constraint violation, as we now have the same block registered twice for the same volume.
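As a toy illustration (plain SQLite, not Duplicati's actual query), an update that collapses the two DuplicateBlock rows for the block onto one (BlockID, VolumeID) pair reproduces exactly this message:

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE DuplicateBlock (BlockID INTEGER, VolumeID INTEGER)")
db.execute('CREATE UNIQUE INDEX "UniqueBlockVolumeDuplicateBlock" ON DuplicateBlock (BlockID, VolumeID)')

# hash1 recorded against two different volumes (one missing, one existing).
db.executemany("INSERT INTO DuplicateBlock VALUES (?, ?)", [(1, 101), (1, 102)])

try:
    # Over-broad cleanup: rewrite *every* row for the block to the same volume,
    # so both rows would become (1, 103) and the unique index fires.
    db.execute("UPDATE DuplicateBlock SET VolumeID = 103 WHERE BlockID = 1")
except sqlite3.IntegrityError as e:
    print(e)  # UNIQUE constraint failed: DuplicateBlock.BlockID, DuplicateBlock.VolumeID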

The fix is to only update the volume that needs updating, so it ends up like this:

hash1, d2 (missing)
hash1, d1 (missing)

The parsing is not logical here. We should standardize the option parsing, but yes, an empty string (i.e., no value given) results in “true”, whereas omitting the option entirely gives “false”, which is unlike any of the other option parsing code.
The validator runs prior to actually parsing the value, so there might be deviations between what is reported as working and what actually works (unfortunately).


Done, but when I attempt to recreate the database, it fails (after hours of trying).


Result: (screenshot of the error)

It failed too, in mere seconds this time.

Result: (screenshot of the error)

I can try that too (no hurry at all), if you expect a different result. My multiple attempts to recreate the database might have damaged something.

At any rate… thank you very much for your time!

Hello! I see the same error: “constraint failed UNIQUE constraint failed: DuplicateBlock.BlockID, DuplicateBlock.VolumeID”. Should I make another forum thread?

My error occurred after a message about database malfunctions. I recreated the DB and see the following messages in the log:

* 2025-01-12 15:28:44 +03 - [Error-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-MissingFileDetected]: Remote file referenced as duplicati-b6fb1499584624539b2d92fa6a6301a12.dblock.zip.aes by duplicati-i50c82057dc7d457abf359e1d86249529.dindex.zip.aes, but not found in list, registering a missing remote file
* 2025-01-12 15:29:29 +03 - [Error-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-MissingFileDetected]: Remote file referenced as duplicati-b77e5568473db4374bc75416f85982a35.dblock.zip.aes by duplicati-i573884406f3b463da9786381ce64d750.dindex.zip.aes, but not found in list, registering a missing remote file
* 2025-01-12 15:31:50 +03 - [Error-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-MissingFileDetected]: Remote file referenced as duplicati-bc960abec0e514fef8f305d67f3b5c1e7.dblock.zip.aes by duplicati-i63fe0b55e5814313b837b2eec3d66232.dindex.zip.aes, but not found in list, registering a missing remote file
* 2025-01-12 15:33:53 +03 - [Error-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-MissingFileDetected]: Remote file referenced as duplicati-bb0b40f36949d47238d5fa1aeb2e9388c.dblock.zip.aes by duplicati-icdf43deb183d42f3991f9b2e4feebd13.dindex.zip.aes, but not found in list, registering a missing remote file
* 2025-01-12 15:34:10 +03 - [Error-Duplicati.Library.Main.Controller-FailedOperation]: The operation Repair has failed with error: constraint failed UNIQUE constraint failed: DuplicateBlock.BlockID, DuplicateBlock.VolumeID SQLiteException: constraint failed UNIQUE constraint failed: DuplicateBlock.BlockID, DuplicateBlock.VolumeID

What should I do? Remove the index files?

Sorry to hear that. I have put up Canary Build 2.1.0.106 which includes the fix for recreating the database in this case.

I guess that could work, but it will be slow.

Alternative is to download 2.1.0.106, which has a fix for this situation.


A post was split to a new topic: Recreate constraint violation on DuplicateBlock

Sorry for the noob question, but which asset(s) do I need to download to install Canary Build 2.1.0.106 on Windows 10? duplicati-2.1.0.106_canary_2025-01-11-win-x64-gui.msi? duplicati-2.1.0.106_canary_2025-01-11-win-x64-agent.msi?

Package options describes them. The GUI package is likely what you want, as the Agent is for specialized use.

106 didn’t help another tester. It will be interesting to see if it helps yours.


👍

Unfortunately, it didn’t. I successfully installed the Canary build, but when I try to recreate the database, I get this error message:

constraint failed UNIQUE constraint failed: DuplicateBlock.BlockID, DuplicateBlock.VolumeID

And from the log:

Jan 16, 2025 5:37 PM: Failed while executing Repair “PA on pCloud” (id: 10)
code = Constraint (19), message = System.Data.SQLite.SQLiteException (0x800027AF): constraint failed
UNIQUE constraint failed: DuplicateBlock.BlockID, DuplicateBlock.VolumeID
at System.Data.SQLite.SQLite3.Reset(SQLiteStatement stmt)
at System.Data.SQLite.SQLite3.Step(SQLiteStatement stmt)
at System.Data.SQLite.SQLiteDataReader.NextResult()
at System.Data.SQLite.SQLiteDataReader..ctor(SQLiteCommand cmd, CommandBehavior behave)
at System.Data.SQLite.SQLiteCommand.ExecuteNonQuery(CommandBehavior behavior)
at Duplicati.Library.Main.Database.ExtensionMethods.ExecuteNonQuery(IDbCommand self, Boolean writeLog, String cmd, Object values)
at Duplicati.Library.Main.Database.LocalRecreateDatabase.CleanupMissingVolumes()
at Duplicati.Library.Main.Operation.RecreateDatabaseHandler.DoRun(LocalDatabase dbparent, Boolean updating, IFilter filter, NumberedFilterFilelistDelegate filelistfilter, BlockVolumePostProcessor blockprocessor)
at Duplicati.Library.Main.Operation.RecreateDatabaseHandler.Run(String path, IFilter filter, NumberedFilterFilelistDelegate filelistfilter, BlockVolumePostProcessor blockprocessor)
at Duplicati.Library.Main.Operation.RepairHandler.RunRepairLocal(IFilter filter)
at Duplicati.Library.Main.Operation.RepairHandler.Run(IFilter filter)
at Duplicati.Library.Main.Controller.<>c__DisplayClass26_0.b__0(RepairResults result)
at Duplicati.Library.Main.Controller.RunAction[T](T result, String& paths, IFilter& filter, Action`1 method)
at Duplicati.Library.Main.Controller.RunAction[T](T result, IFilter& filter, Action`1 method)
at Duplicati.Library.Main.Controller.Repair(IFilter filter)
at Duplicati.Server.Runner.Run(IRunnerData data, Boolean fromQueue)