Unrepairable DB, not clear what to do

Good day folks,

I am encountering an extremely frustrating issue. For the last week I have been trying to get Duplicati to play nice with my Jottacloud subscription, but for some reason there are files that return the following error:
‘Found inconsistency in the following files while validating database.’
Neither repair nor recreate (delete and repair) works. The log says that the repair was successful, but resuming the backup job errors out again after a few seconds.

Using Google I only find very old threads, and all of them mention some SQL fiddling… Is there another way to resolve this? Or at least, I don’t really understand what those threads are telling me. I am not looking forward to waiting nearly a full day to back up my stuff again.

Edit with some additional info:
I tried list-broken-files, which did not return a list of broken files. I also tried purge-broken-files, which also did not work.

Edit 2:
Figured out how to wrangle the SQL.
The query
SELECT Remotevolume.Name FROM Remotevolume INNER JOIN Block ON (Block.VolumeID=Remotevolume.ID) INNER JOIN BlocksetEntry ON (Block.ID=BlocksetEntry.BlockID) WHERE BlocksetID=<offendingBlockSetID>
returns no rows for the offending BlocksetID, nor for any of the other offending files.
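For anyone who wants to poke at this outside the Duplicati UI, here is a minimal sketch of the same join, using Python's sqlite3 against a toy stand-in for the job database (table and column names are taken from the query above; the sample data is invented). It shows how the query comes back empty when a blockset's blocks are not recorded in any remote volume:

```python
import sqlite3

# Toy stand-in for the Duplicati job database (schema assumed from the query above).
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE Remotevolume (ID INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE Block (ID INTEGER PRIMARY KEY, VolumeID INTEGER, Size INTEGER);
CREATE TABLE BlocksetEntry (BlocksetID INTEGER, BlockID INTEGER);

INSERT INTO Remotevolume VALUES (1, 'duplicati-b001.dblock.zip.aes');
INSERT INTO Block VALUES (10, 1, 102400);   -- healthy block, stored in volume 1
INSERT INTO BlocksetEntry VALUES (100, 10); -- blockset 100 references it
-- the 'offending' blockset 476 has no BlocksetEntry rows at all
""")

def volumes_for_blockset(blockset_id):
    """Return the remote volume names holding any block of the given blockset."""
    return [row[0] for row in db.execute(
        "SELECT Remotevolume.Name FROM Remotevolume "
        "INNER JOIN Block ON (Block.VolumeID = Remotevolume.ID) "
        "INNER JOIN BlocksetEntry ON (Block.ID = BlocksetEntry.BlockID) "
        "WHERE BlocksetID = ?", (blockset_id,))]

print(volumes_for_blockset(100))  # healthy blockset -> one volume name
print(volumes_for_blockset(476))  # offending blockset -> [], matching Edit 2
```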

Edit 3: Clicking Show on the error popup does not show an entry in the General tab.

Welcome to the forum @CAREFiSH

Is this an initial backup problem or an ongoing backup that failed? Is this a basic Windows install?

That’s not the whole message (there are examples in the other reports). Do you have more lines?

Getting back to the question of an initial backup versus an ongoing one that died: it may take longer to investigate.
The advantage of investigating is that there’s a chance something may be learned to avoid future problems.

Did you try The PURGE command? Purging just the latest version (0) could be the starting point; add more versions if necessary.
Besides the low-level DB DELETE command plan, this seems like one of the confirmed cleanups.

So you’ve got some sort of browser? That may come in handy to look around further at the error.
Creating a bug report would be nice to allow someone else to examine, but names are redacted.

So you have a longer message with source file names? I don’t need the names, but you do in order to purge.
Can you post more, redacting names as you wish? Try About --> Show log --> Live --> Warning.

Fatal backup errors often wind up in server log at Home --> About --> Show log. Click error lines.

Good day @ts678,

For the first question: it was an initial backup problem. I have worked around it for now by splitting the large backup job into smaller ones (from 365 GB in one go to roughly 70-75 GB per job).

Second question: since I was not aware of how to enable or view better logs (the Home --> About --> Show log), I cannot properly answer that anymore.

And I did try the purge command, but it seemed to return immediately without doing anything.

I do think I have a bug report made, which can be found here.

So I currently no longer have the problem (because I split the large backup job into smaller ones), but I would like to know how to solve this if it ever happens again, and hopefully add something meaningful to the existing ‘confirmed cleanup’ strategies. I am enjoying the simplicity of Duplicati, and how well it seems to restore backups as well.

The server logs persist unless you take the rather extreme step of deleting Duplicati-server.sqlite manually. That would destroy all of your job configurations, so it’s not often done unless one really wants a clean start.

If the error is not in there but is still happening, just back up again while watching the live log. However, if you mean you split the job after creating the bug report, so the original is deleted, we may have lost a chance.

Even if you deleted the remote data to save space, unless you also deleted the database, there’s still info around;
and if you deleted the job and the database but still have the destination, then you can Recreate to debug.

You said you tried purge-broken-files, which is a different command. If you purge, you need to give file names; however, based on your prior response I’m not sure you have the error message lines that tell you the names.

Or did you run both? From the bug report it “looks” like you found two bad blocksets, each with four names.
Perhaps you have some exact duplicates of files in different places, and that might create a result like this:

CalcLen Length  BlocksetID      Path
0       3021161 476             X:\50\93547.bin
0       3021161 476             X:\54\93546.bin
0       3021161 476             X:\57\137.bin
0       3021161 476             X:\61\136.bin
0       1676297 480             X:\50\93551.bin
0       1676297 480             X:\54\93550.bin
0       1676297 480             X:\57\141.bin
0       1676297 480             X:\61\140.bin

You can take the BlocksetID to your FileLookup table to see what your actual paths are. Look familiar?
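The lookup itself is a one-liner. Here is a throwaway sketch, assuming a simplified FileLookup table with just BlocksetID and Path columns (the real schema splits paths across more columns, so adapt as needed; the sample rows are invented):

```python
import sqlite3

# Toy FileLookup table (schema simplified; real Duplicati databases differ).
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE FileLookup (ID INTEGER PRIMARY KEY, BlocksetID INTEGER, Path TEXT);
INSERT INTO FileLookup VALUES (1, 476, '93547.bin');
INSERT INTO FileLookup VALUES (2, 476, '93546.bin');
INSERT INTO FileLookup VALUES (3, 480, '93551.bin');
""")

# Map an offending BlocksetID back to the source paths it represents.
paths = [r[0] for r in db.execute(
    "SELECT Path FROM FileLookup WHERE BlocksetID = ?", (476,))]
print(paths)  # the actual source paths behind blockset 476
```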

To get a little technical, a blockset here is probably data content of a file of the given Length. Content is broken into fixed size blocks for deduplication. The file should have a bunch of blocks, but these do not.
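As a rough illustration of the blocking idea (not Duplicati’s actual code; the 100 KiB figure is Duplicati’s default block size, everything else is a sketch):

```python
import hashlib

BLOCK_SIZE = 100 * 1024  # Duplicati's default block size (100 KiB)

def split_into_blocks(data, block_size=BLOCK_SIZE):
    """Cut file content into fixed-size blocks, keyed by hash for deduplication."""
    blocks = {}
    for i in range(0, len(data), block_size):
        chunk = data[i:i + block_size]
        blocks[hashlib.sha256(chunk).hexdigest()] = chunk
    return blocks

# Two files with identical content produce identical block hashes, so the
# second copy adds no new blocks -- which is why exact duplicate files in
# different places can end up sharing one blockset.
a = split_into_blocks(b"x" * 250_000)
b = split_into_blocks(b"x" * 250_000)
print(len(a), a.keys() == b.keys())
```

Note that deduplication also happens within a single file: the two full-size chunks above are identical, so they collapse into one stored block.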

How the backup process works

To get more technical, here’s the query I ran on the bug report. If you decide to try it on a regular database, it’d be better to run it on a copy of the DB, in case an accident happens. Change FixedFile to FileLookup:

SELECT "CalcLen"
	,"Length"
	,"A"."BlocksetID"
	,"File"."Path"
FROM (
	SELECT "A"."ID" AS "BlocksetID"
		,IFNULL("B"."CalcLen", 0) AS "CalcLen"
		,"A"."Length"
	FROM "Blockset" A
	LEFT OUTER JOIN (
		SELECT "BlocksetEntry"."BlocksetID"
			,SUM("Block"."Size") AS "CalcLen"
		FROM "BlocksetEntry"
		LEFT OUTER JOIN "Block" ON "Block"."ID" = "BlocksetEntry"."BlockID"
		GROUP BY "BlocksetEntry"."BlocksetID"
		) B ON "A"."ID" = "B"."BlocksetID"
	) A
	,"File"
WHERE "A"."BlocksetID" = "File"."BlocksetID"
	AND "A"."CalcLen" != "A"."Length";

Based on forum posts and code (no guarantees it’s right, but it gave me the results that I posted).
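If you want to sanity-check the consistency query without touching a real job database, here is a throwaway harness against a simplified schema (all sample data invented; the real File table has more columns). It builds one consistent blockset and one whose blocks are missing, and only the broken one comes back:

```python
import sqlite3

# Simplified schema for exercising the consistency query.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE Blockset (ID INTEGER PRIMARY KEY, Length INTEGER);
CREATE TABLE BlocksetEntry (BlocksetID INTEGER, BlockID INTEGER);
CREATE TABLE Block (ID INTEGER PRIMARY KEY, Size INTEGER);
CREATE TABLE File (ID INTEGER PRIMARY KEY, BlocksetID INTEGER, Path TEXT);

-- blockset 1: consistent (block sizes sum to the recorded length)
INSERT INTO Blockset VALUES (1, 200);
INSERT INTO Block VALUES (10, 100), (11, 100);
INSERT INTO BlocksetEntry VALUES (1, 10), (1, 11);
INSERT INTO File VALUES (1, 1, 'good.bin');

-- blockset 476: claims 3021161 bytes but has no blocks at all
INSERT INTO Blockset VALUES (476, 3021161);
INSERT INTO File VALUES (2, 476, '93547.bin');
""")

QUERY = """
SELECT "CalcLen", "Length", "A"."BlocksetID", "File"."Path"
FROM (
    SELECT "A"."ID" AS "BlocksetID", IFNULL("B"."CalcLen", 0) AS "CalcLen", "A"."Length"
    FROM "Blockset" A
    LEFT OUTER JOIN (
        SELECT "BlocksetEntry"."BlocksetID", SUM("Block"."Size") AS "CalcLen"
        FROM "BlocksetEntry"
        LEFT OUTER JOIN "Block" ON "Block"."ID" = "BlocksetEntry"."BlockID"
        GROUP BY "BlocksetEntry"."BlocksetID"
    ) B ON "A"."ID" = "B"."BlocksetID"
) A, "File"
WHERE "A"."BlocksetID" = "File"."BlocksetID" AND "A"."CalcLen" != "A"."Length"
"""

rows = list(db.execute(QUERY))
print(rows)  # only the inconsistent blockset shows up
```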