Unexpected update count: 0

Started getting this error.

Unexpected update count: 0. Not sure how to proceed?

Where are you seeing this error (log, email, other…) and what were you doing when it happened (backup, restore, other…)? Oh, and what version of Duplicati are you using?

Unfortunately, that exact error text shows up in 2 different places in the code, so I can’t tell exactly what was being done when the error happened.

I’m also running into this issue. It occurs when I attempt to do a database repair. The problem began after the C drive of the server (same drive Duplicati is installed on) ran out of space (due to unrelated log files) while a backup was in progress.

I freed up more than 100GB of space on the C drive, but all backups have failed since that happened and a database repair attempt gives me the error in question.

I’m running Duplicati - 2.0.4.5_beta_2018-11-28

Edit: Meh, never mind. Duplicati shows promise, but this is the second time I’m dealing with either spending a month recreating the DB (which would probably fail) or just starting a new backup set. I don’t have time for this; I need to find something production-ready, which may include spending some money.

Looking forward to checking out Duplicati again in a few years.

Confirmed, exactly the same problem still exists. Duplicati simply self-destructs and repair won’t work. This is with the “duplicati-2.0.5.110_canary_2020-08-10-x64” version.

So this confirms that the system is just inherently broken, because this shouldn’t happen.

Can you give more background in your situation? Did the disk run out of space and cause the backup to fail, like a previous poster?

You are attempting a database recreate at this point and getting this error?

Disk full during backup was the initial reason. Backups started to fail, and repair also fails with this error.

Recreate is still running; it’s going to take at least a few days.

I did save the “corrupted” database. Yet it’s not corrupted at the sqlite3 level; it’s logically corrupted at the application level, which tells me that something is very wrong.

This just made me think: has anyone made any “sane” sanitization script for the DB? Because technically the DB shouldn’t contain anything I couldn’t share, but I would like to sanitize filenames and things like that. Hashes in the database aren’t sensitive, and most of the information was given with the job anyway, not stored in the DB to begin with. I’m also able to share the DB encryption key, if that helps.
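Something like this very rough sketch is what I have in mind; table and column names are just my guess at the job database layout, and it would only ever be run against a copy:

-- sketch only: blank out path data in a COPY of the job database
UPDATE "PathPrefix" SET "Prefix" = 'prefix-' || "ID";   -- replace directory prefixes
UPDATE "FileLookup" SET "Path" = 'file-' || "ID";       -- replace file names
VACUUM;                                                 -- rewrite the file so old page content is gone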

The “create bug report” option makes a copy of the job database and sanitizes filenames and other sensitive data, just as you describe. Not sure if there’s a dev with database expertise that has time to analyze it though.

I agree with what you’re saying. Ideally Duplicati could recover more gracefully from this type of problem. And if it can’t, database recreations should be fast. I have done a lot of experimenting with database recreation. It SHOULD only need to download dindex and dlist files. If your recreation is downloading dblocks, it can be slow. Maybe even REALLY slow. (Check the Live Log to see how many blocks it is downloading/processing.)

The reason dblocks are needed seems to be incorrectly-written dindex files. I have been able to “fix” this on my systems by regenerating the dindex files, which requires the database to be in a good state. After doing that I was able to recreate the local database quickly and without needing to process dblocks.

Just open the log-database.sqlite sanitized database inside the bug report and browse some of it.

DB Browser for SQLite can do this. What used to be your filenames are in the sanitized FixedFile table.
That should be quite reliable because it’s all filenames, so there’s no guessing needed. LogData gets a selective deletion: I think it looks for filenames (which might be in messages) and deletes those logs…
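For a quick first look, a couple of throwaway queries like these are enough (a sketch only; I’m going from memory on what the bug report database contains):

SELECT * FROM "FixedFile" LIMIT 20;   -- sanitized stand-ins for what used to be filenames
SELECT COUNT(*) FROM "LogData";       -- how many log rows survived the selective deletion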

Or if you’re willing to just assume it’s sanitized well enough, no need to do any of that. Just post the report.
These are key to at least understanding the errored state, and may lead to a more robust repair, but figuring out how it got broken in the first place is a whole other game, and might require a lot of logging.

But even clues like this help because possibly someone can set it up to fail that way with logs running.

As a note on bug reports, you can slide an old database into place temporarily just to do its bug report; however, you want to make sure not to permit any other operations until the right DB is back.

First, a bit more background information: I did check the logs, which is what prompted the need to run the repair process.

This was the initial error:

Error Log:

System.IO.InvalidDataException: Found 3 file(s) with missing blocklist hashes
at Duplicati.Library.Main.Database.LocalDatabase.VerifyConsistency(Int64 blocksize, Int64 hashsize, Boolean verifyfilelists, IDbTransaction transaction)
at Duplicati.Library.Main.Operation.Backup.BackupDatabase.<>c__DisplayClass34_0.b__0()
at Duplicati.Library.Main.Operation.Common.SingleRunner.<>c__DisplayClass3_0.b__0()
at Duplicati.Library.Main.Operation.Common.SingleRunner.d__21.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Duplicati.Library.Main.Operation.BackupHandler.<RunAsync>d__20.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at CoCoL.ChannelExtensions.WaitForTaskOrThrow(Task task)
at Duplicati.Library.Main.Controller.<>c__DisplayClass14_0.<Backup>b__0(BackupResults result)
at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Action`1 method)
at Duplicati.Library.Main.Controller.Backup(String[] inputsources, IFilter filter)
at Duplicati.CommandLine.Commands.Backup(TextWriter outwriter, Action`1 setup, List`1 args, Dictionary`2 options, IFilter filter)
at Duplicati.CommandLine.Program.ParseCommandLine(TextWriter outwriter, Action`1 setup, Boolean& verboseErrors, String[] args)
at Duplicati.CommandLine.Program.RunCommandLine(TextWriter outwriter, TextWriter errwriter, Action`1 setup, String[] args)

After this, repair gave the error I mentioned earlier.

And after the rebuild, it’s all good again. Backups run and repair doesn’t complain. I didn’t expect such good results, based on earlier bad experiences. Yet, as mentioned, it shouldn’t have required a rebuild at all.

The rebuild was quite slow, but with a fast SSD and high-speed networking it took less than 24 hours. And for sure it did download the dblock files. Yet the time spent downloading is really small compared to the time spent updating the database. I still think it’s done in a quite inefficient way; my personal guess is that the system runs way too many small transactions, committing in between.
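To illustrate the guess, here is a minimal sketch (not Duplicati’s actual code; the Block table is just an example of the kind of row being inserted):

-- one transaction per row: every INSERT forces its own journal/WAL sync to disk
INSERT INTO "Block" ("Hash", "Size", "VolumeID") VALUES ('hashA...', 102400, 1);
INSERT INTO "Block" ("Hash", "Size", "VolumeID") VALUES ('hashB...', 102400, 1);

-- batched: one explicit transaction around the whole batch, one sync at COMMIT
BEGIN;
INSERT INTO "Block" ("Hash", "Size", "VolumeID") VALUES ('hashA...', 102400, 1);
INSERT INTO "Block" ("Hash", "Size", "VolumeID") VALUES ('hashB...', 102400, 1);
-- ... many thousands more rows ...
COMMIT;

The batched form is typically orders of magnitude faster in SQLite for large row counts, because the fsync happens once per batch instead of once per row.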

Finding the right balance between simple logic, high performance, and reliability is a hard balancing problem. I deal with that daily.

@ts678
I’m very familiar with SQLite, no issues there.

Yeah, finding the root cause. Oh joy. I often raise the point that if a system is designed and implemented correctly, this kind of problem shouldn’t ever happen. If everything didn’t go right, well, then stop; don’t corrupt things. -> This is my personal motto when dealing with production systems. Yes, it annoys people when things are “unreliable” in the sense that they stop working. Sure, that’s right. But the worse option is that someone figures out after two years that all the data they’ve got is corrupted, invalid, misleading, or unusable in some way. -> Yet this does happen when someone wants something very cheaply and quickly. Simple logic -> stuff out, without any validation.

As for sliding the database back in: I’ve been using the CLI version all the time, so I had to figure out how to attach the database to the UI instance on another system. Yup, it seemed to work beautifully; I created a new backup set with a manually configured DB path and replaced the DB with the “corrupted” one.

Why did I put “corrupted” in quotes? Well, pragma integrity_check says it’s all good.
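For anyone wanting to repeat the check on their own copy, it’s just this in the sqlite3 shell (foreign_key_check is an optional extra beyond what I ran):

PRAGMA integrity_check;    -- structural check of pages and indexes; prints 'ok' when clean
PRAGMA foreign_key_check;  -- lists any rows violating foreign key constraints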

Report generation went well. Actually, the filenames in the backup were so generic that even if they leaked, it wouldn’t matter at all. Anyone could guess the filenames based on my user account name and some very quick Googling.

I’m sorry if this isn’t according to protocol, but the bug report generation didn’t provide a direct submit option. (I guess you don’t want to store all the potentially submitted databases, hah.) I’ll send the link to the bugreport.zip to both of you guys in a PM. Let’s hope it’s useful.

Thank you for being so helpful. I wouldn’t go through this if I didn’t believe Duplicati is worth the effort.

Repair for missing BlocklistHash may fix single but do nothing for multiple #4397 is the theory.

and that’s unfortunately more than I can say, so feel free to go look. I’ll show my test below.

FixMissingBlocklistHashes contains SQL which (when reformatted by poorsql.com) is:

SELECT *
FROM (
	SELECT "N"."BlocksetID"
		,(("N"."BlockCount" + 3200 - 1) / 3200) AS "BlocklistHashCountExpected"
		,CASE 
			WHEN "G"."BlocklistHashCount" IS NULL
				THEN 0
			ELSE "G"."BlocklistHashCount"
			END AS "BlocklistHashCountActual"
	FROM (
		SELECT "BlocksetID"
			,COUNT(*) AS "BlockCount"
		FROM "BlocksetEntry"
		GROUP BY "BlocksetID"
		) "N"
	LEFT OUTER JOIN (
		SELECT "BlocksetID"
			,COUNT(*) AS "BlocklistHashCount"
		FROM "BlocklistHash"
		GROUP BY "BlocksetID"
		) "G" ON "N"."BlocksetID" = "G"."BlocksetID"
	WHERE "N"."BlockCount" > 1
	)
WHERE "BlocklistHashCountExpected" != "BlocklistHashCountActual"

Running the above in sqlitebrowser showed your 3 inconsistencies. Only the original DB knows the paths:

BlocksetID      BlocklistHashCountExpected      BlocklistHashCountActual
153661          1                               6
153724          1                               4
153728          1                               4

Testing fortunately found the Repair code reachable if an empty FileLookup table is put back.
Creating the DB bug report rolls that and the PathPrefix table into FixedFile, which is easy on humans.
It’s similar to how it was previously, in the File table; however, it breaks code expecting the new tables.

The Database Structure tab of another job database gave me the table’s CREATE statement, so I did this:

CREATE TABLE "FileLookup" (
	"ID" INTEGER PRIMARY KEY
	,"PrefixID" INTEGER NOT NULL
	,"Path" TEXT NOT NULL
	,"BlocksetID" INTEGER NOT NULL
	,"MetadataID" INTEGER NOT NULL
	)

Your DB report has the odd look of having extra BlocklistHash entries, which look like they should get fixed were it not for a code bug where multiple such errors result in none being fixed. So I backed the problem down to one at a time, first leaving 153661 as the sole bad one. Repair fixed that, then went further and errored over what it saw as passphrase removal. I guess you encrypt, and I didn’t on the backup I used for test and transplant.

Repeating the process from the top for single repairs to 153724 and 153728, both work. So I think the Repair bug is understood as one accidental line omission; however, the root cause is not yet understood. Because the current team is very low on database expertise, would you like to see if you can find the cause? There is an interesting pattern where the extra (max size) BlocklistHash entries have a Hash from another file. You can get the names from FileLookup and PathPrefix using the BlocksetID values below, and you can search with the extra Hash values (everything not Index 0) to find which other files appear to share some data; a query sketch follows the hash listings below. Maybe this will become explainable after you see the names, but more likely something got scrambled, and it’s not clear when from one snapshot.

BlocksetID      Index   Hash
153661          0       9B9O80u/zHzQwaHXtst08wpemnqCCNKrowRtvL3cNx8=
153661          1       MXKeyllR/rXCyoUcLIThDolx+WP9rQ+/6FYp5z9PB9I=
153661          2       ktj60Fu77oGW3j1FjBzUVIo/I/FUOc4fl8ijuHp1/7g=
153661          3       Cnh6RFgtUYIfLYoBxI8xO49vmsX0HQUWOLDOkTt+7Nw=
153661          4       tYXjUHZVPS3INL/o8t7jMLqugVCG4RPCJb25yP6vU9o=
153661          5       cdeaRVTSzeSriK4N2k4OzEclllgGk9/NY8B+uHoPDWA=

BlocksetID      Index   Hash
153724          0       OJdK5AjUfCVccwOrAOv8isrOGuTsweAdj0LjOEZe9ng=
153724          1       jx1hjXpJ3mbSCD7qe1W9ttWpKtBF8EfQ+Whwa4cd1D0=
153724          2       pR9bxn7E7GkIBUHK9uRBuJBLe6aHSOnAyKL3SBITaDE=
153724          3       T609Bpu5jcO5Y9+mMMFIQUmm8h8zhoYhKfjmdGkhl+c=

BlocksetID      Index   Hash
153728          0       +D6R2EHGk2v2nOgLTM4c00msCPRWA1iYsPt7lOY6wXQ=
153728          1       p04v2l5N6uN6HKcYm2dkrWa80QKPOPfz89MazqrpqMI=
153728          2       AcRISlqWJmkUC0bqMm7Q+1ET7F2Y3O/LT3p55wsE7D8=
153728          3       33oaAikH9Izomy6a7Q0QEJPMU27hmygTol9qGsFy3jk=
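A sketch of the lookups described above, to be run against the original database (the bug report has the path data rolled into FixedFile instead); I’m assuming the full path is Prefix concatenated with Path, and the literal values should be adjusted as needed:

-- 1. file names for the three affected blocksets
SELECT "P"."Prefix" || "F"."Path" AS "FullPath", "F"."BlocksetID"
FROM "FileLookup" "F"
JOIN "PathPrefix" "P" ON "P"."ID" = "F"."PrefixID"
WHERE "F"."BlocksetID" IN (153661, 153724, 153728);

-- 2. which other blocksets (and so which other files) carry one of the
--    extra hashes, e.g. the Index 1 hash of blockset 153661
SELECT "BH"."BlocksetID", "BH"."Index", "F"."Path"
FROM "BlocklistHash" "BH"
LEFT JOIN "FileLookup" "F" ON "F"."BlocksetID" = "BH"."BlocksetID"
WHERE "BH"."Hash" = 'MXKeyllR/rXCyoUcLIThDolx+WP9rQ+/6FYp5z9PB9I=';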