I created a new backup from one hard drive to another hard drive in my little network. It started and ran for approximately an hour without issues. Then I rebooted my computer, and now I can’t start this backup any more. See the errors below.
Windows 7 64 bit, Duplicati - 2.0.3.7_canary_2018-06-17
2018-06-27 18:07:33 +02 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Backup has started
2018-06-27 18:07:33 +02 - [Error-Duplicati.Library.Main.Operation.BackupHandler-FatalError]: Fatal error
System.Exception: Detected non-empty blocksets with no associated blocks!
at Duplicati.Library.Main.Database.LocalDatabase.VerifyConsistency(Int64 blocksize, Int64 hashsize, Boolean verifyfilelists, IDbTransaction transaction)
at Duplicati.Library.Main.Operation.Backup.BackupDatabase.<>c__DisplayClass31_0.<VerifyConsistencyAsync>b__0()
at Duplicati.Library.Main.Operation.Common.SingleRunner.<>c__DisplayClass6_0.<RunOnMain>b__0()
at Duplicati.Library.Main.Operation.Common.SingleRunner.<>c__DisplayClass5_0`1.<<DoRunOnMain>b__1>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Duplicati.Library.Main.Operation.BackupHandler.<RunAsync>d__19.MoveNext()
Repair of the database didn’t help: [Information-Duplicati.Library.Main.Operation.RepairHandler-DatabaseIsSynchronized]: Destination and database are synchronized, not making any changes
I deleted the database and started Duplicati again. It ended with the following error:
2018-06-27 23:59:54 +02 - [Warning-Duplicati.Library.Main.Operation.FilelistProcessor-ExtraUnknownFile]: Extra unknown file: duplicati-ifbb1ef804a184fa396e7dcbd16782ffe.dindex.zip
2018-06-27 23:59:54 +02 - [Warning-Duplicati.Library.Main.Operation.FilelistProcessor-ExtraUnknownFile]: Extra unknown file: duplicati-iffea6c4773fa4d7685501f3b61cfb508.dindex.zip
2018-06-27 23:59:54 +02 - [Error-Duplicati.Library.Main.Operation.FilelistProcessor-ExtraRemoteFiles]: Found 418 remote files that are not recorded in local storage, please run repair
2018-06-27 23:59:54 +02 - [Error-Duplicati.Library.Main.Operation.BackupHandler-FatalError]: Fatal error
Duplicati.Library.Interface.UserInformationException: Found 418 remote files that are not recorded in local storage, please run repair
at Duplicati.Library.Main.Operation.FilelistProcessor.VerifyRemoteList(BackendManager backend, Options options, LocalDatabase database, IBackendWriter log, String protectedfile)
at Duplicati.Library.Main.Operation.BackupHandler.PreBackupVerify(BackendManager backend, String protectedfile)
at Duplicati.Library.Main.Operation.BackupHandler.<RunAsync>d__19.MoveNext()
3rd attempt (No filelists found on the remote destination):
I started to recreate the database. It ended with the following error:
2018-06-28 00:06:56 +02 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Repair has started
2018-06-28 00:06:57 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started: ()
2018-06-28 00:06:57 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Completed: (418 bytes)
=======> ERROR from GUI: Error while running PC1--Name--Desktop auf Q: No filelists found on the remote destination
As a first step, do you recommend downgrading to Beta 2.0.3.3 instead of using Canary 2.0.3.7?
Everything worked, but immediately after the update (to 2.0.3.8) this error message appears and the backup won’t run.
So something is different, and it seems to be broken too.
Software as usual: I love it when people ask whether a new update will fix issues. Sure, it will fix some, and you’ll get many interesting new ones too. But that’s the way it is.
Fatal error => System.Exception: Detected non-empty blocksets with no associated blocks!
at Duplicati.Library.Main.Database.LocalDatabase.VerifyConsistency (Int64 blocksize, Int64 hashsize, Boolean verifyfilelists, IDbTransaction transaction) <0x4109d4c0 + 0x006c7> in <filename unknown>:0
at Duplicati.Library.Main.Operation.Backup.BackupDatabase+<>c__DisplayClass31_0.<VerifyConsistencyAsync>b__0 () <0x4109b360 + 0x0003b> in <filename unknown>:0
at Duplicati.Library.Main.Operation.Common.SingleRunner+<>c__DisplayClass6_0.<RunOnMain>b__0 () <0x4109b320 + 0x00019> in <filename unknown>:0
at Duplicati.Library.Main.Operation.Common.SingleRunner+<>c__DisplayClass5_0`1+<<DoRunOnMain>b__1>d[T].MoveNext () <0x4109ae60 + 0x000e7> in <filename unknown>:0
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw () <0x7fa54f8016d0 + 0x00029> in <filename unknown>:0
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (System.Threading.Tasks.Task task) <0x7fa54f7ff6b0 + 0x000a7> in <filename unknown>:0
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (System.Threading.Tasks.Task task) <0x7fa54f7ff630 + 0x0006b> in <filename unknown>:0
at System.Runtime.CompilerServices.TaskAwaiter.ValidateEnd (System.Threading.Tasks.Task task) <0x7fa54f7ff5e0 + 0x0003a> in <filename unknown>:0
at System.Runtime.CompilerServices.TaskAwaiter.GetResult () <0x7fa54f7ff5c0 + 0x00012> in <filename unknown>:0
at Duplicati.Library.Main.Operation.BackupHandler+<RunAsync>d__19.MoveNext () <0x4103d520 + 0x011de> in <filename unknown>:0
Linux is mentioned in the subject because I ran this backup on several Windows machines and didn’t get this error message. Let’s try with another Linux computer too and see if the same happens.
Interesting: no problem on the alternate system. Anyway, what’s the root cause, and how should it be fixed?
Sounds like some older version could have left something in an “unclean” state in the database, and this new version doesn’t like that.
It seems that some users experienced this already with the previous version:
@DUser & @Sami_Lehtinen - I’ve merged your topics as they are both the same thing. (@Sami_Lehtinen, I tweaked your post to make it clear you’re on 2.0.3.8.)
I suspect this won’t improve things, but what happens if you add --no-backend-verification=true?
--no-backend-verification
If this flag is set, the local database is not compared to the remote filelist on startup. The intended usage for this option is to work correctly in cases where the filelisting is broken or unavailable.
Default value: “false”
Unfortunately, I’m having trouble finding what I thought was an even earlier instance of this for which @Pectojin had a solution (I think).
Until he “gets here”, all I can say is that, from what I recall, there’s an issue where a destination compact results in a file being deleted (as it should), but one part of the local database doesn’t get updated to indicate the delete happened.
Normally a database recreate would resolve this, but part of the issue is that since the database record isn’t deleted, neither is a destination file (either dlist or dindex, I forget which). The result is that even if you recreate the database, it comes back still broken, because the orphaned destination file still reports that there should be an empty dlist file, which then re-breaks the database.
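For reference, the consistency check that throws this error boils down to a single SQL query against the local job database; it shows up again later in this thread, both in the DB browser session and in the profiling log:

```sql
-- The check behind "Detected non-empty blocksets with no associated blocks!":
-- any blockset that claims a non-zero length but has no entries in
-- BlocksetEntry is reported as broken.
SELECT * FROM "Blockset"
WHERE "Length" > 0
  AND "ID" NOT IN (SELECT "BlocksetId" FROM "BlocksetEntry");
```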
I now get this error on two of my backup sets. The only fix seems to be to delete the whole backup and re-run it, which is not viable at all! Hope we can pinpoint and fix this one soon!
I’m now getting this same error on my 2.0.4.2_experimental_2018-11-12 system (Windows 10 laptop). I tried a Verify (got this error again) and Database Repair. Is there a fix?
I now get the same error, and repairing the database does not solve the issue. Is there a command I can run to find which orphaned files on the remote destination are causing the issue? Would removing them solve it?
I was able to restore functionality because I also back up the local job database separately. I restored it to the date when the main job last worked, and remotely deleted all dblock and dlist files dated after that. I then repaired the database, and the jobs are now working again.
Happily, I needed neither a database recreate nor a restart of the job from scratch. I speculate the issue was due to a crappy internet connection and Duplicati’s inability to handle it.
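As a sketch of one way to cross-check which destination files postdate a restored database (assuming the job database of this era keeps its known remote files in a Remotevolume table; names may differ between versions), you could compare the destination listing against:

```sql
-- Assumption: the restored job database tracks every known dblock/dindex/
-- dlist file in a "Remotevolume" table. Anything on the destination that is
-- missing here, or newer than the newest entry, is a candidate for the
-- manual deletion described above.
SELECT "Name", "Type", "State"
FROM "Remotevolume"
ORDER BY "ID" DESC;
```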
However, given this kind of issue in Duplicati, I find it crucial to keep a separate backup of the local job database and to disable the automatic compact feature (using --no-auto-compact). Can anybody confirm that, this way, remote dblock and dlist files are never modified again unless manually requested? (Besides files affected by the retention policy, of course.)
Hi, I would be happy to contribute, but I was not so systematic in my debugging, so I already deleted the faulty database once the restored one was proven to work.
I have this issue as well on one of my backups. It just started recently, but I don’t think the update to 2.0.4.5 beta caused it; the profile worked for a while after I updated, I think. First I noticed errors that a file name was too long and backups were failing. Then I noticed it was having issues backing up my new Outlook OST file, so I made a filter to exclude the *.OST file extension and re-ran the backup to get a better log of the long-named file. I no longer got that error, but now I get this one about non-empty blocksets.
If I can help with any testing, please let me know. I have a second backup going off-site that’s still working, so I’m not in a critical state where I can’t pause for troubleshooting.
If you could follow the steps in Fatal error: Detected non-empty blocksets with no associated blocks, replies 3 and 6, which involve using an SQL browser (download link provided there) and running a query, among other steps, and then post the results, it would help. Unfortunately, the file name that is messing things up is not recorded, but file names “nearby” are, which might give a hint.
Interesting that your problem happened after adding a filter to exclude some files. That’s generally consistent with the theory that when Duplicati goes to remove a file, it removes it from two of its DB tables but not the third.
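If it helps, here’s a hypothetical follow-up query along the same lines (table and column names are from the Duplicati schema of this era and may differ in other versions). It may well return nothing, since the broken file’s own entry is apparently what’s missing, which is why the linked steps look at “nearby” rows instead:

```sql
-- Hypothetical: list paths whose blocksets claim a length but have no
-- blocks. If the File row itself is gone, this returns nothing, and the
-- neighbouring rows from the linked steps are the only hint.
SELECT "Path"
FROM "File"
WHERE "BlocksetID" IN (
    SELECT "ID" FROM "Blockset"
    WHERE "Length" > 0
      AND "ID" NOT IN (SELECT "BlocksetId" FROM "BlocksetEntry")
);
```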
I’m not sure if I did this correctly as I don’t mess with SQL much.
I downloaded DB Browser for SQLite and opened the database read-only: “C:\Windows\System32\config\systemprofile\AppData\Roaming\Duplicati\7481…8475.sqlite”.
Then I pasted your query from the other thread, SELECT * FROM Blockset WHERE Length > 0 AND ID NOT IN (SELECT BlocksetId FROM BlocksetEntry), without the quotes.
The response I got was: 0 rows returned in 1ms from: SELECT * FROM Blockset WHERE Length > 0 AND ID NOT IN (SELECT BlocksetId FROM BlocksetEntry)
Sorry, I also don’t know how to put the commands or logs into separate boxes in the thread to make them more readable.
Ok, thanks @sjl1986, it sounds like you did it right.
But those results are inconsistent with the others’. Quite interesting.
For a sanity check, can you ensure that there is only one .sqlite file in that directory with that naming scheme, and that you’re still getting this specific error: Detected non-empty blocksets with no associated blocks? The SQL query you ran comes directly from the code which triggers this error. Also, did you happen to run this with two different user names? I’m trying to see how the SQL query in the code gets a different result than the SQL browser.
use ~~~ or ``` (those are backticks) before and after to make a code block.
Thanks for the code block tip. I have always run Duplicati as a service, under the local system account. This is the only .sqlite file that was named this way; I have others labeled like “backup 201812#####.sqlite” and “MEOIF….sqlite” (all capital letters), plus a Duplicati-server.sqlite.
I ran it again, and below is the abbreviated output of the profiling log. I removed all but the last file before each new section to keep it short. If you want to see the whole thing, I can host a cleansed copy of it somewhere.
2019-01-21 00:13:20 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-IncludingPath]: Including path as no filters matched: C:\Program Files (x86)\Steam\steamapps\common\wolfenstein 3d\base\SHADERS\CRT-geom-blend.fx
2019-01-21 00:13:20 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT * FROM (SELECT "N"."BlocksetID", (("N"."BlockCount" + 16000 - 1) / 16000) AS "BlocklistHashCountExpected", CASE WHEN "G"."BlocklistHashCount" IS NULL THEN 0 ELSE "G"."BlocklistHashCount" END AS "BlocklistHashCountActual" FROM (SELECT "BlocksetID", COUNT(*) AS "BlockCount" FROM "BlocksetEntry" GROUP BY "BlocksetID") "N" LEFT OUTER JOIN (SELECT "BlocksetID", COUNT(*) AS "BlocklistHashCount" FROM "BlocklistHash" GROUP BY "BlocksetID") "G" ON "N"."BlocksetID" = "G"."BlocksetID" WHERE "N"."BlockCount" > 1) WHERE "BlocklistHashCountExpected" != "BlocklistHashCountActual") took 0:00:00:00.206
2019-01-21 00:13:20 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "Blockset" WHERE "Length" > 0 AND "ID" NOT IN (SELECT "BlocksetId" FROM "BlocksetEntry")
2019-01-21 00:13:20 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-IncludingPath]: Including path as no filters matched: C:\Program Files (x86)\Steam\steamapps\common\wolfenstein 3d\base\SHADERS\Matrix.fx
2019-01-21 00:13:20 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: SELECT COUNT(*) FROM "Blockset" WHERE "Length" > 0 AND "ID" NOT IN (SELECT "BlocksetId" FROM "BlocksetEntry") took 0:00:00:00.324
2019-01-21 00:13:20 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-IncludingPath]: Including path as no filters matched: C:\Program Files (x86)\Steam\steamapps\common\wolfenstein 3d\base\SHADERS\MCGreen.fx
2019-01-21 00:13:22 -05 - [Error-Duplicati.Library.Main.Operation.BackupHandler-FatalError]: Fatal error
System.Exception: Detected non-empty blocksets with no associated blocks!
at Duplicati.Library.Main.Database.LocalDatabase.VerifyConsistency(Int64 blocksize, Int64 hashsize, Boolean verifyfilelists, IDbTransaction transaction)
at Duplicati.Library.Main.Operation.Backup.BackupDatabase.<>c__DisplayClass32_0.<VerifyConsistencyAsync>b__0()
at Duplicati.Library.Main.Operation.Common.SingleRunner.<>c__DisplayClass3_0.<RunOnMain>b__0()
at Duplicati.Library.Main.Operation.Common.SingleRunner.<DoRunOnMain>d__2`1.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Duplicati.Library.Main.Operation.BackupHandler.<RunAsync>d__19.MoveNext()
2019-01-21 00:13:22 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Controller-RunBackup]: Running Backup took 0:00:00:32.195
2019-01-21 00:13:24 -05 - [Profiling-Duplicati.Library.Modules.Builtin.SendMail-SendMailResult]: Whole SMTP communication:
In general, you can use the GitHub help on formatting; it works here too. That’s what I’ve tried so far.
I don’t know what to make of that log. @ts678, can you help? Is that file related to the error? The error is an inconsistency in the current database, i.e., old files that were stored last time, yet the file reported seems to be part of the new scan. Maybe I just don’t know how it works in enough detail.
I do like that it reports the exact SQL query we are running manually. (Actually, the log has COUNT(*) whereas we have been using just *.)
So, you got this error while running a backup? And did you try the SQL query in the SQL browser before and/or after this, i.e., without any other Duplicati changes intervening? And do you still get zero rows reported in the SQL browser?
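For clarity, the two forms should agree; the COUNT(*) variant in the code just collapses the row set to a number, so “0 rows returned” in the browser corresponds to a count of 0 in the log:

```sql
-- The form logged by the code: a count of 0 here is the same as the
-- browser's "0 rows returned" for the SELECT * form.
SELECT COUNT(*) FROM "Blockset"
WHERE "Length" > 0
  AND "ID" NOT IN (SELECT "BlocksetId" FROM "BlocksetEntry");
```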
Also, another sanity check: in the job settings, if you expand the drop-down, under “Advanced” there’s “Database…”. Can you make sure that that location is the one you pointed the SQL browser at? I’m sure it is, but just in case…
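One quick way to confirm you opened the right file, as a sketch (assuming the job database has an Operation table whose Timestamp column holds Unix epoch seconds, which is my understanding of this era’s schema): recent entries should line up with the job’s actual run times.

```sql
-- Assumption: Operation.Timestamp is Unix epoch seconds. The most recent
-- rows should match when the job actually ran.
SELECT "Description", datetime("Timestamp", 'unixepoch') AS "RunTime"
FROM "Operation"
ORDER BY "Timestamp" DESC
LIMIT 5;
```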
That makes two of us. Duplicati runs lots of concurrent processes that pass data to each other through channels; I discovered some documentation here of the ones apparently involved in backups. Possibly delays can occur. The best way to see if that happened here might be to look around inside the database, trying to see the relationship between where the error happened there and the file names in the log. Though I’ve only been following this loosely (thanks for orchestrating some pursuit), I think some people weren’t having luck seeing the exact bad new file, but had some luck seeing where finished files stopped.
Referring to the log, then, I’d guess it’s safe to say that something that went by is what broke it, but exactly which file it was is less clear, hence the need to match the database against the log.
Previously running the SQL test query came up clean. Presumably after the logged events, it now won’t.
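As a rough sketch of “seeing where finished files stopped” (assuming this era’s schema stores one row per path in a File table with an increasing ID; that is an assumption, not a documented interface):

```sql
-- Assumption: File rows are inserted in processing order, so the tail of
-- the table approximates where the backup got to before the check fired.
SELECT "ID", "Path"
FROM "File"
ORDER BY "ID" DESC
LIMIT 20;
```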