I’ve tried deleting and recreating the database, but the problem recurs.
You aren’t perhaps running out of disk space, or ending up with a SQLite database file larger than your filesystem can handle, are you?
Good thought, but the .sqlite files are on an NTFS disk with tens of GB of free space, and the largest is 2GB, so that should not be hitting any filesystem or space limits.
I have looked into this, and it appears this error is thrown by SQLite when internal DB corruption occurs. It seems to affect the indexes rather than the tables themselves, which could be why you can still browse the DB and see the data in it. It can be repaired when the issue occurs on a standalone deployment of SQLite, but Duplicati uses the embedded option, so it probably cannot be done. At this stage I would imagine there is no way to fix the issue; if you can restore from this job and you need the historical backups, then just leave it as is for restores. A new job and a new DB will need to be created to start with a fresh SQLite DB.
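For what it’s worth, SQLite’s built-in check will show whether only the indexes are involved; run it against a copy of the job database (the sample output lines below are illustrative, not from this database):
pragma integrity_check;
-- index-only damage reports lines like:
--   row 12345 missing from index SomeIndexName
-- page/table damage reports lines like:
--   Multiple uses for byte 100 of page 200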
I found some info around this error on the following URL:
https://techblog.dorogin.com/sqliteexception-database-disk-image-is-malformed-77e59d547c50
Sorry to be the bearer of bad news.
Thanks for the research. At this point, I’m still testing things out, so I’m not doing critical restores (though I am in the process of moving all my family members from Crashplan to Duplicati, so I do hope it is generally reliable). I checked the disk (it was fine), added the disable-filetime-check you suggested in the other thread, and did a database recreate. I’m now running the backup again. It seems to be redoing the entire backup, which is maybe a side-effect of the disable-filetime-check?
I’ll let you know if that resolves the issue, though previous attempts to recreate the DB just had the same problem crop up in the new DB.
It doesn’t exactly do the whole backup again, although it will re-hash all the files, and any changes detected by those hashes will be backed up. Keep us updated, as I’m curious what could be causing this issue.
@kenkendk, can you confirm that using “embedded sqlite” excludes the ability to repair internal DB corruption?
Wouldn’t a sqlite database file be the same regardless of which engine was accessing it? Though the engine itself might not support the repair functionality?
The database for one of my backup sets is OK for a bit after a rebuild, and then starts reporting as corrupt. This has happened a few times. I’m trying to do a PRAGMA integrity_check with DB Browser to see what that turns up on the database.
OK, that was quick. Here’s what the integrity check reported:
"*** in database main ***
Multiple uses for byte 3793 of page 373402"
“row 8555068 missing from index BlockHashSize”
The disk the database is on is not corrupt, and there is plenty of free space. Since this keeps occurring, it looks to me like somehow Duplicati (or the sqlite engine in it) is producing a corrupt database for this backup set.
Is there anything I can do to help debug the issue?
Probably - but unfortunately there’s not much I can do at this point, so I’ve pinged somebody with a lot of knowledge of how Duplicati’s database works. Note that he usually shows up a few times a week, so it may be a bit before he sees this.
Thanks. I am a developer with reasonable C# experience, so I’m happy to run test builds and do debugging to track this down, especially if I can run one in parallel with the current beta, so that only a single job runs on a test build.
That should work fine - just note that when run, Duplicati tries to get port 8200. If it’s already in use (such as by another Duplicati server) then it will check higher ports in increments of 100 until a free one is found.
So if your normal Duplicati runs at port 8200 then you can run your dev version and it will grab 8300. You could also run your dev version in portable mode just to make sure all its settings files are kept “local”…
If you build Duplicati in Debug mode (the default) it will use “portable” mode and place all files in the bin/Debug folder.
Not sure what you mean. The SQLite file format is what it is; you can just use the sqlite3.exe tool on the database.
Unfortunately, it uses the RC4 “database scrambling” on Windows, so you may need a copy of sqlite3.exe with the RC4 support compiled in.
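From a sqlite3.exe build that can read the file, the check itself is just the standard pragma (a stock build without the RC4 codec will typically refuse the scrambled file with a “file is encrypted or is not a database” error):
-- open a copy of the job database in sqlite3.exe, then:
pragma integrity_check;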
I meant that Duplicati doesn’t use SQLite in a client-server scenario, but rather directly via a DLL. I wasn’t aware that the file format was the same either way. Thanks for clarifying.
Hello, folks,
I know it’s a few months later but I just had this issue myself and I think I found a solution:
@jonmikeiv, maybe the person with a lot of knowledge on the DB can chime in on whether this method is inappropriate. But here is what I did.
First, stop services, as a precaution.
Now, make a backup copy of the {fooo}.sqlite file.
Next, load the bad DB in your favourite SQLite editor. I use DBeaver, but the exact tool is unimportant.
Then I did:
pragma integrity_check;
/* RESULT: (takes forever)
integrity_check |
--------------------------------------------|
row 34538 missing from index FilePath |
row 56151 missing from index FilePath |
row 56162 missing from index FilePath |
row 172197 missing from index BlockHashSize |
row 220677 missing from index BlockHashSize |
wrong # of entries in index BlockHashSize |
*/
As we can see, the errors solely affected indexes. So I investigated the DDL of the tables and found that the indexes had been created like this:
CREATE UNIQUE INDEX "FilePath" ON "File" ("Path", "BlocksetID", "MetadataID");
CREATE UNIQUE INDEX "BlockHashSize" ON "Block" ("Hash", "Size");
We now have enough information to remove the corrupt indexes and recreate them.
drop index "FilePath";
drop index "BlockHashSize";
CREATE UNIQUE INDEX "FilePath" ON "File" ("Path", "BlocksetID", "MetadataID");
CREATE UNIQUE INDEX "BlockHashSize" ON "Block" ("Hash", "Size");
It was fast for me, but index creation could be quite slow.
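(As an aside: SQLite also has a REINDEX statement that deletes and rebuilds an index from the table data in one step. I haven’t verified how it behaves on an index that integrity_check already flags as corrupt, but it may save the manual drop/create:)
reindex "FilePath";
reindex "BlockHashSize";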
pragma integrity_check;
/* SUCCESS!
integrity_check |
----------------|
ok |*/
Then, all I did was copy the db file back over the original and restart services.
I am now able to complete the backup without error:
(screenshot: vivaldi_2018-03-11_21-12-53)
When the error was occurring, the backup halted.
For me, the problem started at about the time that an upgrade didn’t succeed.
So, while I’m not super familiar with SQLite, it seems that at least in cases where this error affects indexes, manual intervention can help.
Though, the obvious question is ‘why did this happen?’ (I have preserved the bad database in case the devs have questions.)
And a good question. This should not happen, but it is not something that Duplicati can control. It happens somewhere inside the SQLite library.
Perhaps we can catch the error and recreate the indexes in Duplicati in this edge case?
I’m assuming it’s in the job specific DB, which means we just have to make sure we’re not using it before attempting the repair.
We could also write it into the repair method and then make the error trigger a “Please run repair” message?
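For what it’s worth, the SQL side of such a repair pass would be small; a rough sketch (not actual Duplicati code) would enumerate the explicitly created indexes and rebuild each one:
-- auto-created indexes have sql = NULL, so this lists only the explicit ones
select name, sql from sqlite_master where type = 'index' and sql is not null;
-- then, for each row: drop index "<name>"; and re-execute the saved sql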
I can say that I had this problem as well, and it was the BlockHashSize index. I followed the instructions to recreate the index and everything is fine now. If Duplicati could recognize this error and recreate the indices, that would be great. It would be better to know the root cause, but I didn’t notice anything in particular with mine either.
At random*, one of my regular jobs started complaining about “disk image malformed”, so I pulled up DBeaver and got the following when running an integrity check.
(slightly edited result)
integrity_check
Page 51630: free space corruption|
row 696150 missing from index FilesetentryFileIdIndex |
row 696151 missing from index FilesetentryFileIdIndex |
row 1559442 missing from index FilesetentryFileIdIndex |
row 2422734 missing from index FilesetentryFileIdIndex |
row 3286028 missing from index FilesetentryFileIdIndex |
row 5012620 missing from index FilesetentryFileIdIndex |
row 5875916 missing from index FilesetentryFileIdIndex |
row 6739205 missing from index FilesetentryFileIdIndex |
row 7602503 missing from index FilesetentryFileIdIndex |
row 8457470 missing from index FilesetentryFileIdIndex |
row 9316511 missing from index FilesetentryFileIdIndex |
row 10175552 missing from index FilesetentryFileIdIndex |
row 10809664 missing from index FilesetentryFileIdIndex |
row 22460111 missing from index FilesetentryFileIdIndex |
row 23397674 missing from index FilesetentryFileIdIndex |
row 24334312 missing from index FilesetentryFileIdIndex |
row 25270700 missing from index FilesetentryFileIdIndex |
row 26223263 missing from index FilesetentryFileIdIndex |
* The only thing I can think of is that I may have put my computer to sleep while that backup job was running.
I was about to drop index "FilesetentryFileIdIndex" but then realized I had no clue how to recreate it. I paused for a moment, then dropped the index anyway… I then reran the integrity check, which passed, and Duplicati just finished running that job.
I don’t use DBeaver, but DB Browser for SQLite shows the index’s creation SQL. This one looks like
CREATE INDEX "FilesetentryFileIdIndex" ON "FilesetEntry" ("FileID")
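So to put the index back after your earlier drop and confirm the fix (assuming that DDL matches your database’s schema version):
CREATE INDEX "FilesetentryFileIdIndex" ON "FilesetEntry" ("FileID");
pragma integrity_check;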
Also see How To Corrupt An SQLite Database File, which gives some possible ways. I sleep my Windows machine while Duplicati runs, these days at random (an effort downgrade from sleeping on it on purpose…), though different systems likely differ.