Database disk image is malformed

That’s perfect, thanks!

By the way, I edited your post by putting “~~~” before and after the error message to make it easier to read.

I’m now getting that on another backup, which is going to a different NAS via SMB/CIFS. Other jobs are running fine backing up to both NAS boxes.

~~~
Failed to process path: W:\OtherPics\BrettDroid\2016\2016-05-30\160530-IMG_20160530_140215.jpg
System.Data.SQLite.SQLiteException (0x80004005): database disk image is malformed
database disk image is malformed
   at System.Data.SQLite.SQLite3.Reset(SQLiteStatement stmt)
   at System.Data.SQLite.SQLite3.Step(SQLiteStatement stmt)
   at System.Data.SQLite.SQLiteDataReader.NextResult()
   at System.Data.SQLite.SQLiteDataReader..ctor(SQLiteCommand cmd, CommandBehavior behave)
   at System.Data.SQLite.SQLiteCommand.ExecuteReader(CommandBehavior behavior)
   at Duplicati.Library.Main.Database.ExtensionMethods.ExecuteScalarInt64(IDbCommand self, String cmd, Int64 defaultvalue, Object[] values)
   at Duplicati.Library.Main.Database.LocalBackupDatabase.AddBlock(String key, Int64 size, Int64 volumeid, IDbTransaction transaction)
   at Duplicati.Library.Main.Operation.BackupHandler.AddBlockToOutput(BackendManager backend, String key, Byte[] data, Int32 offset, Int32 len, CompressionHint hint, Boolean isBlocklistData)
   at Duplicati.Library.Main.Operation.BackupHandler.ProcessStream(Stream stream, CompressionHint hint, BackendManager backend, FileBackedStringList blocklisthashes, FileBackedStringList hashcollector, Boolean skipfilehash)
   at Duplicati.Library.Main.Operation.BackupHandler.HandleFilesystemEntry(ISnapshotService snapshot, BackendManager backend, String path, FileAttributes attributes)
~~~

@bkuehner, have you done basic checks, like checking the consistency of the disk that the SQLite databases are stored on? The error is generic, meaning there is probably corruption in your database for whatever reason. Another thing you can try is deleting the currently active database and recreating it from your backup files, to see whether that resolves the issue.
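If you want to test the database file itself rather than the disk, SQLite's built-in integrity check is the quickest way. Here's a minimal C# sketch using System.Data.SQLite (the path is just an example; point it at your job database):

~~~
using System;
using System.Data.SQLite;

class CheckDb
{
    static void Main()
    {
        // Example path only - point this at your actual job database.
        // If the database is scrambled/encrypted, you would need to
        // open it with the right password first.
        var dbPath = @"C:\Users\you\AppData\Local\Duplicati\XXXXXXXXXX.sqlite";

        using (var conn = new SQLiteConnection("Data Source=" + dbPath + ";Read Only=True"))
        {
            conn.Open();
            using (var cmd = conn.CreateCommand())
            {
                // Prints a single "ok" when the file is healthy,
                // otherwise one line per detected problem.
                cmd.CommandText = "PRAGMA integrity_check;";
                using (var reader = cmd.ExecuteReader())
                    while (reader.Read())
                        Console.WriteLine(reader.GetString(0));
            }
        }
    }
}
~~~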

I’ve tried deleting and recreating the database, but the problem recurs.

You aren’t perhaps running out of disk space, or hitting a SQLite database file size larger than your disk format can handle, are you?

Good thought, but the .sqlite files are on an NTFS disk with tens of GB of free space, and the largest is 2GB, so that should not be hitting any filesystem or space limits.

I have looked into this, and it appears this error is thrown by SQLite when internal DB corruption occurs. It seems to affect the indexes rather than the tables themselves, which could be why you can still browse the DB and see the data in it. It can be repaired when the issue occurs on a standalone deployment of SQLite, but Duplicati uses the embedded option, so that probably cannot be done here. At this stage I would imagine there is no way to fix the issue, so if you can still restore from this job AND you need the historical backups, then just leave it as is for restores. A new job will need to be created to start with a fresh SQLite DB.

I found some info about this error at the following URL:

https://techblog.dorogin.com/sqliteexception-database-disk-image-is-malformed-77e59d547c50

Sorry to be the bearer of bad news.

Thanks for the research. At this point I’m still testing things out, so I’m not doing critical restores (though I am in the process of moving all my family members from CrashPlan to Duplicati, so I do hope it is generally reliable). I checked the disk (it was fine), added the disable-filetime-check you suggested in the other thread, and did a database recreate. I’m now running the backup again. It seems to be redoing the entire backup, which may be a side effect of disable-filetime-check?
I’ll let you know if that resolves the issue, though previous attempts to recreate the DB just had the same problem crop up in the new DB.

It doesn’t exactly redo the whole backup: it will re-hash all the files, and only changes detected by hash will actually be backed up. Keep us updated as I’m curious what could be causing this issue.

@kenkendk, can you confirm that using “embedded sqlite” excludes the ability to repair internal DB corruption?

Wouldn’t a sqlite database file be the same regardless of which engine was accessing it? Though the engine itself might not support the repair functionality?

The database for one of my backup sets is OK for a bit after a rebuild, and then starts reporting as corrupt. This has happened a few times. I’m trying to do a PRAGMA integrity_check with DB Browser to see what that turns up on the database.

OK, that was quick. Here’s what the integrity check reported:

~~~
*** in database main ***
Multiple uses for byte 3793 of page 373402
row 8555068 missing from index BlockHashSize
~~~

The disk the database is on is not corrupt, and there is plenty of free space. Since this keeps occurring, it looks to me like somehow Duplicati (or the sqlite engine in it) is producing a corrupt database for this backup set.

Is there anything I can do to help debug the issue?

Probably - but unfortunately there’s not much I can do at this point, so I’ve pinged somebody with a lot of knowledge of how Duplicati’s database works. Note that he usually shows up a few times a week, so it may be a bit before he sees this.

Thanks. I am a developer with reasonable C# experience, so I’m happy to run test builds and do debugging to track this down, especially if I can run one in parallel with the current beta, so that only a single job runs on a test build.

That should work fine - just note that when run, Duplicati tries to get port 8200. If it’s already in use (such as by another Duplicati server) then it will check higher ports in increments of 100 until a free one is found.

So if your normal Duplicati runs at port 8200, then your dev version will grab 8300. You could also run your dev version in portable mode just to make sure all its settings files are kept “local”…
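The probing logic is roughly like this (just a sketch of the idea in C#, not Duplicati’s actual code):

~~~
using System;
using System.Net;
using System.Net.Sockets;

static class PortProbe
{
    // Sketch: try 8200, 8300, 8400, ... until a port can be bound.
    public static int FindFreePort(int basePort = 8200, int step = 100, int attempts = 5)
    {
        for (var i = 0; i < attempts; i++)
        {
            var candidate = basePort + i * step;
            var listener = new TcpListener(IPAddress.Loopback, candidate);
            try
            {
                listener.Start();   // binding succeeds only if the port is free
                listener.Stop();
                return candidate;
            }
            catch (SocketException)
            {
                // Already in use (e.g. by another Duplicati instance); try the next one.
            }
        }
        throw new InvalidOperationException("No free port found");
    }
}
~~~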

If you build Duplicati in Debug mode (default) it will use “portable” mode and place all files in the bin/Debug folder.

Not sure what you mean. The SQLite file format is the same either way; you can just use the sqlite3.exe tool on the database.

Unfortunately, it uses the RC4 “database scrambling” on Windows, so you may need a copy of sqlite3.exe with the RC4 support compiled in.
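If you can’t find such a build, another option may be to remove the scrambling with System.Data.SQLite itself, since it ships the RC4 codec. A rough sketch (run it against a copy of the file; the password below is a placeholder, not Duplicati’s actual key):

~~~
using System.Data.SQLite;

class Descramble
{
    static void Main()
    {
        // Always work on a COPY, never the live job database.
        using (var conn = new SQLiteConnection(@"Data Source=C:\temp\copy-of-job-db.sqlite"))
        {
            // Placeholder: whatever password the database was scrambled with.
            conn.SetPassword("scrambling-password");
            conn.Open();

            // Rewrites the file unencrypted; a plain sqlite3.exe can read it afterwards.
            conn.ChangePassword((string)null);
        }
    }
}
~~~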

I meant that Duplicati doesn’t use SQLite in a client-server scenario, but rather directly via a DLL. I wasn’t aware that the file format was the same. Thanks for clarifying.

Hello, folks,

I know it’s a few months later but I just had this issue myself and I think I found a solution:

@jonmikeiv, maybe the person with a lot of knowledge of the DB can chime in on whether this method is inappropriate. But here is what I did.

First, stop services, as a precaution.

Now, make a backup copy of the {fooo}.sqlite file.

Next, load the bad DB in your favourite SQLite editor. I use DBeaver, but the exact tool is unimportant.

Then I did:

~~~
pragma integrity_check;

/* RESULT: (takes forever)
	integrity_check                             |
	--------------------------------------------|
	row 34538 missing from index FilePath       |
	row 56151 missing from index FilePath       |
	row 56162 missing from index FilePath       |
	row 172197 missing from index BlockHashSize |
	row 220677 missing from index BlockHashSize |
	wrong # of entries in index BlockHashSize   |
 */
~~~

As we can see, the errors solely affect indexes. So I investigated the DDL of the tables and found that the indexes were created like this (the original CREATE statements can be read back from the sqlite_master table):

~~~
CREATE UNIQUE INDEX "FilePath" ON "File" ("Path", "BlocksetID", "MetadataID");
CREATE UNIQUE INDEX "BlockHashSize" ON "Block" ("Hash", "Size");
~~~

We now have enough information to remove the corrupt indexes and recreate them.

~~~
DROP INDEX "FilePath";
DROP INDEX "BlockHashSize";

CREATE UNIQUE INDEX "FilePath" ON "File" ("Path", "BlocksetID", "MetadataID");
CREATE UNIQUE INDEX "BlockHashSize" ON "Block" ("Hash", "Size");
~~~

It was fast for me, but index creation could be quite slow.

~~~
pragma integrity_check;

/* SUCCESS!
	integrity_check |
	----------------|
	ok              |
 */
~~~

Then, all I did was copy the db file back over the original and restart services.

I am now able to complete the backup without error:

[screenshot: successful backup result]

When the error was occurring, the backup halted.

For me, the problem started around the time that an upgrade failed.

So, while I’m not super familiar with SQLite, it seems that at least in cases where this error affects indexes, manual intervention can help.

Though, the obvious question is ‘why did this happen?’ (I have preserved the bad database in case the devs have questions.)


And a good question. This should not happen, but it is not something that Duplicati can control. It happens somewhere inside the SQLite library.

Perhaps we can catch the error and recreate the indexes in Duplicati in this edge case?

I’m assuming it’s in the job-specific DB, which means we just have to make sure we’re not using it before attempting the repair.

We could also write it into the repair method and then make the error trigger a “Please run repair” message?
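As a rough sketch of what that could look like (hypothetical code, not what’s in Duplicati; the thrown exception is a stand-in for however the “please run repair” hint would actually be surfaced):

~~~
using System;
using System.Data.SQLite;

static class IndexRecovery
{
    // Hypothetical helper: if integrity_check reports only index problems,
    // rebuild the known unique indexes; otherwise ask the user to run repair.
    public static void TryRecoverIndexes(SQLiteConnection conn)
    {
        var indexProblemsOnly = true;
        var healthy = false;

        using (var cmd = conn.CreateCommand())
        {
            cmd.CommandText = "PRAGMA integrity_check;";
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    var line = reader.GetString(0);
                    if (line == "ok") { healthy = true; break; }
                    if (!line.Contains("missing from index") &&
                        !line.Contains("wrong # of entries in index"))
                        indexProblemsOnly = false;
                }
            }
        }

        if (healthy)
            return;
        if (!indexProblemsOnly)
            throw new InvalidOperationException("Database is corrupt, please run repair");

        using (var cmd = conn.CreateCommand())
        {
            // Same definitions as in the schema quoted above.
            cmd.CommandText =
                @"DROP INDEX IF EXISTS ""FilePath"";
                  DROP INDEX IF EXISTS ""BlockHashSize"";
                  CREATE UNIQUE INDEX ""FilePath"" ON ""File"" (""Path"", ""BlocksetID"", ""MetadataID"");
                  CREATE UNIQUE INDEX ""BlockHashSize"" ON ""Block"" (""Hash"", ""Size"");";
            cmd.ExecuteNonQuery();
        }
    }
}
~~~

Note that this would only cover the index-corruption case; page-level damage like the “Multiple uses for byte … of page …” error above would still need a full database recreate.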