Mono.Data.Sqlite.SqliteException (0x80004005) The database file is locked

I have Duplicati working on several systems, but there is one that has never worked.

System specifics: macOS 10.13.3, Duplicati 2.0.2.1_beta_2017-08-01

Backend: SFTP

Error Message:

~~~
Failed: The database file is locked
database is locked
Details: Mono.Data.Sqlite.SqliteException (0x80004005): The database file is locked
database is locked
  at Mono.Data.Sqlite.SQLite3.Step (Mono.Data.Sqlite.SqliteStatement stmt) [0x00089] in <54894e20c59b4c3b9ef1557952b1a6a6>:0
  at Mono.Data.Sqlite.SqliteDataReader.NextResult () [0x00104] in <54894e20c59b4c3b9ef1557952b1a6a6>:0
  at Mono.Data.Sqlite.SqliteDataReader..ctor (Mono.Data.Sqlite.SqliteCommand cmd, System.Data.CommandBehavior behave) [0x0004e] in <54894e20c59b4c3b9ef1557952b1a6a6>:0
  at (wrapper remoting-invoke-with-check) Mono.Data.Sqlite.SqliteDataReader:.ctor (Mono.Data.Sqlite.SqliteCommand,System.Data.CommandBehavior)
  at Mono.Data.Sqlite.SqliteCommand.ExecuteReader (System.Data.CommandBehavior behavior) [0x00006] in <54894e20c59b4c3b9ef1557952b1a6a6>:0
  at Mono.Data.Sqlite.SqliteCommand.ExecuteNonQuery () [0x00000] in <54894e20c59b4c3b9ef1557952b1a6a6>:0
  at Duplicati.Library.Main.Database.LocalDatabase.LogMessage (System.String type, System.String message, System.Exception exception, System.Data.IDbTransaction transaction) [0x00067] in <118ad25945a24a3991f7b65e7a45ea1e>:0
  at Duplicati.Library.Main.BasicResults.LogDbMessage (System.String type, System.String message, System.Exception ex) [0x00027] in <118ad25945a24a3991f7b65e7a45ea1e>:0
  at Duplicati.Library.Main.BasicResults.AddMessage (System.String message) [0x00083] in <118ad25945a24a3991f7b65e7a45ea1e>:0
  at Duplicati.Library.Main.BasicResults.AddMessage (System.String message) [0x00008] in <118ad25945a24a3991f7b65e7a45ea1e>:0
  at Duplicati.Library.Main.Database.LocalRepairDatabase.FixDuplicateMetahash () [0x00036] in <118ad25945a24a3991f7b65e7a45ea1e>:0
  at Duplicati.Library.Main.Operation.RepairHandler.RunRepairCommon () [0x0008d] in <118ad25945a24a3991f7b65e7a45ea1e>:0
  at Duplicati.Library.Main.Operation.RepairHandler.Run (Duplicati.Library.Utility.IFilter filter) [0x00147] in <118ad25945a24a3991f7b65e7a45ea1e>:0
  at Duplicati.Library.Main.Operation.BackupHandler.PreBackupVerify (Duplicati.Library.Main.BackendManager backend, System.String protectedfile) [0x000c7] in <118ad25945a24a3991f7b65e7a45ea1e>:0
  at Duplicati.Library.Main.Operation.BackupHandler.Run (System.String[] sources, Duplicati.Library.Utility.IFilter filter) [0x00860] in <118ad25945a24a3991f7b65e7a45ea1e>:0
  at Duplicati.Library.Main.Controller+<>c__DisplayClass16_0.<Backup>b__0 (Duplicati.Library.Main.BackupResults result) [0x0030f] in <118ad25945a24a3991f7b65e7a45ea1e>:0
  at Duplicati.Library.Main.Controller.RunAction[T] (T result, System.String[]& paths, Duplicati.Library.Utility.IFilter& filter, System.Action`1[T] method) [0x00072] in <118ad25945a24a3991f7b65e7a45ea1e>:0
~~~

My troubleshooting: I have attempted to repair the database, and to delete and recreate it, several times. The repair seems to run for an hour or so (I think), but I cannot tell whether it completes successfully; either way, it then spits out this error on every attempt to back up the system.

I have also tried completely uninstalling Mono and Duplicati and reinstalling, with the same effect.

We used to use CrashPlan, but it has long since been removed from this system.

I am open to suggestions/ideas.

Thanks

Is it possible you’re running Duplicati more than once - such as as a service/daemon AND as a local user/tray icon (or as two different local users)?
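A quick way to check for that (this is just a generic sketch, not a Duplicati-specific command):

```shell
# Sketch: list any Duplicati-related processes. Two or more server/tray
# entries would suggest two instances sharing the same local database.
# The [d] bracket trick keeps grep from matching its own command line.
ps ax -o pid,ppid,command | grep -i '[d]uplicati' \
  || echo "no Duplicati processes found"
```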

(By the way, I edited your post by adding “~~~” before and after the error message to help it stand out a bit more.)

So first I will say it is fixed. Whoot! However, I do think there is a bug somewhere, but since it is working I am not sure I can reproduce the issue anymore. At the bottom I will post my solution, even though it isn’t much. Before that, here is the troubleshooting I did before it got fixed.

I opened terminal and ran the following.

~~~
while : ; do lsof ~/.config/Duplicati/VUXJTLGGOW.sqlite; done >> ~/pidLockmon.txt
~~~

Then I ran the backup and waited for it to fail. Once it did, I broke out of that infinite loop and looked at the output. This is the snippet that matters.

~~~
Tressa-Macbook-Pro:~ tressabeckler$ cat ~/pidLockmon.txt
COMMAND     PID          USER   FD   TYPE DEVICE  SIZE/OFF       NODE NAME
mono-sgen 13034 tressabeckler   16u   REG    1,4 184173568 8593572049 /Users/tressabeckler/.config/Duplicati/VUXJTLGGOW.sqlite
COMMAND     PID          USER   FD   TYPE DEVICE  SIZE/OFF       NODE NAME
mono-sgen 13034 tressabeckler   16u   REG    1,4 184173568 8593572049 /Users/tressabeckler/.config/Duplicati/VUXJTLGGOW.sqlite
COMMAND     PID          USER   FD   TYPE DEVICE  SIZE/OFF       NODE NAME
mono-sgen 13034 tressabeckler   16u   REG    1,4 184173568 8593572049 /Users/tressabeckler/.config/Duplicati/VUXJTLGGOW.sqlite
mono-sgen 13034 tressabeckler   26u   REG    1,4 184173568 8593572049 /Users/tressabeckler/.config/Duplicati/VUXJTLGGOW.sqlite
COMMAND     PID          USER   FD   TYPE DEVICE  SIZE/OFF       NODE NAME
mono-sgen 13034 tressabeckler   16u   REG    1,4 184173568 8593572049 /Users/tressabeckler/.config/Duplicati/VUXJTLGGOW.sqlite
mono-sgen 13034 tressabeckler   26u   REG    1,4 184173568 8593572049 /Users/tressabeckler/.config/Duplicati/VUXJTLGGOW.sqlite
COMMAND     PID          USER   FD   TYPE DEVICE  SIZE/OFF       NODE NAME
mono-sgen 13034 tressabeckler   16u   REG    1,4 184173568 8593572049 /Users/tressabeckler/.config/Duplicati/VUXJTLGGOW.sqlite
mono-sgen 13034 tressabeckler   26u   REG    1,4 184173568 8593572049 /Users/tressabeckler/.config/Duplicati/VUXJTLGGOW.sqlite
COMMAND     PID          USER   FD   TYPE DEVICE  SIZE/OFF       NODE NAME
mono-sgen 13034 tressabeckler   16u   REG    1,4 184173568 8593572049 /Users/tressabeckler/.config/Duplicati/VUXJTLGGOW.sqlite
mono-sgen 13034 tressabeckler   26u   REG    1,4 184173568 8593572049 /Users/tressabeckler/.config/Duplicati/VUXJTLGGOW.sqlite
COMMAND     PID          USER   FD   TYPE DEVICE  SIZE/OFF       NODE NAME
mono-sgen 13034 tressabeckler   16u   REG    1,4 184173568 8593572049 /Users/tressabeckler/.config/Duplicati/VUXJTLGGOW.sqlite
mono-sgen 13034 tressabeckler   26u   REG    1,4 184173568 8593572049 /Users/tressabeckler/.config/Duplicati/VUXJTLGGOW.sqlite
~~~
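For anyone repeating this later, here is a variant of that watcher (my own sketch, not what was actually run) that timestamps each sample and pauses between polls, so the log stays manageable:

```shell
#!/bin/sh
# Sketch of the lsof watcher with timestamps and a pause between polls.
# DB path is the one from this thread; SAMPLES is kept small here --
# raise it (or switch back to an infinite loop) for a real run.
DB="${DB:-$HOME/.config/Duplicati/VUXJTLGGOW.sqlite}"
SAMPLES="${SAMPLES:-5}"

i=0
while [ "$i" -lt "$SAMPLES" ]; do
  date "+%Y-%m-%d %H:%M:%S"        # timestamp each sample
  lsof "$DB" 2>/dev/null || true   # prints nothing if no one has it open
  sleep 1
  i=$((i + 1))
done
```

Redirect to a file (`>> ~/pidLockmon.txt`) as above to capture the samples.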

Then I used ps -ef to get the processes.

~~~
ps -ef | grep 13034
  501 13034 13027   0  7:48PM ??         5:41.83 Duplicati Duplicati.GUI.TrayIcon.exe
  501 13391 13034   0  7:56PM ??         0:00.63 /System/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python /Applications/Duplicati.app/Contents/Resources/OSXTrayHost/osx-trayicon-rumps.py
  501 13722 13034   0  8:05PM ??         0:00.01 /usr/bin/caffeinate -s
  501 13758 13205   0  8:06PM ttys000    0:00.00 grep 13034
~~~

This was going to be the end of my response until it got fixed.

The fix (sort of) for me.
The client was originally 100+ miles away. First I just tried the backup, which failed just like above. Then I deleted and repaired the database, and once this was done the backup completed successfully. I have never had bandwidth throttling enabled, but since the destination was now in the same house, the backup ran much faster than it could over the internet. I had (several times) previously deleted and rebuilt the database, and it worked for a while before this issue returned. The backup schedule was also disabled because I thought it was maybe stepping on its own feet, but that can’t be it either.

So I’m not sure where to go from here. It is currently working (even over the internet now). The total backup was 140G, so yes, that would take time over the internet, but it still doesn’t seem like that should matter.

If there is anything I can do to help, let me know, because this really was a plague for this system. On the bright side, I can now finally retire CrashPlan, as this was my last system.

Glad to hear it’s working for you now (despite not knowing exactly why) and thanks for the detailed PID info!

Unfortunately, I’m not too familiar with the Mono side of things, but it’s possible there’s a race condition that happens in certain circumstances - perhaps your log can help somebody like @kenkendk or @Pectojin better pin down potential causes…

I’m a bit confused by the two-line output of lsof. On my system it only ever lists one line for mono-sgen.

The ps -ef output looks fine. There are also two entries with the same PID on my system, for Duplicati and caffeinate.

I don’t know if mono can somehow lock the database twice, causing the error, but it kind of looks like that’s happening.

Is this machine running the same version of mono as the rest?

Yes, all systems are running the same version of Mono and the same version of Duplicati, as I used the same installer across all of them. I have also confirmed they are all on the same minor version of macOS, 10.13.

For what it’s worth, the log trace seems to indicate that the pre-backup verification step fails, which triggers a DB repair; the repair then finds duplicate metadata hashes in the database, and the attempt to remove the duplicate(s) fails because the DB is locked.

Those are all local actions, so it shouldn’t matter where the backend is, although it’s possible outside factors are causing the duplicate hashes to appear in the DB in the first place.

From that error I would assume that the problem would disappear for you after a successful repair, but it sounds like you tried that multiple times previously and it still recurred.
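Purely as an illustration of what “duplicate metadata hashes” means at the SQL level, a check along these lines could be run against a copy of the local database. The table and column names here are my assumptions, not necessarily Duplicati’s actual schema:

```shell
# Hypothetical: look for hash values that appear more than once.
# "Blockset"/"FullHash" are assumed names -- inspect the real schema
# with ".tables" / ".schema" first, and always work on a COPY of the DB.
DB="$HOME/.config/Duplicati/VUXJTLGGOW.sqlite"
if command -v sqlite3 >/dev/null 2>&1 && [ -f "$DB" ]; then
  sqlite3 "$DB" \
    "SELECT FullHash, COUNT(*) FROM Blockset GROUP BY FullHash HAVING COUNT(*) > 1;"
else
  echo "sqlite3 or the database file is not available here"
fi
```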

Based on @Pectojin’s post, I’d suggest a TEST: can you try setting --no-backend-verification=true?

--no-backend-verification
If this flag is set, the local database is not compared to the remote filelist on startup. The intended usage for this option is to work correctly in cases where the filelisting is broken or unavailable.
Default value: “false”
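The flag can also be tried from the command line; this invocation is just a sketch (the backend URL, source path, and CLI location are placeholders, not taken from this thread):

```shell
#!/bin/sh
# Sketch: one manual backup run with backend verification disabled.
# The path to Duplicati.CommandLine.exe varies by install; on the macOS
# .app bundle it is assumed to live under Contents/Resources.
DUP_CLI=/Applications/Duplicati.app/Contents/Resources/Duplicati.CommandLine.exe
if [ -f "$DUP_CLI" ]; then
  mono "$DUP_CLI" backup \
    "ssh://user@host//backups/mac" "$HOME/Documents" \
    --no-backend-verification=true
else
  echo "Duplicati CLI not found at $DUP_CLI"
fi
```

In the GUI, the same thing can be done by adding the option under the backup job’s advanced options.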

If the issue really is a duplicate hash in the database, then I’d suggest turning off scheduling for the failing backup and making a second backup job that covers just a subset of the first one’s source files. Do some manual runs of it, and increase the source file list after each successful run.

In theory, if you’ve got two blocks that happen to have the exact same hashes, we should be able to narrow down which files are causing the hash problem.
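That narrowing-down process could be scripted roughly like this; everything here (the folder names and the commented-out backup command) is illustrative, not taken from the thread:

```shell
#!/bin/sh
# Sketch: grow the test job's source list one folder at a time, stopping
# at the first failure so the last folder added is the suspect.
# Folder list and backup command are assumptions for illustration.
SOURCES="Documents Pictures Music Movies"
INCLUDED=""
for dir in $SOURCES; do
  INCLUDED="$INCLUDED $HOME/$dir"
  echo "would back up:$INCLUDED"
  # mono Duplicati.CommandLine.exe backup "ssh://user@host//backups" $INCLUDED \
  #   --no-backend-verification=true || { echo "failed on $dir"; break; }
done
```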