Raspberry Pi: SQLite error cannot commit - no transaction is active

I’m running Duplicati under Manjaro ARM on a Raspberry Pi 4 with 8GB of RAM. I used the AUR package here: https://aur.archlinux.org/packages/duplicati-latest

I keep seeing

SQLite error cannot commit - no transaction is active

and I have no idea how to solve it. The latest log I have with the error is posted below. I’ve tried moving the database to a larger disk, and I’ve tried setting the tempdir in the web GUI to the same larger disk. The backup is roughly 6TB in size. Does anyone know what could be wrong and any possible solutions?

Jul 28, 2022 7:18 PM: Failed while executing “Backup” with id: 1
Mono.Data.Sqlite.SqliteException (0x80004005): SQLite error

cannot commit - no transaction is active
at Mono.Data.Sqlite.SQLite3.Reset (Mono.Data.Sqlite.SqliteStatement stmt) [0x00084] in :0
at Mono.Data.Sqlite.SQLite3.Step (Mono.Data.Sqlite.SqliteStatement stmt) [0x0003d] in :0
at Mono.Data.Sqlite.SqliteDataReader.NextResult () [0x00104] in :0
at Mono.Data.Sqlite.SqliteDataReader..ctor (Mono.Data.Sqlite.SqliteCommand cmd, System.Data.CommandBehavior behave) [0x0004e] in :0
at (wrapper remoting-invoke-with-check) Mono.Data.Sqlite.SqliteDataReader..ctor(Mono.Data.Sqlite.SqliteCommand,System.Data.CommandBehavior)
at Mono.Data.Sqlite.SqliteCommand.ExecuteReader (System.Data.CommandBehavior behavior) [0x00006] in :0
at Mono.Data.Sqlite.SqliteCommand.ExecuteNonQuery () [0x00000] in :0
at Mono.Data.Sqlite.SqliteTransaction.Commit () [0x0002e] in :0
at Duplicati.Library.Main.Operation.Common.DatabaseCommon.Dispose (System.Boolean isDisposing) [0x0000f] in <7201532dcc0443468ec0ba778f89f3ac>:0
at Duplicati.Library.Main.Operation.Common.SingleRunner.Dispose () [0x00000] in <7201532dcc0443468ec0ba778f89f3ac>:0
at Duplicati.Library.Main.Operation.BackupHandler.RunAsync (System.String sources, Duplicati.Library.Utility.IFilter filter, System.Threading.CancellationToken token) [0x01048] in <7201532dcc0443468ec0ba778f89f3ac>:0
at CoCoL.ChannelExtensions.WaitForTaskOrThrow (System.Threading.Tasks.Task task) [0x00050] in <9a758ff4db6c48d6b3d4d0e5c2adf6d1>:0
at Duplicati.Library.Main.Operation.BackupHandler.Run (System.String sources, Duplicati.Library.Utility.IFilter filter, System.Threading.CancellationToken token) [0x00009] in <7201532dcc0443468ec0ba778f89f3ac>:0
at Duplicati.Library.Main.Controller+<>c__DisplayClass14_0.b__0 (Duplicati.Library.Main.BackupResults result) [0x0004b] in <7201532dcc0443468ec0ba778f89f3ac>:0
at Duplicati.Library.Main.Controller.RunAction[T] (T result, System.String& paths, Duplicati.Library.Utility.IFilter& filter, System.Action`1[T] method) [0x0026f] in <7201532dcc0443468ec0ba778f89f3ac>:0
at Duplicati.Library.Main.Controller.Backup (System.String inputsources, Duplicati.Library.Utility.IFilter filter) [0x00074] in <7201532dcc0443468ec0ba778f89f3ac>:0
at Duplicati.Server.Runner.Run (Duplicati.Server.Runner+IRunnerData data, System.Boolean fromQueue) [0x00349] in <59054a017605435993aba9f724246795>:0

Oh, so I thought the Web GUI tempdir setting applied to everything Duplicati did, but it seems I ran into the issue of SQLite still storing its temporary files in the standard /tmp/ location.
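For anyone else hitting this: the tempdir setting only covers Duplicati’s own temporary files, while the native SQLite library picks its temp location from the SQLITE_TMPDIR (or TMPDIR) environment variable, so those also need to point at the larger disk. A rough sketch of what that could look like, assuming the AUR package runs Duplicati through a systemd unit named duplicati.service (the unit name and path are examples only):

sudo systemctl edit duplicati.service

[Service]
Environment=SQLITE_TMPDIR=/mnt/bigdisk/tmp
Environment=TMPDIR=/mnt/bigdisk/tmp

followed by a restart of the service.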

I’m running the backup again now, will report back if I hit the same issue again.

I believe the most common cause for that is lack of disk space where the sqlite database is stored.
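A quick way to rule that out (the paths below assume the default locations, so adjust them to wherever your database and temp files actually live):

df -h ~/.config/Duplicati /tmp

Both the database location and the SQLite temp location need headroom, because SQLite writes journal files next to the database and scratch files in the temp directory while it commits.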

For a 6TB backup you should be using a much larger deduplication block size than the default of 100KiB; I would probably recommend 10MiB. This will also help your database issue, as the database will be smaller when you use larger blocks. The bad news is you’ll need to start your backup over from scratch, as you cannot change the dedupe block size after a backup has been started.

The option I recommend:

--blocksize=10240KB

Note this is not the same as the remote volume size. The default for that is 50MiB, which is fine for most cases.
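To put rough numbers on it: at the 100KiB default, a ~6TB backup works out to roughly 60 million blocks the local database has to track, whereas at 10MiB it is on the order of 600 thousand, about 100 times fewer rows and a much smaller, faster SQLite file.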

Is that something I can set in the web GUI?

Yep, absolutely… edit your job and set it on page 5 (Options):

[screenshot of the blocksize advanced option]

So I have a couple of other questions. I went to start over: Configuration > Delete, then checked the box that says “Delete remote files”.

That was taking a long time to complete, and then it dawned on me: can’t I just remove the configuration without deleting the remote files, then log into the cloud account and delete the backup folder holding Duplicati’s encrypted files myself, which should be faster? Why would I want to let Duplicati delete the files remotely in this case?

@drwtsn32 Thank you for the hint. I have completely removed all of Duplicati, including the files it uploaded to the remote location. Starting over with the blocksize set to 10240 KBytes, it seems to be doing better, knock on wood. It’s going to take a number of weeks at the current upload speed, but it actually seems to be saturating my upload this time around: 1MB/s, or around 10Mbps.

I’ll report back as soon as it makes a successful first backup to let you and everyone know how it went.

Ok great, and to answer your earlier question: you can certainly delete the remote side files yourself. That’s almost always how I do it. You also need to delete the local job database, but that’s a quick operation. The next time you run a backup (after deleting all remote files and the local database), Duplicati will behave as if it’s the first backup.
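In other words, the manual reset is roughly: delete the backup folder on the remote side yourself, then in the web GUI expand the job and use Database … > Delete (that screen also shows the path of the job’s .sqlite file if you’d rather remove it by hand), then run the backup again. The menu wording may differ a little between versions, so treat this as a sketch rather than exact steps.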

My backup has completed successfully, finally

@drwtsn32 thank you so much for the help. On top of the setting you suggested I had to add the following.

I set “number-of-retries” to 50 and “retry-delay” to 20 seconds, so that it tries 50 times in total before giving up and waits 20 seconds between attempts. This was needed because every once in a while the connection to my Google Workspace Drive would fail with 403, 500, 502, 503, or 504 errors. I’m not sure if this was a side effect of the backup hitting tiny drop-outs in connection stability, or a measure from Google to limit too many requests or uploads.
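In option form, what I added looks roughly like this:

--number-of-retries=50
--retry-delay=20s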

I also had to make sure to exclude from the backup the folder holding the Duplicati database files, since I chose to store them on the same drive as all of my files; otherwise Duplicati would seemingly get stuck on “waiting for the upload to finish”. The issue, along with the hint about excluding the folder, is described here: backup to google drive freezes on "waiting for the upload to finish" · Issue #4216 · duplicati/duplicati · GitHub
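For reference, the exclusion is just a normal exclude filter on the folder that holds the job databases; as an option it looks something like this (the path is only an example, not my real one):

--exclude=/mnt/bigdisk/duplicati-databases/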

But with this, backups are now completing daily without any major issues, now that the first big backup of everything has completed.