Database full errors

I have backups working on three systems (two Ubuntu, one Windows 8) to my Nextcloud WebDAV folder. I recently added another Linux system running Debian. The major difference is that I’ve limited its bandwidth to 5Mbps. I’m using 500MB as the volume size on the first three systems; however, when I used that on the new bandwidth-limited system, the backup failed with an error about needing to finish the previous volume before starting a new volume. I changed the volume size back to the default of 50MB for this new host and the initial backup succeeded. Now I’m getting “database full” messages even though there is plenty of space on the disk. Yesterday, after receiving the emailed error, I manually started the backup and it completed just fine. This morning I got the same error message (below). The only messages in the log are these “database full” messages. Does anyone have ideas about what is going on here?

Failed: Insertion failed because the database is full
database or disk is full
Details: Mono.Data.Sqlite.SqliteException (0x80004005): Insertion failed because the database is full
database or disk is full
  at Mono.Data.Sqlite.SQLite3.Reset (Mono.Data.Sqlite.SqliteStatement stmt) [0x00096] in <fe9fd999cd9f407db94500dce293e66f>:0
  at Mono.Data.Sqlite.SQLite3.Step (Mono.Data.Sqlite.SqliteStatement stmt) [0x00046] in <fe9fd999cd9f407db94500dce293e66f>:0
  at Mono.Data.Sqlite.SqliteDataReader.NextResult () [0x00129] in <fe9fd999cd9f407db94500dce293e66f>:0
  at Mono.Data.Sqlite.SqliteDataReader..ctor (Mono.Data.Sqlite.SqliteCommand cmd, System.Data.CommandBehavior behave) [0x00051] in <fe9fd999cd9f407db94500dce293e66f>:0
  at (wrapper remoting-invoke-with-check) Mono.Data.Sqlite.SqliteDataReader:.ctor (Mono.Data.Sqlite.SqliteCommand,System.Data.CommandBehavior)
  at Mono.Data.Sqlite.SqliteCommand.ExecuteReader (System.Data.CommandBehavior behavior) [0x00006] in <fe9fd999cd9f407db94500dce293e66f>:0
  at Mono.Data.Sqlite.SqliteCommand.ExecuteNonQuery () [0x00000] in <fe9fd999cd9f407db94500dce293e66f>:0
  at Duplicati.Library.Main.Database.ExtensionMethods.ExecuteNonQuery (System.Data.IDbCommand self, System.String cmd, System.Object[] values) [0x0004e] in <118ad25945a24a3991f7b65e7a45ea1e>:0
  at Duplicati.Library.Main.Database.LocalDatabase.Vacuum () [0x0000c] in <118ad25945a24a3991f7b65e7a45ea1e>:0
  at Duplicati.Library.Main.Database.LocalDatabase.PurgeLogData (System.DateTime threshold) [0x00072] in <118ad25945a24a3991f7b65e7a45ea1e>:0
  at Duplicati.Library.Main.Operation.BackupHandler.Run (System.String[] sources, Duplicati.Library.Utility.IFilter filter) [0x0082a] in <118ad25945a24a3991f7b65e7a45ea1e>:0

Hi @jpschewe, welcome to the forum!

The “database is full” message usually means Duplicati has run out of disk space either where the SQLite database is stored or in the temp folder. Your stack trace shows the failure happening during a VACUUM, and SQLite’s VACUUM builds a complete temporary copy of the database, so it needs scratch space roughly equal to the database size.

On the Debian system that’s having the problem, are you running Duplicati in a Docker container?

When you say “there’s plenty of space on the disk” do you mean the source, the destination, or /tmp?
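
For reference, a quick way to check the likely suspects from a shell (a sketch; ~/.config/Duplicati is the default database location for a user install, though a service install may keep it elsewhere, such as /root/.config/Duplicati):

df -h ~/.config/Duplicati /tmp    # free space where the sqlite database and temp files live
du -sh ~/.config/Duplicati        # how large the local database has grown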

I’m running Duplicati natively on the source system as a service.
Here’s a df from the source system:

Filesystem            Size  Used Avail Use% Mounted on
udev                   10M     0   10M   0% /dev
tmpfs                 405M   42M  363M  11% /run
/dev/sda1              22G   17G  5.1G  77% /
tmpfs                1011M  8.0K 1011M   1% /dev/shm
tmpfs                 5.0M  4.0K  5.0M   1% /run/lock
tmpfs                1011M     0 1011M   0% /sys/fs/cgroup
/dev/sdc1            1016M   52M  964M   6% /tmp
/dev/mapper/vg0-home   89G   78G   12G  88% /home
tmpfs                 203M     0  203M   0% /run/user/10105
tmpfs                 203M  4.0K  203M   1% /run/user/10121

Here’s a df from the destination system; backups are going to /mnt/crashplan/friends:

Filesystem                  Size  Used Avail Use% Mounted on
udev                        1.4G     0  1.4G   0% /dev
tmpfs                       276M   29M  247M  11% /run
/dev/mapper/jon--0-u--root  118G   73G   41G  64% /
tmpfs                       1.4G   24K  1.4G   1% /dev/shm
tmpfs                       5.0M  4.0K  5.0M   1% /run/lock
tmpfs                       1.4G     0  1.4G   0% /sys/fs/cgroup
/dev/loop0                   82M   82M     0 100% /snap/core/4110
/dev/loop1                   82M   82M     0 100% /snap/core/4017
/dev/sdc1                   1.9T  105G  1.8T   6% /mnt/data
/dev/sdb2                   2.3T  137G  2.2T   6% /mnt/crashplan/friends
/dev/sdb1                   2.3T  1.5T  818G  65% /mnt/crashplan/local
/dev/sda1                   922M  152M  707M  18% /boot
/dev/mapper/jon--0-vhs       99G   69G   26G  73% /mnt/vhs
cgmfs                       100K     0  100K   0% /run/cgmanager/fs
tmpfs                       276M  4.0K  276M   1% /run/user/1000
/dev/loop3                   82M   82M     0 100% /snap/core/4206

Is it possible that my /tmp partition needs to be larger on the source system?

I grew my temp partition from 1GB to 5GB, then ran a manual backup, which succeeded. I then waited for the scheduled backup to run and got an error about needing to wait for the previous volume to finish before starting the next one. However, I can’t find the error message in the log, and it’s as if the scheduled job never started.

There seems to be a bigger problem now. I had specified a custom temp directory to see if more space helped, and it did. So I reworked my system to have a larger temp partition and then removed the temp directory option from Duplicati. However, the next backup still used the old temp directory option.

Furthermore, I’m no longer getting email on backup failure, which was working before.

I reset the settings using “edit as text”. I also restarted the service. I then tried a backup and was told there was a database inconsistency. Repair didn’t work, so I tried delete and recreate.

Now one scheduled backup has succeeded. I’m hoping that continues.

I’m fairly new to Duplicati but hoping I can help a bit. My emails randomly stopped working recently; I had to add:

--send-mail-any-operation=true
--send-mail-level=all

to my settings, and it is now working again. IIRC there is a bug in the current beta where one of the entries in the settings form edits the wrong value in the config (i.e., you think you’re entering an SMTP server but the value gets attached to a different parameter, though I don’t recall which one), so editing by hand is the way to go if you’re tweaking mail settings. You can enable verbose logging and select “send-mail” from the “Command Line…” option under a backup job to see if mail is hanging up somewhere.
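
On Linux the same test can be run from a shell; a rough sketch, assuming the duplicati-cli wrapper is installed and using placeholder server and addresses (substitute your own):

duplicati-cli send-mail \
  --send-mail-url=smtp://mail.example.com:25 \
  --send-mail-from=duplicati@example.com \
  --send-mail-to=you@example.com \
  --send-mail-level=all \
  --send-mail-any-operation=true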

500MB is a massive remote volume size (you may want to rethink it, because to restore even a 1KB file Duplicati has to download a full 500MB volume). It sounds like your initial backup crashed because Duplicati stages several temp files of your remote volume size in the temp directory (up to 4 in parallel by default, per the asynchronous-upload-limit option), so you were trying to write potentially 2GB of data to that 1GB partition.
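
Back-of-the-envelope, with the default of 4 queued uploads:

4 volumes x 500MB = 2GB of temp space    (overflows a 1GB /tmp)
4 volumes x  50MB = 200MB of temp space  (fits comfortably)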

Not sure about the other issues of backups thinking they’re still running and Duplicati not respecting your temp settings. I keep my temp in /mnt/ramdisk since I’m running an SSD and don’t want constant writes hammering the drive. My settings have:

--tempdir=/mnt/ramdisk/

with /etc/fstab entry:

tmpfs /mnt/ramdisk tmpfs nodev,nosuid,noexec,nodiratime,size=1024M 0 0

And Duplicati is using that directory.
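
If you want to try the same setup, it’s worth sanity-checking the mount before pointing Duplicati at it; a minimal sketch:

sudo mkdir -p /mnt/ramdisk
sudo mount /mnt/ramdisk    # picks up the fstab entry above
df -h /mnt/ramdisk         # should show a 1.0G tmpfs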

Thanks for the ideas. I had changed my volume size back to 50MB and still had the temp directory problem.

Good to know about the bug in editing settings. I’ll check the text version of the settings to verify.

It seems that now that I’ve restarted things and rebuilt the backup database, the correct temp directory is being used.