I have backups working on 3 systems (2 Ubuntu, 1 Windows 8) to my Nextcloud WebDAV folder. I recently added another Linux system running Debian. The major difference here is that I've limited its bandwidth to 5 Mbps. I am using 500MB as the volume size on the first 3 systems, but when I used that on the new bandwidth-limited system, the backup failed with an error about needing to finish the previous volume before starting a new one. I changed the volume size back to the default of 50MB for this new host and the initial backup succeeded. Now I'm getting "database full" messages even though there is plenty of space on the disk. Yesterday, after receiving the emailed error, I manually started the backup and it completed just fine. This morning I got the same error message (below). The only messages in the log are these "database full" messages. Does anyone have some ideas as to what is going on here?
Failed: Insertion failed because the database is full
database or disk is full
Details: Mono.Data.Sqlite.SqliteException (0x80004005): Insertion failed because the database is full
database or disk is full
at Mono.Data.Sqlite.SQLite3.Reset (Mono.Data.Sqlite.SqliteStatement stmt) [0x00096] in <fe9fd999cd9f407db94500dce293e66f>:0
at Mono.Data.Sqlite.SQLite3.Step (Mono.Data.Sqlite.SqliteStatement stmt) [0x00046] in <fe9fd999cd9f407db94500dce293e66f>:0
at Mono.Data.Sqlite.SqliteDataReader.NextResult () [0x00129] in <fe9fd999cd9f407db94500dce293e66f>:0
at Mono.Data.Sqlite.SqliteDataReader..ctor (Mono.Data.Sqlite.SqliteCommand cmd, System.Data.CommandBehavior behave) [0x00051] in <fe9fd999cd9f407db94500dce293e66f>:0
at (wrapper remoting-invoke-with-check) Mono.Data.Sqlite.SqliteDataReader:.ctor (Mono.Data.Sqlite.SqliteCommand,System.Data.CommandBehavior)
at Mono.Data.Sqlite.SqliteCommand.ExecuteReader (System.Data.CommandBehavior behavior) [0x00006] in <fe9fd999cd9f407db94500dce293e66f>:0
at Mono.Data.Sqlite.SqliteCommand.ExecuteNonQuery () [0x00000] in <fe9fd999cd9f407db94500dce293e66f>:0
at Duplicati.Library.Main.Database.ExtensionMethods.ExecuteNonQuery (System.Data.IDbCommand self, System.String cmd, System.Object[] values) [0x0004e] in <118ad25945a24a3991f7b65e7a45ea1e>:0
at Duplicati.Library.Main.Database.LocalDatabase.Vacuum () [0x0000c] in <118ad25945a24a3991f7b65e7a45ea1e>:0
at Duplicati.Library.Main.Database.LocalDatabase.PurgeLogData (System.DateTime threshold) [0x00072] in <118ad25945a24a3991f7b65e7a45ea1e>:0
at Duplicati.Library.Main.Operation.BackupHandler.Run (System.String[] sources, Duplicati.Library.Utility.IFilter filter) [0x0082a] in <118ad25945a24a3991f7b65e7a45ea1e>:0
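As a rough sanity check on the throttling angle (my own back-of-the-envelope math, not anything from the logs), a 500 MB volume on a 5 Mbps link takes on the order of 13 minutes to upload, which may be why the larger volume size ran into the "finish the previous volume" error on this host:

```shell
# Rough upload time per remote volume at the throttled rate.
volume_mb=500   # the --dblock-size used on the other 3 systems
link_mbps=5     # the bandwidth cap on the new Debian host

seconds=$(( volume_mb * 8 / link_mbps ))   # 500 MB * 8 bits/byte / 5 Mbps
minutes=$(( seconds / 60 ))
echo "~${seconds} s (~${minutes} min) per ${volume_mb} MB volume"
```

By the same math a 50 MB volume uploads in about 80 seconds, which would explain why the default size succeeded.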
I grew my temp partition from 1GB to 5GB, then ran a manual backup, which succeeded. I then waited for the scheduled backup to run and got an error about needing to wait for the previous volume to finish before starting the next one. However, I can't find the error message in the log, and it's as if the scheduled job never started.
There seems to be a bigger problem now. I had specified a temp directory to see whether more space helped, and it did. So I reworked my system to have a larger temp partition and then removed the temp directory option from Duplicati. However, the next backup still used the old option.
Furthermore, I'm no longer getting email on backup failure, and that was working before.
I reset the settings using "edit as text" and restarted the service. I then tried a backup and was told there was a database inconsistency. Repair didn't work, so I tried delete and recreate.
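For anyone following along, the repair/recreate steps can also be driven from the CLI (a sketch only; the storage URL and database path are placeholders, and on Linux the executable is typically invoked via mono):

```shell
# Try repairing the local database first (placeholders: adjust URL and paths).
mono Duplicati.CommandLine.exe repair \
    "webdavs://cloud.example.com/remote.php/webdav/backup" \
    --dbpath=/path/to/local.sqlite

# If repair fails, delete the local database and run repair again so Duplicati
# rebuilds it from the remote volumes (the CLI equivalent of "delete and recreate").
rm /path/to/local.sqlite
mono Duplicati.CommandLine.exe repair \
    "webdavs://cloud.example.com/remote.php/webdav/backup" \
    --dbpath=/path/to/local.sqlite
```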
Now 1 scheduled backup has succeeded. I’m hoping that it continues.
to my settings and it is now working again. IIRC there is a bug in the current beta where one of the entries in the settings form edits the wrong value in the config (i.e., you think you're entering an SMTP server but the value gets attached to the wrong parameter; I don't recall which one), so editing by hand is the way to go if you're tweaking mail settings. You can enable verbose logging and select "mail-test" from the "Command Line…" option under a backup job to see if mail is hanging up somewhere.
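If you do edit by hand, the mail settings are just Duplicati's standard send-mail advanced options (the values below are placeholders for illustration, not my actual config):

```shell
--send-mail-url=smtps://mail.example.com:465
--send-mail-username=backup@example.com
--send-mail-password=secret
--send-mail-from=backup@example.com
--send-mail-to=admin@example.com
--send-mail-level=Warning,Error,Fatal
```

Setting each of these explicitly avoids the form-field mix-up, since you can see exactly which parameter each value is attached to.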
500MB is a massive remote volume size (you may want to rethink it, because in order to restore even a 1K file Duplicati would have to download a full 500MB volume). It sounds like your initial backup crashed because Duplicati creates up to 4 (?) temp files of your remote volume size in the temp directory, so you were trying to write potentially 2GB of data to that 1GB partition.
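The arithmetic behind that guess (note the "up to 4" concurrent temp volumes is my recollection, not a documented constant):

```shell
volume_mb=500          # remote volume size on the failing job
staged=4               # assumed max concurrent temp volume files
tmp_partition_mb=1024  # the original 1 GB temp partition

required_mb=$(( volume_mb * staged ))
echo "worst case ~${required_mb} MB of temp space, have ${tmp_partition_mb} MB"
# With the default 50 MB volume size the same estimate is only 200 MB, which fits.
```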
Not sure about the other issues (backups thinking they are still running, and Duplicati not respecting your temp settings). I have my temp in /mnt/ramdrive since I'm running an SSD and I don't want constant writes hammering the drive. My settings have: