This is a rather old thread, but I've been running Duplicati with daily backups for about 2 years on a Linux laptop, where the disk being backed up holds no more than 80 GB, and Duplicati's local database is currently taking up 10 GB.
My backups, which are stored in OneDrive, total about 300 GB.
Is there a good reason for this to happen?
Hello and welcome @undercover!
The size of the local database depends on several factors:
- size of your source data
- how many snapshot versions you retain (what is your retention policy and how many backup versions do you have?)
- how much the data changes between backups
- your deduplication block size (default 100 KiB; see the rough estimate below).
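As a rough illustration of the block-size factor (the 50 GB figure below is an assumption for the math, not anyone's actual source size): roughly speaking, the local database tracks a row per unique block, so the block count, and with it the database size, grows with source size divided by block size.

$ echo $(( 50 * 1024 * 1024 / 100 ))   # 50 GiB of source at 100 KiB per block
524288

So tens of gigabytes of source at the default block size already means hundreds of thousands of block rows, before counting the rows that track which file versions use which blocks.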
Also, have you ever vacuumed the database, either manually or by setting the --auto-vacuum option? You might try running a manual vacuum operation to see if it shrinks your local database.
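(If you drive your backups from a terminal rather than the web UI, this is roughly how that option would be passed; the storage URL and source path below are placeholders, not your real values:)

$ duplicati-cli backup onedrivev2://Backups/laptop /home/undercover/ --auto-vacuum=true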
I completely forgot to say, here are my settings:
- Two folders being backed up: ~ (1) and a personal folder shared across different OSes (2)
- Number of files: 419,546 (1) + 25,699 (2)
- Source size: 41 GB (1) + 17 GB (2)
- Backup size: 271 GB (1) + 25 GB (2) - Note: for (1), a lot of the data was backed up only once and is just kept there for safety, not really "being used".
- Snapshot policy: Daily, keep all backups
- Typical backup time: 30 min (1) + 5 min (2)
- Deduplication block: 100 KiB
- How much the data changes between backups: not much
I haven't tried vacuuming the local database, as I had no idea what that means or even that such a feature existed.
Thanks for the additional info! First thing I would do is vacuum the database:
- On the main web UI, click your backup job to expand options.
- Click the blue “Commandline …” link
- Change the Command dropdown to “vacuum”
- Clear the contents of the Commandline Arguments box
- Scroll to the bottom and click the blue “run vacuum command now” button
See if it changes your database size much. It may not since you’re retaining all backup versions. We can talk about other options after you let me know your results…
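(If you prefer a terminal, the equivalent should be something like the following; the storage URL and database filename are placeholders for your actual values:)

$ duplicati-cli vacuum onedrivev2://Backups/laptop --dbpath=/home/undercover/.config/Duplicati/XXXXXXXXXX.sqlite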
Thanks for the fast reply. After doing what you asked, the return code was 0 for my personal folder, whereas for my home directory I got the following:
Mono.Data.Sqlite.SqliteException (0x80004005): Insertion failed because the database is full
database or disk is full
at Mono.Data.Sqlite.SQLite3.Reset (Mono.Data.Sqlite.SqliteStatement stmt) [0x00084] in <16c56cdcfe244801a66a9296faeca43a>:0
at Mono.Data.Sqlite.SQLite3.Step (Mono.Data.Sqlite.SqliteStatement stmt) [0x0003d] in <16c56cdcfe244801a66a9296faeca43a>:0
at Mono.Data.Sqlite.SqliteDataReader.NextResult () [0x00104] in <16c56cdcfe244801a66a9296faeca43a>:0
at Mono.Data.Sqlite.SqliteDataReader..ctor (Mono.Data.Sqlite.SqliteCommand cmd, System.Data.CommandBehavior behave) [0x0004e] in <16c56cdcfe244801a66a9296faeca43a>:0
at (wrapper remoting-invoke-with-check) Mono.Data.Sqlite.SqliteDataReader..ctor(Mono.Data.Sqlite.SqliteCommand,System.Data.CommandBehavior)
at Mono.Data.Sqlite.SqliteCommand.ExecuteReader (System.Data.CommandBehavior behavior) [0x00006] in <16c56cdcfe244801a66a9296faeca43a>:0
at Mono.Data.Sqlite.SqliteCommand.ExecuteNonQuery () [0x00000] in <16c56cdcfe244801a66a9296faeca43a>:0
at Duplicati.Library.Main.Database.ExtensionMethods.ExecuteNonQuery (System.Data.IDbCommand self, System.Boolean writeLog, System.String cmd, System.Object[] values) [0x0005e] in <0f8b9891555748cb8021cd785e1fd463>:0
at Duplicati.Library.Main.Database.ExtensionMethods.ExecuteNonQuery (System.Data.IDbCommand self, System.String cmd, System.Object[] values) [0x00000] in <0f8b9891555748cb8021cd785e1fd463>:0
at Duplicati.Library.Main.Database.LocalDatabase.Vacuum () [0x00017] in <0f8b9891555748cb8021cd785e1fd463>:0
at Duplicati.Library.Main.Operation.VacuumHandler.Run () [0x00035] in <0f8b9891555748cb8021cd785e1fd463>:0
at Duplicati.Library.Main.Controller.<Vacuum>b__37_0 (Duplicati.Library.Main.VacuumResults result) [0x0000c] in <0f8b9891555748cb8021cd785e1fd463>:0
at Duplicati.Library.Main.Controller.RunAction[T] (T result, System.String[]& paths, Duplicati.Library.Utility.IFilter& filter, System.Action`1[T] method) [0x0026f] in <0f8b9891555748cb8021cd785e1fd463>:0
at Duplicati.Lib
There was also no reduction in disk usage, and the system was a bit unresponsive during the few minutes the operation took to run.
Maybe, to keep things organized, I should open a new thread for my issue (or move this discussion there).
I commented here in the first place to avoid creating a new thread when my issue could very well be the same as the OP's, but that might not be the case after all.
OK, I've moved this discussion to its own thread; let me know if you want the title changed.
Insertion failed because the database is full
I think this is caused by a lack of free space. I should have mentioned that a vacuum operation may require at least as much free space as the size of your current database. Do you have that much free space?
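(A quick way to check both numbers, assuming the default database location on Linux:)

$ du -sh ~/.config/Duplicati   # size of the local database folder
$ df -h ~/.config/Duplicati    # free space on the filesystem that holds it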
Thanks, I think the title is good.
Woah, that's a lot, but running df -h shows that I have 23 GB of free space left on a disk with 89 GB of capacity.
I'm not quite sure I can come up with more disk space, as I have two OSes on the same disk and shrinking a partition could prove troublesome.
At most, I can get an external drive and use it as additional storage, but it would have to be mounted in a specific place in the filesystem. Is that possible?
Hmm… with 23 GB free you shouldn't have a problem vacuuming a 10 GB database. Can you confirm that the database you tried to vacuum (the one that failed) was 10 GB?
Yes, here’s the output of a few commands:
$ du -sh .config/Duplicati
9.3G .config/Duplicati
$ df -h
df: /tmp/.mount_espansX3QPSO: Transport endpoint is not connected
Filesystem Size Used Avail Use% Mounted on
dev 3.9G 0 3.9G 0% /dev
run 3.9G 1.1M 3.9G 1% /run
/dev/nvme0n1p6 89G 62G 23G 74% /
tmpfs 3.9G 1.4M 3.9G 1% /dev/shm
/dev/nvme0n1p8 25G 17G 8.1G 68% /home/undercover/ze
tmpfs 3.9G 132M 3.8G 4% /tmp
/dev/nvme0n1p2 96M 58M 39M 60% /boot/efi
tmpfs
~ is under /, not in its own partition.
SQLite's "3. How VACUUM Works" page discusses rollback journal or write-ahead log use, advising:
"as much as twice the size of the original database file is required in free disk space."
However, partitioning may also play a part; that is less clearly stated and may need further study.
- "2.1. Rollback Journals" says the journal is always located in the same directory as the database file.
- "2.9. Transient Database Used By VACUUM" might be talking about the database, not the journal.
- "5. Temporary File Storage Locations" makes me wonder where /var/tmp is. Maybe df that?
If, for example, /var/tmp winds up at /tmp (making them equivalent), /tmp looks too small. That section also talks about using TMPDIR to control where temporary files are created. Duplicati's tempdir page does too, and could be used to set the temporary location if necessary. An external drive, with TMPDIR pointing to wherever it's mounted, might be the last resort to get space. The easier thing to try first is to see how much space /var/tmp has; if it's small, relocate the temporary area to a folder residing on /.
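To see where that temporary space would actually come from, and, as a sketch with a hypothetical device name and mount point, what the external-drive fallback could look like:

$ df -h /tmp /var/tmp   # which filesystem each one lives on, and its free space
$ sudo mkdir -p /mnt/external && sudo mount /dev/sdb1 /mnt/external   # hypothetical device; check lsblk first
$ sudo mkdir -p /mnt/external/tmp && sudo chown $USER /mnt/external/tmp
$ TMPDIR=/mnt/external/tmp duplicati-cli vacuum onedrivev2://Backups/laptop --tempdir=/mnt/external/tmp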
EDIT:
I'm not sure how those things differ, but you can always run df -h a few times during the run. If it's that quick, manual space polling would do; if it gets slower, a script might help…
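Something like this one-liner would handle the polling (the 10-second interval is arbitrary):

$ while true; do df -h / /tmp /var/tmp; sleep 10; done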
Thanks for the clarification!