Did the names of those new files start with `dup-*`, or have `etilqs` in them? That would help determine what was causing the fill, and how to avoid it.
While the final failure was in SQLite, something else might also be using space. `dup-*` files are named by Duplicati, so they can be relocated using `tempdir`.
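As a sketch (the storage URL and paths below are placeholders, and the Linux `duplicati-cli` wrapper is assumed), relocating those temp files might look like:

```shell
# Hypothetical invocation: storage URL and paths are placeholders.
# --tempdir points Duplicati's dup-* scratch files at a disk with more room.
duplicati-cli backup "file:///mnt/backupdestination" /home/user/data \
  --tempdir=/mnt/bigdisk/duplicati-tmp
```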
I had this SQLite manual suggestion in mind for a while: the folder set by `PRAGMA temp_store_directory`. But then we both found that it comes with limitations:

> This pragma is deprecated and exists for backwards compatibility only. New applications should avoid using this pragma. Older applications should discontinue use of this pragma at the earliest opportunity.
SQLite does what it does, somewhat unpredictably. It seems like it ought to honor the setting, but testing it is more reliable.
The FIND command takes not only `--all-versions` but also `--version`, described as:

> By default, Duplicati will list and restore files from the most recent backup, use this option to select another item. You may enter multiple values separated with comma, and ranges using -, e.g. 0,2-4,7.
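To illustrate how that range style expands (a sketch only; `expand_versions` is a hypothetical helper for illustration, not part of Duplicati):

```shell
# Sketch: expand a Duplicati-style --version list such as "0,2-4,7"
# into one version number per line.
expand_versions() {
  printf '%s\n' "$1" | tr ',' '\n' | while IFS=- read -r lo hi; do
    if [ -n "$hi" ]; then
      seq "$lo" "$hi"      # a range like 2-4 becomes 2 3 4
    else
      printf '%s\n' "$lo"  # a single value passes through
    fi
  done
}

expand_versions "0,2-4,7"
```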
If that style is supported by `find`, you can see whether this sort of trimming down helps. Trimming down the actual stored versions (as opposed to `find` versions) might turn out differently.
If you’ve ever successfully run `--all-versions` before, and everything else has been steady (hard to say, probably), then you might already have the answer of whether fewer versions helps.
Splitting the backup might help, but it’s probably difficult to do while keeping the older versions. Awkwardly splitting it might be possible by cloning the backup, then doing a different purge on each.
Deleting versions is quite easy. A fresh start is easy too, but loses all of the old backup versions.
Since this is a `find`, I’d guess it’s somewhat name-oriented. If your names or files change a lot over time (look at a backup log for its stats), then trimming versions will reduce the total number of names.
Your source is slightly beyond the recommended size for the default 100 KB blocksize, but not by enough to reduce SQLite performance greatly. Blocksize can’t be changed without a fresh backup, but should you choose to do that anyway, raising blocksize will shrink the databases, save space, etc., while making deduplication less effective. Your 5 GB ramdisk may be forcing some tradeoffs…
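If you do go with a fresh start, the larger blocksize has to be chosen before the first backup runs. A sketch (storage URL, paths, and the 1 MB value are placeholders, not a recommendation for your setup):

```shell
# Hypothetical fresh-start invocation: URL and paths are placeholders.
# A larger --blocksize shrinks the local database at the cost of less
# effective deduplication; it cannot be changed after the first backup.
duplicati-cli backup "file:///mnt/newdestination" /home/user/data \
  --blocksize=1MB
```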