Failed: SQL logic error or missing database too many SQL variables

I believe setting a retention-policy results in this error:

Fatal Failed: SQL logic error or missing database
too many SQL variables

command that does NOT result in error:
"C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe" backup "file://d:\Duplicati\C" "C:\folder1\" --snapshot-policy=Auto --backup-name="Server-C to USB" --dbpath="C:\Program Files\Duplicati 2\data\EMCNAPSHHM.sqlite" --encryption-module= --compression-module=zip --dblock-size=50mb --no-encryption=true --disable-module=console-password-input

command that DOES result in error:
"C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe" backup "file://d:\Duplicati\C" "C:\folder1\" --snapshot-policy=Auto --backup-name="Server-C to USB" --dbpath="C:\Program Files\Duplicati 2\data\EMCNAPSHHM.sqlite" --encryption-module= --compression-module=zip --dblock-size=50mb --retention-policy="3M:1D,10Y:1W" --no-encryption=true --disable-module=console-password-input

There are currently 3209 backups in this job. It takes about 2.5 hours to run a backup (which is why I’m TRYING to add a retention policy). Is there any command I can run to make Duplicati “thin out” the backups according to the retention policy, without having to wait 2.5 hours for the backup to run first?

Duplicati 2.0.3.3

Hello @jshipp,

I think the new backup happens before old ones are deleted, so you might have to endure that a few times.
This appears to be the same issue as Retention policy on backup with many sets throws Exception #3308, where @mnaiman helpfully cited documentation showing that SQLite’s limit is 999 variables per statement. It looks like a limit on how many deletions can happen at once.

Working around that probably means deleting in smaller batches by tuning the retention policy. There’s no need to run an actual backup while tuning, because --dry-run can be used; that’s also wise because thinning in phases can be a bit tricky now that anything past the largest timeframe is an implied delete. Although you wouldn’t currently have anything past 10Y, using 10Y:U would maintain the 10Y cutoff; you can then add 6M:1W (in any position in the sequence) and adjust the 6M up or down until the number of deletes falls within the limit. Then uncheck --dry-run, run the backup and its deletes for real, and repeat until the history is fully thinned.
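If you’d rather experiment from the command line instead of the GUI, a tuning pass might look something like the line below. It’s just a sketch that reuses your existing backup command, swaps in a trial retention policy of 3M:1D,6M:1W,10Y:U (the 6M is only a starting guess to adjust up or down), and adds --dry-run plus --log-level=Information so nothing is actually deleted yet:

"C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe" backup "file://d:\Duplicati\C" "C:\folder1\" --snapshot-policy=Auto --backup-name="Server-C to USB" --dbpath="C:\Program Files\Duplicati 2\data\EMCNAPSHHM.sqlite" --encryption-module= --compression-module=zip --dblock-size=50mb --retention-policy="3M:1D,6M:1W,10Y:U" --no-encryption=true --disable-module=console-password-input --dry-run --log-level=Information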

I’m not sure how often Duplicati does things at this volume. If you want to be extra careful against glitches, you could copy off your job database and all the backend files first; however, that could be a lot of data.

The easiest way to set this up is probably to edit the job: temporarily disable its scheduled run, turn on the advanced option --dry-run (checking its box), and set --log-level=Information for the experimentation.

Next, on the job’s Commandline menu item, make sure the command is set to “backup”, check that the above edits are present, edit --retention-policy, and run a dry-run backup. Look at the “Deleting # remote fileset(s)” line in the output. Is # low enough?
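If # is too high, raise the middle timeframe and re-run the dry-run; once a pass looks safe, run it for real, then widen that timeframe further and repeat. Just to illustrate the idea (the intermediate 2Y value is a made-up example, not a recommendation), the sequence of --retention-policy values might progress like this:

--retention-policy="3M:1D,6M:1W,10Y:U" (first pass, middle timeframe adjusted until the dry-run delete count is acceptable)
--retention-policy="3M:1D,2Y:1W,10Y:U" (a later pass, thinning a wider slice of the history)
--retention-policy="3M:1D,10Y:1W" (the final target, once each run’s deletes stay well under the limit)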

You could also wait to see if @mnaiman chooses to confirm this or suggest an alternative solution.

I see now that GitHub is the proper place to post stuff like this, sorry.

I bet you are right. I’ll try this and let you know the verdict!

thanks!!!