Backups suddenly taking much longer to complete

Thanks for the hard work chasing this problem down. I think it’s turning up at least some leads that might be usable.

I guess that’s what has been called a Heisenbug. Maybe there’s a less disruptive way to watch activity.

I’m still wondering if temporary files are a culprit. I see lots of files in your logs that begin with etilqs.
Temporary Files Used By SQLite says less than I’d like about why these get created, but one can probably sneak a peek (e.g. with the dir command) at their directory to get a rough tally without throwing off the behavior. One section of SQLite’s page sounds like it might be size/load sensitive:

Other Temporary File Optimizations

This means that for many common cases where the temporary tables and indices are small (small enough to fit into the page cache) no temporary files are created and no disk I/O occurs. Only when the temporary data becomes too large to fit in RAM does the information spill to disk.
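
If you want a rough idea of how the database compares to the cache before touching anything, the Execute SQL tab in DB Browser for SQLite can run some read-only pragmas like these (just a quick sketch; note that the cache_size it reports is only that connection’s default, not necessarily what Duplicati’s own connection uses):

```sql
-- Page size the job database currently uses (stored in the database file)
PRAGMA page_size;

-- Number of pages in the database; page_size * page_count is roughly the file size
PRAGMA page_count;

-- Cache size for this connection only (a negative value means KiB);
-- Duplicati's own connection may run with a different setting.
PRAGMA cache_size;
```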

I’m no SQLite expert (any out there?), but it sounds like changing the number of cached pages is tough for a spur-of-the-moment last-shot experiment before discarding the current backup, since the cache size is a per-connection setting that Duplicati would have to set itself. The page size, on the other hand, is stored in the database file, so changing it might be possible without a code change: in DB Browser for SQLite, use the Edit Pragmas tab to raise Page Size from 4096 to 8192, then use the Save button at the window’s bottom right.

PRAGMA page_size seems to say the change isn’t immediately effective, but you can at least see whether it sticks across a close and reopen of the job database. Eventually you probably need a vacuum, either run by Duplicati or via the Execute SQL tab with vacuum in it. After that, see if it helps speed. It might destroy the backup, but that may happen anyway if you need speed and plan to start over. If you have the space, you could copy the backup files first (probably quite a lot of space) or the database (small, and in theory it can also be rebuilt from the backup files).
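
For reference, the whole experiment in Execute SQL terms would look roughly like this (a sketch only, not something I’ve run against a Duplicati job database, so copy the database first as mentioned above):

```sql
-- What the page size is now (4096 expected)
PRAGMA page_size;

-- Request the larger page size; on an existing database this is only
-- a request and doesn't take effect until the next VACUUM.
PRAGMA page_size = 8192;

-- Rebuild the database so the new page size actually applies.
-- (If the database happens to be in WAL journal mode, the page size
-- can't be changed this way.)
VACUUM;

-- Verify that the change stuck.
PRAGMA page_size;
```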

The experiment I’m proposing would certainly be worth trying before a complete start-over of the backup, because even if it blows up there’s no real additional loss. Any damage seems more likely to hit the database than the remote backup files, so it might also be worth trying before a database Recreate test. You could even try several things in parallel if you can spare the storage, e.g. get a larger-blocksize one going, then play more.

In support of the something-overloaded theory, I’ll mention that performance seems to go into a sharp slowdown past a certain number of blocks, and in one test I think I saw the etilqs files getting hyperactive…

seems like it bumps both tunings, but I’m not sure you can get that to change and stick in a regular backup.
