That sounds similar to what Windows Volume Shadow Copy Service (VSS) does, per The VSS Model, except here Duplicati does it to itself. VSS takes a freeze/thaw approach: it asks VSS-aware applications to prepare for the backup by flushing I/O and saving state, allowing an application-consistent (not just crash-consistent) backup.
Checkpoint/restart, driven by interrupts, is a mainframe practice that sounds similar, providing a stable snapshot to resume from.
Duplicati should be able to do a safe “partial backup” using the Stop button with “Stop after current file”, however it’s considered a stopped backup. When the backup runs again, it can just run as usual and back up whatever didn’t get backed up by the previous run (or was changed since then).
It sounds like even TSM might lose data that was backed up after its last database backup. I expect it’s more reliable than Duplicati, though, so that happens less often. Duplicati does have crash recovery mechanisms that attempt to repair damage, and it also uploads a “synthetic filelist”, which gives a backup of the last completed backup plus whatever got backed up in a rudely interrupted backup. The synthetic filelist will work in the next Beta.
This is the point I’ve been trying to make (maybe with loose wording). Backing up the Duplicati database can’t be done by simply pointing at it and backing it up with the other files. It needs a separate step, whether that’s a secondary job that runs after the primary, or something using Duplicati’s scripting options.
I use this crude script in run-script-before to keep a history of databases while I invite disasters to debug:
rem Get --dbpath pathname value without the .sqlite suffix
rem and use this to maintain a history of numbered older DBs
IF EXIST %DB%.4.sqlite MOVE /Y %DB%.4.sqlite %DB%.5.sqlite
IF EXIST %DB%.3.sqlite MOVE /Y %DB%.3.sqlite %DB%.4.sqlite
IF EXIST %DB%.2.sqlite MOVE /Y %DB%.2.sqlite %DB%.3.sqlite
IF EXIST %DB%.1.sqlite MOVE /Y %DB%.1.sqlite %DB%.2.sqlite
IF EXIST %DB%.sqlite COPY /Y %DB%.sqlite %DB%.1.sqlite
I also run a log at profiling level, which gets big (2 GB on the previous Canary run) but shows the SQL. Think of it as similar to the flight recorder on an aircraft, allowing some analysis of how things went wrong.
My DB backup is a local COPY, because the DB reportedly changes enough between runs that it’s almost fully re-uploaded when versioned with Duplicati. The local copy also runs faster than my Internet uplink can manage…
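One caveat with a plain file COPY: it is only guaranteed consistent if Duplicati isn’t writing the database at that moment. A hedged sketch of a safer alternative, using SQLite’s online backup API (available in Python’s standard `sqlite3` module; the function name and paths are my own, not anything Duplicati provides):

```python
import sqlite3

def copy_live_sqlite(src_path: str, dst_path: str) -> None:
    """Copy a SQLite database with the online backup API, which
    stays consistent even if a writer is active on the source."""
    src = sqlite3.connect(src_path)
    dst = sqlite3.connect(dst_path)
    try:
        src.backup(dst)  # page-by-page copy; restarts pages the writer touches
    finally:
        src.close()
        dst.close()
```

This could replace the final COPY line in the rotation script above if the copy ever needs to run while a backup is in flight.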
My use case is not the typical one, but anybody who really wants a database backup can certainly set one up.
And then there’s the restore side, which ideally would be somewhat automated, or at least come with good directions. It’s easier to use the Recreate button, but (as mentioned recently) it doesn’t always give a nice result. It’s better than it was before, though, and my personal wish is to make database and Recreate issues rare.
Unfortunately, chasing somewhat-rare bugs from end user reports is hard, because users typically won’t want to run all the debug aids I run. I’ve advocated (and begun) fault insertion and stress testing.
Meanwhile, say one has a series of DB backups. Which one is intact, and which one matches the latest backup? There is somewhat more self-checking at the start of a backup now. If Duplicati fails that self-check, restoring the final database from after the previous backup won’t help, because that’s the very database the new backup failed on during startup.
This means the database from before the previous backup may be the intact one, but it needs to be validated. Just dropping it in and trying a backup won’t work, because backup validation will find new files at the destination that the old database doesn’t know about. Running Repair will synchronize the DB to the backup by removing those new files. It’s unclear if this is fixable.
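To at least triage a series of DB copies, SQLite’s own integrity check can rule out corrupt files; whether the survivor actually matches the remote still has to be proven separately. A rough sketch (the helper names are mine, and the candidate list is assumed to be ordered newest-first, matching the rotation script above):

```python
import sqlite3

def is_intact(db_path: str) -> bool:
    """True if SQLite's integrity check passes on the file."""
    try:
        con = sqlite3.connect(db_path)
        result = con.execute("PRAGMA integrity_check").fetchone()
        con.close()
        return result is not None and result[0] == "ok"
    except sqlite3.Error:
        return False

def newest_intact(candidates):
    """Return the first structurally intact path, newest first."""
    for path in candidates:
        if is_intact(path):
            return path
    return None
```

Note that `PRAGMA integrity_check` only proves the file is a well-formed SQLite database, not that its contents agree with the destination.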
Compacting files at the backend can repackage still-in-use blocks after a delete, creating a large mismatch. A stale database will say it can go to some dblock file to get a block for a restore, yet find that file no longer exists at the destination.
The problem does not exist if the Recreate button is used. The backup is supposed to have everything it requires inside it, which is important in disaster recovery situations where a Direct restore from backup files is done.
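The compact mismatch above can be shown with a toy sketch (hypothetical filenames and a plain dict standing in for the local DB, nothing Duplicati-specific): the stale snapshot still maps the block to the old volume, which compact has deleted.

```python
import os
import tempfile

tmp = tempfile.mkdtemp()
old_dblock = os.path.join(tmp, "dblock-old.zip")
new_dblock = os.path.join(tmp, "dblock-new.zip")
with open(old_dblock, "w") as f:
    f.write("block-A")

# Snapshot of the local DB, taken before compact ran.
stale_db = {"block-A": old_dblock}

# Compact: repackage the still-in-use block, delete the old volume.
with open(new_dblock, "w") as f:
    f.write("block-A")
os.remove(old_dblock)

# A restore guided by the stale snapshot now looks for a missing file.
print(os.path.exists(stale_db["block-A"]))  # False
```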
The dbpath option can be used to point at the database. I’m not advocating that right now, but its description is:
Path to the file containing the local cache of the remote file database.
So the proposal becomes: back up the cache. Phrased that way, does it sound like a standard thing to do?
I’m not saying it has no value, just that it’s not a simple thing. Very limited development resources could be put to better use, especially since anybody who wants database backups can potentially do it on their own.