This topic could use input from other people, but I’ll start with maintenance one could do on the local database.
VACUUM - performance improvement is huge is a forum post on this. Results may vary, and the database size may decrease.
Usage: vacuum <storage-URL> [<options>]
Rebuilds the local database, repacking it into a minimal amount of disk space.
The storage URL can come from Export As Command-line for true CLI use, or you can run it from the job’s Commandline option in the GUI.
If you like vacuum, you can automate it with the auto-vacuum and auto-vacuum-interval options. It’s not really a must-run thing.
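For reference, a manual run might look roughly like this (the storage URL and database path are placeholders; the usual source for real values is that Export As Command-line output):

    Duplicati.CommandLine.exe vacuum <storage-URL> --dbpath=<path-to-job-database.sqlite>

or, to automate it, add something like these as Advanced options on the job (the interval option exists in newer versions, and the value shown is just an example, not a recommendation):

    --auto-vacuum=true
    --auto-vacuum-interval=1W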
After database loss (e.g. from a disaster), Direct restore from backup files builds a partial temporary database from destination files, so testing that occasionally helps ensure it still works. A more thorough (and slower) testing method is to occasionally use the Database Recreate button, and maybe plan on waiting awhile.
Looking at About → Show log → Live → Verbose will show where you are, and demonstrate progress.
A somewhat safer approach is to move the database to a backup name and use the Repair button. This gives you simple recovery if something goes wrong. It won’t change the destination, so the old database will still work; however, you never want to keep using a database obtained somehow (e.g. from a backup copy) that no longer fits the destination.
Ordinarily, rebuilding the database should finish at about 70% on the progress bar. Going further can indicate trouble finding data, and 90% to 100% means it’s downloading the rest of the backup, which usually doesn’t end successfully.
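For those who prefer the command line, the move-aside-and-repair idea above might look roughly like this (paths and the storage URL are placeholders; the GUI Repair button does the same job):

    # set the current job database aside as a fallback
    mv <path-to-job-database>.sqlite <path-to-job-database>.sqlite.old
    # repair with no database present recreates it from destination data
    Duplicati.CommandLine.exe repair <storage-URL> --dbpath=<path-to-job-database>.sqlite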
Checking on destination file health is good, but the default verification sample per backup is rather small. For large backups, or ones with destination trouble, you can raise backup-test-samples or set backup-test-percentage.
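As an example (the numbers are placeholders, not recommendations), either of these could go in the job’s Advanced options:

    --backup-test-samples=5
    --backup-test-percentage=10

The percentage form scales with the size of the backup, which may be easier to reason about for large backups.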
Corrupted files are hard to predict, but transfer error rates show up in the job’s Complete log as RetryAttempts; the retry mechanism is intended to ride through some isolated issues (subject to number-of-retries and retry-delay).
The TEST command can be run in some idle period, if you’d rather not slow down your backup cycle.
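A rough sketch of that, with the storage URL and database path as placeholders (the sample count can be a number instead of all):

    Duplicati.CommandLine.exe test <storage-URL> all --dbpath=<path-to-job-database.sqlite>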
Unlike the job database, which can be rebuilt from destination data (except for logs), the server database (Duplicati-server.sqlite) can’t be, so protect its contents with Export To File and keep that somewhere safe. It’s not needed for the restore itself, but it will save some trouble configuring that backup again on a replacement system after a disaster.
If nothing else, make sure you save enough destination and passphrase information to do a Direct restore.
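As a minimal sketch, assuming made-up paths and filenames for wherever your install keeps its data and wherever you keep safe copies:

    # copy the server database and an exported job configuration somewhere off the machine
    cp Duplicati-server.sqlite /some/safe/place/
    cp my-backup-job.json /some/safe/place/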
Reporting options are good, and there are third-party monitors that build on them to let you keep track easily.
Duplicati Monitoring and dupReport are examples, but people on the forum use all kinds of things for this.
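For instance, email reporting (which dupReport can build on) can be set up with options along these lines (the server, account, address, and levels below are placeholders, not recommendations):

    --send-mail-url=smtps://smtp.example.com:465
    --send-mail-username=<username>
    --send-mail-password=<password>
    --send-mail-to=you@example.com
    --send-mail-level=Warning,Error,Fatal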
There are loads of options (some probably not much-used, and therefore not well-proven) if you’re into those, but I’m trying to stay close to the “well-maintained” topic, including a bit on preparing for a disaster restore.
EDIT:
The “options” reference above had Advanced Options in mind for those who like to tune, but in general there’s “good enough” versus levels aimed at extra care or proven needs. That depends on you and your system. Some people probably don’t do any of this speed-helping and extra testing, and it has all worked out – so far.