Hi Everyone,
I have been using Duplicati for a while and for different purposes, but its Achilles' heel has always been the client-side database requirement for correct operation.
My desktop's current stats are:
Source: 83.44 GB (but has been as large as 520 GB)
Backup: 766.26 GB / 17 Versions
Files: ~72666 (varies)
Database: 2GB
Options: auto-vacuum, auto-cleanup
I was concerned that I might need to rebuild the database if it got corrupted or I had to reinstall, so I wanted to valida…
For the record, the way this SQL is built dynamically makes it very difficult to optimize things - not just the SQL but also the calling method (I suspect we’re not benefiting from stored execution plans on these repeated calls).
Testing (and ultimately the final code) for this may entail “hard-coded” SQL. If it improves performance enough, that should be a valid reason to suffer through the pain of maintaining it.
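To illustrate what cached execution plans buy on repeated calls, here is a minimal sketch (not Duplicati’s actual code; the table and column names are made up): with a constant, parameterized statement the prepared statement and its plan can be reused, whereas freshly concatenated SQL has to be parsed and planned on every call.

```python
import sqlite3

con = sqlite3.connect("example.sqlite")
cur = con.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS Block (Hash TEXT, Size INTEGER)")

# Dynamically built SQL: every call produces different statement text,
# so the engine must re-parse and re-plan it each time (and it is also
# unsafe if the value is not trusted).
def lookup_dynamic(hash_value):
    return cur.execute(
        "SELECT Size FROM Block WHERE Hash = '%s'" % hash_value
    ).fetchone()

# Parameterized SQL: the statement text is constant, so the prepared
# statement (and its plan) can be cached and reused across calls.
def lookup_prepared(hash_value):
    return cur.execute(
        "SELECT Size FROM Block WHERE Hash = ?", (hash_value,)
    ).fetchone()
```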
Some of the issues with long database rebuilds are probably linked to an issue where compacting does not (or did not) create the correct dindex files. A way to avoid long database recreate times at the moment you need them is described here.
I have noticed that sometimes a lot of dblock files need to be downloaded in order to recreate the database. This is because the “list” folder seems to be missing from some dindex files.
Note: this should be done on a completely consistent backup set. Be careful wi…
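A rough way to check for that condition is sketched below (assumptions: the dindex volumes are unencrypted .zip files in the backup destination, and the “list/” folder layout is the one described above; encrypted .aes volumes would have to be decrypted first).

```python
import glob
import zipfile

# Scan dindex volumes and report those that contain no "list/" entries,
# the condition described above. The path pattern is a placeholder.
for path in sorted(glob.glob("/path/to/backup/duplicati-i*.dindex.zip")):
    with zipfile.ZipFile(path) as z:
        has_list = any(name.startswith("list/") for name in z.namelist())
    if not has_list:
        print("missing list folder:", path)
```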
In my opinion, this is the Achilles’ heel of Duplicati - if it takes days to restore in the event of data loss, it’s not a viable backup solution.
So is the best temporary solution to also back up the Duplicati databases themselves, and then put them in place first before restoring from another machine?
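If that route is taken, something like the following could keep dated copies of the local job databases (a sketch only; the source path is the default location on Linux, other platforms store the databases elsewhere, and it should not run while a backup job is in progress).

```python
import os
import shutil
import time

# Copy the local Duplicati .sqlite databases to a timestamped folder.
# Adjust "src" to your install; on Windows it is typically
# %LOCALAPPDATA%\Duplicati.
src = os.path.expanduser("~/.config/Duplicati")
dst = os.path.join(os.path.expanduser("~"), "duplicati-db-copies",
                   time.strftime("%Y%m%d-%H%M%S"))
os.makedirs(dst, exist_ok=True)

for name in os.listdir(src):
    if name.endswith(".sqlite"):
        shutil.copy2(os.path.join(src, name), os.path.join(dst, name))
        print("copied", name)
```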