I have been using Duplicati for a while and for different purposes, but its Achilles heel has always been the client-side database requirement for correct operation.
My desktop's current stats are:
Source: 83.44 GB (but has been as large as 520 GB)
Backup: 766.26 GB / 17 Versions
Files: ~72666 (varies)
Options: auto-vacuum, auto-cleanup
Since I was concerned about what would happen if I ever needed to rebuild the database (e.g. if it got corrupted or I had to reinstall), I wanted to validate how Duplicati would handle this, as I'm worried about the viability of using it for Windows backups.
I aborted the rebuild after 10+ hours, with 4.5 GB of data transferred from the origin (only about 900 MB of which was index and list data), and with my SSD spending most of the time between 50 MB/s and 200 MB/s (Samsung 850 Pro 1 TB).
I’m in the process of replicating an environment where I can leave it rebuilding until complete and get a better picture of what that would mean.
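
For reference, this is roughly how I plan to drive that test run unattended, as a minimal sketch that wraps Duplicati.CommandLine.exe and times a full recreate. The install path, storage URL, passphrase, and database path are all placeholders for my setup, and I'm assuming that running `repair` against a fresh `--dbpath` forces a complete rebuild:

```python
# Hypothetical sketch only: my actual job runs from the GUI, but something like this
# lets me time a full recreate unattended. Paths, URL and passphrase are placeholders.
import subprocess
import time

DUPLICATI_CLI = r"C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe"
STORAGE_URL = "file://D:/duplicati-target"   # placeholder backend URL
DB_PATH = r"C:\temp\recreate-test.sqlite"    # fresh path so repair has to rebuild from scratch

start = time.time()
result = subprocess.run(
    [
        DUPLICATI_CLI,
        "repair",                             # recreates the local database when it is missing
        STORAGE_URL,
        f"--dbpath={DB_PATH}",
        "--passphrase=REPLACE_ME",
        "--log-file=recreate-test.log",       # keep a verbose log of what gets downloaded
        "--log-file-log-level=Verbose",
    ],
    capture_output=True,
    text=True,
)
elapsed_hours = (time.time() - start) / 3600

print(f"exit code {result.returncode}, elapsed {elapsed_hours:.1f} h")
```

The verbose log should at least show whether it is pulling down dblock files rather than just the dlist/dindex files, which would explain where the 4.5 GB of transfer went.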
However, something seems very wrong with the database rebuild at the moment (possibly just in my scenario, but it has happened on other instances), since both the:
- disk throughput
- transfer from the source
are much larger than I would have expected for the relatively simple scenario I have been using it in.
- I see others have had similar issues; are there any tricks to help avoid a massive rebuild time?
- Have any of the recent threading improvements also brought any tricks?
The backup was working like clockwork until I decided to delete the existing db and rebuild it.