I believe the issue with slow database queries may still be unresolved. It appears that some of the queries scale poorly with the size of the local database. See my later posts for more details.
Unfortunately, I am no longer able to replicate this issue as I have since modified my retention settings and trimmed my local database size from over 4GB to a few hundred MB. This has reduced my backup time from over 20 minutes to less than 4 minutes.
I may have found a repeatable scenario to trigger this. It seems that whenever I run multiple database repair / rebuild tests around dindex / dlist failure scenarios, I eventually hit the “Metadata was reported as not changed, but still requires being added?” issue.
And it just so happens I triggered it on a test backup yesterday, so I’ll let that sit as a test bed for when the new version comes out.
I love the checkbox plugin!
Though I did notice I can change the checkboxes without editing the post, and it appears the post is not a wiki… hopefully that’s only because of my access level and not something everybody can do.
The latest canaries are not fun to use. A disk-to-disk backup of many files totaling >1 TB, with only a few MB of changes, now takes hours instead of <10 minutes, despite a database rebuild and the use of check-filetime-only. Duplicati reads the whole file contents (I see sustained I/O of >80 MByte/s), which is terrible, because effectively I have not been able to run this backup since May.
Also, though this is probably not a problem in itself, there are tens of thousands of warnings (roughly one per file).
--check-filetime-only in canaries from the big 126.96.36.199 rewrite through 188.8.131.52 can cause warnings and slowdowns. The “Metadata” topic above describes how the option regressed, and has speed comparisons with and without it.
184.108.40.206 was Apr 23. Any May problem might have come from 220.127.116.11 picking up an existing --check-filetime-only.
Removing the option might not fix every performance or read-rate concern; I’m just offering a specific workaround that may help.
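For context on what the option is supposed to do: as I understand it, --check-filetime-only tells the scanner to trust the file’s modification timestamp instead of reading the file contents to detect changes. A minimal sketch of that decision (the function name and types are illustrative, not Duplicati’s actual code):

```python
from datetime import datetime, timezone

def needs_rescan(file_mtime: datetime, last_backup: datetime) -> bool:
    """Timestamp-only change check: rescan a file only if it was
    modified after the last successful backup. If a regression makes
    the backend ignore this result, every file gets a full content
    read, which would explain the multi-hour backups reported above."""
    return file_mtime > last_backup

last = datetime(2019, 5, 1, tzinfo=timezone.utc)
print(needs_rescan(datetime(2019, 4, 30, tzinfo=timezone.utc), last))  # False: skip contents
print(needs_rescan(datetime(2019, 5, 2, tzinfo=timezone.utc), last))   # True: rescan
```

The trade-off is the usual one: timestamp-only checks are fast but can miss content changes that do not touch the mtime, which is why the option exists as an opt-in.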
Thanks. Yeah, I removed the check-filetime flag and it still reads my whole contents. Backing up my few MBytes of changes will maybe take 5 hours now. This time I will try to let the process finish; maybe the next backup will not read everything again.
Ok. There are now three open issues left on the list, and I think the snapshot issue is causing the UNIQUE constraint issue.
I think the snapshot issue is fixed, but I was never able to reproduce it locally, so it may still lurk. If I am correct, this means we only have the “repair does not work” issue left.
I have some ideas for making the repair process more robust, but it will take some rewriting to complete. I propose that we make a new canary with the existing fixes, make the “repair does not work” issue the top priority for the next release, and gather feedback on the snapshot issue in the meantime.