OOOPS! Indeed! Really surprising for me! Not a native speaker.
Concur seemed to be like "Konkurrent" in my language, which is a competitor. Not something that is in agreement.
Following up on this, @davegold suggested that we rewrite just the table used for browsing files. Since this table is written anyway, and is only used within the browse/restore code, it should be possible to split out just this table to get faster folder/file listings in the UI.
Just proposed pull request #2897 with a fix that worked for me. I had the same issue: a simple directory listing in restore usually took more than 5 minutes to list the contents, and the same for file searching.
Did a bit of studying of the queries and noticed that the subqueries returned far more rows than needed when joined to the temporary tables during restore.
After a simple filtering I get the same results, but with times in the 5-second range, which is OK for me (it could get even better, but that would require some rewriting of the database). I hope that helps (and sorry for my poor English)!
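To illustrate the kind of change I mean, here is a minimal, hypothetical sketch in Python/SQLite. The table names and schema are invented for the example and are not Duplicati's actual database layout; the point is just that pushing the filter into the subquery means far fewer rows ever reach the join against the temporary table, while the results stay the same.

```python
# Hypothetical sketch only -- not Duplicati's real schema or queries.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Invented tables: every file entry across all backup versions, plus a
# temporary table holding only the entries of the version being browsed.
cur.executescript("""
    CREATE TABLE FileEntry (ID INTEGER PRIMARY KEY, Path TEXT, VersionID INTEGER);
    CREATE TEMP TABLE BrowseVersion (FileID INTEGER);
""")
cur.executemany("INSERT INTO FileEntry (Path, VersionID) VALUES (?, ?)",
                [(f"/data/file{i}", i % 50) for i in range(100_000)])
cur.execute("INSERT INTO BrowseVersion SELECT ID FROM FileEntry WHERE VersionID = 0")
con.commit()

# Slow pattern: the subquery produces every file from every version and only
# then gets joined against the temporary table.
slow = """
    SELECT f.Path
    FROM (SELECT ID, Path, VersionID FROM FileEntry) f
    JOIN BrowseVersion b ON b.FileID = f.ID
"""

# Faster pattern: restrict the subquery to the version being browsed, so the
# join only has to process the relevant rows.
fast = """
    SELECT f.Path
    FROM (SELECT ID, Path FROM FileEntry WHERE VersionID = 0) f
    JOIN BrowseVersion b ON b.FileID = f.ID
"""

# Both queries return exactly the same rows; the second just does less work.
assert sorted(cur.execute(slow)) == sorted(cur.execute(fast))
```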
Wow, this is awesome! And your English is great!
Did your pull request get merged? It would be nice to know if this is Coming Soon™.
Never mind. I found a note in .16; looks like it was added. Great job!
It's not down to 5 seconds for me, though. WAY faster than before, but still around 30 seconds on a moderately sized folder.
Do you know if you have a lot of versions, files, or long file paths?
I'm guessing you have lots of files and/or long file paths, which the implemented fix doesn't improve very much. For those, a database rewrite is likely needed.
Not sure about the paths. Right now I have 27 versions.
As another point of reference, I've just changed to the experimental channel and am pleased to find that browsing operations have dropped from (many) minutes each to about 15 seconds. This makes restores of arbitrary files tolerable, although hardly snappy. I run 64 versions.
It all happens in the local database, so speed may vary from computer to computer depending on CPU and disk speed.
The canary build improved speeds a lot, but from a couple of test queries I made on a test DB, I figure it should be possible to get down to around 1 second per step.
Understood. And I'm not complaining at all. This performance improvement, and the smart backup retention, are very welcome. We'll be rolling Duplicati out more widely as a result.
I'm glad to hear that!
I think it's primarily me complaining, but I haven't had time to look more into improving it yet.