I’m currently running Duplicati as a regular, un-elevated process. However, as was mentioned in the other thread, I’m also skeptical that this is what resolved your issue, and suspect that the re-creation of the database was the true catalyst.
Since my backups sometimes complete in a normal amount of time and sometimes take far longer, I want to see if I can figure out whether there’s a difference in how Duplicati is accessing the database between the two scenarios. @ts678’s suggestion to use procmon to monitor resource access got me thinking, so I’m going to try ProcessActivityView instead (since it provides an easier-to-read summary of activity), along with Profiling-level logging, to see if I can suss out what the difference is.
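For anyone following along, this is roughly how I plan to capture the Profiling-level log from the command line — a sketch using Duplicati’s `--log-file`, `--log-file-log-level`, and `--profile-all-database-queries` advanced options; the log path and the `<target-url>`/`<source-path>` placeholders are just mine:

```
Duplicati.CommandLine.exe backup <target-url> <source-path> ^
  --log-file="C:\temp\duplicati-profiling.log" ^
  --log-file-log-level=Profiling ^
  --profile-all-database-queries=true
```

Since I normally run the GUI, I’ll actually add the same options under the job’s Advanced options instead. At the Profiling level each SQL query and its execution time is logged, so diffing a fast run against a slow one should point at whichever queries are blowing up.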
I do, however, agree with @drwtsn32 that one root of this issue is a too-small block size, which forces the database to track too many individual blocks to stay performant. I’ve been putting off doing anything about it since, as they mentioned, it would mean creating an entirely new backup from scratch, but if I can’t nail down what’s happening here, I may have to do just that.
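To put some rough, purely illustrative numbers on why that matters: with the default 100 KB blocksize, a hypothetical 500 GB backup leaves the database tracking on the order of 5 million blocks, whereas 1 MB blocks would bring that down to roughly 500,000. Since `--blocksize` can only be set when a backup is first created, starting over would look something like this (again with placeholder paths, and 1 MB purely as an example value):

```
Duplicati.CommandLine.exe backup <target-url> <source-path> ^
  --blocksize=1MB
```

Picking the value is a trade-off: larger blocks shrink the database and speed up its queries, but reduce deduplication and increase the upload cost of small changes.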