UsageReport at end of backup very slow


I was troubleshooting long-running backup times on my Synology NAS. Even when few or no files have changed, a backup job takes over an hour.

I checked the Live log and saw that a UsageReport SQL function is taking 45+ minutes:

Starting - ExecuteNonQuery: CREATE TEMPORARY TABLE "UsageReport-F182E685FB5B234C9114735E1859EC9E" AS SELECT "VolumeID" AS "VolumeID", SUM("ActiveSize") AS …

From searching the posts here, this seems to be Duplicati looking for compactable volumes. Is that correct?
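For context, here is a minimal sketch of what that kind of usage aggregation does, using a toy schema (the table and column names beyond what the log line shows are illustrative, not Duplicati's actual schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Toy schema: each block of backed-up data lives in a remote volume;
# its size counts as "active" until the data it belongs to is deleted.
con.executescript("""
CREATE TABLE "Block" ("VolumeID" INTEGER, "ActiveSize" INTEGER);
INSERT INTO "Block" VALUES (1, 40), (1, 10), (2, 5), (2, 0), (3, 50);
""")

# Shape of the query in the log: aggregate active bytes per volume into a
# temporary table, so mostly-empty volumes (compact candidates) can be
# found cheaply afterwards.
con.execute("""
CREATE TEMPORARY TABLE "UsageReport" AS
SELECT "VolumeID" AS "VolumeID", SUM("ActiveSize") AS "ActiveSize"
FROM "Block" GROUP BY "VolumeID"
""")

for row in con.execute('SELECT * FROM "UsageReport" ORDER BY "VolumeID"'):
    print(row)  # (1, 50), (2, 5), (3, 50)
```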

The NAS doesn’t have the most powerful CPU, and Duplicati is protecting 650GB of data (at 50MB volumes). Maybe 45+ minutes for this process is expected?
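For scale, a quick back-of-envelope count of how many 50MB volumes 650GB comes to (assuming full volumes and binary units):

```python
data_gb = 650
volume_mb = 50

# Number of 50 MB remote volumes needed for 650 GB of data
volumes = data_gb * 1024 // volume_mb
print(volumes)  # 13312, i.e. roughly 13,000 volumes to aggregate over
```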

Could this function be multithreaded in the future? I see it maxes out one of the four cores in the NAS CPU. Or maybe Duplicati could be enhanced to only do compaction check once a week? (Just throwing out ideas…)


On the off chance this could be helpful: it dramatically improved database performance for my stuck job, which was taking days to rebuild missing index files (each index file took 2-3 hours before; after this trick, each remaining one took less than 2 seconds)…


I saw this and tried it on my main PC sqlite database and it made no noticeable difference, but I’ll give it a shot on my NAS database. Will report back when my testing is done!


Ah, that's too bad… I hope it makes a difference on your other one with some luck.


Ran ANALYZE on the database and it didn’t help. Oh well…
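(For anyone finding this later: ANALYZE refreshes SQLite's query-planner statistics, stored in the sqlite_stat1 table. A minimal sketch of running it from Python; the database path here is a placeholder for the job's local Duplicati database.)

```python
import sqlite3

# Path is hypothetical; point it at the backup job's local database file.
con = sqlite3.connect("backup.sqlite")
con.execute("ANALYZE")  # gathers table/index statistics into sqlite_stat1
con.commit()
con.close()
```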


It would be worth seeing if --no-auto-compact avoids the delay. Do you back up often? If so, it's even more worth getting per-backup times down, and perhaps worth the wasted space until an occasional compact.
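For reference, the option can be passed on the command line; a sketch, where the storage URL and source path are placeholders (on Windows the executable is Duplicati.CommandLine.exe):

```text
duplicati-cli backup "s3://my-bucket/backup" /volume1/data --no-auto-compact=true
```

With auto-compact disabled, you'd want to run the compact command manually now and then to reclaim wasted space.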

If it's a single query, I don't believe SQLite multithreads those (likely also true of most databases). Probably the first thing to do would be to find someone who knows SQL well and can learn Duplicati's. There's some art to taking a large slow query apart to measure which parts are slow…

Mine seems to usually take 17 or 18 milliseconds though, and I’m not sure how well someone could get your 45+ minute result on another system even if you posted a DB bug report. Post one if you care to…


I’ll try the --no-auto-compact option and report back. Thanks for the idea!


Confirmed - using the --no-auto-compact option results in much faster backup times.