Slow restore file browsing

How can I speed up the restore file browsing?
I have seen many posts discussing this problem, but the last one was from 2018.
Has anything changed to speed up restore file browsing? It takes a long time to view the contents of each directory.

Hello and welcome!

I believe the speed of browsing depends on several factors: how many backup versions you have, the speed of your CPU, and maybe even the number of files you have protected.

How long does it take for you to expand a directory when you are browsing the restore dialog?

For a directory of 140 MB it took 1 min.

Where are the files being restored from and how many files are in that folder?

A 140MB folder with a file or two will list near instantly, but a 140MB folder with 30,000 files will easily take a few minutes (and that’s off a local SSD); if it’s remote it’s certainly going to be a bit slower.

The folder only contains 82 items.

Is the data local?

Also, what kind of files (.txt, .jpg, .zip) are in that folder?

The data is local.
.jpg, .png and .svg files all below 2 MB each.

Hmm, those file types shouldn’t be too bad. What kind of storage is the local storage: a USB flash drive, USB HDD, internal SATA SSD, internal NVMe drive, or an HDD?

I don’t think there are any relevant OS issues but what OS are you using?

I have some cheap USB flash drives that perform worse than external HDDs, and I’m not exaggerating at all. I should have known 2x 32GB flash drives for under $15 wouldn’t be great, but OMG they are just horrible. Point is: can you try backing up the same data to a different location, then test whether things get faster or slower?

External HDD.
Using Manjaro.
Might just be the slow CPU, although restore browsing was fast with Vorta.

Is the source drive also an HDD? If so, that could just be as fast as it gets. To me, the time to list doesn’t sound enjoyable, but it’s also right in line with what I’d expect for an all-HDD system, with or without a “slow CPU”.

Are we talking Sempron slow or 3rd gen i5 slow? How old is this computer? While Duplicati doesn’t have any firm hardware requirements, this machine may be a pinch sub-optimal if you anticipate having to perform frequent restores. A full spec list for the machine in question would really help in identifying potential bottlenecks.

I’m not sure what Vorta would have been doing differently, but clearly something was different. Out of curiosity, do you happen to know how long Vorta took to list that same data when running on the same hardware?

Also keep in mind that one of the first signs of a mechanical drive (HDD) failing is that it’s all of a sudden slower than it used to be. That’s not to say your external drive currently has an issue, but it’s always a possibility. If you have multiple backup drives, you can test between them to see whether “that’s just how slow it is” or “for some reason this drive is much slower than the other”. Do you have another external drive you could test with?

One more question: is the destination drive used exclusively for Duplicati backups, or do you have “other data” on the drive as well?

I’m thinking maybe there should be a stated minimum recommended config for Duplicati. I know I’m asking for a bunch of stuff here but as much feedback as you can provide is appreciated.

How many files altogether? This is shown in the job log under Source files as the Examined number. Keep reading.

I suspect this is true, although some detailed timing measurements would be needed to find the slow spot.

It looks like it does a list/find operation internally, going through the flat file list in the database to find the entries in that folder.
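As a rough sketch of the kind of scan involved (table and column names are taken from the profiling log below; the path is just an example):

    SELECT DISTINCT "Path" FROM "File" WHERE "Path" LIKE 'C:\Users\%';

In SQLite a prefix LIKE like this generally can’t use an ordinary index (LIKE is case-insensitive by default), so the scan time grows with the total number of file paths in the database rather than with the size of the one folder being expanded.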

Here’s me expanding the C:\Users folder. I’ll show a mix of Wireshark traffic and Duplicati profiling log info:

16:18:03 is the query from the browser to Duplicati asking about C:\Users (also see USERS in SQL below)

GET /api/v1/backup/1/files/C%3A%5CUsers%5C?prefix-only=false&folder-contents=true&time=2021-12-18T15%3A50%3A00-05%3A00&filter=%40C%3A%5CUsers%5C HTTP/1.1
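URL-decoded, that request asks for the folder contents of C:\Users\ as of the chosen restore time:

    GET /api/v1/backup/1/files/C:\Users\?prefix-only=false&folder-contents=true&time=2021-12-18T15:50:00-05:00&filter=@C:\Users\ HTTP/1.1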

2021-12-18 16:18:03 -05 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation List has started
2021-12-18 16:18:03 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Controller-RunList]: Starting - Running List
2021-12-18 16:18:03 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: INSERT INTO "Operation" ("Description", "Timestamp") VALUES ("List", 1639862283); SELECT last_insert_rowid();
2021-12-18 16:18:03 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: INSERT INTO "Operation" ("Description", "Timestamp") VALUES ("List", 1639862283); SELECT last_insert_rowid(); took 0:00:00:00.101
2021-12-18 16:18:03 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteReader]: Starting - ExecuteReader: SELECT "ID", "Timestamp" FROM "Fileset" ORDER BY "Timestamp" DESC
2021-12-18 16:18:03 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteReader]: ExecuteReader: SELECT "ID", "Timestamp" FROM "Fileset" ORDER BY "Timestamp" DESC took 0:00:00:00.001
2021-12-18 16:18:03 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: CREATE TEMPORARY TABLE "Filesets-B21AD6C7775C304CAE727E41E8CA461D" AS SELECT DISTINCT "ID" AS "FilesetID", "IsFullBackup" AS "IsFullBackup" , "Timestamp" AS "Timestamp" FROM "Fileset"  WHERE  "Timestamp" <= 1639860600
2021-12-18 16:18:03 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: CREATE TEMPORARY TABLE "Filesets-B21AD6C7775C304CAE727E41E8CA461D" AS SELECT DISTINCT "ID" AS "FilesetID", "IsFullBackup" AS "IsFullBackup" , "Timestamp" AS "Timestamp" FROM "Fileset"  WHERE  "Timestamp" <= 1639860600 took 0:00:00:00.000
2021-12-18 16:18:03 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: CREATE INDEX "Filesets-B21AD6C7775C304CAE727E41E8CA461D_FilesetIDTimestampIndex" ON "Filesets-B21AD6C7775C304CAE727E41E8CA461D" ("FilesetID", "Timestamp" DESC)
2021-12-18 16:18:03 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: CREATE INDEX "Filesets-B21AD6C7775C304CAE727E41E8CA461D_FilesetIDTimestampIndex" ON "Filesets-B21AD6C7775C304CAE727E41E8CA461D" ("FilesetID", "Timestamp" DESC) took 0:00:00:00.000
2021-12-18 16:18:03 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteReader]: Starting - ExecuteReader: SELECT DISTINCT "A"."FilesetID", "A"."IsFullBackup", "B"."FileCount", "B"."FileSizes" FROM "Filesets-B21AD6C7775C304CAE727E41E8CA461D" A LEFT OUTER JOIN ( SELECT "A"."FilesetID" AS "FilesetID", COUNT(*) AS "FileCount", SUM("C"."Length") AS "FileSizes" FROM "FilesetEntry" A, "File" B, "Blockset" C WHERE "A"."FileID" = "B"."ID" AND "B"."BlocksetID" = "C"."ID" AND "A"."FilesetID" IN (SELECT DISTINCT "FilesetID" FROM "Filesets-B21AD6C7775C304CAE727E41E8CA461D") GROUP BY "A"."FilesetID"  ) B ON "A"."FilesetID" = "B"."FilesetID" ORDER BY "A"."Timestamp" DESC 
2021-12-18 16:18:04 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteReader]: ExecuteReader: SELECT DISTINCT "A"."FilesetID", "A"."IsFullBackup", "B"."FileCount", "B"."FileSizes" FROM "Filesets-B21AD6C7775C304CAE727E41E8CA461D" A LEFT OUTER JOIN ( SELECT "A"."FilesetID" AS "FilesetID", COUNT(*) AS "FileCount", SUM("C"."Length") AS "FileSizes" FROM "FilesetEntry" A, "File" B, "Blockset" C WHERE "A"."FileID" = "B"."ID" AND "B"."BlocksetID" = "C"."ID" AND "A"."FilesetID" IN (SELECT DISTINCT "FilesetID" FROM "Filesets-B21AD6C7775C304CAE727E41E8CA461D") GROUP BY "A"."FilesetID"  ) B ON "A"."FilesetID" = "B"."FilesetID" ORDER BY "A"."Timestamp" DESC  took 0:00:00:00.687
2021-12-18 16:18:04 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: CREATE TEMPORARY TABLE "Filenames-7B6444730BFCDF47A154EB6AA254A6CB" ("Path" TEXT NOT NULL)
2021-12-18 16:18:04 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: CREATE TEMPORARY TABLE "Filenames-7B6444730BFCDF47A154EB6AA254A6CB" ("Path" TEXT NOT NULL) took 0:00:00:00.000
2021-12-18 16:18:04 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: INSERT INTO "Filenames-7B6444730BFCDF47A154EB6AA254A6CB" SELECT DISTINCT "Path" FROM "File" WHERE "Path" LIKE "C:\USERS\%"
2021-12-18 16:18:04 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: INSERT INTO "Filenames-7B6444730BFCDF47A154EB6AA254A6CB" SELECT DISTINCT "Path" FROM "File" WHERE "Path" LIKE "C:\USERS\%" took 0:00:00:00.184
2021-12-18 16:18:04 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: DELETE FROM "Filenames-7B6444730BFCDF47A154EB6AA254A6CB" WHERE "Path" NOT IN (SELECT DISTINCT "Path" FROM "File", "FilesetEntry" WHERE "FilesetEntry"."FileID" = "File"."ID" AND "FilesetEntry"."FilesetID" IN (SELECT "FilesetID" FROM "Filesets-B21AD6C7775C304CAE727E41E8CA461D") ) 
2021-12-18 16:18:05 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: DELETE FROM "Filenames-7B6444730BFCDF47A154EB6AA254A6CB" WHERE "Path" NOT IN (SELECT DISTINCT "Path" FROM "File", "FilesetEntry" WHERE "FilesetEntry"."FileID" = "File"."ID" AND "FilesetEntry"."FilesetID" IN (SELECT "FilesetID" FROM "Filesets-B21AD6C7775C304CAE727E41E8CA461D") )  took 0:00:00:01.190
2021-12-18 16:18:06 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: CREATE TEMPORARY TABLE "Filenames-39F2398FB580C9468804FC5ED1B0EAEB" ("Path" TEXT NOT NULL)
2021-12-18 16:18:06 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: CREATE TEMPORARY TABLE "Filenames-39F2398FB580C9468804FC5ED1B0EAEB" ("Path" TEXT NOT NULL) took 0:00:00:00.001
2021-12-18 16:18:06 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteReader]: Starting - ExecuteReader: SELECT DISTINCT "Path" FROM "Filenames-7B6444730BFCDF47A154EB6AA254A6CB" 
2021-12-18 16:18:06 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteReader]: ExecuteReader: SELECT DISTINCT "Path" FROM "Filenames-7B6444730BFCDF47A154EB6AA254A6CB"  took 0:00:00:00.000
2021-12-18 16:18:06 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: CREATE INDEX "Filenames-39F2398FB580C9468804FC5ED1B0EAEB_PathIndex" ON "Filenames-39F2398FB580C9468804FC5ED1B0EAEB" ("Path")
2021-12-18 16:18:06 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: CREATE INDEX "Filenames-39F2398FB580C9468804FC5ED1B0EAEB_PathIndex" ON "Filenames-39F2398FB580C9468804FC5ED1B0EAEB" ("Path") took 0:00:00:00.004
2021-12-18 16:18:06 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteReader]: Starting - ExecuteReader: SELECT "C"."Path", "D"."Length", "C"."FilesetID" FROM (SELECT "A"."Path", "B"."FilesetID" FROM "Filenames-39F2398FB580C9468804FC5ED1B0EAEB" A, (SELECT "FilesetID", "Timestamp" FROM "Filesets-B21AD6C7775C304CAE727E41E8CA461D" ORDER BY "Timestamp" DESC) B ORDER BY "A"."Path" ASC, "B"."Timestamp" DESC) C LEFT OUTER JOIN (SELECT "Length", "FilesetEntry"."FilesetID", "File"."Path" FROM "Blockset", "FilesetEntry", "File" WHERE "File"."BlocksetID" = "Blockset"."ID" AND "FilesetEntry"."FileID" = "File"."ID" AND FilesetEntry."FilesetID" IN (SELECT DISTINCT "FilesetID" FROM "Filesets-B21AD6C7775C304CAE727E41E8CA461D") ) D ON "C"."FilesetID" = "D"."FilesetID" AND "C"."Path" = "D"."Path"
2021-12-18 16:18:08 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteReader]: ExecuteReader: SELECT "C"."Path", "D"."Length", "C"."FilesetID" FROM (SELECT "A"."Path", "B"."FilesetID" FROM "Filenames-39F2398FB580C9468804FC5ED1B0EAEB" A, (SELECT "FilesetID", "Timestamp" FROM "Filesets-B21AD6C7775C304CAE727E41E8CA461D" ORDER BY "Timestamp" DESC) B ORDER BY "A"."Path" ASC, "B"."Timestamp" DESC) C LEFT OUTER JOIN (SELECT "Length", "FilesetEntry"."FilesetID", "File"."Path" FROM "Blockset", "FilesetEntry", "File" WHERE "File"."BlocksetID" = "Blockset"."ID" AND "FilesetEntry"."FileID" = "File"."ID" AND FilesetEntry."FilesetID" IN (SELECT DISTINCT "FilesetID" FROM "Filesets-B21AD6C7775C304CAE727E41E8CA461D") ) D ON "C"."FilesetID" = "D"."FilesetID" AND "C"."Path" = "D"."Path" took 0:00:00:01.680
2021-12-18 16:18:08 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: DROP TABLE IF EXISTS "Filenames-7B6444730BFCDF47A154EB6AA254A6CB" 
2021-12-18 16:18:08 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: DROP TABLE IF EXISTS "Filenames-7B6444730BFCDF47A154EB6AA254A6CB"  took 0:00:00:00.000
2021-12-18 16:18:08 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: DROP TABLE IF EXISTS "Filenames-39F2398FB580C9468804FC5ED1B0EAEB"
2021-12-18 16:18:08 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: DROP TABLE IF EXISTS "Filenames-39F2398FB580C9468804FC5ED1B0EAEB" took 0:00:00:00.000
2021-12-18 16:18:08 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: DROP TABLE IF EXISTS "Filesets-B21AD6C7775C304CAE727E41E8CA461D" 
2021-12-18 16:18:08 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: DROP TABLE IF EXISTS "Filesets-B21AD6C7775C304CAE727E41E8CA461D"  took 0:00:00:00.000
2021-12-18 16:18:08 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: PRAGMA optimize
2021-12-18 16:18:08 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: PRAGMA optimize took 0:00:00:00.001
2021-12-18 16:18:08 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Controller-RunList]: Running List took 0:00:00:04.559
2021-12-18 16:18:08 -05 - [Information-Duplicati.Library.Main.Controller-CompletedOperation]: The operation List has completed

16:18:08 is when output to the browser began. You can watch this in Wireshark on port 8200, or whatever port your web UI uses.
You can probably also watch it in any web browser using the web developer tools (often F12) to view the traffic.
You can test how fast an isolated list command runs by running it yourself, e.g. using the GUI Commandline.
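As a sketch of the shell form (the paths below are placeholders; on Linux the wrapper is duplicati-cli, and in practice you’d also need the job’s passphrase and database options, which the GUI Commandline screen fills in for you):

    time duplicati-cli list "file:///mnt/external/backup" "/home/user/Pictures/*"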

For a profiling log, you can look at About → Show log → Live → Profiling while you expand a given folder.
This shows SQL query execution times down to fractions of a second, but unfortunately the timestamps on the lines only give minutes.
A more permanent way to get seconds (and a big log file) is log-file=<path> plus log-file-log-level=profiling.
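For example, as Advanced options on the job (the log path is just a placeholder):

    --log-file=/home/user/duplicati-profiling.log
    --log-file-log-level=profiling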

To anybody who has a slow enough operation to look at: use your system performance tools, especially for CPU and disk. If it’s a slow SQL query, it will probably only use one core, so it’s best to get a per-processor view.
Windows Task Manager’s Performance tab CPU graph can be right-clicked to change to that view, or back.

I’m not sure destination speed matters. To test that, I browsed the restore tree of a cloud backup with the network off. This looks like all local database work, and that takes a certain amount of CPU plus some local drive reads.
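If you want to confirm how much of that is pure database time, you could time a similar scan yourself against a copy of the job’s local database in the sqlite3 shell (a sketch: the table and column names come from the profiling log above, and .timer is an sqlite3 dot-command rather than SQL):

    .timer on
    SELECT COUNT(*) FROM "File" WHERE "Path" LIKE 'C:\Users\%';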

Size of the database may matter because the SQLite cache can only hold so much. The vacuum command might shrink DB size.
It can be run from the GUI Commandline by picking vacuum, clearing the Commandline arguments box, etc.:

Usage: vacuum <storage-URL> [<options>]

  Rebuilds the local database, repacking it into a minimal amount of disk
  space.

That might help if the physical size of the database matters, but it won’t shrink the amount of logical data in the DB.
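Under the hood this is essentially SQLite’s VACUUM run on the local job database. A minimal sketch of what it does, if you ever wanted to try it by hand on a copy of the DB in the sqlite3 shell:

    PRAGMA page_count;   -- pages allocated before
    VACUUM;              -- rebuild the database file, dropping free pages
    PRAGMA page_count;   -- pages allocated after

The page count going down means file space was reclaimed; the logical rows are unchanged.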