First Backup has been running for over a day since the upgrade from 2.0.6.1 to 2.0.7.1

Two other (small) backup jobs completed the DB conversion involved in the upgrade, but this larger job, which backs up the system, has not. Profiling indicates that the SQL query below is taking ~7 hours to complete:

  • Jun 13, 2023 10:23 AM: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT DISTINCT “Path” FROM ( SELECT “L”.“Path”, “L”.“Lastmodified”, “L”.“Filelength”, “L”.“Filehash”, “L”.“Metahash”, “L”.“Metalength”, “L”.“BlocklistHash”, “L”.“FirstBlockHash”, “L”.“FirstBlockSize”, “L”.“FirstMetaBlockHash”, “L”.“FirstMetaBlockSize”, “M”.“Hash” AS “MetaBlocklistHash” FROM ( SELECT “J”.“Path”, “J”.“Lastmodified”, “J”.“Filelength”, “J”.“Filehash”, “J”.“Metahash”, “J”.“Metalength”, “K”.“Hash” AS “BlocklistHash”, “J”.“FirstBlockHash”, “J”.“FirstBlockSize”, “J”.“FirstMetaBlockHash”, “J”.“FirstMetaBlockSize”, “J”.“MetablocksetID” FROM ( SELECT “A”.“Path” AS “Path”, “D”.“Lastmodified” AS “Lastmodified”, “B”.“Length” AS “Filelength”, “B”.“FullHash” AS “Filehash”, “E”.“FullHash” AS “Metahash”, “E”.“Length” AS “Metalength”, “A”.“BlocksetID” AS “BlocksetID”, “F”.“Hash” AS “FirstBlockHash”, “F”.“Size” AS “FirstBlockSize”, “H”.“Hash” AS “FirstMetaBlockHash”, “H”.“Size” AS “FirstMetaBlockSize”, “C”.“BlocksetID” AS “MetablocksetID” FROM “File” A LEFT JOIN “Blockset” B ON “A”.“BlocksetID” = “B”.“ID” LEFT JOIN “Metadataset” C ON “A”.“MetadataID” = “C”.“ID” LEFT JOIN “FilesetEntry” D ON “A”.“ID” = “D”.“FileID” LEFT JOIN “Blockset” E ON “E”.“ID” = “C”.“BlocksetID” LEFT JOIN “BlocksetEntry” G ON “B”.“ID” = “G”.“BlocksetID” LEFT JOIN “Block” F ON “G”.“BlockID” = “F”.“ID” LEFT JOIN “BlocksetEntry” I ON “E”.“ID” = “I”.“BlocksetID” LEFT JOIN “Block” H ON “I”.“BlockID” = “H”.“ID” WHERE “A”.“BlocksetId” >= 0 AND “D”.“FilesetID” = 335 AND (“I”.“Index” = 0 OR “I”.“Index” IS NULL) AND (“G”.“Index” = 0 OR “G”.“Index” IS NULL) ) J LEFT OUTER JOIN “BlocklistHash” K ON “K”.“BlocksetID” = “J”.“BlocksetID” ORDER BY “J”.“Path”, “K”.“Index” ) L LEFT OUTER JOIN “BlocklistHash” M ON “M”.“BlocksetID” = “L”.“MetablocksetID” ) UNION SELECT DISTINCT “Path” FROM ( SELECT “G”.“BlocksetID”, “G”.“ID”, “G”.“Path”, “G”.“Length”, “G”.“FullHash”, “G”.“Lastmodified”, “G”.“FirstMetaBlockHash”, “H”.“Hash” AS “MetablocklistHash” FROM ( SELECT “B”.“BlocksetID”, “B”.“ID”, “B”.“Path”, “D”.“Length”, “D”.“FullHash”, “A”.“Lastmodified”, “F”.“Hash” AS “FirstMetaBlockHash”, “C”.“BlocksetID” AS “MetaBlocksetID” FROM “FilesetEntry” A, “File” B, “Metadataset” C, “Blockset” D, “BlocksetEntry” E, “Block” F WHERE “A”.“FileID” = “B”.“ID” AND “B”.“MetadataID” = “C”.“ID” AND “C”.“BlocksetID” = “D”.“ID” AND “E”.“BlocksetID” = “C”.“BlocksetID” AND “E”.“BlockID” = “F”.“ID” AND “E”.“Index” = 0 AND (“B”.“BlocksetID” = -100 OR “B”.“BlocksetID” = -200) AND “A”.“FilesetID” = 335 ) G LEFT OUTER JOIN “BlocklistHash” H ON “H”.“BlocksetID” = “G”.“MetaBlocksetID” ORDER BY “G”.“Path”, “H”.“Index” ))

  • Jun 13, 2023 10:23 AM: ExecuteScalarInt64: SELECT COUNT(*) FROM “FilesetEntry” WHERE “FilesetEntry”.“FilesetID” = 296 took 0:00:00:00.052

  • Jun 13, 2023 10:23 AM: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM “FilesetEntry” WHERE “FilesetEntry”.“FilesetID” = 296

  • Jun 13, 2023 10:23 AM: ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT DISTINCT “Path” FROM ( SELECT “L”.“Path”, “L”.“Lastmodified”, “L”.“Filelength”, “L”.“Filehash”, “L”.“Metahash”, “L”.“Metalength”, “L”.“BlocklistHash”, “L”.“FirstBlockHash”, “L”.“FirstBlockSize”, “L”.“FirstMetaBlockHash”, “L”.“FirstMetaBlockSize”, “M”.“Hash” AS “MetaBlocklistHash” FROM ( SELECT “J”.“Path”, “J”.“Lastmodified”, “J”.“Filelength”, “J”.“Filehash”, “J”.“Metahash”, “J”.“Metalength”, “K”.“Hash” AS “BlocklistHash”, “J”.“FirstBlockHash”, “J”.“FirstBlockSize”, “J”.“FirstMetaBlockHash”, “J”.“FirstMetaBlockSize”, “J”.“MetablocksetID” FROM ( SELECT “A”.“Path” AS “Path”, “D”.“Lastmodified” AS “Lastmodified”, “B”.“Length” AS “Filelength”, “B”.“FullHash” AS “Filehash”, “E”.“FullHash” AS “Metahash”, “E”.“Length” AS “Metalength”, “A”.“BlocksetID” AS “BlocksetID”, “F”.“Hash” AS “FirstBlockHash”, “F”.“Size” AS “FirstBlockSize”, “H”.“Hash” AS “FirstMetaBlockHash”, “H”.“Size” AS “FirstMetaBlockSize”, “C”.“BlocksetID” AS “MetablocksetID” FROM “File” A LEFT JOIN “Blockset” B ON “A”.“BlocksetID” = “B”.“ID” LEFT JOIN “Metadataset” C ON “A”.“MetadataID” = “C”.“ID” LEFT JOIN “FilesetEntry” D ON “A”.“ID” = “D”.“FileID” LEFT JOIN “Blockset” E ON “E”.“ID” = “C”.“BlocksetID” LEFT JOIN “BlocksetEntry” G ON “B”.“ID” = “G”.“BlocksetID” LEFT JOIN “Block” F ON “G”.“BlockID” = “F”.“ID” LEFT JOIN “BlocksetEntry” I ON “E”.“ID” = “I”.“BlocksetID” LEFT JOIN “Block” H ON “I”.“BlockID” = “H”.“ID” WHERE “A”.“BlocksetId” >= 0 AND “D”.“FilesetID” = 296 AND (“I”.“Index” = 0 OR “I”.“Index” IS NULL) AND (“G”.“Index” = 0 OR “G”.“Index” IS NULL) ) J LEFT OUTER JOIN “BlocklistHash” K ON “K”.“BlocksetID” = “J”.“BlocksetID” ORDER BY “J”.“Path”, “K”.“Index” ) L LEFT OUTER JOIN “BlocklistHash” M ON “M”.“BlocksetID” = “L”.“MetablocksetID” ) UNION SELECT DISTINCT “Path” FROM ( SELECT “G”.“BlocksetID”, “G”.“ID”, “G”.“Path”, “G”.“Length”, “G”.“FullHash”, “G”.“Lastmodified”, “G”.“FirstMetaBlockHash”, “H”.“Hash” AS “MetablocklistHash” FROM ( SELECT “B”.“BlocksetID”, “B”.“ID”, “B”.“Path”, “D”.“Length”, “D”.“FullHash”, “A”.“Lastmodified”, “F”.“Hash” AS “FirstMetaBlockHash”, “C”.“BlocksetID” AS “MetaBlocksetID” FROM “FilesetEntry” A, “File” B, “Metadataset” C, “Blockset” D, “BlocksetEntry” E, “Block” F WHERE “A”.“FileID” = “B”.“ID” AND “B”.“MetadataID” = “C”.“ID” AND “C”.“BlocksetID” = “D”.“ID” AND “E”.“BlocksetID” = “C”.“BlocksetID” AND “E”.“BlockID” = “F”.“ID” AND “E”.“Index” = 0 AND (“B”.“BlocksetID” = -100 OR “B”.“BlocksetID” = -200) AND “A”.“FilesetID” = 296 ) G LEFT OUTER JOIN “BlocklistHash” H ON “H”.“BlocksetID” = “G”.“MetaBlocksetID” ORDER BY “G”.“Path”, “H”.“Index” )) took 0:07:19:54.075

  • Jun 13, 2023 3:03 AM: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT DISTINCT “Path” FROM ( SELECT “L”.“Path”, “L”.“Lastmodified”, “L”.“Filelength”, “L”.“Filehash”, “L”.“Metahash”, “L”.“Metalength”, “L”.“BlocklistHash”, “L”.“FirstBlockHash”, “L”.“FirstBlockSize”, “L”.“FirstMetaBlockHash”, “L”.“FirstMetaBlockSize”, “M”.“Hash” AS “MetaBlocklistHash” FROM ( SELECT “J”.“Path”, “J”.“Lastmodified”, “J”.“Filelength”, “J”.“Filehash”, “J”.“Metahash”, “J”.“Metalength”, “K”.“Hash” AS “BlocklistHash”, “J”.“FirstBlockHash”, “J”.“FirstBlockSize”, “J”.“FirstMetaBlockHash”, “J”.“FirstMetaBlockSize”, “J”.“MetablocksetID” FROM ( SELECT “A”.“Path” AS “Path”, “D”.“Lastmodified” AS “Lastmodified”, “B”.“Length” AS “Filelength”, “B”.“FullHash” AS “Filehash”, “E”.“FullHash” AS “Metahash”, “E”.“Length” AS “Metalength”, “A”.“BlocksetID” AS “BlocksetID”, “F”.“Hash” AS “FirstBlockHash”, “F”.“Size” AS “FirstBlockSize”, “H”.“Hash” AS “FirstMetaBlockHash”, “H”.“Size” AS “FirstMetaBlockSize”, “C”.“BlocksetID” AS “MetablocksetID” FROM “File” A LEFT JOIN “Blockset” B ON “A”.“BlocksetID” = “B”.“ID” LEFT JOIN “Metadataset” C ON “A”.“MetadataID” = “C”.“ID” LEFT JOIN “FilesetEntry” D ON “A”.“ID” = “D”.“FileID” LEFT JOIN “Blockset” E ON “E”.“ID” = “C”.“BlocksetID” LEFT JOIN “BlocksetEntry” G ON “B”.“ID” = “G”.“BlocksetID” LEFT JOIN “Block” F ON “G”.“BlockID” = “F”.“ID” LEFT JOIN “BlocksetEntry” I ON “E”.“ID” = “I”.“BlocksetID” LEFT JOIN “Block” H ON “I”.“BlockID” = “H”.“ID” WHERE “A”.“BlocksetId” >= 0 AND “D”.“FilesetID” = 296 AND (“I”.“Index” = 0 OR “I”.“Index” IS NULL) AND (“G”.“Index” = 0 OR “G”.“Index” IS NULL) ) J LEFT OUTER JOIN “BlocklistHash” K ON “K”.“BlocksetID” = “J”.“BlocksetID” ORDER BY “J”.“Path”, “K”.“Index” ) L LEFT OUTER JOIN “BlocklistHash” M ON “M”.“BlocksetID” = “L”.“MetablocksetID” ) UNION SELECT DISTINCT “Path” FROM ( SELECT “G”.“BlocksetID”, “G”.“ID”, “G”.“Path”, “G”.“Length”, “G”.“FullHash”, “G”.“Lastmodified”, “G”.“FirstMetaBlockHash”, “H”.“Hash” AS “MetablocklistHash” FROM ( SELECT “B”.“BlocksetID”, “B”.“ID”, “B”.“Path”, “D”.“Length”, “D”.“FullHash”, “A”.“Lastmodified”, “F”.“Hash” AS “FirstMetaBlockHash”, “C”.“BlocksetID” AS “MetaBlocksetID” FROM “FilesetEntry” A, “File” B, “Metadataset” C, “Blockset” D, “BlocksetEntry” E, “Block” F WHERE “A”.“FileID” = “B”.“ID” AND “B”.“MetadataID” = “C”.“ID” AND “C”.“BlocksetID” = “D”.“ID” AND “E”.“BlocksetID” = “C”.“BlocksetID” AND “E”.“BlockID” = “F”.“ID” AND “E”.“Index” = 0 AND (“B”.“BlocksetID” = -100 OR “B”.“BlocksetID” = -200) AND “A”.“FilesetID” = 296 ) G LEFT OUTER JOIN “BlocklistHash” H ON “H”.“BlocksetID” = “G”.“MetaBlocksetID” ORDER BY “G”.“Path”, “H”.“Index” ))

  • Jun 13, 2023 3:03 AM: ExecuteScalarInt64: SELECT COUNT(*) FROM “FilesetEntry” WHERE “FilesetEntry”.“FilesetID” = 265 took 0:00:00:00.041

  • Jun 13, 2023 3:03 AM: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM “FilesetEntry” WHERE “FilesetEntry”.“FilesetID” = 265

  • Jun 13, 2023 3:03 AM: ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT DISTINCT “Path” FROM ( SELECT “L”.“Path”, “L”.“Lastmodified”, “L”.“Filelength”, “L”.“Filehash”, “L”.“Metahash”, “L”.“Metalength”, “L”.“BlocklistHash”, “L”.“FirstBlockHash”, “L”.“FirstBlockSize”, “L”.“FirstMetaBlockHash”, “L”.“FirstMetaBlockSize”, “M”.“Hash” AS “MetaBlocklistHash” FROM ( SELECT “J”.“Path”, “J”.“Lastmodified”, “J”.“Filelength”, “J”.“Filehash”, “J”.“Metahash”, “J”.“Metalength”, “K”.“Hash” AS “BlocklistHash”, “J”.“FirstBlockHash”, “J”.“FirstBlockSize”, “J”.“FirstMetaBlockHash”, “J”.“FirstMetaBlockSize”, “J”.“MetablocksetID” FROM ( SELECT “A”.“Path” AS “Path”, “D”.“Lastmodified” AS “Lastmodified”, “B”.“Length” AS “Filelength”, “B”.“FullHash” AS “Filehash”, “E”.“FullHash” AS “Metahash”, “E”.“Length” AS “Metalength”, “A”.“BlocksetID” AS “BlocksetID”, “F”.“Hash” AS “FirstBlockHash”, “F”.“Size” AS “FirstBlockSize”, “H”.“Hash” AS “FirstMetaBlockHash”, “H”.“Size” AS “FirstMetaBlockSize”, “C”.“BlocksetID” AS “MetablocksetID” FROM “File” A LEFT JOIN “Blockset” B ON “A”.“BlocksetID” = “B”.“ID” LEFT JOIN “Metadataset” C ON “A”.“MetadataID” = “C”.“ID” LEFT JOIN “FilesetEntry” D ON “A”.“ID” = “D”.“FileID” LEFT JOIN “Blockset” E ON “E”.“ID” = “C”.“BlocksetID” LEFT JOIN “BlocksetEntry” G ON “B”.“ID” = “G”.“BlocksetID” LEFT JOIN “Block” F ON “G”.“BlockID” = “F”.“ID” LEFT JOIN “BlocksetEntry” I ON “E”.“ID” = “I”.“BlocksetID” LEFT JOIN “Block” H ON “I”.“BlockID” = “H”.“ID” WHERE “A”.“BlocksetId” >= 0 AND “D”.“FilesetID” = 265 AND (“I”.“Index” = 0 OR “I”.“Index” IS NULL) AND (“G”.“Index” = 0 OR “G”.“Index” IS NULL) ) J LEFT OUTER JOIN “BlocklistHash” K ON “K”.“BlocksetID” = “J”.“BlocksetID” ORDER BY “J”.“Path”, “K”.“Index” ) L LEFT OUTER JOIN “BlocklistHash” M ON “M”.“BlocksetID” = “L”.“MetablocksetID” ) UNION SELECT DISTINCT “Path” FROM ( SELECT “G”.“BlocksetID”, “G”.“ID”, “G”.“Path”, “G”.“Length”, “G”.“FullHash”, “G”.“Lastmodified”, “G”.“FirstMetaBlockHash”, “H”.“Hash” AS “MetablocklistHash” FROM ( SELECT “B”.“BlocksetID”, “B”.“ID”, “B”.“Path”, “D”.“Length”, “D”.“FullHash”, “A”.“Lastmodified”, “F”.“Hash” AS “FirstMetaBlockHash”, “C”.“BlocksetID” AS “MetaBlocksetID” FROM “FilesetEntry” A, “File” B, “Metadataset” C, “Blockset” D, “BlocksetEntry” E, “Block” F WHERE “A”.“FileID” = “B”.“ID” AND “B”.“MetadataID” = “C”.“ID” AND “C”.“BlocksetID” = “D”.“ID” AND “E”.“BlocksetID” = “C”.“BlocksetID” AND “E”.“BlockID” = “F”.“ID” AND “E”.“Index” = 0 AND (“B”.“BlocksetID” = -100 OR “B”.“BlocksetID” = -200) AND “A”.“FilesetID” = 265 ) G LEFT OUTER JOIN “BlocklistHash” H ON “H”.“BlocksetID” = “G”.“MetaBlocksetID” ORDER BY “G”.“Path”, “H”.“Index” )) took 0:06:59:27.341

  • Jun 12, 2023 8:04 PM: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT DISTINCT “Path” FROM ( SELECT “L”.“Path”, “L”.“Lastmodified”, “L”.“Filelength”, “L”.“Filehash”, “L”.“Metahash”, “L”.“Metalength”, “L”.“BlocklistHash”, “L”.“FirstBlockHash”, “L”.“FirstBlockSize”, “L”.“FirstMetaBlockHash”, “L”.“FirstMetaBlockSize”, “M”.“Hash” AS “MetaBlocklistHash” FROM ( SELECT “J”.“Path”, “J”.“Lastmodified”, “J”.“Filelength”, “J”.“Filehash”, “J”.“Metahash”, “J”.“Metalength”, “K”.“Hash” AS “BlocklistHash”, “J”.“FirstBlockHash”, “J”.“FirstBlockSize”, “J”.“FirstMetaBlockHash”, “J”.“FirstMetaBlockSize”, “J”.“MetablocksetID” FROM ( SELECT “A”.“Path” AS “Path”, “D”.“Lastmodified” AS “Lastmodified”, “B”.“Length” AS “Filelength”, “B”.“FullHash” AS “Filehash”, “E”.“FullHash” AS “Metahash”, “E”.“Length” AS “Metalength”, “A”.“BlocksetID” AS “BlocksetID”, “F”.“Hash” AS “FirstBlockHash”, “F”.“Size” AS “FirstBlockSize”, “H”.“Hash” AS “FirstMetaBlockHash”, “H”.“Size” AS “FirstMetaBlockSize”, “C”.“BlocksetID” AS “MetablocksetID” FROM “File” A LEFT JOIN “Blockset” B ON “A”.“BlocksetID” = “B”.“ID” LEFT JOIN “Metadataset” C ON “A”.“MetadataID” = “C”.“ID” LEFT JOIN “FilesetEntry” D ON “A”.“ID” = “D”.“FileID” LEFT JOIN “Blockset” E ON “E”.“ID” = “C”.“BlocksetID” LEFT JOIN “BlocksetEntry” G ON “B”.“ID” = “G”.“BlocksetID” LEFT JOIN “Block” F ON “G”.“BlockID” = “F”.“ID” LEFT JOIN “BlocksetEntry” I ON “E”.“ID” = “I”.“BlocksetID” LEFT JOIN “Block” H ON “I”.“BlockID” = “H”.“ID” WHERE “A”.“BlocksetId” >= 0 AND “D”.“FilesetID” = 265 AND (“I”.“Index” = 0 OR “I”.“Index” IS NULL) AND (“G”.“Index” = 0 OR “G”.“Index” IS NULL) ) J LEFT OUTER JOIN “BlocklistHash” K ON “K”.“BlocksetID” = “J”.“BlocksetID” ORDER BY “J”.“Path”, “K”.“Index” ) L LEFT OUTER JOIN “BlocklistHash” M ON “M”.“BlocksetID” = “L”.“MetablocksetID” ) UNION SELECT DISTINCT “Path” FROM ( SELECT “G”.“BlocksetID”, “G”.“ID”, “G”.“Path”, “G”.“Length”, “G”.“FullHash”, “G”.“Lastmodified”, “G”.“FirstMetaBlockHash”, “H”.“Hash” AS “MetablocklistHash” FROM ( SELECT “B”.“BlocksetID”, “B”.“ID”, “B”.“Path”, “D”.“Length”, “D”.“FullHash”, “A”.“Lastmodified”, “F”.“Hash” AS “FirstMetaBlockHash”, “C”.“BlocksetID” AS “MetaBlocksetID” FROM “FilesetEntry” A, “File” B, “Metadataset” C, “Blockset” D, “BlocksetEntry” E, “Block” F WHERE “A”.“FileID” = “B”.“ID” AND “B”.“MetadataID” = “C”.“ID” AND “C”.“BlocksetID” = “D”.“ID” AND “E”.“BlocksetID” = “C”.“BlocksetID” AND “E”.“BlockID” = “F”.“ID” AND “E”.“Index” = 0 AND (“B”.“BlocksetID” = -100 OR “B”.“BlocksetID” = -200) AND “A”.“FilesetID” = 265 ) G LEFT OUTER JOIN “BlocklistHash” H ON “H”.“BlocksetID” = “G”.“MetaBlocksetID” ORDER BY “G”.“Path”, “H”.“Index” ))

  • Jun 12, 2023 8:04 PM: ExecuteScalarInt64: SELECT COUNT(*) FROM “FilesetEntry” WHERE “FilesetEntry”.“FilesetID” = 233 took 0:00:00:00.059

  • Jun 12, 2023 8:04 PM: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM “FilesetEntry” WHERE “FilesetEntry”.“FilesetID” = 233

  • Jun 12, 2023 8:04 PM: ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT DISTINCT “Path” FROM ( SELECT “L”.“Path”, “L”.“Lastmodified”, “L”.“Filelength”, “L”.“Filehash”, “L”.“Metahash”, “L”.“Metalength”, “L”.“BlocklistHash”, “L”.“FirstBlockHash”, “L”.“FirstBlockSize”, “L”.“FirstMetaBlockHash”, “L”.“FirstMetaBlockSize”, “M”.“Hash” AS “MetaBlocklistHash” FROM ( SELECT “J”.“Path”, “J”.“Lastmodified”, “J”.“Filelength”, “J”.“Filehash”, “J”.“Metahash”, “J”.“Metalength”, “K”.“Hash” AS “BlocklistHash”, “J”.“FirstBlockHash”, “J”.“FirstBlockSize”, “J”.“FirstMetaBlockHash”, “J”.“FirstMetaBlockSize”, “J”.“MetablocksetID” FROM ( SELECT “A”.“Path” AS “Path”, “D”.“Lastmodified” AS “Lastmodified”, “B”.“Length” AS “Filelength”, “B”.“FullHash” AS “Filehash”, “E”.“FullHash” AS “Metahash”, “E”.“Length” AS “Metalength”, “A”.“BlocksetID” AS “BlocksetID”, “F”.“Hash” AS “FirstBlockHash”, “F”.“Size” AS “FirstBlockSize”, “H”.“Hash” AS “FirstMetaBlockHash”, “H”.“Size” AS “FirstMetaBlockSize”, “C”.“BlocksetID” AS “MetablocksetID” FROM “File” A LEFT JOIN “Blockset” B ON “A”.“BlocksetID” = “B”.“ID” LEFT JOIN “Metadataset” C ON “A”.“MetadataID” = “C”.“ID” LEFT JOIN “FilesetEntry” D ON “A”.“ID” = “D”.“FileID” LEFT JOIN “Blockset” E ON “E”.“ID” = “C”.“BlocksetID” LEFT JOIN “BlocksetEntry” G ON “B”.“ID” = “G”.“BlocksetID” LEFT JOIN “Block” F ON “G”.“BlockID” = “F”.“ID” LEFT JOIN “BlocksetEntry” I ON “E”.“ID” = “I”.“BlocksetID” LEFT JOIN “Block” H ON “I”.“BlockID” = “H”.“ID” WHERE “A”.“BlocksetId” >= 0 AND “D”.“FilesetID” = 233 AND (“I”.“Index” = 0 OR “I”.“Index” IS NULL) AND (“G”.“Index” = 0 OR “G”.“Index” IS NULL) ) J LEFT OUTER JOIN “BlocklistHash” K ON “K”.“BlocksetID” = “J”.“BlocksetID” ORDER BY “J”.“Path”, “K”.“Index” ) L LEFT OUTER JOIN “BlocklistHash” M ON “M”.“BlocksetID” = “L”.“MetablocksetID” ) UNION SELECT DISTINCT “Path” FROM ( SELECT “G”.“BlocksetID”, “G”.“ID”, “G”.“Path”, “G”.“Length”, “G”.“FullHash”, “G”.“Lastmodified”, “G”.“FirstMetaBlockHash”, “H”.“Hash” AS “MetablocklistHash” FROM ( SELECT “B”.“BlocksetID”, “B”.“ID”, “B”.“Path”, “D”.“Length”, “D”.“FullHash”, “A”.“Lastmodified”, “F”.“Hash” AS “FirstMetaBlockHash”, “C”.“BlocksetID” AS “MetaBlocksetID” FROM “FilesetEntry” A, “File” B, “Metadataset” C, “Blockset” D, “BlocksetEntry” E, “Block” F WHERE “A”.“FileID” = “B”.“ID” AND “B”.“MetadataID” = “C”.“ID” AND “C”.“BlocksetID” = “D”.“ID” AND “E”.“BlocksetID” = “C”.“BlocksetID” AND “E”.“BlockID” = “F”.“ID” AND “E”.“Index” = 0 AND (“B”.“BlocksetID” = -100 OR “B”.“BlocksetID” = -200) AND “A”.“FilesetID” = 233 ) G LEFT OUTER JOIN “BlocklistHash” H ON “H”.“BlocksetID” = “G”.“MetaBlocksetID” ORDER BY “G”.“Path”, “H”.“Index” )) took 0:07:02:49.088

I know this query!
That’s the one I tried to optimize in experimental faster db rebuild by gpatel-fr · Pull Request #4955 · duplicati/duplicati · GitHub
This change is not currently in Duplicati.
Basically Duplicati has tried to rebuild your DB. No idea why; an upgrade is not supposed to do that. Maybe this particular upgrade path has hit a snag; if it was really 2.0.6.1, that's old.

Although it’ll take a whole lot of reading to confirm, working from the more unique parts led me below:

then tacking on the front part looked like:

neither of which has changed in a very long time, so it’s puzzling why things would have gotten slower.
Add indexes to improve backup query performance #4687 is new, but was intended to increase speed.
As a wild guess, maybe the SQLite query optimizer has been thrown off temporarily and is doing bad things?
There’s a PRAGMA optimize at database close, but we might not have gotten to that point yet.
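If anyone wants to sanity-check that guess on their own database, here is a minimal sketch (run against a copy of the job database; sqlite_stat1 is SQLite's own statistics table, nothing Duplicati-specific):

-- sqlite_stat1 is created by ANALYZE; if it is missing or empty, the query
-- planner has never been given statistics for the (new) indexes.
SELECT "tbl", "idx", "stat" FROM "sqlite_stat1" ORDER BY "tbl";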

So why exactly did the DB rebuild itself? The same thing appears to be happening for another backup set, though this one is far larger and has been running for 10 days now on just the first query. At this rate, it might not complete this year. (The source files are local, but the destination is over WebDAV, which is likely contributing to the slow rebuild. Since I can’t improve that aspect…)

What is the best way to trash the prior data and start with a new backup? Or should I download a version between 2.0.6.1 and 2.0.7.1?

I’m disagreeing with the idea that it did, although it’s not certain. See the end of my post above yours.
The original post looks to already be in the backup (meaning the database update finished), doing verifying…
The odd thing is that it does this at the start of every backup (there’s an Advanced option to stop that, though).

Got any stats, e.g. what size is it? If over 100 GB, does Options screen 5 raise blocksize accordingly?
Any idea how many files and how many versions (seen on home page, or you can count dlist files)?

Far larger than the first larger one? If it’s different, stats on that too please.

It’s easy to see backend operations. About → Show log → Live → Information. If none, it’s irrelevant.

How big are the job databases for these? Would you be willing to run a slow query (or pieces thereof) manually, e.g. on a copy of the database using DB Browser for SQLite, a.k.a. Linux sqlitebrowser?
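To sketch what I mean by pieces thereof (the table and column names are taken from the logged query, and FilesetID 335 is just one value seen above):

-- Run against a COPY of the job database, e.g. in DB Browser's Execute SQL tab,
-- to see which indexes the planner chooses for the innermost join.
EXPLAIN QUERY PLAN
SELECT "A"."Path", "D"."Lastmodified"
FROM "File" A
LEFT JOIN "FilesetEntry" D ON "A"."ID" = "D"."FileID"
WHERE "A"."BlocksetId" >= 0 AND "D"."FilesetID" = 335;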

EDIT:

Saying what it shows on the status bar at the top of the screen would also help locate where it is now.

Which is what exactly? It makes analysis even harder when a query is identified only as “first”.
Does it look identical (except for FilesetID, which identifies the version of the backup) to ones here:

Backup gets stuck executing a query #4645, where the top (latest) live log line looks like the OP’s latest.
Add indexes to improve performance #4706 came from that, with additional discussion about what
I was wondering about – whether adding an INDEX should be followed up by ANALYZE or
PRAGMA optimize. Optimize is done at database close, but maybe it should be done after an index addition, or maybe even after any database upgrade change, for example what has been proposed here.
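For illustration only, pairing an index addition with a statistics refresh might look like this (a sketch, not the actual 2.0.7.1 migration; the index name is hypothetical):

-- Hypothetical index name; the point is the ANALYZE / PRAGMA optimize that follows it.
CREATE INDEX IF NOT EXISTS "ExampleIndexBlocksetEntryBlockId" ON "BlocksetEntry" ("BlockID");
ANALYZE;          -- rebuild planner statistics immediately, or
PRAGMA optimize;  -- let SQLite decide whether anything needs analyzing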

If somehow this is the key, you can probably force a database open and close by looking at job logs.
If that fixes the issue, great (although still a surprise). If not, any start-again still needs sizing clues…

The DB of this backup is this:

[root@voyageur Duplicati]# ls -halt ATQ*
-rw------- 1 root root 2.2G Jun 17 22:54 ATQJGACUOV.sqlite
-rw------- 1 root root 2.0G Jun 11 07:26 backup ATQJGACUOV 20230617105022.sqlite

The one which took nearly a week to complete was this one:

[root@voyageur Duplicati]# ls -halt VQA*
-rw------- 1 root root 817M Jun 17 22:36 VQAOCWDRRT.sqlite
-rw------- 1 root root 721M Jun 11 02:23 backup VQAOCWDRRT 20230611040357.sqlite

When I view the Live Log at Information level, I see only one line:

Jun 17, 2023 10:50 PM: The operation Backup has started

When I switch from Information to Profiling, I get this:


* Jun 17, 2023 10:59 PM: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT DISTINCT "Path" FROM ( SELECT "L"."Path", "L"."Lastmodified", "L"."Filelength", "L"."Filehash", "L"."Metahash", "L"."Metalength", "L"."BlocklistHash", "L"."FirstBlockHash", "L"."FirstBlockSize", "L"."FirstMetaBlockHash", "L"."FirstMetaBlockSize", "M"."Hash" AS "MetaBlocklistHash" FROM ( SELECT "J"."Path", "J"."Lastmodified", "J"."Filelength", "J"."Filehash", "J"."Metahash", "J"."Metalength", "K"."Hash" AS "BlocklistHash", "J"."FirstBlockHash", "J"."FirstBlockSize", "J"."FirstMetaBlockHash", "J"."FirstMetaBlockSize", "J"."MetablocksetID" FROM ( SELECT "A"."Path" AS "Path", "D"."Lastmodified" AS "Lastmodified", "B"."Length" AS "Filelength", "B"."FullHash" AS "Filehash", "E"."FullHash" AS "Metahash", "E"."Length" AS "Metalength", "A"."BlocksetID" AS "BlocksetID", "F"."Hash" AS "FirstBlockHash", "F"."Size" AS "FirstBlockSize", "H"."Hash" AS "FirstMetaBlockHash", "H"."Size" AS "FirstMetaBlockSize", "C"."BlocksetID" AS "MetablocksetID" FROM "File" A LEFT JOIN "Blockset" B ON "A"."BlocksetID" = "B"."ID" LEFT JOIN "Metadataset" C ON "A"."MetadataID" = "C"."ID" LEFT JOIN "FilesetEntry" D ON "A"."ID" = "D"."FileID" LEFT JOIN "Blockset" E ON "E"."ID" = "C"."BlocksetID" LEFT JOIN "BlocksetEntry" G ON "B"."ID" = "G"."BlocksetID" LEFT JOIN "Block" F ON "G"."BlockID" = "F"."ID" LEFT JOIN "BlocksetEntry" I ON "E"."ID" = "I"."BlocksetID" LEFT JOIN "Block" H ON "I"."BlockID" = "H"."ID" WHERE "A"."BlocksetId" >= 0 AND "D"."FilesetID" = 46 AND ("I"."Index" = 0 OR "I"."Index" IS NULL) AND ("G"."Index" = 0 OR "G"."Index" IS NULL) ) J LEFT OUTER JOIN "BlocklistHash" K ON "K"."BlocksetID" = "J"."BlocksetID" ORDER BY "J"."Path", "K"."Index" ) L LEFT OUTER JOIN "BlocklistHash" M ON "M"."BlocksetID" = "L"."MetablocksetID" ) UNION SELECT DISTINCT "Path" FROM ( SELECT "G"."BlocksetID", "G"."ID", "G"."Path", "G"."Length", "G"."FullHash", "G"."Lastmodified", "G"."FirstMetaBlockHash", "H"."Hash" AS "MetablocklistHash" FROM ( SELECT "B"."BlocksetID", "B"."ID", "B"."Path", "D"."Length", "D"."FullHash", "A"."Lastmodified", "F"."Hash" AS "FirstMetaBlockHash", "C"."BlocksetID" AS "MetaBlocksetID" FROM "FilesetEntry" A, "File" B, "Metadataset" C, "Blockset" D, "BlocksetEntry" E, "Block" F WHERE "A"."FileID" = "B"."ID" AND "B"."MetadataID" = "C"."ID" AND "C"."BlocksetID" = "D"."ID" AND "E"."BlocksetID" = "C"."BlocksetID" AND "E"."BlockID" = "F"."ID" AND "E"."Index" = 0 AND ("B"."BlocksetID" = -100 OR "B"."BlocksetID" = -200) AND "A"."FilesetID" = 46 ) G LEFT OUTER JOIN "BlocklistHash" H ON "H"."BlocksetID" = "G"."MetaBlocksetID" ORDER BY "G"."Path", "H"."Index" ))

* Jun 17, 2023 10:59 PM: ExecuteReader: SELECT "ID" FROM "Fileset" took 0:00:00:00.001

* Jun 17, 2023 10:59 PM: Starting - ExecuteReader: SELECT "ID" FROM "Fileset"

* Jun 17, 2023 10:59 PM: ExecuteScalarInt64: SELECT COUNT(*) FROM "FileLookup" WHERE "BlocksetID" != -100 AND "BlocksetID" != -200 AND NOT "BlocksetID" IN (SELECT "ID" FROM "Blockset") took 0:00:00:21.215

* Jun 17, 2023 10:59 PM: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "FileLookup" WHERE "BlocksetID" != -100 AND "BlocksetID" != -200 AND NOT "BlocksetID" IN (SELECT "ID" FROM "Blockset")

* Jun 17, 2023 10:59 PM: ExecuteScalarInt64: SELECT COUNT(*) FROM "Blockset" WHERE "Length" > 0 AND "ID" NOT IN (SELECT "BlocksetId" FROM "BlocksetEntry") took 0:00:00:34.587

* Jun 17, 2023 10:58 PM: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "Blockset" WHERE "Length" > 0 AND "ID" NOT IN (SELECT "BlocksetId" FROM "BlocksetEntry")

* Jun 17, 2023 10:58 PM: ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT * FROM (SELECT "N"."BlocksetID", (("N"."BlockCount" + 3200 - 1) / 3200) AS "BlocklistHashCountExpected", CASE WHEN "G"."BlocklistHashCount" IS NULL THEN 0 ELSE "G"."BlocklistHashCount" END AS "BlocklistHashCountActual" FROM (SELECT "BlocksetID", COUNT(*) AS "BlockCount" FROM "BlocksetEntry" GROUP BY "BlocksetID") "N" LEFT OUTER JOIN (SELECT "BlocksetID", COUNT(*) AS "BlocklistHashCount" FROM "BlocklistHash" GROUP BY "BlocksetID") "G" ON "N"."BlocksetID" = "G"."BlocksetID" WHERE "N"."BlockCount" > 1) WHERE "BlocklistHashCountExpected" != "BlocklistHashCountActual") took 0:00:00:13.020

* Jun 17, 2023 10:58 PM: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT * FROM (SELECT "N"."BlocksetID", (("N"."BlockCount" + 3200 - 1) / 3200) AS "BlocklistHashCountExpected", CASE WHEN "G"."BlocklistHashCount" IS NULL THEN 0 ELSE "G"."BlocklistHashCount" END AS "BlocklistHashCountActual" FROM (SELECT "BlocksetID", COUNT(*) AS "BlockCount" FROM "BlocksetEntry" GROUP BY "BlocksetID") "N" LEFT OUTER JOIN (SELECT "BlocksetID", COUNT(*) AS "BlocklistHashCount" FROM "BlocklistHash" GROUP BY "BlocksetID") "G" ON "N"."BlocksetID" = "G"."BlocksetID" WHERE "N"."BlockCount" > 1) WHERE "BlocklistHashCountExpected" != "BlocklistHashCountActual")

* Jun 17, 2023 10:58 PM: ExecuteScalarInt64: SELECT Count(*) FROM (SELECT DISTINCT "BlocksetID", "Index" FROM "BlocklistHash") took 0:00:00:00.038

* Jun 17, 2023 10:58 PM: Starting - ExecuteScalarInt64: SELECT Count(*) FROM (SELECT DISTINCT "BlocksetID", "Index" FROM "BlocklistHash")

* Jun 17, 2023 10:58 PM: ExecuteScalarInt64: SELECT Count(*) FROM "BlocklistHash" took 0:00:00:01.357

* Jun 17, 2023 10:58 PM: Starting - ExecuteScalarInt64: SELECT Count(*) FROM "BlocklistHash"

* Jun 17, 2023 10:58 PM: ExecuteReader: SELECT "CalcLen", "Length", "A"."BlocksetID", "File"."Path" FROM ( SELECT "A"."ID" AS "BlocksetID", IFNULL("B"."CalcLen", 0) AS "CalcLen", "A"."Length" FROM "Blockset" A LEFT OUTER JOIN ( SELECT "BlocksetEntry"."BlocksetID", SUM("Block"."Size") AS "CalcLen" FROM "BlocksetEntry" LEFT OUTER JOIN "Block" ON "Block"."ID" = "BlocksetEntry"."BlockID" GROUP BY "BlocksetEntry"."BlocksetID" ) B ON "A"."ID" = "B"."BlocksetID" ) A, "File" WHERE "A"."BlocksetID" = "File"."BlocksetID" AND "A"."CalcLen" != "A"."Length" took 0:00:03:30.673

* Jun 17, 2023 10:54 PM: Starting - ExecuteReader: SELECT "CalcLen", "Length", "A"."BlocksetID", "File"."Path" FROM ( SELECT "A"."ID" AS "BlocksetID", IFNULL("B"."CalcLen", 0) AS "CalcLen", "A"."Length" FROM "Blockset" A LEFT OUTER JOIN ( SELECT "BlocksetEntry"."BlocksetID", SUM("Block"."Size") AS "CalcLen" FROM "BlocksetEntry" LEFT OUTER JOIN "Block" ON "Block"."ID" = "BlocksetEntry"."BlockID" GROUP BY "BlocksetEntry"."BlocksetID" ) B ON "A"."ID" = "B"."BlocksetID" ) A, "File" WHERE "A"."BlocksetID" = "File"."BlocksetID" AND "A"."CalcLen" != "A"."Length"

* Jun 17, 2023 10:54 PM: ExecuteReader: SELECT "Key", "Value" FROM "Configuration" took 0:00:00:00.000

* Jun 17, 2023 10:54 PM: Starting - ExecuteReader: SELECT "Key", "Value" FROM "Configuration"

* Jun 17, 2023 10:54 PM: ExecuteReader: SELECT "Key", "Value" FROM "Configuration" took 0:00:00:00.000

* Jun 17, 2023 10:54 PM: Starting - ExecuteReader: SELECT "Key", "Value" FROM "Configuration"

* Jun 17, 2023 10:54 PM: ExecuteScalarInt64: SELECT COUNT(*) FROM "Block" WHERE "Size" > 102400 took 0:00:00:00.004

* Jun 17, 2023 10:54 PM: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "Block" WHERE "Size" > 102400

* Jun 17, 2023 10:54 PM: ExecuteReader: SELECT "Key", "Value" FROM "Configuration" took 0:00:00:00.000

* Jun 17, 2023 10:54 PM: Starting - ExecuteReader: SELECT "Key", "Value" FROM "Configuration"

* Jun 17, 2023 10:54 PM: ExecuteReader: SELECT "Key", "Value" FROM "Configuration" took 0:00:00:00.000

* Jun 17, 2023 10:54 PM: Starting - ExecuteReader: SELECT "Key", "Value" FROM "Configuration"

* Jun 17, 2023 10:54 PM: ExecuteScalarInt64: INSERT INTO "Operation" ("Description", "Timestamp") VALUES ("Backup", 1687060474); SELECT last_insert_rowid(); took 0:00:00:00.023

* Jun 17, 2023 10:54 PM: Starting - ExecuteScalarInt64: INSERT INTO "Operation" ("Description", "Timestamp") VALUES ("Backup", 1687060474); SELECT last_insert_rowid();

* Jun 17, 2023 10:50 PM: Starting - Running Backup

* Jun 17, 2023 10:50 PM: The operation Backup has started

Ok, that’s for the ATQ backup. I’m going to kill off mono and see if I can analyze the DB from the command line.

analyze;          -- took about 10 minutes
pragma optimize;  -- returned immediately

After starting Duplicati again, it ran through the other smaller backups for this system. It has now reached the VQA* backup and is “stuck” executing what appears to be the same complex query:

Jun 28, 2023 6:46 AM: ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT DISTINCT "Path" FROM ( SELECT "L"."Path", "L"."Lastmodified", "L"."Filelength", "L"."Filehash", "L"."Metahash", "L"."Metalength", "L"."BlocklistHash", "L"."FirstBlockHash", "L"."FirstBlockSize", "L"."FirstMetaBlockHash", "L"."FirstMetaBlockSize", "M"."Hash" AS "MetaBlocklistHash" FROM ( SELECT "J"."Path", "J"."Lastmodified", "J"."Filelength", "J"."Filehash", "J"."Metahash", "J"."Metalength", "K"."Hash" AS "BlocklistHash", "J"."FirstBlockHash", "J"."FirstBlockSize", "J"."FirstMetaBlockHash", "J"."FirstMetaBlockSize", "J"."MetablocksetID" FROM ( SELECT "A"."Path" AS "Path", "D"."Lastmodified" AS "Lastmodified", "B"."Length" AS "Filelength", "B"."FullHash" AS "Filehash", "E"."FullHash" AS "Metahash", "E"."Length" AS "Metalength", "A"."BlocksetID" AS "BlocksetID", "F"."Hash" AS "FirstBlockHash", "F"."Size" AS "FirstBlockSize", "H"."Hash" AS "FirstMetaBlockHash", "H"."Size" AS "FirstMetaBlockSize", "C"."BlocksetID" AS "MetablocksetID" FROM "File" A LEFT JOIN "Blockset" B ON "A"."BlocksetID" = "B"."ID" LEFT JOIN "Metadataset" C ON "A"."MetadataID" = "C"."ID" LEFT JOIN "FilesetEntry" D ON "A"."ID" = "D"."FileID" LEFT JOIN "Blockset" E ON "E"."ID" = "C"."BlocksetID" LEFT JOIN "BlocksetEntry" G ON "B"."ID" = "G"."BlocksetID" LEFT JOIN "Block" F ON "G"."BlockID" = "F"."ID" LEFT JOIN "BlocksetEntry" I ON "E"."ID" = "I"."BlocksetID" LEFT JOIN "Block" H ON "I"."BlockID" = "H"."ID" WHERE "A"."BlocksetId" >= 0 AND "D"."FilesetID" = 202 AND ("I"."Index" = 0 OR "I"."Index" IS NULL) AND ("G"."Index" = 0 OR "G"."Index" IS NULL) ) J LEFT OUTER JOIN "BlocklistHash" K ON "K"."BlocksetID" = "J"."BlocksetID" ORDER BY "J"."Path", "K"."Index" ) L LEFT OUTER JOIN "BlocklistHash" M ON "M"."BlocksetID" = "L"."MetablocksetID" ) UNION SELECT DISTINCT "Path" FROM ( SELECT "G"."BlocksetID", "G"."ID", "G"."Path", "G"."Length", "G"."FullHash", "G"."Lastmodified", "G"."FirstMetaBlockHash", "H"."Hash" AS "MetablocklistHash" FROM ( SELECT "B"."BlocksetID", "B"."ID", "B"."Path", "D"."Length", "D"."FullHash", "A"."Lastmodified", "F"."Hash" AS "FirstMetaBlockHash", "C"."BlocksetID" AS "MetaBlocksetID" FROM "FilesetEntry" A, "File" B, "Metadataset" C, "Blockset" D, "BlocksetEntry" E, "Block" F WHERE "A"."FileID" = "B"."ID" AND "B"."MetadataID" = "C"."ID" AND "C"."BlocksetID" = "D"."ID" AND "E"."BlocksetID" = "C"."BlocksetID" AND "E"."BlockID" = "F"."ID" AND "E"."Index" = 0 AND ("B"."BlocksetID" = -100 OR "B"."BlocksetID" = -200) AND "A"."FilesetID" = 202 ) G LEFT OUTER JOIN "BlocklistHash" H ON "H"."BlocksetID" = "G"."MetaBlocksetID" ORDER BY "G"."Path", "H"."Index" )) took 0:06:51:41.696
Jun 27, 2023 11:54 PM: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT DISTINCT "Path" FROM ( SELECT "L"."Path", "L"."Lastmodified", "L"."Filelength", "L"."Filehash", "L"."Metahash", "L"."Metalength", "L"."BlocklistHash", "L"."FirstBlockHash", "L"."FirstBlockSize", "L"."FirstMetaBlockHash", "L"."FirstMetaBlockSize", "M"."Hash" AS "MetaBlocklistHash" FROM ( SELECT "J"."Path", "J"."Lastmodified", "J"."Filelength", "J"."Filehash", "J"."Metahash", "J"."Metalength", "K"."Hash" AS "BlocklistHash", "J"."FirstBlockHash", "J"."FirstBlockSize", "J"."FirstMetaBlockHash", "J"."FirstMetaBlockSize", "J"."MetablocksetID" FROM ( SELECT "A"."Path" AS "Path", "D"."Lastmodified" AS "Lastmodified", "B"."Length" AS "Filelength", "B"."FullHash" AS "Filehash", "E"."FullHash" AS "Metahash", "E"."Length" AS "Metalength", "A"."BlocksetID" AS "BlocksetID", "F"."Hash" AS "FirstBlockHash", "F"."Size" AS "FirstBlockSize", "H"."Hash" AS "FirstMetaBlockHash", "H"."Size" AS "FirstMetaBlockSize", "C"."BlocksetID" AS "MetablocksetID" FROM "File" A LEFT JOIN "Blockset" B ON "A"."BlocksetID" = "B"."ID" LEFT JOIN "Metadataset" C ON "A"."MetadataID" = "C"."ID" LEFT JOIN "FilesetEntry" D ON "A"."ID" = "D"."FileID" LEFT JOIN "Blockset" E ON "E"."ID" = "C"."BlocksetID" LEFT JOIN "BlocksetEntry" G ON "B"."ID" = "G"."BlocksetID" LEFT JOIN "Block" F ON "G"."BlockID" = "F"."ID" LEFT JOIN "BlocksetEntry" I ON "E"."ID" = "I"."BlocksetID" LEFT JOIN "Block" H ON "I"."BlockID" = "H"."ID" WHERE "A"."BlocksetId" >= 0 AND "D"."FilesetID" = 202 AND ("I"."Index" = 0 OR "I"."Index" IS NULL) AND ("G"."Index" = 0 OR "G"."Index" IS NULL) ) J LEFT OUTER JOIN "BlocklistHash" K ON "K"."BlocksetID" = "J"."BlocksetID" ORDER BY "J"."Path", "K"."Index" ) L LEFT OUTER JOIN "BlocklistHash" M ON "M"."BlocksetID" = "L"."MetablocksetID" ) UNION SELECT DISTINCT "Path" FROM ( SELECT "G"."BlocksetID", "G"."ID", "G"."Path", "G"."Length", "G"."FullHash", "G"."Lastmodified", "G"."FirstMetaBlockHash", "H"."Hash" AS "MetablocklistHash" FROM ( SELECT "B"."BlocksetID", "B"."ID", "B"."Path", "D"."Length", "D"."FullHash", "A"."Lastmodified", "F"."Hash" AS "FirstMetaBlockHash", "C"."BlocksetID" AS "MetaBlocksetID" FROM "FilesetEntry" A, "File" B, "Metadataset" C, "Blockset" D, "BlocksetEntry" E, "Block" F WHERE "A"."FileID" = "B"."ID" AND "B"."MetadataID" = "C"."ID" AND "C"."BlocksetID" = "D"."ID" AND "E"."BlocksetID" = "C"."BlocksetID" AND "E"."BlockID" = "F"."ID" AND "E"."Index" = 0 AND ("B"."BlocksetID" = -100 OR "B"."BlocksetID" = -200) AND "A"."FilesetID" = 202 ) G LEFT OUTER JOIN "BlocklistHash" H ON "H"."BlocksetID" = "G"."MetaBlocksetID" ORDER BY "G"."Path", "H"."Index" ))

I’m going to kill off Duplicati again and attempt the analyze and optimize on the VQA* DB like I did on the ATQ* one. I don’t know if it will have any effect, since I haven’t started the ATQ* backup again, but it can’t hurt at this point, right?

Ok… killed it; analyze took 2-3 minutes. Afterwards the backup started and appeared to run some of the same queries again, but they returned within a minute or two this time, and it’s actually backing up data now. This is the system/VQA* backup. It normally completes in ~30 minutes; once it’s done, the email one should start, and I’ll know whether analyze makes a difference for the ATQ* backup.

I know this update is chaotic, but I’m trying to include as much information as I can. This sure looks like a bug that others are going to hit, and possibly keep hitting - I say this because the post-upgrade “system” backup took 5 days and 16 hours to complete, and when it started again last night, it was on pace to take a similar amount of time. Since manually doing the analyze, things are looking good for this backup, but I’m not confident that it will not erode or revert over time.

It looks kind of large. The general rule of thumb is to set blocksize proportionately larger if the source is over 100 GB. The default of 100 KB is too small for that, creating lots of blocks, which slows things down (and makes the database large). An increase in the default has been proposed, but changing the blocksize of an existing backup is sadly not possible.
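As a rough worked example of the scaling (my arithmetic, not a measurement from this thread):

100 GB at the default 100 KB blocksize ≈ 1,000,000 blocks
100 GB at a 1 MB blocksize ≈ 100,000 blocks

so a 10x larger blocksize cuts the Block and BlocksetEntry row counts - and with them the database size and the work these queries do - by roughly the same factor.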

Assuming the time is plausible (I’m not sure how well one can pick up history when the log level changes), the first line shown is actually the last one run (the log is reverse chronological), and it looks like the expected culprit.

The blocksize is the default for all my databases :frowning:

Backup performance still seems to be fast, though (well, back to “normal” levels). The large backup only runs on the weekend, but I did run it a second time and it ran normally - completed in under 4 hours. The system backup is not very large, though:

System backup - Source: 12.69GB
Email backup - Source: 95.83GB

The system backup last night took 15 minutes, but before I manually ran analyze; on the database, the backup took 5 days and 16 hours. (I can share the email from the backup completion if that would be useful.) Are you saying that this can be attributed entirely to not having increased my blocksize? This smells like a bug to me, and possibly a workaround for #4645.

Looking over the command line options, there doesn’t seem to be a way to perform an analyze - should there be one? Or should an analyze be done whenever the database version changes, i.e. when tables or indexes are added?
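For anyone else who hits this, the manual route I used boils down to roughly the following (a sketch; stop Duplicati first, and open whichever job database the ls listing above points at):

-- Open the job database in the sqlite3 shell, e.g.:
--   sqlite3 VQAOCWDRRT.sqlite   (run from the Duplicati data directory shown above)
ANALYZE;          -- rebuilds planner statistics; took minutes here
PRAGMA optimize;  -- cheap, and keeps statistics fresh going forward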

No. Especially given your source sizes, you’re at least within what’s thought to be reasonable.

The database change with 2.0.7.1 was to add four indexes, making the DB a bit bigger, but also
maybe making the query slow temporarily until SQLite had enough analysis information to plan it well.
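If it helps to check what is actually there, this lists the indexes in a job database (a sketch; I haven’t listed out the exact names the 2.0.7.1 migration adds):

-- Show every index and the table it belongs to; the four new ones should
-- show up here after a successful upgrade.
SELECT "name", "tbl_name" FROM "sqlite_master"
WHERE "type" = 'index' ORDER BY "tbl_name", "name";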

One suspect is that the new indexes are somehow making the SQL query plan worse until
statistics are set up, e.g. with the ANALYZE and PRAGMA optimize you ran manually. If you get
another backup that starts slow, it’d be nice if you could try the logs view as mentioned, which can
be done by less SQL-savvy people. The SQLite site’s PRAGMA optimize documentation sounds like
it’s intended to run ANALYZE when it’s needed, and in your case it seems like it was needed.

Determination Of When To Run Analyze

One or more indexes of the table are currently unanalyzed

so ANALYZE might have run if you’d done some lightweight operation first. This wasn’t meant to happen.

That was proven to help in at least some cases, but sometimes these things vary with the situation.
Somehow 2.0.7.1 managed to interact badly with Unraid FUSE in a similar slowdown found at

Backup Runtime after 2.0.7.1 update - a long thread, so I linked to the happier conclusion; however,
the details of how things managed to get worse needed more investigation than could be provided.

If your system were still slow, I’d have asked for system performance info, e.g. is the disk busy?
SQLite can’t fill up all the CPU cores, but it’s very able to occupy a drive, especially a hard disk.
Typically you’d be able to spot the sluggishness in other things you run live. Did you see any?