Untraceable long operation (query?) during backup

Hello everyone; this is my first post.

Duplicati version : 2.0.5.1_beta_2020-01-18
Operating system : Windows 7 Pro SP1
Backend : Backblaze
Backup size : source 351GB, backup 320GB, 13 versions

I was on 2.0.3.3 beta for a long time and recently upgraded to 2.0.5.1 beta. The backup job usually took less than an hour, as there isn’t much change between runs. Now it takes more than 12 hours.

Things seem to run as normal up to a more-or-less random point (it happens at different places/times/filenames) during the “XXX files (YYY GB) to go at ZZ KB/s” stage. There, Duplicati seems to get stuck for several hours.
During this time:

  • “Current action” is shown as “Backup_ProcessingFiles”.
  • The mentioned display (“XXX files (YYY GB) to go at ZZ KB/s”) doesn’t update.
  • The “lastPgEvent” in About->Server state properties doesn’t update.
  • The log in About->Show log->Live doesn’t update, even in Profiling.
  • Process Hacker shows Duplicati.Server.exe continuously busy on one thread (25% CPU with 4 cores available) at a very high I/O rate (~700 MB/s).
  • Process Monitor shows Duplicati.Server.exe continually reading from an “etilqs_XXXX” file in the temp directory at a very high rate. It seems to read the file again, and again, and again. There are some (much, much fewer) occasional writes to the main .sqlite database file in between.

Now, according to what I’ve read in these forums, these symptoms seem to point to some slow, inefficient query running in SQLite. The problem is that I can’t pinpoint it. There is nothing in the log indicating that anything is happening during that time. I’m using Profiling in both console and log file (no difference between them), but nothing shows a query or anything else starting before the hang or finishing after it. In fact, the profiling information doesn’t show anything running for an excessively long time. I’ve even activated “profile-all-database-queries”, to no avail. The only indication that much time has elapsed is the “BackupMainOperation took” message, which shows 15 hours or so, but I believe this is just a grand total.
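For reference, the logging options I have set look like this (the log path is just a placeholder here):

  --log-file=C:\Temp\duplicati-profiling.log
  --log-file-log-level=Profiling
  --profile-all-database-queries=true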

Any ideas? Is there something I could do to trace what Duplicati (or sqlite) is doing during that time?
Thanks in advance!

Welcome to the forum @droutsis

With what you’ve got so far (nice job BTW) you’ve already chased debugging further than most would.

A Profiling log with --profile-all-database-queries=true “should” show you the start and end of a slow query, however you’re not seeing that. SQLite doesn’t “seem” like the kind of DB that does background tasks.

I’m not sure whether an SQLite expert (etilqs is an SQLite temporary file) would have anything helpful to say.

I guess I’ll ask if @Ingmyv (who just helped someone else) has ideas. My only other idea is a debugger, however it would be more to figure out where Duplicati was than to be able to look at SQLite internals.

dnSpy .NET debugger and assembly editor and Making an Image Easier to Debug might give clues for why Duplicati is reading the etilqs file, however it might require developers to make sense of findings…

OTOH the seemingly random nature of the problem is another obstacle. Moving targets can be difficult.

It is super weird that nothing is appearing even at Profiling level; but perhaps the issue isn’t slow queries. What actually happens ‘just before’ the hang?

ETILQS = SQLITE backwards, and the files are obliquely mentioned here:

The prefix on temporary filenames on Windows is changed from “sqlite” to “etilqs”.
SQLite Release 3.3.9 On 2007-01-04

Apparently it was a ploy to cut down on nuisance phone calls to the developers by making them harder to Google.

I hesitate to stab in the dark here, but based on what those files are used for, my suspicion is that we could be seeing so many reads from this file because it is something like a file used to materialize a subquery, or one used for an aggregate operation (GROUP BY x, window functions, etc.) or a sort (ORDER BY x, LIMIT x) that couldn’t fit in the allotted amount of RAM. According to the SQLite documentation:

More complex queries may or may not be able to employ query flattening to avoid the temporary table. Whether or not the query can be flattened depends on such factors as whether or not the subquery or outer query contain aggregate functions, ORDER BY or GROUP BY clauses, LIMIT clauses, and so forth. The rules for when a query can and cannot be flattened are very complex and are beyond the scope of this document.
Temporary Files Used By SQLite

So it could even be both!
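As a minimal illustration of that (plain Python sqlite3, nothing Duplicati-specific): a sort with no usable index shows up in the query plan as a temp B-tree, which is exactly the kind of structure that gets spilled to an etilqs_* file once it outgrows the cache:

  import sqlite3

  con = sqlite3.connect(":memory:")
  con.execute("CREATE TABLE t (a INTEGER, b TEXT)")

  # No index on "b", so SQLite must sort through a temporary B-tree; on a
  # disk-based database that structure spills to an etilqs_* temp file when
  # it exceeds the configured cache (see PRAGMA temp_store).
  for row in con.execute("EXPLAIN QUERY PLAN SELECT * FROM t ORDER BY b"):
      print(row)
  # Typically prints a row whose detail column says
  # 'USE TEMP B-TREE FOR ORDER BY' (exact wording varies by SQLite version).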

But we’d strongly anticipate that to appear in the log, and its absence is a big problem with this theory.

When you say that it is reading from an etilqs_XXXX file, do you mean that it’s reading from various files named like that, or from a single one?

EDIT: @ts678 does .NET have an equivalent to Java thread dumps? I regret that I am not familiar with that, and googling seems inconclusive. If so, taking 3-5 thread dumps about 1 minute apart may reveal more clues about WHAT is “stuck”.

I’m not either, but my Google search didn’t turn up anything. I’m also not a .NET or Windows developer.

The brute-force equivalent with a debugger would be to break in every now and then, and see what happens to be on the stack of each thread at the time. If it’s inside SQLite, though, that’s not .NET code, and so it might require something that can deal with native Windows code, plus lots of SQLite expertise.
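One brute-force option along those lines (untested here, so treat it as a suggestion only): Sysinternals ProcDump can write several full dumps at a fixed interval, which captures every thread’s stack for later inspection in a debugger:

  procdump -ma -s 60 -n 5 Duplicati.Server.exe

That would write 5 full dumps 60 seconds apart; making sense of native SQLite frames in them would still need the right symbols and expertise, though.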

Thanks for the replies, everyone.

Yes, that’s what I gathered too, that the etilqs file is an SQLite temporary file. I can confirm that during the “long query” phase it is just one single file that is read again and again.

As for what happens just before and just after, the log as I’ve said isn’t helpful. An example I’ve kept:

> 2020-01-27 02:41:28 +02 - [Verbose-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-IncludingPath]: Including path as no filters matched: XXXXXXXXXXXXXXXXX
> 2020-01-27 18:15:25 +02 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: INSERT INTO "ChangeJournalData" ("FilesetID", "VolumeName", "JournalID", "NextUSN", "ConfigHash") VALUES (1801, "C:\", 129894229788396217, 129619460528, "F0AE63E76D8FBA8C6FFE9E0457438164712FEF3A25771FE3B861BAF06D0008BF");
> 2020-01-27 18:15:25 +02 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: INSERT INTO "ChangeJournalData" ("FilesetID", "VolumeName", "JournalID", "NextUSN", "ConfigHash") VALUES (1801, "C:\", 129894229788396217, 129619460528, "F0AE63E76D8FBA8C6FFE9E0457438164712FEF3A25771FE3B861BAF06D0008BF"); took 0:00:00:00.001

Note the time difference between the first and second lines.

The exact things logged before and after are not always the same, as I’ve said, so I don’t think they are important. I don’t know whether multi-threading may explain this: different threads go through the file list, each doing their own work at its own rhythm, and what you see is just the last thing one of them logged before the ‘something’ happened…

It’s always possible (we’re only human) that some profiling statement was forgotten somewhere, and so nothing appears in the log.
Also - whether this is possible or not depends on the way communication with the database is coded, so I may be totally wrong - could it be that a “spontaneous” query is generated somewhere? (What I have in mind is what happens when, instead of the DB communication being coded completely by hand, some framework is used, like Hibernate in Java. In that case, queries may get emitted sort of automatically at some points. Someone familiar with the code can probably shed light on this.)

Unfortunately, I’m not familiar with .NET either, so going into debugging that would be a rather steep learning curve. But some questions, if you have any ideas / pointers:

  • Is there some way / tool to open the offending etilqs file and make some sense of what is in it, or is it an opaque SQLite thing? It might give some hint, if doable.
  • Is it possible somehow to activate some trace logging on the SQLite side, so SQLite logs everything it does, regardless of reason?
  • Alternatively, is it possible to get some logging of the communication between Duplicati and SQLite so we can see everything that goes back and forth? (From what I gather this is probably done with plain C-style function calls, so that would probably be impossible, but it doesn’t hurt to ask.)

Thanks again.

Lacking solid info from a web search, the best thing is to try. If the file will allow it, try to get a copy of it, then try DB Browser for SQLite and see how it does. Alternatively, use a hex viewer and Database File Format to at least see whether 1.3.1. Magic Header String says “SQLite format 3”; if so, it might just be a database using a special name. The source code seems to have an open routine that either takes a name or invents one if a name is not given. In addition, 2.6. TEMP Databases and 2.1. Record Format hint at the normal format. While looking at how page-oriented SQLite is: are the reads at even multiples of the 4096-byte page size?
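If a hex viewer is inconvenient, a few lines of Python do the same magic-header check (the path below is a placeholder, since the real etilqs names are random, and the file may be locked while in use, hence the suggestion to copy it first):

  # Placeholder path; copy the real etilqs file somewhere readable first.
  path = r"C:\Users\you\AppData\Local\Temp\etilqs_copy"

  with open(path, "rb") as f:
      header = f.read(16)

  # A real SQLite database file starts with b"SQLite format 3\x00".
  print(header)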

I’m not an SQLite expert; however, it looks like it is possible in theory. I’m not spotting signs of it in Duplicati.

Add verbose logging to System.Data.SQLite

Again, this is not an area I’m expert in, but I think System.Data.SQLite provides a .NET interface to the C library. Regardless, a .NET Framework debugger should be able to breakpoint the run, but logs may be hard.
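Just to illustrate the kind of statement-level tracing being asked about, here it is in Python’s sqlite3 (an analogy only; it is not what Duplicati uses):

  import sqlite3

  con = sqlite3.connect(":memory:")
  con.set_trace_callback(print)   # called with the text of each SQL statement

  con.execute("CREATE TABLE t (x INTEGER)")  # the trace prints this statement
  con.execute("INSERT INTO t VALUES (1)")    # ...and this one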

Someone proficient in .NET Framework profiling, perhaps with Visual Studio, might also be able to get some sense of what areas are active to what extent. I don’t have Visual Studio and so can’t guide you.

Even though you see the slow spot move around, it might be useful to get a higher-level view of it, e.g. using --log-file-log-level=Retry for a top-level view, or Verbose for a bit more. Just seeing the tiny snippet suggests the USN code may have run, and it seems odd to see it there, because I think that’s near the end of the backup, whereas FileEnumerationProcess is at the start of processing, which you can read about here. Where’s the rest?

You can also try turning off --usn-policy, which is a completely different implementation. Is it compatible? Possibly a glimpse of all your settings (with confidential info removed) would help, e.g. starting with Export.
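For reference, turning it off means adding the advanced option below (on the job’s Options screen or the command line):

  --usn-policy=off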

That log trace doesn’t (clearly) show an SQL problem, which of course doesn’t prove one doesn’t exist. The busy-ness with the temp file does raise questions.

Looking at the source code, which (as I’m not a programmer) is always an exercise in misunderstandings, it seems from lines 1000-1001 that records from the Windows USN journal are being inserted one at a time (foreach (var entry in data)). That is a potential bottleneck (details: SQLite Frequently Asked Questions), but it does not seem to be what’s at issue here: we only see this insert once, and the huge gap doesn’t appear to coincide with it, but rather follows it.
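For what it’s worth, the FAQ’s point is that each standalone INSERT pays for its own transaction commit, and batching the rows into one transaction is the usual fix. A small illustration in plain Python sqlite3 (not Duplicati’s code):

  import sqlite3

  con = sqlite3.connect("example.db")  # commit cost only matters on disk
  con.execute("CREATE TABLE IF NOT EXISTS j (usn INTEGER)")

  rows = ((i,) for i in range(100000))
  with con:  # one transaction, one commit, for all the rows
      con.executemany("INSERT INTO j VALUES (?)", rows)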

Is there some kind of Windows API that has to be called (surely) to get those records, and could that be something that needs an expensive external Windows action?

It looked to me like it precedes it. There’s actually temp-table work at the end of the backup, but the big gap in the log (corresponding to the big gap in time) isn’t showing it. I’d still like to see a log without the big gap…

Here’s my lead-up to the ChangeJournalData update:

2020-02-05 10:07:13 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: CREATE TEMP TABLE "DelVolSetIds-611938B22C66B241A26A16B29C8F45F7" ("ID" INTEGER PRIMARY KEY)
2020-02-05 10:07:13 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: CREATE TEMP TABLE "DelVolSetIds-611938B22C66B241A26A16B29C8F45F7" ("ID" INTEGER PRIMARY KEY) took 0:00:00:00.000
2020-02-05 10:07:13 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: SELECT "ID" FROM "Remotevolume" WHERE "Name" = "duplicati-b18c95e3e366d4d8698b2f770e14518fa.dblock.zip.aes"
2020-02-05 10:07:13 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: SELECT "ID" FROM "Remotevolume" WHERE "Name" = "duplicati-b18c95e3e366d4d8698b2f770e14518fa.dblock.zip.aes" took 0:00:00:00.000
2020-02-05 10:07:13 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: DELETE FROM "IndexBlockLink" WHERE "BlockVolumeID" IN (SELECT "ID" FROM "DelVolSetIds-611938B22C66B241A26A16B29C8F45F7" ) OR "IndexVolumeID" IN (SELECT "ID" FROM "DelVolSetIds-611938B22C66B241A26A16B29C8F45F7" )
2020-02-05 10:07:13 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: DELETE FROM "IndexBlockLink" WHERE "BlockVolumeID" IN (SELECT "ID" FROM "DelVolSetIds-611938B22C66B241A26A16B29C8F45F7" ) OR "IndexVolumeID" IN (SELECT "ID" FROM "DelVolSetIds-611938B22C66B241A26A16B29C8F45F7" ) took 0:00:00:00.000
2020-02-05 10:07:13 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: DELETE FROM "FilesetEntry" WHERE "FilesetID" IN (SELECT "ID" FROM "Fileset" WHERE "VolumeID" IN (SELECT "ID" FROM "DelVolSetIds-611938B22C66B241A26A16B29C8F45F7" ))
2020-02-05 10:07:13 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: DELETE FROM "FilesetEntry" WHERE "FilesetID" IN (SELECT "ID" FROM "Fileset" WHERE "VolumeID" IN (SELECT "ID" FROM "DelVolSetIds-611938B22C66B241A26A16B29C8F45F7" )) took 0:00:00:00.000
2020-02-05 10:07:13 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: DELETE FROM "ChangeJournalData" WHERE "FilesetID" IN (SELECT "ID" FROM "Fileset" WHERE "VolumeID" IN (SELECT "ID" FROM "DelVolSetIds-611938B22C66B241A26A16B29C8F45F7" ))
2020-02-05 10:07:13 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: DELETE FROM "ChangeJournalData" WHERE "FilesetID" IN (SELECT "ID" FROM "Fileset" WHERE "VolumeID" IN (SELECT "ID" FROM "DelVolSetIds-611938B22C66B241A26A16B29C8F45F7" )) took 0:00:00:00.000
2020-02-05 10:07:13 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: DELETE FROM "Fileset" WHERE "VolumeID"  IN (SELECT "ID" FROM "DelVolSetIds-611938B22C66B241A26A16B29C8F45F7" )
2020-02-05 10:07:13 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: DELETE FROM "Fileset" WHERE "VolumeID"  IN (SELECT "ID" FROM "DelVolSetIds-611938B22C66B241A26A16B29C8F45F7" ) took 0:00:00:00.000
2020-02-05 10:07:13 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: CREATE TEMP TABLE "DelBlockSetIds-611938B22C66B241A26A16B29C8F45F7" ("ID" INTEGER PRIMARY KEY)
2020-02-05 10:07:13 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: CREATE TEMP TABLE "DelBlockSetIds-611938B22C66B241A26A16B29C8F45F7" ("ID" INTEGER PRIMARY KEY) took 0:00:00:00.000
2020-02-05 10:07:13 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: INSERT OR IGNORE INTO "DelBlockSetIds-611938B22C66B241A26A16B29C8F45F7" ("ID") SELECT "BlocksetEntry"."BlocksetID" FROM "BlocksetEntry", "Block"  WHERE "BlocksetEntry"."BlockID" = "Block"."ID" AND "Block"."VolumeID" IN (SELECT "ID" FROM "DelVolSetIds-611938B22C66B241A26A16B29C8F45F7" ) UNION ALL SELECT "BlocksetID" FROM "BlocklistHash" WHERE "Hash" IN (SELECT "Hash" FROM "Block" WHERE "VolumeID" IN (SELECT "ID" FROM "DelVolSetIds-611938B22C66B241A26A16B29C8F45F7" ))
2020-02-05 10:07:13 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: INSERT OR IGNORE INTO "DelBlockSetIds-611938B22C66B241A26A16B29C8F45F7" ("ID") SELECT "BlocksetEntry"."BlocksetID" FROM "BlocksetEntry", "Block"  WHERE "BlocksetEntry"."BlockID" = "Block"."ID" AND "Block"."VolumeID" IN (SELECT "ID" FROM "DelVolSetIds-611938B22C66B241A26A16B29C8F45F7" ) UNION ALL SELECT "BlocksetID" FROM "BlocklistHash" WHERE "Hash" IN (SELECT "Hash" FROM "Block" WHERE "VolumeID" IN (SELECT "ID" FROM "DelVolSetIds-611938B22C66B241A26A16B29C8F45F7" )) took 0:00:00:00.003
2020-02-05 10:07:13 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: DELETE FROM "FileLookup" WHERE "BlocksetID" IN (SELECT "ID" FROM "DelBlockSetIds-611938B22C66B241A26A16B29C8F45F7" ) OR "MetadataID" IN (SELECT "ID" FROM "DelBlockSetIds-611938B22C66B241A26A16B29C8F45F7" )
2020-02-05 10:07:13 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: DELETE FROM "FileLookup" WHERE "BlocksetID" IN (SELECT "ID" FROM "DelBlockSetIds-611938B22C66B241A26A16B29C8F45F7" ) OR "MetadataID" IN (SELECT "ID" FROM "DelBlockSetIds-611938B22C66B241A26A16B29C8F45F7" ) took 0:00:00:00.000
2020-02-05 10:07:13 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: DELETE FROM "Metadataset" WHERE "BlocksetID" IN (SELECT "ID" FROM "DelBlockSetIds-611938B22C66B241A26A16B29C8F45F7" )
2020-02-05 10:07:13 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: DELETE FROM "Metadataset" WHERE "BlocksetID" IN (SELECT "ID" FROM "DelBlockSetIds-611938B22C66B241A26A16B29C8F45F7" ) took 0:00:00:00.000
2020-02-05 10:07:13 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: DELETE FROM "Blockset" WHERE "ID" IN (SELECT "ID" FROM "DelBlockSetIds-611938B22C66B241A26A16B29C8F45F7" )
2020-02-05 10:07:13 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: DELETE FROM "Blockset" WHERE "ID" IN (SELECT "ID" FROM "DelBlockSetIds-611938B22C66B241A26A16B29C8F45F7" ) took 0:00:00:00.000
2020-02-05 10:07:13 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: DELETE FROM "BlocksetEntry" WHERE "BlocksetID" IN (SELECT "ID" FROM "DelBlockSetIds-611938B22C66B241A26A16B29C8F45F7" )
2020-02-05 10:07:13 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: DELETE FROM "BlocksetEntry" WHERE "BlocksetID" IN (SELECT "ID" FROM "DelBlockSetIds-611938B22C66B241A26A16B29C8F45F7" ) took 0:00:00:00.000
2020-02-05 10:07:13 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: DELETE FROM "BlocklistHash" WHERE "Hash" IN (SELECT "Hash" FROM "Block" WHERE "VolumeID" IN (SELECT "ID" FROM "DelVolSetIds-611938B22C66B241A26A16B29C8F45F7" ))
2020-02-05 10:07:13 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: DELETE FROM "BlocklistHash" WHERE "Hash" IN (SELECT "Hash" FROM "Block" WHERE "VolumeID" IN (SELECT "ID" FROM "DelVolSetIds-611938B22C66B241A26A16B29C8F45F7" )) took 0:00:00:00.004
2020-02-05 10:07:13 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: DELETE FROM "Block" WHERE "VolumeID" IN (SELECT "ID" FROM "DelVolSetIds-611938B22C66B241A26A16B29C8F45F7" )
2020-02-05 10:07:13 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: DELETE FROM "Block" WHERE "VolumeID" IN (SELECT "ID" FROM "DelVolSetIds-611938B22C66B241A26A16B29C8F45F7" ) took 0:00:00:00.000
2020-02-05 10:07:13 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: DELETE FROM "DeletedBlock" WHERE "VolumeID" IN (SELECT "ID" FROM "DelVolSetIds-611938B22C66B241A26A16B29C8F45F7" )
2020-02-05 10:07:13 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: DELETE FROM "DeletedBlock" WHERE "VolumeID" IN (SELECT "ID" FROM "DelVolSetIds-611938B22C66B241A26A16B29C8F45F7" ) took 0:00:00:00.000
2020-02-05 10:07:13 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: DELETE FROM "DelBlockSetIds-611938B22C66B241A26A16B29C8F45F7" 
2020-02-05 10:07:13 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: DELETE FROM "DelBlockSetIds-611938B22C66B241A26A16B29C8F45F7"  took 0:00:00:00.000
2020-02-05 10:07:13 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: DELETE FROM "DelVolSetIds-611938B22C66B241A26A16B29C8F45F7" 
2020-02-05 10:07:13 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: DELETE FROM "DelVolSetIds-611938B22C66B241A26A16B29C8F45F7"  took 0:00:00:00.000
2020-02-05 10:07:13 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: DROP TABLE IF EXISTS "DelBlockSetIds-611938B22C66B241A26A16B29C8F45F7" 
2020-02-05 10:07:13 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: DROP TABLE IF EXISTS "DelBlockSetIds-611938B22C66B241A26A16B29C8F45F7"  took 0:00:00:00.000
2020-02-05 10:07:13 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: DROP TABLE IF EXISTS "DelVolSetIds-611938B22C66B241A26A16B29C8F45F7" 
2020-02-05 10:07:13 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: DROP TABLE IF EXISTS "DelVolSetIds-611938B22C66B241A26A16B29C8F45F7"  took 0:00:00:00.000
2020-02-05 10:07:14 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: SELECT "ID" FROM "Fileset" WHERE "Timestamp" < 1580915184 AND "ID" != 448 ORDER BY "Timestamp" DESC 
2020-02-05 10:07:14 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: SELECT "ID" FROM "Fileset" WHERE "Timestamp" < 1580915184 AND "ID" != 448 ORDER BY "Timestamp" DESC  took 0:00:00:00.000
2020-02-05 10:07:14 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteReader]: Starting - ExecuteReader: SELECT
	                      f."Path", fs."FileID", fs."Lastmodified", COALESCE(bs."Length", -1)
                      FROM (  SELECT DISTINCT "FileID", "Lastmodified"
		                      FROM "FilesetEntry"
		                      WHERE "FilesetID" = 447
		                      AND "FileID" NOT IN (
			                      SELECT "FileID"
			                      FROM "FilesetEntry"
			                      WHERE "FilesetID" = 448
		                      )) AS fs
                      LEFT JOIN "File" AS f ON fs."FileID" = f."ID"
                      LEFT JOIN "Blockset" AS bs ON f."BlocksetID" = bs."ID";
2020-02-05 10:07:14 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteReader]: ExecuteReader: SELECT
	                      f."Path", fs."FileID", fs."Lastmodified", COALESCE(bs."Length", -1)
                      FROM (  SELECT DISTINCT "FileID", "Lastmodified"
		                      FROM "FilesetEntry"
		                      WHERE "FilesetID" = 447
		                      AND "FileID" NOT IN (
			                      SELECT "FileID"
			                      FROM "FilesetEntry"
			                      WHERE "FilesetID" = 448
		                      )) AS fs
                      LEFT JOIN "File" AS f ON fs."FileID" = f."ID"
                      LEFT JOIN "Blockset" AS bs ON f."BlocksetID" = bs."ID"; took 0:00:00:00.002
2020-02-05 10:07:14 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: INSERT INTO "ChangeJournalData" ("FilesetID", "VolumeName", "JournalID", "NextUSN", "ConfigHash") VALUES (448, "C:\", 131110814199265407, 84945930160, "783E587246E3188CCF7092EB931721AC");
2020-02-05 10:07:14 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: INSERT INTO "ChangeJournalData" ("FilesetID", "VolumeName", "JournalID", "NextUSN", "ConfigHash") VALUES (448, "C:\", 131110814199265407, 84945930160, "783E587246E3188CCF7092EB931721AC"); took 0:00:00:00.000

https://github.com/duplicati/duplicati/blob/master/Duplicati/Library/Main/Database/LocalDatabase.cs

is possibly where the CREATE TEMP TABLE statements come from. I don’t know if this is relevant to the original issue, because there’s no log where a huge amount of work should be. I wonder if this is just a logging bug?

As a side note, Process Monitor never found an etilqs file from Duplicati here, but two other apps did make them.
The access pattern did have lots of 4096-byte operations, but not starting at 4096-byte offsets from the file start.

OK, here I am again with what I have. Long backups make for slow debugging! :)

The big news is that I tried your suggestion, @ts678, ran with --usn-policy off, and it actually works! Backup times are back to 1 hour or less, which is what they were in 2.0.3.3 beta. In fact, the average I’ve seen so far may be even shorter than it was. I’ve now run the backup several times, both by hand and by schedule, and I’m confident that it works consistently. So yes, it looks like it is something in the USN code that caused this.
(Incidentally, USN was on in 2.0.3.3 as well; I didn’t change anything in the job definition when I migrated to 2.0.5.1.)

Some other things I’ve managed to gather about things you were asking:

Yes, reading of the etilqs file always seems to be in 4096-byte chunks aligned to 4096-byte multiples. I don’t know whether this is down to SQLite or to the cluster size, though; the cluster size of the disk holding the temporary file is 4096 bytes as well.
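(In case it helps: the page size of the main job database can be read with a PRAGMA, and I’d assume, though I’m not sure, that the temp file uses the same size. For example, in Python, with the path shortened:)

  import sqlite3

  con = sqlite3.connect(r"C:\...\XXXXXXXX.sqlite")  # job database, path shortened
  print(con.execute("PRAGMA page_size").fetchone())  # e.g. (4096,)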

I’ve tried looking at the etilqs file by making a copy of it and opening it in a binary editor. It doesn’t look like the regular .sqlite file format: the “SQLite format 3” magic header is missing. In fact the whole first page (4096 bytes) is binary zeroes. After that there’s a whole lot of what looks like file and directory names from the backup, separated by some binary bytes. (The file is certainly not 100% text.) Here’s a screenshot of the start of it; I don’t mind the little information that appears:

This goes on like this until the end of the file. Is it every file and directory included in the backup? Could be; the number of entries is certainly in the hundreds of thousands, and the number of files in the backup is around 450,000 currently.
So it looks like this list of files is being read again and again. Why?

Here’s a full log at Retry level of a slow backup:

2020-02-04 18:40:01 +02 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Backup has started
2020-02-04 18:41:28 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started:  ()
2020-02-04 18:42:17 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Completed:  (13.00 KB)
2020-02-05 11:00:18 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-b640a0199fd6b484cbb0e377922ab334e.dblock.zip.aes (9.94 MB)
2020-02-05 11:01:16 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-20200204T164001Z.dlist.zip.aes (38.96 MB)
2020-02-05 11:06:50 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-b640a0199fd6b484cbb0e377922ab334e.dblock.zip.aes (9.94 MB)
2020-02-05 11:06:50 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-i87da65374429405c8fcc4511d31f96d6.dindex.zip.aes (112.92 KB)
2020-02-05 11:06:57 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-i87da65374429405c8fcc4511d31f96d6.dindex.zip.aes (112.92 KB)
2020-02-05 11:18:56 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-20200204T164001Z.dlist.zip.aes (38.96 MB)
2020-02-05 11:18:56 +02 - [Information-Duplicati.Library.Main.Operation.DeleteHandler:RetentionPolicy-StartCheck]: Start checking if backups can be removed
2020-02-05 11:18:56 +02 - [Information-Duplicati.Library.Main.Operation.DeleteHandler:RetentionPolicy-FramesAndIntervals]: Time frames and intervals pairs: 7.00:00:00 / 1.00:00:00, 28.00:00:00 / 7.00:00:00, 365.00:00:00 / 31.00:00:00
2020-02-05 11:18:56 +02 - [Information-Duplicati.Library.Main.Operation.DeleteHandler:RetentionPolicy-BackupList]: Backups to consider: 4/2/2020 4:36:28 μμ, 4/2/2020 1:41:36 μμ, 3/2/2020 1:13:23 μμ, 24/1/2020 12:38:47 πμ, 15/1/2020 9:00:00 μμ, 8/1/2020 9:00:00 μμ, 3/12/2019 12:00:00 μμ, 2/11/2019 8:09:07 πμ, 26/9/2019 6:00:00 μμ, 20/8/2019 6:00:00 πμ, 15/7/2019 6:00:00 πμ, 8/6/2019 4:13:22 μμ
2020-02-05 11:18:56 +02 - [Information-Duplicati.Library.Main.Operation.DeleteHandler:RetentionPolicy-BackupsToDelete]: Backups outside of all time frames and thus getting deleted: 
2020-02-05 11:18:56 +02 - [Information-Duplicati.Library.Main.Operation.DeleteHandler:RetentionPolicy-AllBackupsToDelete]: All backups to delete: 4/2/2020 4:36:28 μμ
2020-02-05 11:18:56 +02 - [Information-Duplicati.Library.Main.Operation.DeleteHandler-DeleteRemoteFileset]: Deleting 1 remote fileset(s) ...
2020-02-05 11:19:42 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Delete - Started: duplicati-20200204T143628Z.dlist.zip.aes (38.95 MB)
2020-02-05 11:19:45 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Delete - Completed: duplicati-20200204T143628Z.dlist.zip.aes (38.95 MB)
2020-02-05 11:19:45 +02 - [Information-Duplicati.Library.Main.Operation.DeleteHandler-DeleteResults]: Deleted 1 remote fileset(s)
2020-02-05 11:20:20 +02 - [Information-Duplicati.Library.Main.Database.LocalDeleteDatabase-CompactReason]: Compacting not required
2020-02-05 11:20:20 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started:  ()
2020-02-05 11:21:08 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Completed:  (13.00 KB)
2020-02-05 11:21:09 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-20200204T164001Z.dlist.zip.aes (38.96 MB)
2020-02-05 11:23:18 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Completed: duplicati-20200204T164001Z.dlist.zip.aes (38.96 MB)
2020-02-05 11:23:18 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-i53d744acc00c410e866f17c34f286945.dindex.zip.aes (39.39 KB)
2020-02-05 11:23:19 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Completed: duplicati-i53d744acc00c410e866f17c34f286945.dindex.zip.aes (39.39 KB)
2020-02-05 11:23:20 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-be5f64ec2952b4f74a1b4b7df488aac4b.dblock.zip.aes (50.00 MB)
2020-02-05 11:24:43 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Completed: duplicati-be5f64ec2952b4f74a1b4b7df488aac4b.dblock.zip.aes (50.00 MB)

I don’t think this gives any useful information; it all looks nice and straight… until you notice how much time passes between lines 3 and 4.

So there we are. I don’t follow everything else you’re discussing too well, not being a .NET programmer or familiar with the code. I tried the dnSpy debugger, but couldn’t make easy progress with it. Like I said, not being familiar with any of this stuff would mean a very long learning stage for me, and right now I have neither the time nor the inclination for it. Given that things seem to run fine with USN off, I consider this solved for me, albeit with a workaround rather than a clean solution. So thanks, everyone, for the time you took to look into this and for offering your ideas, suggestions and expertise; I couldn’t have progressed without you!

Still, don’t hesitate to ask if there is any additional information I might be able to gather. This is easy to replicate (just turn USN back on), so I could do some more digging if needed.

I noticed the same issue (the backup taking 10 hours to complete) after updating to 2.0.5.1 from the previous beta. After turning the USN policy off, the time is back down to 2 hours.

Sorry I have been away.

Yeah, I wasn’t very clear with the word ‘it’, sorry about that :) I have done a bit more research, and it looks like we are looking at a rollback journal:

https://www.sqlite.org/cgi/src/artifact?udc=1&ln=on&name=b1e79698f3903e64

** Whether or not a journal file contains a master-journal pointer affects
** the way in which the journal file is finalized after the transaction is
** committed or rolled back when running in “journal_mode=PERSIST” mode.
** If a journal file does not contain a master-journal pointer, it is
** finalized by overwriting the first journal header with zeroes. If
** it does contain a master-journal pointer the journal file is finalized
** by truncating it to zero bytes, just as if the connection were
** running in “journal_mode=truncate” mode.
**
** Journal files that contain master journal pointers cannot be finalized
** simply by overwriting the first journal-header with zeroes, as the
** master journal pointer could interfere with hot-journal rollback of any
** subsequently interrupted transaction that reuses the journal file.
**
** The flag is cleared as soon as the journal file is finalized (either
** by PagerCommitPhaseTwo or PagerRollback). If an IO error prevents the
** journal file from being successfully finalized, the setMaster flag
** is cleared anyway (and the pager will move to ERROR state).

** If doTruncate is non-zero or the Pager.journalSizeLimit variable is
** set to 0, then truncate the journal file to zero bytes in size. Otherwise,
** zero the 28-byte header at the start of the journal file. In either case,
** if the pager is not in no-sync mode, sync the journal file immediately
** after writing or truncating it.

It seems I’m having the exact same issue. Just before the USN data is written to the DB, Duplicati hangs for hours. Any news on this? Or is there anything I could do to help debug this further? I’m a developer, but not familiar with .NET or C#.

I would like to get USN working well for my backups, since I back up the same source to multiple destinations and scanning takes a lot of time for each of them.

What version of Duplicati are you using?
Can you watch the Live Log set to Verbose to see if it provides any clues as to exactly where it’s hanging?

@drwtsn32 I’m currently testing Duplicati 2.0.5.106_canary_2020-05-11 on Windows 10.

A backup is running right now (22:23 local time) and it has been hanging since 20:11. The last log entries show database activity; in previous instances where it got past this hang (which could take hours), the next log entries would be the queries that insert the USN data.

Last log items:

21 mei 2020 20:11: ExecuteReader: SELECT f."Path", fs."FileID", fs."Lastmodified", COALESCE(bs."Length", -1) FROM ( SELECT DISTINCT "FileID", "Lastmodified" FROM "FilesetEntry" WHERE "FilesetID" = 24 AND "FileID" NOT IN ( SELECT "FileID" FROM "FilesetEntry" WHERE "FilesetID" = 25 )) AS fs LEFT JOIN "File" AS f ON fs."FileID" = f."ID" LEFT JOIN "Blockset" AS bs ON f."BlocksetID" = bs."ID"; took 0:00:00:05.455
21 mei 2020 20:11: Starting - ExecuteReader: SELECT f."Path", fs."FileID", fs."Lastmodified", COALESCE(bs."Length", -1) FROM ( SELECT DISTINCT "FileID", "Lastmodified" FROM "FilesetEntry" WHERE "FilesetID" = 24 AND "FileID" NOT IN ( SELECT "FileID" FROM "FilesetEntry" WHERE "FilesetID" = 25 )) AS fs LEFT JOIN "File" AS f ON fs."FileID" = f."ID" LEFT JOIN "Blockset" AS bs ON f."BlocksetID" = bs."ID";
21 mei 2020 20:11: ExecuteScalarInt64: SELECT "ID" FROM "Fileset" WHERE "Timestamp" < 1590083555 AND "ID" != 25 ORDER BY "Timestamp" DESC took 0:00:00:00.000
21 mei 2020 20:11: Starting - ExecuteScalarInt64: SELECT "ID" FROM "Fileset" WHERE "Timestamp" < 1590083555 AND "ID" != 25 ORDER BY "Timestamp" DESC
21 mei 2020 20:11: ExecuteNonQuery: DROP TABLE IF EXISTS "DelMetadataFilesetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9" took 0:00:00:00.000
21 mei 2020 20:11: Starting - ExecuteNonQuery: DROP TABLE IF EXISTS "DelMetadataFilesetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9"
21 mei 2020 20:11: ExecuteNonQuery: DROP TABLE IF EXISTS "DelVolSetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9" took 0:00:00:00.000
21 mei 2020 20:11: Starting - ExecuteNonQuery: DROP TABLE IF EXISTS "DelVolSetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9"
21 mei 2020 20:11: ExecuteNonQuery: DROP TABLE IF EXISTS "DelBlockSetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9" took 0:00:00:00.000
21 mei 2020 20:11: Starting - ExecuteNonQuery: DROP TABLE IF EXISTS "DelBlockSetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9"
21 mei 2020 20:11: ExecuteNonQuery: DELETE FROM "DelVolSetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9" took 0:00:00:00.000
21 mei 2020 20:11: Starting - ExecuteNonQuery: DELETE FROM "DelVolSetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9"
21 mei 2020 20:11: ExecuteNonQuery: DELETE FROM "DelBlockSetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9" took 0:00:00:00.000
21 mei 2020 20:11: Starting - ExecuteNonQuery: DELETE FROM "DelBlockSetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9"
21 mei 2020 20:11: ExecuteNonQuery: DELETE FROM Fileset WHERE Fileset.ID IN (SELECT DISTINCT FilesetID FROM "DelMetadataFilesetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9") AND Fileset.ID NOT IN (SELECT DISTINCT FilesetID FROM FilesetEntry) took 0:00:00:00.000
21 mei 2020 20:11: Starting - ExecuteNonQuery: DELETE FROM Fileset WHERE Fileset.ID IN (SELECT DISTINCT FilesetID FROM "DelMetadataFilesetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9") AND Fileset.ID NOT IN (SELECT DISTINCT FilesetID FROM FilesetEntry)
21 mei 2020 20:11: ExecuteNonQuery: DELETE FROM Fileset WHERE VolumeID IN (SELECT "ID" FROM "DelVolSetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9" ) took 0:00:00:00.000
21 mei 2020 20:11: Starting - ExecuteNonQuery: DELETE FROM Fileset WHERE VolumeID IN (SELECT "ID" FROM "DelVolSetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9" )
21 mei 2020 20:11: ExecuteNonQuery: DELETE FROM FilesetEntry WHERE FilesetID IN (SELECT ID FROM Fileset WHERE VolumeID IN (SELECT "ID" FROM "DelVolSetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9" )) took 0:00:00:00.000
21 mei 2020 20:11: Starting - ExecuteNonQuery: DELETE FROM FilesetEntry WHERE FilesetID IN (SELECT ID FROM Fileset WHERE VolumeID IN (SELECT "ID" FROM "DelVolSetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9" ))
21 mei 2020 20:11: ExecuteNonQuery: DELETE FROM "ChangeJournalData" WHERE "FilesetID" IN (SELECT "ID" FROM "Fileset" WHERE "VolumeID" IN (SELECT "ID" FROM "DelVolSetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9" )) took 0:00:00:00.000
21 mei 2020 20:11: Starting - ExecuteNonQuery: DELETE FROM "ChangeJournalData" WHERE "FilesetID" IN (SELECT "ID" FROM "Fileset" WHERE "VolumeID" IN (SELECT "ID" FROM "DelVolSetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9" ))
21 mei 2020 20:11: ExecuteNonQuery: DELETE FROM "DeletedBlock" WHERE "VolumeID" IN (SELECT "ID" FROM "DelVolSetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9" ) took 0:00:00:00.046
21 mei 2020 20:11: Starting - ExecuteNonQuery: DELETE FROM "DeletedBlock" WHERE "VolumeID" IN (SELECT "ID" FROM "DelVolSetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9" )
21 mei 2020 20:11: ExecuteNonQuery: DELETE FROM "Block" WHERE "VolumeID" IN (SELECT "ID" FROM "DelVolSetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9" ) took 0:00:00:00.000
21 mei 2020 20:11: Starting - ExecuteNonQuery: DELETE FROM "Block" WHERE "VolumeID" IN (SELECT "ID" FROM "DelVolSetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9" )
21 mei 2020 20:11: ExecuteNonQuery: DELETE FROM "IndexBlockLink" WHERE "BlockVolumeID" IN (SELECT "ID" FROM "DelVolSetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9" ) OR "IndexVolumeID" IN (SELECT "ID" FROM "DelVolSetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9" ) took 0:00:00:00.000
21 mei 2020 20:11: Starting - ExecuteNonQuery: DELETE FROM "IndexBlockLink" WHERE "BlockVolumeID" IN (SELECT "ID" FROM "DelVolSetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9" ) OR "IndexVolumeID" IN (SELECT "ID" FROM "DelVolSetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9" )
21 mei 2020 20:11: ExecuteNonQuery: DELETE FROM "BlocklistHash" WHERE "BlocklistHash"."BlocksetID" IN (SELECT DISTINCT "ID" FROM "DelBlockSetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9" ) took 0:00:00:00.000
21 mei 2020 20:11: Starting - ExecuteNonQuery: DELETE FROM "BlocklistHash" WHERE "BlocklistHash"."BlocksetID" IN (SELECT DISTINCT "ID" FROM "DelBlockSetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9" )
21 mei 2020 20:11: ExecuteNonQuery: DELETE FROM "BlocksetEntry" WHERE "BlocksetID" IN (SELECT DISTINCT "ID" FROM "DelBlockSetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9" ) took 0:00:00:00.000
21 mei 2020 20:11: Starting - ExecuteNonQuery: DELETE FROM "BlocksetEntry" WHERE "BlocksetID" IN (SELECT DISTINCT "ID" FROM "DelBlockSetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9" )
21 mei 2020 20:11: ExecuteNonQuery: DELETE FROM "Blockset" WHERE "ID" IN (SELECT DISTINCT "ID" FROM "DelBlockSetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9" ) took 0:00:00:00.000
21 mei 2020 20:11: Starting - ExecuteNonQuery: DELETE FROM "Blockset" WHERE "ID" IN (SELECT DISTINCT "ID" FROM "DelBlockSetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9" )
21 mei 2020 20:11: ExecuteNonQuery: DELETE FROM "FileLookup" WHERE "BlocksetID" IN (SELECT DISTINCT "ID" FROM "DelBlockSetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9" ) took 0:00:00:00.497
21 mei 2020 20:11: Starting - ExecuteNonQuery: DELETE FROM "FileLookup" WHERE "BlocksetID" IN (SELECT DISTINCT "ID" FROM "DelBlockSetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9" )
21 mei 2020 20:11: ExecuteNonQuery: DELETE FROM "Metadataset" WHERE "BlocksetID" IN (SELECT DISTINCT "ID" FROM "DelBlockSetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9" ) took 0:00:00:00.000
21 mei 2020 20:11: Starting - ExecuteNonQuery: DELETE FROM "Metadataset" WHERE "BlocksetID" IN (SELECT DISTINCT "ID" FROM "DelBlockSetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9" )
21 mei 2020 20:11: ExecuteNonQuery: DELETE FROM FileLookup WHERE FileLookup.MetadataID IN (SELECT MetadataID FROM "DelMetadataFilesetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9") took 0:00:00:00.411
21 mei 2020 20:11: Starting - ExecuteNonQuery: DELETE FROM FileLookup WHERE FileLookup.MetadataID IN (SELECT MetadataID FROM "DelMetadataFilesetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9")
21 mei 2020 20:11: ExecuteNonQuery: DELETE FROM FilesetEntry WHERE FilesetEntry.FileID IN ( SELECT ID FROM FileLookup WHERE FileLookup.BlocksetID IN (SELECT DISTINCT "ID" FROM "DelBlockSetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9" )) took 0:00:00:00.516
21 mei 2020 20:11: Starting - ExecuteNonQuery: DELETE FROM FilesetEntry WHERE FilesetEntry.FileID IN ( SELECT ID FROM FileLookup WHERE FileLookup.BlocksetID IN (SELECT DISTINCT "ID" FROM "DelBlockSetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9" ))
21 mei 2020 20:11: ExecuteNonQuery: DELETE FROM FilesetEntry WHERE FilesetEntry.FilesetID IN (SELECT DISTINCT FilesetID FROM "DelMetadataFilesetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9") AND FilesetEntry.FileID IN ( SELECT FilesetEntry.FileID FROM FilesetEntry INNER JOIN FileLookup ON FileLookup.ID = FilesetEntry.FileID WHERE FileLookup.MetadataID IN (SELECT MetadataID FROM "DelMetadataFilesetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9")) took 0:00:00:00.000
21 mei 2020 20:11: Starting - ExecuteNonQuery: DELETE FROM FilesetEntry WHERE FilesetEntry.FilesetID IN (SELECT DISTINCT FilesetID FROM "DelMetadataFilesetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9") AND FilesetEntry.FileID IN ( SELECT FilesetEntry.FileID FROM FilesetEntry INNER JOIN FileLookup ON FileLookup.ID = FilesetEntry.FileID WHERE FileLookup.MetadataID IN (SELECT MetadataID FROM "DelMetadataFilesetIds-E7F1FC34C5AE844E99A8EDCA07F87EF9"))

How large is your local database (the job-specific one)? I’m also curious what the next line in the logging is once the hang passes, but that may be difficult to catch without logging to a file.

I’ll leave the live profiling log open for now and hope to catch it when it continues. I think the DB is fairly large at 1.93 GB. There is a corresponding sqlite-journal file that is now 47.6 MB.

2GB isn’t too bad, I’ve seen people with much larger ones!

Will be interesting to see if you spot anything in the Live Log.

Out of curiosity, do you use the --auto-vacuum option? If not, it might help to perform a vacuum operation on the database. Some people have noted drastic improvements in some database operations by vacuuming if it’s not done regularly.
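(If you want to check the setting or vacuum by hand, both are simple against the job database. A sketch in Python with a placeholder path, to be run only while no backup is active:)

  import sqlite3

  con = sqlite3.connect(r"C:\path\to\job-database.sqlite")  # placeholder path
  print(con.execute("PRAGMA auto_vacuum").fetchone())  # 0=none, 1=full, 2=incremental
  con.execute("VACUUM")  # one-off rebuild that reclaims free pages
  con.close()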

I’m not sure if auto-vacuum is on; I should check that after the backup is finished. I did a vacuum recently, after discovering backups sometimes take hours, to see if it fixed that.

That might be the same place as where my log looks like the below, except I don’t have the huge gap in between…

2020-05-20 16:41:08 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteReader]: ExecuteReader: SELECT
	                      f."Path", fs."FileID", fs."Lastmodified", COALESCE(bs."Length", -1)
                      FROM (  SELECT DISTINCT "FileID", "Lastmodified"
		                      FROM "FilesetEntry"
		                      WHERE "FilesetID" = 874
		                      AND "FileID" NOT IN (
			                      SELECT "FileID"
			                      FROM "FilesetEntry"
			                      WHERE "FilesetID" = 875
		                      )) AS fs
                      LEFT JOIN "File" AS f ON fs."FileID" = f."ID"
                      LEFT JOIN "Blockset" AS bs ON f."BlocksetID" = bs."ID"; took 0:00:00:00.002
2020-05-20 16:41:08 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: INSERT INTO "ChangeJournalData" ("FilesetID", "VolumeName", "JournalID", "NextUSN", "ConfigHash") VALUES (875, "C:\", 131110814199265407, 96771819752, "410D0B5054C6796CE70A2CCC6B5191EF");
2020-05-20 16:41:08 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: INSERT INTO "ChangeJournalData" ("FilesetID", "VolumeName", "JournalID", "NextUSN", "ConfigHash") VALUES (875, "C:\", 131110814199265407, 96771819752, "410D0B5054C6796CE70A2CCC6B5191EF"); took 0:00:00:00.001

I’m thinking the source code might be below. You see a SELECT logged; you don’t see all of its INSERTs:

Ordinarily one can add the following option to have Duplicati spend the time to log even the frequent DB operations:

  --profile-all-database-queries (Boolean): Activates logging of all database
    queries
    To improve performance of the backups, frequent database queries are not
    logged by default. Enable this option to log all database queries, and
    remember to set either --console-log-level=Profiling or
    --log-file-log-level=Profiling to report the additional log data
    * default value: false

I’m not sure if someone forgot to pass in m_logQueries here (like in the others) or if it’s meant not to log.

This backup is quite small though, and maybe a large backup spends a lot more time doing INSERT.
“Large” in this case probably means the number of source files. Got a log value for ExaminedFiles?

While the FilesetEntry table always gets a row per file per backup, another place does it all in one INSERT:

That was the same worry about an INSERT loop (but in a different code spot, based on the stated context).

The reason USN is using the maybe-slower AppendFilesFromPreviousSetWithPredicate and not AppendFilesFromPreviousSet might be below.

EDIT 1:

Although it won’t directly show SQL activity, it might be possible to run Sysinternals Process Monitor during the slow spot to see whether the child Duplicati process is doing lots of work on an SQLite file. Sometimes the filename contains a backwards sqlite, i.e. etilqs instead of the ordinary sqlite, and it might not be at the very end of the filename, because suffixes are used for the temporary journal files.

EDIT 2:

That supports the idea that a large number of files causes a slowdown when USN is in use. Experiments could also be done, e.g. to see whether the “could take hours” hang scales up and down with file counts.

I missed the log entries after the hang yesterday. I started a new backup this morning with verbose log output to a file. Unfortunately, I only read the suggestion to log all queries later, so I didn’t enable that during this run. It’s currently hanging at the same place again, and Process Monitor shows it doing many reads on a temporary file starting with etilqs. So it seems some DB activity is going on there.

I also cloned the Duplicati source and was able to build it using Visual Studio (I didn’t run it yet). I could also add some logging and run with that build to hopefully get more info about what it is doing. So, if you have suggestions for where some more logging might be beneficial for this issue, please let me know.