New log system with filters

Canary build 2.0.3.2 introduces a new log system that improves on the previous one in a number of ways.

The previous version used a single --log-level option to control a shared log level, plus the option --verbose as shorthand for a combination of --log-level=verbose and --full-results.

The new log system removes the --verbose option and deprecates --log-level. Instead there are two new options, --console-log-level and --log-file-log-level, which give control over which messages are shown where. For now, --log-level sets both of these, but the option will eventually be removed.

Internally the log system has been updated to include the source of each log message, as well as a (mostly) unique ID for each log message.

An example of log contents is shown here (--log-file-log-level=Profiling):

2018-03-16 09:34:01 +01 - [Verbose-Duplicati.Library.Main.Operation.BackupHandler.FileEntry-NewFile]: New file /source/file1
2018-03-16 09:34:01 +01 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: SELECT "ID" FROM "Block" WHERE "Hash" = ? AND "Size" = ?
2018-03-16 09:34:01 +01 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: SELECT "ID" FROM "Block" WHERE "Hash" = ? AND "Size" = ? took 0:00:00:00.000
2018-03-16 09:34:01 +01 - [Verbose-Duplicati.Library.Main.Operation.BackupHandler.FileEntry-AddDirectory]: Adding directory /source/subfolder/
2018-03-16 09:34:01 +01 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: SELECT "ID" FROM "Block" WHERE "Hash" = ? AND "Size" = ?
2018-03-16 09:34:01 +01 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: SELECT "ID" FROM "Block" WHERE "Hash" = ? AND "Size" = ? took 0:00:00:00.000
2018-03-16 09:34:01 +01 - [Verbose-Duplicati.Library.Main.Operation.BackupHandler.FileEntry-CheckFileForChanges]: Checking file for changes /source/subfolder/file2, new: True, timestamp changed: True, size changed: True, metadatachanged: True, 9/18/2017 8:24:19 AM vs 1/1/0001 12:00:00 AM
2018-03-16 09:34:01 +01 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: SELECT "ID" FROM "Block" WHERE "Hash" = ? AND "Size" = ?
2018-03-16 09:34:01 +01 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: SELECT "ID" FROM "Block" WHERE "Hash" = ? AND "Size" = ? took 0:00:00:00.000
2018-03-16 09:34:01 +01 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: INSERT INTO "Block" ("Hash", "VolumeID", "Size") VALUES (?, ?, ?); SELECT last_insert_rowid();
2018-03-16 09:34:01 +01 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: INSERT INTO "Block" ("Hash", "VolumeID", "Size") VALUES (?, ?, ?); SELECT last_insert_rowid(); took 0:00:00:00.000
2018-03-16 09:34:01 +01 - [Verbose-Duplicati.Library.Main.Operation.BackupHandler.FileEntry-NewFile]: New file /source/subfolder/file2

This quickly adds up to a large number of lines, even though we are only looking at a few files in this example. To address this, there are now two new log-filter options, --console-log-filter and --log-file-log-filter, which operate on the log-tag: the part you see in the example between the square brackets, formatted as [<level>-<source>-<id>].

These filters take precedence over the log-level and can be used to suppress or include individual log messages.

If you would like to see only which new files are added (the first and last lines in the example above), you can set the filter --log-file-log-filter=+*-NewFile, meaning: always include log messages whose log-tag ends with -NewFile. With this filter in place, the log-level setting is ignored for matching messages, allowing you to selectively include messages from other log levels.
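Purely as an illustration of the matching semantics (this is not Duplicati's actual implementation), Python's fnmatch module handles the same * and ? wildcards, so the effect of +*-NewFile on the log-tags above can be sketched like this:

```python
import fnmatch

# Log-tags taken from the example output above.
tags = [
    "Verbose-Duplicati.Library.Main.Operation.BackupHandler.FileEntry-NewFile",
    "Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64",
    "Verbose-Duplicati.Library.Main.Operation.BackupHandler.FileEntry-AddDirectory",
]

# The +*-NewFile filter keeps only tags ending in -NewFile.
included = [t for t in tags if fnmatch.fnmatchcase(t, "*-NewFile")]
print(included)  # only the first tag matches
```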

For an example of the reverse situation, where you have a log-level but want to exclude certain messages, consider the output above, where you don't want the messages saying the timer started, but do want the time the commands took (lines 2 and 3 of the example, respectively).
In this case you can keep --log-file-log-level=Profiling and add --log-file-log-filter=-Profiling-Timer.Begin-* (note the leading minus - in the filter). This removes all the Timer.Begin messages, but otherwise keeps the output as-is.
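Again as an illustration only (not Duplicati's code), the exclusion can be sketched with the same wildcard matching: everything at Profiling level is kept except tags matching Profiling-Timer.Begin-*:

```python
import fnmatch

# The Begin/Finished pair of log-tags from the example above.
tags = [
    "Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64",
    "Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64",
]

# -Profiling-Timer.Begin-* drops the "Starting" lines,
# keeping the lines that report how long the command took.
kept = [t for t in tags if not fnmatch.fnmatchcase(t, "Profiling-Timer.Begin-*")]
print(kept)  # only the Timer.Finished tag survives
```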

The filters support the multi-character wildcard * as well as the single-character wildcard ?. For advanced use, regular expressions are also supported, by enclosing the filter in square brackets, for example: --log-file-log-filter=+[.*-NewFile]. Multiple filters can be supplied using : or ; as separators, and the console and log-file filters can be set independently and applied simultaneously.

An example of multiple mixed filters:

--log-file-log-filter=-[[^-]*-Timer\.Begin];+*-NewFile;-*-AddDirectory;+[.*]
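To make the syntax concrete, here is a small Python sketch of how such a filter list could be parsed and evaluated. This is illustrative only, not Duplicati's implementation, and it assumes first-match-wins semantics and no escaping of the : and ; separators:

```python
import fnmatch
import re

def parse_filters(spec):
    """Split a filter string on : or ; and compile each entry.
    An entry starts with + (include) or - (exclude); a body in
    [brackets] is a regular expression, anything else is a
    wildcard pattern using * and ?."""
    filters = []
    for part in re.split(r"[;:]", spec):
        if not part:
            continue
        include, body = part[0] == "+", part[1:]
        if body.startswith("[") and body.endswith("]"):
            rx = re.compile(body[1:-1])          # regex filter
        else:
            rx = re.compile(fnmatch.translate(body))  # wildcard filter
        filters.append((include, rx))
    return filters

def decide(filters, log_tag):
    """Return True/False from the first filter matching log_tag,
    or None when no filter matches (defer to the log level)."""
    for include, rx in filters:
        if rx.search(log_tag):
            return include
    return None

# The mixed filter example from above.
filters = parse_filters(r"-[[^-]*-Timer\.Begin];+*-NewFile;-*-AddDirectory;+[.*]")
print(decide(filters, "Verbose-Duplicati.Library.Main.Operation.BackupHandler.FileEntry-NewFile"))  # True
```

Run against the log-tags from the example output, this filter list excludes the Timer.Begin and AddDirectory messages while including NewFile and Timer.Finished messages via the catch-all +[.*] at the end.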

Happy logging!


Great idea!

Just to clarify: the filters only apply to the log-tag (the part in the square brackets), correct?

In other words, I can’t make a filter to only log entries that end with “took #:##:##:##.###”?

Yes, the filters apply only to the log-tag (the part in the square brackets).
I considered applying them to the entire text, but did not see a use case for it.

Hello @kenkendk, thank you for the filter option!
One question: can I filter the email and send-http-url output?

If I back up MS SQL, the log in the email message gets very long when I have many databases.
The email message is 150 KB, and when I try to use https://www.duplicati-monitoring.com for monitoring, I get an error because of the message size:
Failed to send http message: System.UriFormatException: Invalid URI: The Uri string is too long.
I tried --console-log-filter and --log-file-log-filter, but they do not seem to affect the email and http output.

Thanks:)


Yeah, the log filters apply only to the console and the log file.
It should actually be easy to set up a new log scope for the email and http parts as well, so there can be separate filters for those too.


Same problem for me, any hints?

Hello, we need to wait for a new version, or create an issue on GitHub. Maybe :)

Hi all,

Just wanted to post a +1 here for this issue. I'm seeing exactly the same thing and can no longer use certain systems with the http output. The backups run, but the failure email is full of a lot of extra detail, and the http report is "too long", as mentioned earlier.

Sounds like it’s also an issue here:


I have just pushed an update that adds a number of features to the http/jabber/email reporting, including limiting the number of log lines, setting the log level for each destination, and applying log filters per destination.

The update also has JSON support, for those wanting to send a JSON formatted result to a logging server or similar.


Much of the confusion could have been avoided if Advanced Options - Duplicati 2 User's Manual had been updated to describe the differences between the new and old options and to clarify the deprecations.