USN option working inconsistently


I’m using Duplicati beta on Windows 10, running 18 backups (9 sources, each to two different destinations: local and remote). Duplicati runs as a service.

I set the “usn-policy” option to “required” in the global settings, but I recently noticed that Duplicati takes a significant amount of time to scan some of the sources even when no files have changed. When I checked the databases of the affected backups, I noticed that the ChangeJournalData table is empty.

Deleting and re-creating the database does not help, nor does adding the “usn-policy” option to the individual backup settings.
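In case it helps anyone reproduce the check, here is a small sketch (Python, standard sqlite3 module) that counts the rows in that table; the database path in the comment is hypothetical, the real one is shown on the job's Database page in the Duplicati UI:

```python
import sqlite3

def usn_state_rows(db_path):
    """Count rows in Duplicati's ChangeJournalData table.

    0 means no USN journal state has been recorded, so the next backup
    cannot use USN and will fall back to scanning.
    """
    con = sqlite3.connect(db_path)
    try:
        return con.execute("SELECT COUNT(*) FROM ChangeJournalData").fetchone()[0]
    finally:
        con.close()

# Hypothetical path -- substitute the actual local database file of the job:
# print(usn_state_rows(r"C:\Users\me\AppData\Local\Duplicati\XXXXXXXXXX.sqlite"))
```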

Interestingly, in at least one case one backup of a file set (the local one) does not use USN while the second (remote) does: these are exactly the same files, so the problem does not seem to be in Windows. Additionally, all the sources are on the same filesystem; they are simply different directories.

Finally, my understanding is that setting “usn-policy” to “required” should cause a backup to abort if USN is not available, but this does not happen either.

Does anyone have a clue what is happening? Any suggestions for troubleshooting the issue?


In the backup logs, I noticed that when Duplicati doesn’t use USN, it doesn’t do a full scan of the source files either, but ALWAYS scans the same number of files (e.g. 13065 of 44078 in one case), even if nothing has changed between backups. This is completely inexplicable to me.

Strange, the abort should definitely be happening. I was experimenting with USN recently and saw that it would fail the job if the change journal couldn’t be used. I triggered that failure by running Duplicati non-elevated (where it doesn’t have the permissions necessary to use the change journal). Are you running Duplicati elevated?

Is it possible that no files have changed in your backup set since you enabled the option? I’m not sure if this is intended behavior, but it appears Duplicati will not write to this table until the first backup after enabling USN is performed AND that backup version was actually retained (i.e., at least one file changed).

Thanks for the tip. I added a file to the affected directories, ran the backups, and Duplicati dutifully filled the ChangeJournalData table.

This is not the behavior one would expect, though. I will create a bug report.

The backups probably do not fail because, as long as the ChangeJournalData table is empty, Duplicati simply ignores USN rather than treating it as a failure. It is not a USN failure.

It’s not a hard failure that will cause --usn-policy=required to error. There are a few soft failures that can cause full scans to happen. You can see them in the log at the Information level. An example of such a log entry is:

2020-02-19 06:38:16 -05 - [Information-Duplicati.Library.Main.Operation.BackupHandler-SkipUsnForVolume]: Performing full scan for volume "C:"
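To see how often this happens across runs, the Information-level log (e.g. one written with --log-file and --log-file-log-level=Information) can be tallied per volume. A quick sketch; the regex simply matches the message format shown above, and quote style may vary by log sink:

```python
import re

# Tally "Performing full scan" (USN soft-failure) events per volume
# from Duplicati log lines.
SKIP_RE = re.compile(
    r'SkipUsnForVolume\]: Performing full scan for volume "([^"]+)"')

def count_usn_skips(log_lines):
    counts = {}
    for line in log_lines:
        match = SKIP_RE.search(line)
        if match:
            volume = match.group(1)
            counts[volume] = counts.get(volume, 0) + 1
    return counts
```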

and possibly this is the case here:

Then the question someone could look at is how the table managed to stay empty despite the post-backup work at:

I’d like to find a way to tie this into your P.S. issue on repeating opens, but I can’t explain 30% reopens:

Duplicati detects spurious timestamp changes

The first await doesn’t stick around if the fileset gets dropped (no files changed).
The second await doesn’t work if the ChangeJournal table is empty, because there is no previous entry.

So this looks like a special edge case that isn’t handled correctly: first backup after enabling USN that ends up getting dropped if no files are changed.

I wrote about a problem with USN before:

On one Win 2016 server I had to downgrade, because the backup took about 15 hours. I back up about 800,000 small files on that server, with a total size of 250 GB. After the downgrade the backup takes only 30-60 minutes; only a few thousand files ever change.
I also tried to split the one backup job into several smaller jobs, but that didn’t help; the total backup time was still around 15 hours, because the newer Duplicati almost always skipped USN. Sometimes USN was used, but I don’t know why.
Now I have the new version on only 3 computers; elsewhere I stayed on the old beta.

Duplicati will skip USN if it detects any changes to the selection list or filtering options, if the previous backup was interrupted, or for the reason I outlined in my post above. It can also be skipped if too many filesystem changes have happened since the last backup and the USN journal wrapped.

Do you think any of these apply to you?

I do not think so.
Is it possible to set logging to record the reason for skipping USN? I only see this in the log:
2020-02-25 21:06:41 +01 - [Information-Duplicati.Library.Main.Operation.BackupHandler-SkipUsnForVolume]: Performing full scan for volume "C:\",

Good idea. Currently the USN soft-failure reason isn’t sent to the logging system. I could look at adding that information in a future pull request. It may take a little while before it makes it into a Canary release.

Ok, I’ve submitted a code revision that adds descriptive info about why a USN soft failure occurs. The only one I know how to trigger on demand is when the previous backup wasn’t using USN, so the journal information was not recorded in the database:

I installed the new version on one machine and tried a backup with USN. The following log line appears:

"2020-03-25 20:09:38 +01 - [Information-Duplicati.Library.Main.Operation.BackupHandler-SkipUsnForVolume]: Performing full scan for volume \"C:\\\": Nedaří se zjistit úplný popis umístění souboru pro USN položku",

I will turn on verbose logging.

Is Google Translate to English accurate? “Failed to find full description of file location for USN entry”

Can you try running this command from an elevated prompt:

fsutil usn queryjournal c:

I found the original text on Github:

Unable to determine full file path for USN entry

and here is command:

C:\WINDOWS\system32>fsutil usn queryjournal c:
Usn Journal ID : 0x01d5514a8c828d25
First Usn : 0x000000015b800000
Next Usn : 0x000000015df13738
Lowest Valid Usn : 0x0000000000000000
Max Usn : 0x7fffffffffff0000
Maximum Size : 0x0000000002000000
Allocation Delta : 0x0000000000800000
Minimum record version supported : 2
Maximum record version supported : 4
Write range tracking: Disabled
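Decoding those hex values gives a sense of scale (interpretation mine, not from Duplicati): USNs are byte offsets into the journal stream, so Next Usn minus First Usn approximates how many bytes of records the journal currently spans.

```python
# Values copied from the fsutil output above.
first_usn   = 0x000000015b800000
next_usn    = 0x000000015df13738
max_size    = 0x0000000002000000   # configured maximum, in bytes
alloc_delta = 0x0000000000800000

span_mib = (next_usn - first_usn) / 2**20
print(f"configured maximum: {max_size / 2**20:.0f} MiB")     # 32 MiB
print(f"allocation delta:   {alloc_delta / 2**20:.0f} MiB")  # 8 MiB
print(f"records spanned:    {span_mib:.1f} MiB")             # ~39 MiB
```

With only a 32 MiB maximum, the journal is already at capacity (NTFS trims the oldest records once the journal grows past the maximum plus the allocation delta), so records older than First Usn, including parent-folder entries needed to rebuild paths, have already been discarded. That would be consistent with the soft failure.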

Ok so this is what is resulting in the soft fail for you:

I don’t understand USN enough to be able to comment.
Maybe @Daniel_Gehriger is still around and can provide some ideas?

Hey @roto, I didn’t see a bug report, and since I’m having the same issue I entered one:

@drwtsn32 The USN journal only records file and folder names, not full paths. But each name is also associated with a parent entry, i.e. the containing folder. So the code resolves this chain all the way up to the root, building up the full path name.

But since the USN journal has only a limited size, parent entries get ejected, and it becomes impossible to generate the full path. This happens if backup jobs run too infrequently, in which case the journal no longer contains the entire change history since the last time it was checked.
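The parent-chain walk described above can be sketched like this (illustrative only: a plain dict stands in for the real journal/MFT lookup, and all names and reference numbers here are made up, not Duplicati's actual code):

```python
# file-reference-number -> (name, parent-reference-number); None marks the root.
entries = {
    5: ("root", None),
    7: ("projects", 5),
    9: ("report.txt", 7),
}

def full_path(frn):
    """Walk parent entries up to the root to rebuild the full path."""
    parts = []
    while frn is not None:
        if frn not in entries:  # parent record was ejected from the journal
            raise LookupError("Unable to determine full file path for USN entry")
        name, frn = entries[frn]
        parts.append(name)
    return "\\".join(reversed(parts))

print(full_path(9))  # root\projects\report.txt
```

If any link in the chain has been purged, the walk fails and the whole lookup is abandoned, which is what surfaces as the soft failure above.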

Except for increasing journal size, I don’t see how this can be fixed.

But instead of reverting to a full scan, Duplicati could perform the full scan only for the affected parts, and still use USN where it was able to obtain the change information. Actually, I don’t remember if I already implemented that, but I don’t believe so. In the current implementation, the slightest USN failure triggers a full scan.

I tried removing the USN journal and creating a new one using fsutil and it seems to have helped.

  • Delete an active USN journal:

fsutil usn deletejournal /D d:

  • Create new USN journal with 2GB maximum size:

fsutil usn createjournal m=2147483648 a=1 d:
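Whether 2 GB is enough depends on how fast the volume changes between backups. A back-of-envelope sketch (all numbers illustrative: a USN record is on the order of 100 bytes including the file name, and the change rate and backup interval below are assumptions, not measurements):

```python
# Rough journal sizing: records written between backups must fit in the
# journal, or parent entries get purged and path lookup fails.
bytes_per_record = 100        # assumed average, including file name
changes_per_day = 1_000_000   # illustrative change rate
days_between_backups = 7

needed_bytes = bytes_per_record * changes_per_day * days_between_backups
print(f"~{needed_bytes / 2**30:.2f} GiB of journal needed")
```

By that estimate a busy volume backed up weekly needs well under 1 GiB, so 2 GB leaves comfortable headroom compared to the 32 MiB journal seen earlier in the thread.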

Good tip! I take it you either have a busy filesystem or a long time between backups…