USN option working inconsistently

FWIW, the pushback in the discussion on the @surlyhacker bug report (the GitHub issue) is:

Oh in my case it’s just a few minutes in between tests with either no changes or one or two files changed. Still sometimes USN isn’t used.

so maybe there are some different causes…

I usually back up once a day; on the server, four times a day. The server is busy, with several thousand files changing, but I also have a Windows 10 workstation where nothing changes, and yet USN still produces an error:

Unable to determine full file path for USN entry

and the following backup is error-free.
So from my point of view, USN is behaving strangely :slight_smile:

Interesting… I am using the USN option on about 8 machines and am not having any issues.

I was getting the message in my logfile:

2020-05-03 09:22:49 -07 - [Information-Duplicati.Library.Main.Operation.BackupHandler-SkipUsnForVolume]: Performing full scan for volume "C:\": Unable to determine full file path for USN entry

Following https://binaryguru.wordpress.com/2017/01/05/increasing-the-size-of-the-ntfs-change-journal

Ran this:

dir C:\ /s /a /w

Got this:

 Total Files Listed:
       634678 File(s) 207,048,192,450 bytes
       341574 Dir(s)  42,513,285,120 bytes free

Ran this:

fsutil usn queryjournal C:

Got this:

Usn Journal ID   : 0x01d************
First Usn        : 0x00000000ca800000
Next Usn         : 0x00000000cc8bc538
Lowest Valid Usn : 0x0000000000000000
Max Usn          : 0x7fffffffffff0000
Maximum Size     : 0x0000000002000000
Allocation Delta : 0x0000000000800000
Minimum record version supported : 2
Maximum record version supported : 4
Write range tracking: Disabled

…which, looking it up in the table below, tells me my journal is far too small: its maximum size of hex 2000000 (32 MB) is well below even the hex 20000000 (512 MB) that the table recommends for 400,000 files, and I have 634,678 files. (The sketch after the table shows the arithmetic.)

|Files|Maximum Size (m) in bytes|Allocation Delta (a) in bytes|m (MB)|m (hex)|a (hex)|
|---|---|---|---|---|---|
|400,000|536,870,912|67,108,864|512|20000000|4000000|
|600,000|805,306,368|100,663,296|768|30000000|6000000|
|800,000|1,073,741,824|134,217,728|1,024|40000000|8000000|
|1,000,000|1,342,177,280|167,772,160|1,280|50000000|A000000|
|1,200,000|1,610,612,736|201,326,592|1,536|60000000|C000000|
|1,400,000|1,879,048,192|234,881,024|1,792|70000000|E000000|
|1,600,000|2,147,483,648|268,435,456|2,048|80000000|10000000|
|1,800,000|2,415,919,104|301,989,888|2,304|90000000|12000000|
|2,000,000|2,684,354,560|335,544,320|2,560|A0000000|14000000|
|2,200,000|2,952,790,016|369,098,752|2,816|B0000000|16000000|
|2,400,000|3,221,225,472|402,653,184|3,072|C0000000|18000000|
|2,600,000|3,489,660,928|436,207,616|3,328|D0000000|1A000000|
|2,800,000|3,758,096,384|469,762,048|3,584|E0000000|1C000000|
|3,000,000|4,026,531,840|503,316,480|3,840|F0000000|1E000000|
|3,200,000|4,294,967,296|536,870,912|4,096|100000000|20000000|
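
To make the pattern in that table explicit: the recommended maximum size works out to roughly 1,342 bytes per file, and the allocation delta is one eighth of the maximum size. Here is a small sketch of that arithmetic (my own reading of the linked page, not an official formula; the helper name is made up):

```python
# Suggest USN journal sizing following the pattern in the table above:
# 512 MB maximum size per 400,000 files, allocation delta = 1/8 of that.
# This is only a rough rule of thumb taken from the linked blog post.

def suggest_journal_size(file_count):
    bytes_per_file = 536_870_912 / 400_000   # 512 MB for 400,000 files
    m = int(file_count * bytes_per_file)     # maximum size in bytes
    a = m // 8                               # allocation delta in bytes
    return m, a

m, a = suggest_journal_size(634_678)
print(f"fsutil usn createjournal m={m} a={a} C:")
```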

I decided to give it a ton of space just because I could.

Ran this:

fsutil usn createjournal m=2415919104 a=301989888 C:
fsutil usn queryjournal C:

Got this:

Usn Journal ID   : 0x01************
First Usn        : 0x00000000ca800000
Next Usn         : 0x00000000cc8c6fd8
Lowest Valid Usn : 0x0000000000000000
Max Usn          : 0x7fffffffffff0000
Maximum Size     : 0x0000000090000000
Allocation Delta : 0x0000000012000000
Minimum record version supported : 2
Maximum record version supported : 4
Write range tracking: Disabled
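
As a quick sanity check that the new hex values match the decimal values I passed to createjournal (plain Python, nothing Duplicati-specific):

```python
# Convert the queryjournal hex output back to decimal
print(int("0x0000000090000000", 16))  # 2415919104 -> Maximum Size
print(int("0x0000000012000000", 16))  # 301989888  -> Allocation Delta
```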

As you might imagine, this resulted in a sharp decline in backup times, from about 20 minutes to 2 minutes.

Thanks for your post!

I thought the journal size limits how many file operations can be recorded, so the number of files on your disk isn't directly relevant. But maybe that's not the case.
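
(The way I understand it, the journal stores one variable-length record per change event, so a rough capacity estimate looks like this; the ~100-byte average record size is my assumption, since actual records vary with filename length:)

```python
# Back-of-the-envelope: how many change records fit in a USN journal
# of a given size, assuming ~100 bytes per record on average.

def approx_record_capacity(journal_bytes, avg_record_bytes=100):
    return journal_bytes // avg_record_bytes

print(approx_record_capacity(0x02000000))  # old 32 MB journal     -> ~335,000 records
print(approx_record_capacity(0x90000000))  # new ~2.25 GiB journal -> ~24,000,000 records
```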

Thanks for posting your tip!

Ah, I see. I was just going by the page I linked to, which based the size on the number of files. Probably the reason it's working now is simply that the journal has a lot more room to breathe. So my fix worked anyhow, even though I clearly misunderstood how it works.

For others interested in how this works, here’s an interesting description of what a USN journal log entry contains: Another Forensics Blog: USN Journal: Where have you been all my life

Hi all – I’m the original author of the USN feature, and I’m also observing failures on Windows 10. After some investigation, I believe it’s due to a new Windows 10 feature, the “\$Extend\$Deleted” folder (The “\$Extend\$Deleted” directory – My DFIR Blog).

I just want to let you know that I’m looking into this, trying to add support for files that have been “moved” to that folder.
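
For anyone curious what “support for files moved to that folder” might look like conceptually, here is an illustrative Python sketch of the general idea only (the data structures and names are hypothetical, and Duplicati’s actual implementation is in C#): full paths are rebuilt by walking parent file reference numbers, and an entry whose path resolves under \$Extend\$Deleted can be treated as a deletion instead of an unresolvable path.

```python
# Conceptual sketch: resolve a USN entry's path by walking parent file
# reference numbers (FRNs), and classify paths that land under
# $Extend\$Deleted as deletions rather than lookup failures.
# frn_table and the helper names are hypothetical, for illustration only.

frn_table = {}  # frn -> (parent_frn, name), filled from an MFT/USN enumeration

def resolve_path(frn):
    parts = []
    while frn in frn_table:
        parent_frn, name = frn_table[frn]
        parts.append(name)
        frn = parent_frn
    return "\\".join(reversed(parts))

def classify_usn_entry(frn):
    path = resolve_path(frn)
    if not path:
        return "unknown", path       # unresolved: fall back to a full scan
    if path.startswith("$Extend\\$Deleted"):
        return "deleted", path       # file was moved to the pending-delete folder
    return "changed", path
```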


Implemented fix: Enabled support for \$Extend\$Deleted special MTF folder when using USN by dgehri · Pull Request #4277 · duplicati/duplicati · GitHub

The previous PR wasn’t enough to catch all failed USN cases. Here’s a follow-up: Feature/improve usn 2 by dgehri · Pull Request #4287 · duplicati/duplicati · GitHub.

Wow, two nights now with Duplicati relying on USN instead of launching full scans. Nice!

Ironically, even though Duplicati is no longer launching a full scan, my backup times became much, much longer for whatever reason. Weird. I do see you merged another change two days ago that might resolve this, so I’ll look forward to testing it with the next Canary release.

Thanks for your work on this!

Yes, they became longer because now that USN is working, the following issue is affecting you: Untraceable long operation (query?) during backup. The updated PR indeed fixes this and should result in very short backup times.


I can tell you now, after installing the newest update, that the backups are consistently under 4 minutes per night, whereas previously they took 30-50 minutes. And in the log I see that it is using USN instead of scanning. Thank you so much for all of your work.


I can also confirm that the use of USN now works well. Thanks


I installed 2.0.5.111 yesterday and I can confirm the USN now works as expected.

Thanks!!!