Metadata was reported as not changed, but still requires being added?

That might explain why you get it on every file… I don’t use filetime-only and I used to get it occasionally, but the last few days it seems to happen every run - but only once.

I’ll try to update once logging catches it.

@Chipmaster nailed it. :+1:

For kicks, I removed check-filetime-only from all of my backups, and every single one of them is working just fine now.

I am getting a lot of 503 errors on HubiC, but I am assuming that it is on their end. I’ll give it a few days before I start hollering. :grin:

@Chipmaster, FYI, my backups are faster now than they were before the metadata issues. Something that used to take 30 minutes takes 8 now on Windows, although I have not seen much improvement on Linux.

Thanks for letting us know that improved things for you.

Guess we’ll have to review the --check-filetime-only code to figure out where things went wrong. :blush:

It might not be the only problem. I started a new backup on pCloud using WebDAV, and on the second run the metadata errors happened for all files in the backup.

I’m also getting some error about not being able to load the Microsoft Azure dll, which seems to correlate with the backup pausing (no activity) for at least 30 minutes. Not sure if that’s related or not though.

Jul 16, 2018 2:31 AM: Failed to load assembly C:\ProgramData\Duplicati\updates\2.0.3.9\Microsoft.WindowsAzure.Storage.dll, error message: Could not load file or assembly 'Microsoft.Data.Services.Client, Version=5.8.1.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)

I have other backups starting on pCloud; we’ll see soon enough whether the same is happening or not.

I’ve also seen performance improvements by disabling check-filetime-only, on the order of 13 hrs -> between 30 min and 1 hr.

FWIW - strangely, I have this issue on most of my backups every Monday (I keep backups for 30 days, so I’m unsure why). I back up again and they go away.

Still to try the options mentioned in this thread though!

TL;DR:
Experiencing the same issue after migrating from User to Service on Windows 10 for VSS.
Problem appears both with check-filetime-only enabled (basically every file) and disabled (a few files).
All mismatches occur with size changed: True in the log. All other metadata is the same, but the files actually haven’t changed at all.
Disabling VSS reduces the number of Warnings but still doesn’t eliminate them.
The issue persists on new backup jobs.


I have the same issue with “Metadata was reported as not changed, but still requires being added?” warnings on 2.0.3.9_canary_2018-06-30

I am doing a backup of a lot of files on Windows 10 (most current updates) with VSS activated and set to required. Duplicati was recently migrated from user to service to use VSS, and that’s when this problem started to appear. The backup happens completely locally, from an SSD (C:) and an HDD (D:) to another local HDD (E:).

I am seeing the issue with check-filetime-only either enabled or disabled:
check-filetime-only=true: it happens with basically every file… I got more than 100k warnings on my backup.
check-filetime-only=false: it happens with a few files, generally <100 files affected.

All metadata mismatches show up with

new: False, timestamp changed: False, size changed: True, metadatachanged: False

in the log at the Verbose log level. Unfortunately I haven’t found a way to make the log show me the actual file sizes it compares (neither the stored nor the newly seen ones) to do a sanity check on them.
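
As a workaround for the sanity check, one can at least ask the OS directly what it reports. A minimal standalone C# sketch (nothing Duplicati-specific; it just prints what the filesystem says for the path the warning names):

```csharp
using System;
using System.IO;

// Dump the size and timestamps the OS reports for a file, to compare
// against what Duplicati claims has changed.
class FileStat
{
    static void Main(string[] args)
    {
        var info = new FileInfo(args[0]);
        Console.WriteLine($"Length:        {info.Length} bytes");
        Console.WriteLine($"LastWriteUtc:  {info.LastWriteTimeUtc:o}");
        Console.WriteLine($"CreationUtc:   {info.CreationTimeUtc:o}");
    }
}
```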

Finally I tried to disable VSS, which reduced the number of warnings but didn’t eliminate them.

So my guess would be that something screws with the file sizes that Duplicati sees when running it as a service (perhaps being an Admin is also involved?), because none of these problems happened with a user installation.

Edit:
I just made a new backup job to eliminate any weird behaviour that might have come from the switch from User to Service, and the result is the same.
The initial backup runs through without any issue, but all following passes have the “Metadata was reported as not changed, but still requires being added?” warnings with size changed: True

Yes, that turned out to be the problem. I made a much faster query in the database that only fetches the LastModified value, but the logic then checked BOTH LastModified and LastSize, causing it to think the size had changed (current size vs. missing size), and it then submitted the file for full scanning.

Also, since the metadata was not previously fetched, it would think the metadata was changed as well and instruct Duplicati to re-store the metadata, causing the warning message about unchanged metadata.

Additionally, there was a wrong flag check that caused the OP warning to be triggered on most (all?) files.
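
In simplified form, the broken fast path behaved roughly like this (a sketch for illustration only, not the actual source; all names here are invented):

```csharp
using System;

// Simplified sketch of the described bug - not Duplicati's real code.
// The fast query only filled in LastModified, but the change check still
// compared BOTH LastModified and LastSize (and the unfetched metadata).
class FastPathBugSketch
{
    const long Missing = -1; // stand-in for "not fetched from the database"

    static void Main()
    {
        // Values for a file that has NOT changed since the last backup:
        long currentModified = 636665000000000000, currentSize = 1234;

        // Fast path: only LastModified was queried...
        long lastModified = currentModified;
        long lastSize = Missing;          // ...so LastSize stayed missing,
        bool lastMetadataFetched = false; // and so did the stored metadata.

        bool timestampChanged = currentModified != lastModified; // false
        bool sizeChanged = currentSize != lastSize;              // true!
        bool metadataChanged = !lastMetadataFetched;             // true!

        if (timestampChanged || sizeChanged)
            Console.WriteLine("Submitted for full scanning");
        if (metadataChanged)
            Console.WriteLine("Metadata re-stored -> \"not changed, but still requires being added?\"");
    }
}
```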

For this error, at least the “… but still requires being added?” part should go away.

Not sure how to deal with the “invalid” time. The problem is we cannot read it into .NET, so we are stuck with a missing “current” time. This repeatedly causes Duplicati to think the metadata has changed, and it then tries to generate new metadata, only to find that it can’t. On the next run we still have not stored a valid timestamp, so it repeats.
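
For context, this is what the .NET side does with such a raw value (a standalone demo, not Duplicati code):

```csharp
using System;

// Demonstrates why an out-of-range FILETIME cannot be read into .NET:
// DateTime.FromFileTimeUtc throws instead of returning a value.
class InvalidFileTime
{
    static void Main()
    {
        try
        {
            // A negative FILETIME (or one past DateTime.MaxValue) is rejected.
            DateTime dt = DateTime.FromFileTimeUtc(-1);
            Console.WriteLine(dt);
        }
        catch (ArgumentOutOfRangeException ex)
        {
            // Prints: "Not a valid Win32 FileTime."
            Console.WriteLine(ex.Message);
        }
    }
}
```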

With the changes from @mnaiman, we can at least see the problematic path. Is it acceptable to require users to fix the timestamps?


If I understood @mikaitech correctly, then his files really had a completely invalid timestamp, and he also mentioned that this was probably because of an external factor:

So IMHO this has nothing to do with Duplicati and can’t really be fixed by it in any way. Therefore I would argue that if the user has a corrupted timestamp like this, he should get an appropriate warning informing him of the exact issue, but he has to resolve it himself.

I agree. If Duplicati finds an invalid timestamp (never happened to me) or a missing “current” time, it should log a warning with info like “Correct your file’s timestamp - current time missing”, but not touch the files at all. The user should correct the files.

The “Not a valid Win32 FileTime” message seemed a self-contradictory error about a Windows file, but possibly it’s just an approximation when complaining about a time beyond the smaller (roughly 10,000-year) range that C#/.NET uses. Odd times may be legitimate (though maybe unintended) in terms of the source system’s time, so the message wording counts.

DateTime.MinValue Field

DateTime.MaxValue Field

might be good guidance, but I haven’t actually tested. Duplicati could also offer an option to range-limit file times.
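
For illustration, a rough sketch of what such a range-limit option might look like (entirely hypothetical - the clampTimes flag and the warn-or-clamp split are made up, not existing Duplicati behavior):

```csharp
using System;

// Hypothetical sketch of a range-limit option: clamp an out-of-range raw
// FILETIME into what DateTime can represent, or warn and skip the timestamp.
class RangeLimitSketch
{
    static readonly long MaxFileTime = DateTime.MaxValue.ToFileTimeUtc();

    static DateTime? NormalizeFileTime(long rawFileTime, string path, bool clampTimes)
    {
        // Valid FILETIME range: 0 (1601-01-01 UTC) up to DateTime.MaxValue.
        if (rawFileTime >= 0 && rawFileTime <= MaxFileTime)
            return DateTime.FromFileTimeUtc(rawFileTime);

        if (clampTimes) // squash to the ends of the representable range
            return rawFileTime < 0 ? DateTime.FromFileTimeUtc(0) : DateTime.MaxValue;

        Console.WriteLine($"Warning: {path} has an out-of-range timestamp; please correct it.");
        return null; // leave the timestamp out rather than touching the file
    }

    static void Main()
    {
        Console.WriteLine(NormalizeFileTime(long.MaxValue, @"C:\odd.txt", clampTimes: true));
        Console.WriteLine(NormalizeFileTime(long.MaxValue, @"C:\odd.txt", clampTimes: false));
    }
}
```

With clamping off this matches the warn-and-leave-alone suggestion above; with it on, the clamped extremes could double as the “infinity” marker mentioned below.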

The timestamp issue is something that really is an odd one. Not sure if it was Duplicati or external program, or some combination messing with it. I had to find a 3rd party program to correct the timestamps, and after that the error went away. It is just odd that the error showed up the same day I upgraded to a newer version of Duplicati, so my assumption was it being Duplicati related. It may have been, it may not be, still not sure at this point, but it was resolved by other means.

To avoid making people find 3rd-party tools, Duplicati could provide an option to range-limit times in backups, on the theory that backups with range-limited times are better than leaving such files entirely without backup. However, burying the problem for all future cases may invite bad results, and tracking sign-off on specific files and issues isn’t done yet (I believe).

In the special case of file times, though, storing the smallest and largest possible times could be used as a specific marker to suppress complaints for “fixed” files (think of it as infinity). Users could then turn the option off to see if any more files show up whose times need similar range-limiting. For odd historical accidents, all will be quiet. For ongoing problems, go have a talk with the file maker.

I’d note there’s discussion about locked files here, including concerns of under-notifying for all cases, forever. One can also see the slippery-slope problem possible if Duplicati gets too helpful with tracking file time fixes…

While I’m commenting: the option-induced and time-induced reports appear understood, but what about the random ones?

I think using DateTime.MinValue had been proposed before, but then we run the risk of a restored file being “different” from the original - at least in that timestamp.

I don’t recall, but are broken timestamp files actually being backed up (just with associated warnings / errors)? If so, are they restorable (as in what happens when Duplicati restores a file and tries to set the timestamp to something that’s technically invalid)?

They seem to be backed up at least in that the file (whether new or changed) is visible in the backup version. Data presumably is there as well, although mine would have gotten deduplicated so it wasn’t the perfect test:

I changed the creation timestamp to year 21345 and the modification to year 22345, confirmed it in cmd.exe, noticed that File Explorer apparently gave up and put blanks where those time columns would have been :grin:, then did a backup, source deletion, and restore, and got back my deleted file, except now with current dates on it. That actually seems to me like a pretty reasonable round-trip treatment of the file and its out-of-range dates.

It was a job finding something that could set a five-digit year. I wound up using the touch tool from http://www.stevemiller.net/apps/.
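
For anyone who wants to reproduce this without hunting for a third-party tool, a small P/Invoke sketch can write such a FILETIME directly (my own throwaway test code, Windows-only, and definitely not Duplicati code - it bypasses .NET’s DateTime limit on purpose):

```csharp
using System;
using System.IO;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

// Sets a file's creation/access/modification times to a FILETIME beyond
// year 9999, which .NET's DateTime cannot represent.
class FarFutureTouch
{
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool SetFileTime(SafeFileHandle hFile,
        ref long creationTime, ref long lastAccessTime, ref long lastWriteTime);

    static void Main(string[] args)
    {
        // FILETIME counts 100-ns ticks since 1601-01-01 UTC. Roughly 20000
        // years past the epoch lands well beyond DateTime.MaxValue (year 9999).
        long farFuture = (long)(20000L * 365.2425 * 86400) * 10_000_000L;

        using var fs = File.Open(args[0], FileMode.Open, FileAccess.Write);
        long created = farFuture, accessed = farFuture, written = farFuture;
        if (!SetFileTime(fs.SafeFileHandle, ref created, ref accessed, ref written))
            throw new IOException("SetFileTime failed", Marshal.GetLastWin32Error());
    }
}
```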

Thanks for the tests!

Considering the bad date situation being fed to it I agree that Duplicati handled it about as well as could be expected.

Note that this appears to have been fixed and should show up in the next release:

[quote="kenkendk, post:1, topic:4283"]
AFTER deleting all dlists seems to resolve it)
[/quote]

So will there be something we have to do about the timestamps on the user side, and if so, what? I don’t see weird years on my files, but e.g. “Created: … 2017” vs. “Last change: … 2012”.

Hi @Tapio

Unless you’re getting the “Failed to read timestamp” warnings, there’s nothing to do.
Note that this is a different issue than the one --check-filetime-only currently causes.

Windows updates the created time when a file is copied, so it can get ahead of the modified time.
I’m not sure what you’re using, but you could probably do a web search for your OS.

The restore test I just did set created and modified times to exactly the original ones.
That seems like good backup/restore behavior to me. I hope you’re also seeing that.

I was able to do a successful backup without the filetime option; it took a few hours. And I can say subsequent runs do not do full file reading, which is good.

I can confirm that in the new 2.0.3.10_canary the metadata warning is no longer displayed.
