dateTime is invalid and Kind is Local


#1

Hi,

I set up a backup system with two Netgear ReadyNAS Ultra 2 (OS6) devices running Debian 8 + Minio + Duplicati for S3-like backups. One device is located in the US, the other in France, for cross-backup.

I noticed today that the US device isn’t able to complete its backup task to the FR device; it fails with:

dateTime is invalid and Kind is Local

Fatal error
System.ArgumentException: dateTime is invalid and Kind is Local
  at System.TimeZoneInfo.IsDaylightSavingTime (DateTime dateTime) [0x00000] in <filename unknown>:0 
  at System.TimeZoneInfo.GetUtcOffset (DateTime dateTime) [0x00000] in <filename unknown>:0 
  ...

It seems Duplicati didn’t like the DST change that occurred in the US this weekend, which won’t happen in France for another two weeks.
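For reference, here is a quick sketch (in Python, not Duplicati’s actual C# code) of what makes a “spring forward” wall-clock time problematic: 02:30 on the US transition date simply never existed, because clocks jumped from 02:00 straight to 03:00.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

tz = ZoneInfo("America/New_York")
# 2018-03-11 02:30 falls inside the DST gap in US Eastern time.
skipped = datetime(2018, 3, 11, 2, 30, tzinfo=tz)

# A nonexistent local time does not survive a round trip through UTC:
roundtrip = skipped.astimezone(timezone.utc).astimezone(tz)
print(roundtrip)  # 03:30 local, not 02:30 - the wall-clock time was skipped
```

Python silently normalizes such values; Mono/.NET’s `TimeZoneInfo.IsDaylightSavingTime()` instead throws `ArgumentException("dateTime is invalid and Kind is Local")` for them, which is exactly the error in the stack trace above.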

What does this imply?

Running 2.0.2.1_beta_2017-08-01.

I’m pretty sure the issue will solve itself in two weeks.

Regards,


#2

Hi @glaurent, welcome to the forum!

That’s a new error for me. I’m pretty sure there are others backing up to destinations “across the pond”, so I’m a bit surprised this hasn’t come up before.

I do know some updates happened with date / time handling in versions newer than yours… Would you be willing to update to one of them and test if the error still happens? I believe a new Experimental version is coming quite soon.


#3

Thanks :grin:

I tried updating to Canary, but couldn’t get anything more recent than 2.0.2.10_canary_2017-10-11 working.

Anyway, it doesn’t solve the issue at all.

Full stack:

Fatal error
System.ArgumentException: dateTime is invalid and Kind is Local
  at System.TimeZoneInfo.IsDaylightSavingTime (DateTime dateTime) [0x00000] in <filename unknown>:0 
  at System.TimeZoneInfo.GetUtcOffset (DateTime dateTime) [0x00000] in <filename unknown>:0 
  at Newtonsoft.Json.Utilities.DateTimeUtils.GetUtcOffset (DateTime d) [0x00000] in <filename unknown>:0 
  at Newtonsoft.Json.Utilities.DateTimeUtils.WriteDateTimeString (System.Char[] chars, Int32 start, DateTime value, Nullable`1 offset, DateTimeKind kind, DateFormatHandling format) [0x00000] in <filename unknown>:0 
  at Newtonsoft.Json.JsonTextWriter.WriteValueToBuffer (DateTime value) [0x00000] in <filename unknown>:0 
  at Newtonsoft.Json.JsonTextWriter.WriteValue (DateTime value) [0x00000] in <filename unknown>:0 
  at Newtonsoft.Json.JsonWriter.WriteValue (Newtonsoft.Json.JsonWriter writer, PrimitiveTypeCode typeCode, System.Object value) [0x00000] in <filename unknown>:0 
  at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.SerializePrimitive (Newtonsoft.Json.JsonWriter writer, System.Object value, Newtonsoft.Json.Serialization.JsonPrimitiveContract contract, Newtonsoft.Json.Serialization.JsonProperty member, Newtonsoft.Json.Serialization.JsonContainerContract containerContract, Newtonsoft.Json.Serialization.JsonProperty containerProperty) [0x00000] in <filename unknown>:0 
  at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.SerializeValue (Newtonsoft.Json.JsonWriter writer, System.Object value, Newtonsoft.Json.Serialization.JsonContract valueContract, Newtonsoft.Json.Serialization.JsonProperty member, Newtonsoft.Json.Serialization.JsonContainerContract containerContract, Newtonsoft.Json.Serialization.JsonProperty containerProperty) [0x00000] in <filename unknown>:0 
  at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.SerializeObject (Newtonsoft.Json.JsonWriter writer, System.Object value, Newtonsoft.Json.Serialization.JsonObjectContract contract, Newtonsoft.Json.Serialization.JsonProperty member, Newtonsoft.Json.Serialization.JsonContainerContract collectionContract, Newtonsoft.Json.Serialization.JsonProperty containerProperty) [0x00000] in <filename unknown>:0  

I’m just wondering whether the backup was running at the time of the DST change, and whether that messed up some files.

I will try to upgrade to the latest Canary, but I have another issue to open for that…


#4

Sorry to hear that - and for the delay in my response.

I don’t suppose this (backup or update) magically started working for you while I was away, did it?


#5

Nope, still looping on this error…

Is there any way to fix or redo the backup process? I’m not even able to tell which step is failing.

For now, I’m waiting for the DST change in France on March 25 to see if it fixes itself by magic.


#6

Does this error only happen on backups or does it happen with other commands or a test restore of even a single file?


#7

Backup and Verify files.

Restore still seems to work.


#8

Just as a stupid test, have you tried adjusting your computer clock (or even DST settings) to see if the problem goes away?


#9

I changed my local timezone to get back to the initial 6-hour gap, but I still get the same issue.

Well, the stack trace points at the IsDaylightSavingTime method, which behaves the same way in almost all American timezones. I will try other combinations later.
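To see which timezones would even be affected, here is a small hypothetical check (again a Python sketch, not Duplicati code): a zone that skips the 2018-03-11 02:30 wall-clock time is one where Mono’s “dateTime is invalid and Kind is Local” error could trigger, while a zone without that transition would not.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def is_skipped(zone_name, naive_dt):
    """True if naive_dt is a nonexistent (skipped) wall-clock time in the zone."""
    tz = ZoneInfo(zone_name)
    local = naive_dt.replace(tzinfo=tz)
    # A nonexistent local time does not survive a round trip through UTC.
    return local.astimezone(timezone.utc).astimezone(tz).replace(tzinfo=None) != naive_dt

gap = datetime(2018, 3, 11, 2, 30)
print(is_skipped("America/New_York", gap))  # True  - observes US DST
print(is_skipped("America/Phoenix", gap))   # False - Arizona has no DST
print(is_skipped("Europe/Paris", gap))      # False - France changed on March 25
```

So among American zones, only the few without DST (Arizona, most of the Caribbean, etc.) would behave differently, which matches what I’m seeing.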


#10

I also tried setting the clock back to before the DST change; still the same issue.

I’m also trying a “Fix database” operation.


#11

Well, the DST change has now happened on the remote server too, but I still have the issue.

Any idea how to fix it?


#12

I guess we should assume DST was just a coincidence…

Do you know for how long the US -> FR backup was working before the initial dateTime issue happened?

Were there any other changes around the time of the error? For example, was there a ReadyNAS or Debian update?


#13

The backup is pretty slow; the FR device’s bandwidth is 2-3 Mbps.
The last backup increment (I add data step by step) was around 40 GB, which takes ~3 days.

No system change / update during the backup process.

I tried a delete + repair of the database, but it’s still failing.

I’m not familiar enough with Duplicati’s mechanisms yet; I have no idea how to fix this, or how to roll back / delete the current backup so I can restart it.


#14

I’m not sure where else to look for the issue, but before deleting the current backup I’d suggest making a second one with just a few files (pointing to a different destination folder / bucket) and seeing if the error happens there as well.

Since you’ve already verified restores work, let’s not be too hasty to delete your existing backup job, which we know can be used for restores if any are needed while a new backup is “building up” its history.


#15

Well, I don’t like this “solution”, but I ended up deleting the remote bucket to redo the whole backup (I have another destination for the same set).

Quite a shame, because it will take another 3-4 weeks to back up everything :sweat_smile:, due to the remote bandwidth.

I succeeded in updating to the latest 2.0.3.5_canary_2018-04-13, but I still have the issue.

And I don’t like leaving issues unexplained… I hope it won’t happen again.


#16

Agreed.