Clean-up failing with "size was expected"

Welcome to the forum @jshank

What’s interesting about the size is that, expressed in hexadecimal, the expected 3217633 became 3220000.
Generally, when a wrong size is such a round number, I wonder if the filesystem or SMB did it.

It would be interesting to see what the extra data is, for example whether it’s NUL characters.
Linux has good built-in alternatives to a hex editor, e.g. dd or hexdump could show…
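
A minimal sketch (the filename is a placeholder, and I’m assuming the 3217633 above is the expected length, in hex):

  # dump from the expected end-of-file onward; -s skips to that byte offset
  hexdump -C -s 0x3217633 the-affected-file.zip | head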

What does a .zip tool think about these files? Does trimming one to the correct size help?
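
For example (again assuming the expected length in hex, and a bash-style shell for the hex arithmetic):

  unzip -t the-affected-file.zip           # test archive integrity as-is
  cp the-affected-file.zip trimmed.zip
  truncate -s $((0x3217633)) trimmed.zip   # cut the copy back to the expected length
  unzip -t trimmed.zip                     # does it pass once trimmed?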

The other interesting thing about the length error is that Duplicati checks it numerous times.
What length do other tools show? Might the filesystem list one length but give another to reads?
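
A quick way to compare the two (GNU coreutils assumed; the filename is again a placeholder):

  stat -c %s the-affected-file.zip   # length according to the file metadata
  wc -c < the-affected-file.zip      # length according to an actual read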

Checking from both Linux and Windows might also reveal something, as may the timestamps. Possibly you’ll also see a pattern, e.g. is there a Windows restart anywhere nearby?
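
For the timestamps, something like:

  # mtime/ctime may hint at when the file grew
  stat the-affected-file.zip
  # from Windows, "dir /T:W" in cmd shows last-write times for comparison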

I don’t know if this is related to your unhandled exception, but the stack looks kind of like:

Duplicati crashing when starting backups
which at first seemed to happen only on Linux backing up to Storj, but later reports vary more.

The developers are looking into it (although mostly on the Storj side), but I don’t think this one got solved. Or maybe I’m totally off here. Regardless, developer help would be useful for both of these issues.

The SMB unreliability problem is longstanding. The last attempt to handle it was with this:

  --disable-length-verification (Boolean): Disable length verification
    As an extra precaution the uploaded file length will be checked against the local source length.
    * default value: false

That’s supposed to check the length right after upload, by default. So how did it go wrong here?
There are also length checks before and after the backup, but actual downloads are heavy enough that one can’t download every file every time. One can certainly test as needed, though:

The TEST command
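
A sketch of how that might run (the storage URL and passphrase are placeholders; “all” downloads and verifies every remote volume, so it can take a while):

  duplicati-cli test "file:///mnt/smb-backup" all --full-remote-verification=true --passphrase=<your-passphrase>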
