Failed: The response ended prematurely, with at least 53457981 additional bytes expected. (ResponseEnded)
Details: System.Net.Http.HttpIOException: The response ended prematurely, with at least 53457981 additional bytes expected. (ResponseEnded)
We are getting this error on many backups. Running a repair and rerunning the task (sometimes 3 or 4 times) will eventually get past the error and produce a successful task execution.
How do we stop this error from occurring? Backups are supposed to be entirely automated, and having to manually rerun them when this error occurs obviously ruins that.
To be exact and give more info: this particular error is unique to 2.1.x.x. Historically I could not say whether it has worked more reliably, as this version has only been tested on my systems for a couple of weeks now.
2.0.7.1 was used before that for many years, but it is having another type of new problem (documented in another thread), which is the main reason I have updated, to see if it would help.
It can absolutely work, though: 20-30 other configurations (one per server) just like this are currently active for testing, and only a handful of them fail with this type of error. Which ones fail is extremely random; ones that have worked before can randomly return this error and be fine the next day. In most instances simply running the task manually a couple of times will get past the error, but at the scale I have, that is not ideal and very time consuming.
All are configured to use the same remote, with an identical local configuration that has again been used for many years (the same one used for 2.0.7.1 and prior versions), so there are no visible issues, or even changes that could have triggered the error. Many live services are running, so I would know immediately if there were a larger network problem (and this issue happens across different datacenters either way, so that would not be the case). It is all the more confusing because, as soon as the email with the error comes through, we can go onto the server and run the task manually, and chances are it will finish fine.
If you have any ideas I’d be happy to test them across a few instances.
If I understand this correctly, you are saying that you have been running a similar setup for a long time and never saw the error with prior versions, but you are seeing it now on multiple machines with 2.1.0.2?
Correct; probably over 8 or 9 years at this point with this configuration, IIRC, with only very minor revisions to suit requirements. This is the first time we are seeing the “response ended prematurely” error, following the 2.1.x.x testing; we never had that on 2.0.7.1 or prior.
There was always the occasional issue with 2.0.7.1 and earlier, as happens with any software, but only ever on a very small scale and nothing that could not be fixed very easily and permanently. We were recently getting the “file length is invalid” error documented in my other thread, an error we had never seen before, on a large scale with 2.0.7.1, which is why we are trialling the new version, but it seems equally problematic for us.
I am at a loss as to why, exactly. We have been trying to work on it for over a month now, and our typical list of fixes is not 100% effective; when a fix does work, chances are it just breaks again the next day. I have been testing smaller volume sizes, from 25MB to 100MB, but so far it seems similarly problematic.
Is this with a new backup? Changing the size on an existing one will only change new volumes.
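For reference, the remote volume size is the --dblock-size option, and it shows up in the exported command line. A minimal sketch of where it sits, assuming a placeholder destination URL, source path, and passphrase (the executable name differs per platform and version):

```
Duplicati.CommandLine.exe backup \
  "s3://mybucket/backup?auth-username=KEY&auth-password=SECRET" \
  "/data/to/backup" \
  --dblock-size=25MB --passphrase=PASSPHRASE
```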
What is the message pattern, anyway? The topic title talks of about 53MB left to go. Was that usual?
Certainly on a new 25MB backup I would expect smaller values. What is the pattern of the sizes?
There are quite a few test tools available that could perhaps be applied.
BackendTool is manually driven; e.g., you can ask for a get of a file, maybe one that showed a problem.
BackendTester will do an automated test against an empty folder. It has a whole lot you can configure.
The default is a reasonable starter config. IIRC it finishes in 10 or 20 minutes.
For a suitable URL, you can start with one from Export As Command-line.
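As a rough sketch of invoking them (exact executable names vary by platform and version, the bucket, folder, and file name below are placeholders, and the real URL should be taken from Export As Command-line):

```
# Manually fetch one volume, e.g. a file that showed a problem:
Duplicati.CommandLine.BackendTool.exe get \
  "s3://mybucket/backup?auth-username=KEY&auth-password=SECRET" \
  duplicati-b123.dblock.zip.aes

# Automated round-trip test against an EMPTY folder on the same remote:
Duplicati.CommandLine.BackendTester.exe \
  "s3://mybucket/empty-test-folder?auth-username=KEY&auth-password=SECRET"
```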
Both. Most are already set up for quicker testing, so as you say they would still have the larger volumes on the remote, and they likely continue to fail because of that. Two others are being tested, one with a 100MB volume size and the other at 25MB, both currently on a fresh remote folder; they worked, but I will need to see how they go after some more time. I will likely check them on Monday to see if they were reliable over the weekend, and if they were, roll the change out to some more servers to test at a larger scale.
The exact figure can vary, if that is what you mean. On 3 different ones today, for example, I am seeing 157354301, 417302317 and 295089325, so there is no pattern to note.
That does point a fairly damning finger towards the 2.1.x.x update, I think.
But… could the invalid-length problem on 2.0.7.1 and this error be two sides of the same issue? If 2.1.x.x detects the premature close while 2.0.7.1 happily returns a partial file, then you would see two different error messages for the same underlying problem: data is not being transferred completely.
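To illustrate that hypothesis, here is a minimal standalone C# sketch (not Duplicati's actual code; the URL and file name are made up): a strict HTTP stack fails mid-read when the connection closes early, while a lenient one only notices, if at all, by comparing the byte count to Content-Length afterwards.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class TruncationCheck
{
    static async Task Main()
    {
        using var client = new HttpClient();
        // Hypothetical URL standing in for one remote volume.
        using var response = await client.GetAsync(
            "https://example.com/duplicati-b123.dblock.zip.aes",
            HttpCompletionOption.ResponseHeadersRead);

        long? expected = response.Content.Headers.ContentLength;
        await using var stream = await response.Content.ReadAsStreamAsync();

        var buffer = new byte[81920];
        long received = 0;
        int read;
        // A strict client (modern System.Net.Http) throws HttpIOException
        // (ResponseEnded) inside ReadAsync if the connection closes early.
        while ((read = await stream.ReadAsync(buffer)) > 0)
            received += read;

        // A lenient client reaches this point with a short file and only
        // notices if it compares the count against Content-Length:
        if (expected is long len && received < len)
            Console.WriteLine($"Truncated: {len - received} bytes missing.");
        else
            Console.WriteLine($"Received {received} bytes; looks complete.");
    }
}
```

Under that reading, the HttpIOException on 2.1.x.x and the invalid-length report on 2.0.7.1 would be the strict and lenient views of the same truncated transfer.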
One detail: iDrive e2 support is implemented through the AWS S3 backend, so the same library is used to connect to other S3 destinations, where this error is not being reported.
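For illustration, assuming the s3-server-name URL option is what selects the endpoint (verify against your own exported URL; the bucket and host names here are placeholders), the only difference between e2 and another S3 destination is the server name:

```
# iDrive e2 through the S3 backend:
s3://mybucket/backup?s3-server-name=ENDPOINT.idrivee2-NN.com&auth-username=KEY&auth-password=SECRET

# Same backend, AWS endpoint:
s3://mybucket/backup?s3-server-name=s3.amazonaws.com&auth-username=KEY&auth-password=SECRET
```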
Have you been in contact with iDrive to hear if anything on their side could explain it?