Is there any secret way of getting a lot of log data on the web page without clicking the get more data button at the bottom of the page nnnnn times?
From the GUI:
11 dec 2018 03:17: put duplicati-b59269cd903f3457fa4d0a28c5a78298c.dblock.zip.aes
{"Size":51882765,"Hash":"5rQRpnDSqc5kr501o/3RO/I+NZsrnZM++wZ+m+VAlHk="}
That is the only mention of the file in the GUI's remote log from Dec 9 to today.
With "keep all" it didn't crash, and it also uploaded the verification file. Running the verification tool right now; looks like that will take a bit of time.
EDIT: Can I ask the verification tool to only verify one single backup file?
EDIT: I edited the Python file to just verify one file (roughly as sketched below); the result:
Verifying file: duplicati-verification.json
Verifying file duplicati-b59269cd903f3457fa4d0a28c5a78298c.dblock.zip.aes
Traceback (most recent call last):
File "./DuplicatiVerify.py", line 97, in <module>
verifyHashes(os.path.join(argument, f))
File "./DuplicatiVerify.py", line 58, in verifyHashes
for b in bytes_from_file(fullpath):
File "./DuplicatiVerify.py", line 23, in bytes_from_file
chunk = f.read(chunksize)
IOError: [Errno 5] Input/output error
ls -l duplicati-b59269cd903f3457fa4d0a28c5a78298c.dblock.zip.aes
-rw-r--r-- 1 root root 51882765 Dec 11 03:17 duplicati-b59269cd903f3457fa4d0a28c5a78298c.dblock.zip.aes
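For reference, the check I hacked in boils down to something like this (a sketch rather than my exact edit; it assumes duplicati-verification.json is a JSON list of entries with Name and Hash fields, the hash being base64-encoded SHA-256):

import base64
import hashlib
import json
import os
import sys

def verify_single_file(verification_json, filename):
    """Check one remote file against its entry in duplicati-verification.json."""
    with open(verification_json, "r") as f:
        entries = json.load(f)
    folder = os.path.dirname(os.path.abspath(verification_json))
    for entry in entries:
        if entry["Name"] != filename:
            continue
        sha256 = hashlib.sha256()
        with open(os.path.join(folder, filename), "rb") as remote:
            for chunk in iter(lambda: remote.read(64 * 1024), b""):
                sha256.update(chunk)
        return base64.b64encode(sha256.digest()).decode("ascii") == entry["Hash"]
    raise KeyError("%s is not listed in %s" % (filename, verification_json))

if __name__ == "__main__":
    print("OK" if verify_single_file(sys.argv[1], sys.argv[2]) else "HASH MISMATCH")

Called with the path to duplicati-verification.json and the dblock filename; on this particular file it of course hits the same IOError before it ever gets to compare hashes.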
Hmmm… it's that “simple”. The file is… well:
$ sudo cp duplicati-b59269cd903f3457fa4d0a28c5a78298c.dblock.zip.aes test.tmp
cp: error reading 'duplicati-b59269cd903f3457fa4d0a28c5a78298c.dblock.zip.aes': Input/output error
I stopped Minio just to see if it was locking the file, but that didn't change anything.
I'm really, really sorry for taking up your time with this. It looks like a disk problem, which is odd since the computer has been running very stably apart from this single file. That's computers for you: tricks up their sleeve at every corner…
Not that I know of, but an “export full log” feature has been discussed.
The Retention Policy code is relatively new, so you may be running into a bug we haven't addressed yet. I'm guessing that if you look at your Restore list of backups, all the ones that “failed” with “File length is invalid” will actually show up as restorable.
I've tested the whole 7TB disk now, and this single Duplicati file is the only unreadable file on the whole disk. Odd, but that's computers for you. In any case it's a filesystem error and nothing to do with Duplicati. Sorry for taking up your time!
But maybe it would be an idea to check, at some point, why Duplicati crashes and disappears when a file on an S3-style server isn't readable?
Thanks again for all help locating the problem!
You're welcome, and I'm glad you tracked it down. Often it's not clear what happened. This time we can at least end with the assumption that Minio sent something down that Duplicati rejected, maybe a truncated file, though an error like the one cp reported would have been nicer. Minio speaks S3, and for that Duplicati uses an Amazon library, so tracking it through all that to figure out how/if error reporting took place is beyond me…
If you like, you could check if Duplicati.CommandLine.BackendTool.exe gets a partial file, an error, or other.
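Another quick probe, not the Duplicati tool but just a boto3 sketch against the Minio bucket (endpoint, credentials, bucket and key below are placeholders), would show whether S3 itself raises an error or quietly hands back a short body:

import boto3

# Placeholders: substitute your Minio endpoint, credentials, bucket
# and the affected remote filename before running this.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",
    aws_access_key_id="MINIO_ACCESS_KEY",
    aws_secret_access_key="MINIO_SECRET_KEY",
)

key = "duplicati-b59269cd903f3457fa4d0a28c5a78298c.dblock.zip.aes"
obj = s3.get_object(Bucket="backup-bucket", Key=key)
body = obj["Body"].read()

print("Content-Length reported:", obj["ContentLength"])
print("Bytes actually received:", len(body))
# A short read here, or an exception from get_object, would show how
# (or whether) the backend reports the unreadable file.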
This leaves you the awkward question of what to do with the drive. Internal hard-drive defect management describes how bad sectors get remapped. Sometimes vendors also have special tools for their own drives.
smartctl can supply generic information that might help to determine if any additional problems are arising.
How to force a remap of sectors reported in S.M.A.R.T C5 (Current Pending Sector Count)? suggests how sector remapping can be forced by an overwrite. That file is seemingly lost, but you can (if you like) get a better view of where the error starts with wc -c, or by redirecting into dd bs=1 to see what count you get.
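If Python is handier than dd, a rough equivalent (just a sketch) is to read the file in small chunks and note the offset where the I/O error first appears:

import sys

def find_error_offset(path, chunksize=4096):
    """Return (readable bytes from the start, the IOError hit, or None at EOF)."""
    offset = 0
    with open(path, "rb") as f:
        while True:
            try:
                chunk = f.read(chunksize)
            except IOError as e:
                return offset, e
            if not chunk:
                return offset, None
            offset += len(chunk)

if __name__ == "__main__":
    offset, err = find_error_offset(sys.argv[1])
    if err is None:
        print("Read all %d bytes without errors" % offset)
    else:
        print("Read failed after %d bytes: %s" % (offset, err))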
There's no way to rebuild the lost dblock, but you can use the affected command to see what was affected, then delete the bad dblock and run list-broken-files and purge-broken-files (see How to list / purge broken files).
A fresh backup (e.g. export/import/modify) would also work if old versions on this backup are not important.
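To make the delete-and-purge sequence concrete, here is a rough sketch wrapping the CLI from Python; the mono invocation, the backup URL and whatever options you normally pass (passphrase, dbpath and so on) are placeholders to adapt:

import subprocess

# Placeholders: adjust the CLI invocation, backup URL and options to your setup.
CLI = ["mono", "Duplicati.CommandLine.exe"]
BACKUP_URL = "s3://backup-bucket/prefix"
BAD_DBLOCK = "duplicati-b59269cd903f3457fa4d0a28c5a78298c.dblock.zip.aes"

def run(*args):
    cmd = CLI + list(args)
    print("$ " + " ".join(cmd))
    subprocess.check_call(cmd)

# 1. See which source files the bad dblock touches.
run("affected", BACKUP_URL, BAD_DBLOCK)
# 2. Delete the bad dblock from the backend yourself, then list and
#    purge the file entries that are now broken.
run("list-broken-files", BACKUP_URL)
run("purge-broken-files", BACKUP_URL)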