Error: The socket has been shut down

Verification succeeded!

Getting rid of the files in B2 ended up being a bit of a pain. I moved them to a new folder, but I was still failing verification on the same 32 files. So I moved the folder into another bucket and tried again… STILL failing on the 32 files. I ended up searching the bucket for the file name using Cyberduck, and in each case it found 2 (?!) files with that name in the bucket, one with a meaningful file size and the correct timestamp, and another with a file size of 0 and a timestamp from today. Maybe it’s some sort of “recycle bin” type of functionality in B2? In any case, I deleted each of those files, one by one, until they were all gone. I then re-ran Verify and got the successful result.
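For anyone else who hits this: the extra 0-byte files sound like B2’s file-versioning behavior, where deleting or moving a file can leave a hidden “hide marker” version behind while the old version is kept. A minimal sketch of checking for and removing those versions, assuming the Backblaze `b2` command-line tool and placeholder bucket/file names:

```sh
# Sketch only: bucket name, prefix, file name and file ID are placeholders.
b2 ls --versions my-bucket duplicati-folder    # lists every version, including 0-byte hide markers
b2 delete-file-version <fileName> <fileId>     # removes one specific version for good
```

Cyberduck or the B2 web UI can do the same thing file by file, which is what ended up working here.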

Next up, I’m going to test restores (using no-local-blocks). I’ll report back the results!

EDIT: Where do I set no-local-blocks on a restore? This is all I see in the UI:

Advanced options cannot be set when restoring files #2185

The workaround is to edit the Advanced options into the backup job before asking that job to do a Restore.

On Direct restore from backup files, which has no job to edit, it gives a spot to type options in.

So I edit the backup definition and add no-local-blocks to the advanced options there? And then when I run the restore, it will obey the advanced options set in the backup definition?

Yes. Advanced options in the job are not just for backup, but not all make sense for a given operation.
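For reference, the command-line equivalent would look roughly like the sketch below; the storage URL, file pattern and restore path are placeholders, not values from this thread:

```sh
# Sketch only: <storage-URL>, the pattern and the restore path are placeholders.
Duplicati.CommandLine.exe restore <storage-URL> "*/Photos/*" \
  --restore-path="/tmp/restore-test" \
  --no-local-blocks=true    # pull all data from the destination instead of reusing local source blocks
```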

So 32 duplicates somehow? B2 does allow File Versions by default. I don’t know if that may be related.

Sorry if I am pointing out something that has already been said in this very overwhelming thread, but this kind of message is most likely to happen with upload errors: Duplicati does (by default) 5 retries, and each time it renames the failed file. So it seems to confirm that the initial cause of the problem was linked to backend trouble.
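If it helps to see where that behavior comes from, these are the advanced options that govern it; a sketch with the defaults, using a placeholder storage URL and source path:

```sh
# Sketch only: <storage-URL> and the source path are placeholders; values shown are the defaults.
Duplicati.CommandLine.exe backup <storage-URL> /path/to/source \
  --number-of-retries=5 \
  --retry-delay=10s    # wait between retry attempts of a failed transfer
```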

My restore test was successful… I restored a folder of 38 photo files, all of which work as expected and match the source data.

Is it reasonable for me to conclude that this backup job is functional for restore purposes, or are there any more tests I should conduct?

If you still recall when and where it was, I suppose you could test-restore that log file that got trashed somehow. If not, it didn’t sound like that’s a file whose loss would worry you. As always, more restore testing can be better.

There aren’t a whole lot of test buttons and commands. Repair will probably say files look as expected, although that’s only judging by the directory listing. The LIST-BROKEN-FILES command may similarly be happy.

The TEST command, which you can run from the GUI Commandline, is mostly meant for downloading files from the destination to actually check that the contents are good (by default, just a hash check). A listing can’t show that.
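As a rough sketch of what that looks like outside the GUI (the storage URL is a placeholder; the GUI Commandline takes the same arguments, with the URL already filled in from the job):

```sh
# Sketch only: <storage-URL> is a placeholder. "all" samples every remote volume
# instead of the default single set; drop it to test just one sample.
Duplicati.CommandLine.exe test <storage-URL> all --full-remote-verification=true
```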

Verifying backend files does this too, but the sample size is one set, usually 3 files. It can be raised, e.g. by backup-test-samples or backup-test-percentage, and that can do what the test command does, just spread out over the regular backups…
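If you go that route, these would be added as advanced options on the backup job; a sketch with illustrative values (not recommendations):

```sh
# Sketch only: add one of these to the backup job's advanced options.
--backup-test-samples=10      # verify 10 sample sets after each backup instead of the default 1
--backup-test-percentage=5    # or verify roughly 5% of the backend files on each run
```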

Don’t try to Recreate, as those probably-bad dindex files are still there. Do you want to work on them, maintain two backups (one historical, one new), or something else? Two backups is not a bad idea if the files are important enough; even better would be for the second backup to use another tool.

I was able to restore it from the 5/31 backup set. It was readable and seemed complete. It doesn’t match the live version of the file, but that’s expected since it’s being handled by logrotate. (I believe the general idea is that it copies the contents of *.log to *.log.1 on a weekly basis, and then the following week it moves *.log.1 to *.log.2 and zips it up.) Edit: I did compare the restored file against *.log.2 and it does indeed match.
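In case the rotation scheme is confusing, here is a rough shell approximation of the weekly behavior described above (file names are illustrative, and this is not the actual logrotate configuration from this system):

```sh
# Rough sketch of one weekly rotation pass; "app.log" is a placeholder name.
mv app.log.1 app.log.2 && gzip app.log.2   # last week's copy becomes *.log.2 and gets compressed
cp app.log app.log.1                       # current log is copied to *.log.1
: > app.log                                # the live log is then truncated in place (copytruncate-style)
```

That is consistent with the restored 5/31 copy matching *.log.2 rather than the live file.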

But you’re right, that file isn’t important.

I think I’m happy to keep this particular backup job as a historical, and just proceed with my new backup job as the current data. That is to say, I think I’m satisfied at this point.

I want to thank everybody who participated in this effort… it’s greatly appreciated!! I’ve learned a lot. But most importantly, my files are safe!


I agree, it’s definitely been overwhelming!

However, in this case, I believe you did miss something. These errors came after restoring the database backup from 5/31, which was taken before the initial error happened. The extra files were expected because they were put there by Duplicati on 6/1, before the error occurred.

Simply for the sake of clarity for anybody who lands on this thread experiencing the same error, the original error was some sort of communication problem between my system and Backblaze B2. The communication error was temporary, and the connection resumed working the following day.

The rest of this thread covered two things:

  1. Fixing the database inconsistencies created by the communication error
  2. Understanding why the “rebuild database” needed to use the dblock files instead of the dlist/dindex files