Backups from Linux/Mono to AWS S3 fail with "The socket has been shut down"

Duplicati 2.0.5.1 is installed on Linux Mint 20; the Linux admin installed the software, with Mono, on the machine. The Linux user has no admin rights on the machine and simply wants to back up their data to their AWS S3 account.

The user has buckets in AWS S3. After the backup form is filled out, Duplicati prepares the data fine, then displays “Waiting for upload to finish”. Ominously, it sits there for more than a few seconds, and the upload then consistently fails with the error message:

“One or more errors occurred. (Unable to read data from the transport connection: The socket has been shut down. (Unable to read data from the transport connection: The socket has been shut down.) (One or more errors occurred. (Unable to read data from the transport connection: The socket has been shut down.)))”

The AWS IAM user acting as the S3 agent is assigned the AWS managed policy “AmazonS3FullAccess”.

On AWS, the user’s key choices are:

  • no versioning (default for AWS S3);
  • server-side encryption using AWS KMS (the default would be no server-side encryption; one way to confirm what the bucket actually enforces is sketched after this list);
  • no public access (default policy for AWS S3);
  • no specific bucket policy;
  • only the bucket owner (the AWS account) has list/write & read/write access to the bucket (default for AWS S3?);
  • bucket storage class is set to Standard (the corresponding Duplicati option is left at “default”).
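
A quick, hedged way to confirm the bucket's default-encryption setting outside the console (a sketch assuming Python and boto3 are available and the same AWS credentials are configured locally; the bucket name is a placeholder):

import boto3
from botocore.exceptions import ClientError

# Check the bucket's default-encryption configuration. Bucket name is a placeholder.
s3 = boto3.client("s3")
try:
    enc = s3.get_bucket_encryption(Bucket="my-backup-bucket")
    print("Default encryption:", enc["ServerSideEncryptionConfiguration"]["Rules"])
except ClientError as err:
    if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
        print("No default encryption configured on this bucket.")
    else:
        raise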

In Duplicati, the user’s key choices are:

  • backup job has a passphrase;
  • use SSL = yes;
  • client library = Amazon AWS SDK;
  • remote volume size = 50 MB;
  • smart retention = yes.

Even so, Duplicati’s “Test connection” replies with this odd error message:
"User: arn:aws:iam::XXXX:user/YYYY is not authorized to perform: iam:GetUser on resource: user YYYY "

Duplicati’s logs in the GUI shed no light on the subject. The general log is blank. The remote log shows one “list” command and six “put” commands.

How do I debug this issue?
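
(Aside on the “iam:GetUser” message above: it indicates the connection test is calling iam:GetUser, an action that “AmazonS3FullAccess” does not cover, so that particular warning may be unrelated to the socket error. If you want the test to pass cleanly, a minimal, hedged option (account id and user name below are placeholders) is an inline policy like this, applied here with boto3:)

import json
import boto3

# Hedged option: allow the IAM user to call iam:GetUser on itself so the
# connection test's GetUser call stops being denied. Account id and user name
# are placeholders.
iam = boto3.client("iam")
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "iam:GetUser",
        "Resource": "arn:aws:iam::123456789012:user/duplicati-agent",
    }],
}
iam.put_user_policy(
    UserName="duplicati-agent",
    PolicyName="AllowGetOwnUser",
    PolicyDocument=json.dumps(policy),
)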

Is there an IAM policy applied to the bucket that grants this user the necessary access? Your 4th bullet point makes it sound like there is not one. I assume this user is not the AWS account owner, so your 5th bullet point doesn’t apply.

I no longer use S3 with Duplicati, but when I did this is the policy I used:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation",
                "s3:ListBucketMultipartUploads"
            ],
            "Resource": "arn:aws:s3:::bucketname",
            "Condition": {}
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:AbortMultipartUpload",
                "s3:DeleteObject",
                "s3:DeleteObjectVersion",
                "s3:GetObject",
                "s3:GetObjectAcl",
                "s3:GetObjectVersion",
                "s3:GetObjectVersionAcl",
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:PutObjectVersionAcl"
            ],
            "Resource": "arn:aws:s3:::bucketname/*",
            "Condition": {}
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "*",
            "Condition": {}
        }
    ]
}

Make sure you replace the two instances of bucketname with your actual bucket name.
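
If you want to sanity-check the keys and bucket access outside of Duplicati/Mono entirely, a rough boto3 sketch like this can tell you whether plain S3 calls work at all (it assumes Python and boto3 are available and that the IAM user's access key/secret are configured via environment variables or ~/.aws/credentials; the bucket name and object key are placeholders):

import boto3

# Quick permission check with the same IAM user's credentials, independent of
# Duplicati/Mono. Bucket name and object key are placeholders.
s3 = boto3.client("s3")
s3.put_object(Bucket="bucketname", Key="duplicati-test/probe.txt", Body=b"hello")
listing = s3.list_objects_v2(Bucket="bucketname", Prefix="duplicati-test/")
print([obj["Key"] for obj in listing.get("Contents", [])])
s3.delete_object(Bucket="bucketname", Key="duplicati-test/probe.txt")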

Thank you.

To what is this policy applied? The bucket or the IAM user? Or both?

I’m slightly confused: the Duplicati backups that I’ve run for years have never needed such a policy, yet they worked.

There is no IAM policy applied to the bucket. This is in common with existing buckets that Duplicati/Windows have used for years without such policy and without problem.

What is the conflict between the IAM user having “AmazonS3FullAccess” and a separate (duplicate? spurious?) IAM policy on the bucket?

I’ve tried another test of the above issue in a Linux Mint Xfce virtual machine running on a Windows host.

Same AWS IAM user/agent, two different buckets, both created only for testing purposes. Outcomes:

Test 1, bucket 1: bucket defined with default parameters only. Duplicati worked perfectly (!!!). Compared to the original post, the only difference appears to be the server-side encryption used in AWS in the original setup.

Test 2, bucket 1: failed. Server-side encryption switched on, using an AWS KMS key. Duplicati said, “One or more errors occurred. (Unable to read data from the transport connection: Connection reset by peer. (Unable to read data from the transport connection: Connection reset by peer.) (One or more errors occurred. (Unable to read data from the transport connection: Connection reset by peer.)))”

Test 3, bucket 1: failed. Server-side encryption still on, this time using the standard Amazon S3 key. Duplicati said the same thing.

Test 4, bucket 2: same setup as test 1 on bucket 1, all defaults, no server-side encryption. But it failed! Duplicati said, “One or more errors occurred. (Unable to read data from the transport connection: Connection reset by peer. (Unable to read data from the transport connection: Connection reset by peer.) (One or more errors occurred. (Unable to read data from the transport connection: Connection reset by peer.)))”

Edit: while cleaning up, I tried to delete test bucket 2. AWS told me it wasn’t empty… Ummm. Hold on. I get an error message that sounds slightly fatal, yet the data is in AWS?

Edit #2: this is a can of worms. So:

Test 5, bucket 2: added a few more files. Yes, AWS got them. I can see the new data and new timestamps in the S3 console. But Duplicati still gives the same error message. And Duplicati reports that this test job has never completed a successful backup. So…:

Test 6, bucket 2: attempted to restore one file. Duplicati got the file list, I selected the file, clicked on restore. Duplicati said, “Warning: no files restored.” Hmm. That’s hardly a warning: I’d say that was a fatal error.

Test 7, bucket 2: attempted to repeat test 6. This time, Duplicati failed to show the list of files to restore that it did in test 6.
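
(One aside on test 2, and possibly test 3 if the “standard key” there was the AWS-managed KMS key rather than SSE-S3: uploading into a bucket that enforces a customer-managed KMS key also needs KMS permissions such as kms:GenerateDataKey, which “AmazonS3FullAccess” does not include. The connection-level errors above suggest that is not the whole story here, but it is a variable worth ruling out. A hedged probe, with placeholder names, assuming Python/boto3 and the same credentials:)

import boto3

# Hedged probe for the SSE-KMS variable: an explicit KMS-encrypted upload.
# Bucket name and key alias are placeholders. An AccessDenied here would point
# at missing KMS permissions; a connection reset points back at the transport.
s3 = boto3.client("s3")
s3.put_object(
    Bucket="test-bucket-1",
    Key="kms-probe.txt",
    Body=b"probe",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/my-duplicati-key",  # placeholder alias
)
print("put with SSE-KMS succeeded")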

The policy would be attached to the IAM user.

I logged in to my dormant AWS account to refresh my memory on how this works. If you attached the “AmazonS3FullAccess” policy to the IAM user, that is more than sufficient. It should work. (It’s too broad, in my opinion… I used the policy I showed above because I used a more careful approach of allowing an IAM user limited access to just the needed bucket.)

I set up a test and had no issues with S3 server-side encryption enabled or disabled. It should be transparent to Duplicati.

Your “connection reset by peer” error makes me wonder if you have something more fundamental going on that is interrupting the connection to S3.
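
One way to narrow that down (a sketch, assuming Python and boto3 are available on the same Linux machine, with a placeholder bucket name) is to push a Duplicati-sized object of roughly 50 MB to the same bucket with a non-Mono client; if that transfer is also reset, the problem is likely in the network or TLS path rather than in Duplicati itself:

import os
import boto3

# Upload a ~50 MB test object outside Duplicati/Mono to see whether large
# transfers to the same bucket also fail. Bucket name and paths are placeholders.
test_file = "/tmp/duplicati-50mb.bin"
with open(test_file, "wb") as f:
    f.write(os.urandom(50 * 1024 * 1024))

s3 = boto3.client("s3")
s3.upload_file(test_file, "bucketname", "connectivity-test/duplicati-50mb.bin")
print("upload finished")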

I have now completed a series of tests on a physical Linux machine with Duplicati against: i) one local storage target; ii) one Google Drive (via OAuth); iii) three S3 services (AWS S3, Backblaze S3-compatible and Linode S3-compatible).

Only the local storage worked flawlessly.

The Google Drive job did work, but encountered a bug. Specifically, Duplicati failed to find a pre-existing folder. Duplicati then offered to create the folder… and thus created a second folder with the same name. Duh.

None of the three S3 jobs worked. Worse, they all produced different error messages and different outcomes, which obscured any chance of further diagnosis from the GUI.

In the case of one S3 job, the error messages suggest that Duplicati’s failure to find pre-existing folders is a root cause, compounded by Duplicati trying to create folders on a host where it has no rights to do so (yes, the hosts do appear to differ…). So if the S3 host requires the user to create folders in the host’s console, Duplicati fails to find them and then goes off at a tangent trying (and failing) to create folders on a host where it rightfully has no rights to create them.
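
(For context, and as an aside rather than a claim about any particular host: S3 and most S3-compatible services have no real folders. A “folder” is just a shared key prefix, and a console’s “create folder” button typically writes a zero-byte object whose key ends in “/”. A small illustration with placeholder names, assuming Python/boto3:)

import boto3

# Illustration only: "folders" in S3 are just key prefixes. Bucket name and
# keys are placeholders.
s3 = boto3.client("s3")
# Writing an object under a prefix implicitly "creates" the folder:
s3.put_object(Bucket="bucketname", Key="backups/machine1/probe.txt", Body=b"x")
# A console's "create folder" button usually does roughly this:
s3.put_object(Bucket="bucketname", Key="backups/machine2/", Body=b"")
# Listing by prefix is how a client "finds" a folder:
resp = s3.list_objects_v2(Bucket="bucketname", Prefix="backups/machine1/")
print([obj["Key"] for obj in resp.get("Contents", [])])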

The full testing script - along with screenshots etc. - is downloadable from https://drive.google.com/file/d/1w-WX1Y4f0rW8VTABqdT90zgt_esygBcE. The file “findings…” is the starting point.

What is the next step for diagnosis?

I must have Duplicati working properly on an S3 host by 30Apr2021.

Which target do you ultimately want to use? We should focus on getting that to work for you.

Amazon S3 works great for me and many other users. Backblaze B2 works great using the native B2 support in Duplicati. I don’t know how many have tested B2’s relatively new S3 support with Duplicati, but if you want to use B2 I don’t see why you couldn’t just use the native B2 support in Duplicati.

Magic words!!

I was so busy testing S3 stuff that I overlooked the entry for “B2 cloud storage”. I didn’t connect it to “Backblaze B2”. Duh.

The backups now use the B2 cloud storage protocol and are backing up flawlessly.

Configuring the B2 backups in both Backblaze and Duplicati was considerably easier than fiddling with the details of S3/S3-compatibles.

Thank you for your brilliant support and for suggesting the solution.

I still can’t explain why AWS S3 was impossible to use from this Linux machine, whereas it worked flawlessly on the same machine when it ran Windows 10. I think this does require further investigation, because it’s not right that three different S3 providers all fail. It was a surprise to see that the S3-compatibles have so many differences (notably, the denial of rights to create remote folders). I’ll leave the link to the test documentation open until 30Apr2021 for others to download and browse.


Hi *, I suspect it’s not the precise issue the OP encountered, but since search led me here, I’m bumping/necro’ing this one.

I got this error ("The socket has been shut down") when setting up Duplicati with AWS S3.
After some trial and error, I think the problem was the specification of the “subfolder” in the S3 bucket that the backup was directed to.
My configuration was like this (I had accepted the default here):
[Page 2 “Destination” → Backup Destination → FolderPath]: “/home/alex”

A copy of the configuration with an empty string there instead worked.
(I had to repair the database in between, since changing this entry seems to confuse Duplicati.)
I suspect that object key names in S3 should not start with a “/”, but I have not found hard evidence so far.
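
(To illustrate the suspicion, a hedged sketch with a placeholder bucket name, assuming Python/boto3: a key that starts with “/” is a valid but different object, so a listing whose prefix lacks the leading slash comes back empty, which would look a lot like the behaviour above. This is an assumption about the symptom, not a statement about Duplicati’s internals.)

import boto3

# Keys with and without a leading "/" are different objects, so a prefix
# mismatch makes listings come back empty. Bucket name is a placeholder.
s3 = boto3.client("s3")
s3.put_object(Bucket="bucketname", Key="/home/alex/duplicati-test.txt", Body=b"x")
print(s3.list_objects_v2(Bucket="bucketname", Prefix="home/alex/")["KeyCount"])   # prints 0
print(s3.list_objects_v2(Bucket="bucketname", Prefix="/home/alex/")["KeyCount"])  # prints 1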