Access Denied when backing up to Amazon S3 with Restricted Users

Hi,
I’m having an issue backing up to my Amazon S3 account. It works fine if I use my master Access ID and Key, or if I create a new IAM user and give it the AdministratorAccess policy, but otherwise it returns Access Denied. I have tried both letting Duplicati create a limited-permissions user and creating my own policy with the necessary permissions.
Using 2.0.2.4_canary_2017-09-09.

Example of the error message:

Amazon.S3.AmazonS3Exception: Access Denied ---> Amazon.Runtime.Internal.HttpErrorResponseException: The remote server returned an error: (403) Forbidden. ---> System.Net.WebException: The remote server returned an error: (403) Forbidden.
   at System.Net.HttpWebRequest.GetResponse()
   at Amazon.Runtime.Internal.HttpRequest.GetResponse()
   --- End of inner exception stack trace ---
   at Amazon.Runtime.Internal.HttpRequest.GetResponse()
   at Amazon.Runtime.Internal.HttpHandler`1.InvokeSync(IExecutionContext executionContext)
   at Amazon.Runtime.Internal.RedirectHandler.InvokeSync(IExecutionContext executionContext)
   at Amazon.Runtime.Internal.Unmarshaller.InvokeSync(IExecutionContext executionContext)
   at Amazon.S3.Internal.AmazonS3ResponseHandler.InvokeSync(IExecutionContext executionContext)
   at Amazon.Runtime.Internal.ErrorHandler.InvokeSync(IExecutionContext executionContext)
   --- End of inner exception stack trace ---
   at Duplicati.Library.Main.BackendManager.List()
   at Duplicati.Library.Main.Operation.FilelistProcessor.RemoteListAnalysis(BackendManager backend, Options options, LocalDatabase database, IBackendWriter log, String protectedfile)
   at Duplicati.Library.Main.Operation.FilelistProcessor.VerifyRemoteList(BackendManager backend, Options options, LocalDatabase database, IBackendWriter log, String protectedfile)
   at Duplicati.Library.Main.Operation.BackupHandler.PreBackupVerify(BackendManager backend, String protectedfile)
   at Duplicati.Library.Main.Operation.BackupHandler.Run(String[] sources, IFilter filter)
   at Duplicati.Library.Main.Controller.<>c__DisplayClass16_0.<Backup>b__0(BackupResults result)
   at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Action`1 method)
   at Duplicati.Library.Main.Controller.Backup(String[] inputsources, IFilter filter)
   at Duplicati.Server.Runner.Run(IRunnerData data, Boolean fromQueue)

The crash happens at the “List” operation, so I am guessing the problem is with the s3:ListBucket permission.

For reference, the policy that Duplicati creates is this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1390497858034",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:ListBucket",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::bucket-name-and-path",
                "arn:aws:s3:::bucket-name-and-path/*"
            ]
        }
    ]
}

I remember testing it, and I found it to be working at the time.


The only way I can get this to work is to set the resource to everything, like this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1390497858034",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:ListBucket",
                "s3:DeleteObject"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}

That indicates that the ARN is wrong.

There is a guide here:
https://docs.aws.amazon.com/AmazonS3/latest/dev/s3-arn-format.html

What if you set it to this value (for testing)?

arn:aws:s3:::*

That works, and I’ve since found that this does too:

"arn:aws:s3:::BucketName",
"arn:aws:s3:::BucketName/*"

Without the folder added.
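
That fits how S3 permissions are evaluated: s3:ListBucket is authorized against the bucket itself, so a bucket-level ARN that includes a path never matches the List call. If you still want to keep the policy scoped to a folder, one common pattern (an illustrative sketch, assuming the folder prefix is `path/`; see the AWS docs for the `s3:prefix` condition key) is to grant listing on the bare bucket ARN with a prefix condition, and the object actions on the prefixed ARN:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowListWithinPrefix",
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::BucketName",
            "Condition": {
                "StringLike": { "s3:prefix": [ "path/*" ] }
            }
        },
        {
            "Sid": "AllowObjectAccessWithinPrefix",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::BucketName/path/*"
        }
    ]
}
```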

Ok, I will update the built-in IAM generator to only use the bucket name then.
I probably tested with just a bucket name as the target back then.
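
A generator fix along those lines could be sketched like this (a hypothetical helper for illustration, not Duplicati’s actual code): strip any folder path from the target so the ARNs contain only the bucket name.

```python
import json


def build_policy(target):
    """Build an IAM policy for a target like 'bucket' or 'bucket/folder'.

    Only the bucket name is used in the ARNs, since s3:ListBucket is
    evaluated against the bare bucket ARN.
    """
    bucket = target.split("/", 1)[0]  # drop any folder path
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "Stmt1390497858034",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:ListBucket",
                "s3:DeleteObject",
            ],
            "Resource": [
                "arn:aws:s3:::%s" % bucket,    # bucket ARN for ListBucket
                "arn:aws:s3:::%s/*" % bucket,  # object ARN for Get/Put/Delete
            ],
        }],
    }, indent=4)
```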