Denied DELETE requests for a good reason. Please read the 2nd and 3rd posts.

This incident started on Nov. 20th. The machine had been running backups without incident since Nov. 5th. It backs up once a day at 2:00 am, when no one is using it. Yesterday it threw the same error again. This is what showed up in the email at runtime (2:00 am).

Any ideas? Nothing changed on Wasabi's end as far as I can tell, and the user still has the same policy applied.

Failed: Access Denied
Details: Amazon.S3.AmazonS3Exception: Access Denied ---> Amazon.Runtime.Internal.HttpErrorResponseException: The remote server returned an error: (403) Forbidden. ---> System.Net.WebException: The remote server returned an error: (403) Forbidden.
at System.Net.HttpWebRequest.GetResponse()
at Amazon.Runtime.Internal.HttpRequest.GetResponse()
--- End of inner exception stack trace ---
at Amazon.Runtime.Internal.HttpRequest.GetResponse()
at Amazon.Runtime.Internal.HttpHandler`1.InvokeSync(IExecutionContext executionContext)
at Amazon.Runtime.Internal.RedirectHandler.InvokeSync(IExecutionContext executionContext)
at Amazon.Runtime.Internal.Unmarshaller.InvokeSync(IExecutionContext executionContext)
at Amazon.S3.Internal.AmazonS3ResponseHandler.InvokeSync(IExecutionContext executionContext)
at Amazon.Runtime.Internal.ErrorHandler.InvokeSync(IExecutionContext executionContext)
--- End of inner exception stack trace ---
at Duplicati.Library.Main.BackendManager.Delete(String remotename, Int64 size, Boolean synchronous)
at Duplicati.Library.Main.Operation.FilelistProcessor.RemoteListAnalysis(BackendManager backend, Options options, LocalDatabase database, IBackendWriter log, IEnumerable`1 protectedFiles)
at Duplicati.Library.Main.Operation.FilelistProcessor.VerifyRemoteList(BackendManager backend, Options options, LocalDatabase database, IBackendWriter log, IEnumerable`1 protectedFiles)
at Duplicati.Library.Main.Operation.BackupHandler.PreBackupVerify(BackendManager backend, String protectedfile)
at Duplicati.Library.Main.Operation.BackupHandler.<RunAsync>d__20.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at CoCoL.ChannelExtensions.WaitForTaskOrThrow(Task task)
at Duplicati.Library.Main.Controller.<>c__DisplayClass14_0.<Backup>b__0(BackupResults result)
at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Action`1 method)

Log data:
2021-11-21 02:01:33 -05 - [Warning-Duplicati.Library.Main.BackendManager-DeleteFileFailure]: Failed to recover from error deleting file duplicati-bf13e7f43466e42a29dd81ce9ff048ca1.dblock.zip.aes
System.NullReferenceException: Object reference not set to an instance of an object.
at Duplicati.Library.Main.BackendManager.ThreadRun()
2021-11-21 02:01:35 -05 - [Error-Duplicati.Library.Main.Operation.BackupHandler-FatalError]: Fatal error
Amazon.S3.AmazonS3Exception: Access Denied ---> Amazon.Runtime.Internal.HttpErrorResponseException: The remote server returned an error: (403) Forbidden. ---> System.Net.WebException: The remote server returned an error: (403) Forbidden.
at System.Net.HttpWebRequest.GetResponse()
at Amazon.Runtime.Internal.HttpRequest.GetResponse()
--- End of inner exception stack trace ---
at Amazon.Runtime.Internal.HttpRequest.GetResponse()
at Amazon.Runtime.Internal.HttpHandler`1.InvokeSync(IExecutionContext executionContext)
at Amazon.Runtime.Internal.RedirectHandler.InvokeSync(IExecutionContext executionContext)
at Amazon.Runtime.Internal.Unmarshaller.InvokeSync(IExecutionContext executionContext)
at Amazon.S3.Internal.AmazonS3ResponseHandler.InvokeSync(IExecutionContext executionContext)
at Amazon.Runtime.Internal.ErrorHandler.InvokeSync(IExecutionContext executionContext)
--- End of inner exception stack trace ---
at Duplicati.Library.Main.BackendManager.Delete(String remotename, Int64 size, Boolean synchronous)
at Duplicati.Library.Main.Operation.FilelistProcessor.RemoteListAnalysis(BackendManager backend, Options options, LocalDatabase database, IBackendWriter log, IEnumerable`1 protectedFiles)
at Duplicati.Library.Main.Operation.FilelistProcessor.VerifyRemoteList(BackendManager backend, Options options, LocalDatabase database, IBackendWriter log, IEnumerable`1 protectedFiles)
at Duplicati.Library.Main.Operation.BackupHandler.PreBackupVerify(BackendManager backend, String protectedfile)
at Duplicati.Library.Main.Operation.BackupHandler.<RunAsync>d__20.MoveNext()

This is the response from the Wasabi team:

Pat,

Thanks for reaching out to us. I had a chance to look at the CDR logs and see 403 Access Denied on DELETE calls on November 5th and on November 20th. This usually happens when Compliance is turned on for the bucket, which prevents the application from removing files. I have looked at the bucket as well, and it has NO compliance enabled, which indicates the IAM policy needs to be reviewed to see whether the DeleteObject permission is allowed. PUT calls are processed just fine, but DELETE requests are being denied.

Hope this is helpful. Let me know what you find on your end.

I think what I'm seeing (without being at the client's machine) is that Duplicati is starting to clean up its files on Wasabi, and the Wasabi user that Duplicati authenticates as has a policy applied that denies all DELETE requests. I applied that policy until I can figure out how to tell Duplicati to compact its files on the Wasabi end only every 90 days, to comply with their 90-day minimum storage requirement.
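
For reference, the deny rule I applied looks something like the sketch below. The bucket, user name, policy name, and keys are all placeholders (not my real ones); it uses Python and boto3 against Wasabi's IAM-compatible endpoint:

    # Sketch (placeholder names) of attaching a deny-all-DELETEs policy to the
    # IAM user that Duplicati authenticates as. Wasabi exposes an IAM-compatible
    # API at https://iam.wasabisys.com, which boto3 can talk to directly.
    import json
    import boto3

    iam = boto3.client(
        "iam",
        endpoint_url="https://iam.wasabisys.com",  # Wasabi's IAM endpoint
        aws_access_key_id="ADMIN_KEY",             # placeholder credentials
        aws_secret_access_key="SECRET",
    )

    deny_delete = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": ["s3:DeleteObject"],                  # block object deletes
            "Resource": "arn:aws:s3:::my-backup-bucket/*",  # placeholder bucket
        }],
    }

    iam.put_user_policy(
        UserName="duplicati-user",                 # placeholder user
        PolicyName="deny-deletes",
        PolicyDocument=json.dumps(deny_delete),
    )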

Can this be done? It would be some sort of rolling deletion policy, but I'm not sure I'm even wrapping my head around it, because it would have to apply per file, 90 days from when Duplicati dumped each file on Wasabi. From what I gather this doesn't make sense to do, because that's not how Duplicati compresses/compacts/cleans up its files on the Wasabi end. It literally compresses whatever files it needs to on the client, uploads them to the S3 endpoint, and then deletes the remote files it no longer needs. Someone correct me if I'm wrong here.
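
To make what I mean concrete, here is a toy sketch in Python (not Duplicati's actual code; the file and block names are made up) of my understanding of the compact step: still-used blocks get re-packed into a new volume on the client, the new volume is uploaded, and only then are the old volumes deleted, which is exactly the DELETE call my policy blocks:

    # Toy model of a compact step (NOT Duplicati's real code). "remote" stands
    # in for the bucket, "volumes" for sparse dblock files, and "used_blocks"
    # for the blocks still referenced by some backup version.
    def compact(remote, volumes, used_blocks):
        # Re-pack the still-needed blocks into one new volume, client-side.
        keep = [b for vol in volumes for b in used_blocks.get(vol, [])]
        remote["duplicati-new.dblock.zip.aes"] = b"".join(keep)  # PUT (allowed)
        for vol in volumes:
            del remote[vol]  # DELETE (the call my policy rejects with a 403)

    # Example: two half-empty volumes collapse into one, then the old ones go.
    bucket = {"vol-a.dblock.zip.aes": b"...", "vol-b.dblock.zip.aes": b"..."}
    compact(bucket,
            ["vol-a.dblock.zip.aes", "vol-b.dblock.zip.aes"],
            {"vol-a.dblock.zip.aes": [b"block1"],
             "vol-b.dblock.zip.aes": [b"block2"]})
    print(bucket)  # only the new, re-packed volume remains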

Does anyone have any clue how to work around this 90-day minimum while still being able to use Duplicati?

Last-minute thought: maybe Duplicati could mark a file for deletion 90 days after the last time it uploaded it to Wasabi… I don't know, just thinking out loud.

Thanks.

You're overthinking it. Wasabi has a minimum storage charge of 90 days per object, but you can certainly delete objects younger than that (assuming you didn't set a compliance option that prevents it). You will just be charged as if the object had been stored for the full 90 days.
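
To put numbers on it (the rate below is a placeholder; check Wasabi's current pricing):

    # Illustrative math for Wasabi's 90-day minimum charge. The rate is an
    # assumption for the example; the point is that early deletion doesn't
    # cut the bill below 90 days' worth of storage.
    RATE_PER_GB_MONTH = 5.99 / 1024       # assumed ~$5.99/TB/month

    def storage_charge(size_gb, days_stored):
        billed_days = max(days_stored, 90)    # the 90-day minimum kicks in here
        return size_gb * RATE_PER_GB_MONTH * (billed_days / 30)

    print(storage_charge(100, 30))   # deleted after 30 days: billed for 90 anyway
    print(storage_charge(100, 90))   # identical charge for keeping it 90 days

The two charges come out the same, which is why deleting early buys you nothing.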

If you really want to prevent Duplicati from deleting files, you will need to disable automatic compaction and probably use unlimited retention (or perhaps a custom retention policy that never deletes any version until it is at least 90 days old).
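
For example, something like this (a sketch only, untested; the target URL, source path, and passphrase are placeholders, and the retention string is one possible reading of "keep everything for at least 90 days"):

    # Sketch of a Duplicati CLI backup with compaction disabled and a
    # retention policy that keeps every version newer than 90 days.
    # "duplicati-cli" is the Linux wrapper for Duplicati.CommandLine;
    # adjust all names for your own setup.
    import subprocess

    subprocess.run([
        "duplicati-cli", "backup",
        "s3://my-backup-bucket/duplicati?s3-server-name=s3.wasabisys.com",
        "/data/to/back/up",
        "--no-auto-compact=true",       # never compact, so no early DELETEs
        "--retention-policy=90D:U",     # keep all versions from the last 90 days
        "--passphrase=CHANGE_ME",
    ], check=True)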

In my opinion, you shouldn't worry about the 90-day minimum on Wasabi; just let Duplicati manage its data files per your preferred retention settings. Preventing Duplicati from deleting files provides no financial benefit with Wasabi, because you're paying for the stored object either way.

Alternatively, you might consider other hot-storage providers that have no 90-day minimum, like B2. The tradeoff is that B2 has egress fees, unlike Wasabi.

Durr. I must have read this and only responded in my head. Thank you for that information. I have since mastered the Wasabi situation, lol.
