AWS sync errors

Hi team, we have a job set up to back up a 1.4 TB folder structure to AWS for archive storage. The job never completes, and the logs show a fatal error. I am having difficulty interpreting the error log:

```
{"ClassName":"System.Net.WebException","Message":"The remote server returned an error: (400) Bad Request.","Data":null,"InnerException":null,"HelpURL":null,"StackTraceString":" at System.Net.HttpWebRequest.BeginGetResponse(AsyncCallback callback, Object state)\r\n at System.Threading.Tasks.TaskFactory1.FromAsyncImpl(Func3 beginMethod, Func2 endFunction, Action1 endAction, Object state, TaskCreationOptions creationOptions)\r\n at System.Net.WebRequest.b__78_1()\r\n at System.Threading.Tasks.Task1.InnerInvoke()\r\n at System.Threading.Tasks.Task.Execute()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n at Amazon.Runtime.Internal.HttpRequest.<GetResponseAsync>d__16.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n at Duplicati.Library.Main.Operation.Backup.BackendUploader.<<Run>b__13_0>d.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n at Duplicati.Library.Main.Operation.Backup.BackendUploader.<<Run>b__13_0>d.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n at CoCoL.AutomationExtensions.<RunTask>d__101.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n at Duplicati.Library.Main.Operation.BackupHandler.d__19.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n at Duplicati.Library.Main.Operation.BackupHandler.d__20.MoveNext()","RemoteStackTraceString":null,"RemoteStackIndex":0,"ExceptionMethod":"8\nBeginGetResponse\nSystem, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089\nSystem.Net.HttpWebRequest\nSystem.IAsyncResult BeginGetResponse(System.AsyncCallback, System.Object)","HResult":-2146233079,"Source":"System","WatsonBuckets":null}
```

We have been able to randomly select a file and restore it, but the status of the job remains: "Last successful backup: Never".

Bare-metal host running Windows Server 2016. Latest beta build.

Hello,

Maybe try exporting the job as a command line and running it from an elevated console to see more messages?

Just to be sure, you do know that Duplicati isn't meant for bare-metal backups/restores? Duplicati is suitable for backing up folders/files (non-system), but it should not be used to back up the OS or applications.

The error shows as 400 Bad Request, which according to the AWS docs can mean the bucket does not allow ACLs.
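If you want to confirm that from the AWS side, a quick boto3 check of the bucket's Object Ownership setting might look like the sketch below. The bucket name is a placeholder and this assumes your AWS credentials are already configured.

```python
# Check whether the bucket enforces bucket-owner ownership, which disables ACLs.
# "my-archive-bucket" is a placeholder; assumes configured AWS credentials.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "my-archive-bucket"

try:
    controls = s3.get_bucket_ownership_controls(Bucket=bucket)
    rules = controls["OwnershipControls"]["Rules"]
    # "BucketOwnerEnforced" means ACLs are disabled, so ACL-bearing uploads get a 400.
    print("Object Ownership:", [r["ObjectOwnership"] for r in rules])
except ClientError as e:
    print("No ownership controls set:", e.response["Error"]["Code"])
```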

Hopefully that helps on some level. Oh, and welcome to the forums, @LoganKreft!

Cheers Jimbo, I'll do some digging on the ACLs.

While the server is bare metal, Duplicati is only backing up a single shared folder on it, not touching the OS files at all.

Still no improvement after working through the ACL options. I set it up to match this write-up: c# - getting "The bucket does not allow ACLs" Error - Stack Overflow.

On a side note, we have Object Lock enabled on the bucket (immutability), which I suspect might be the cause of the error. Are immutable AWS buckets supported by Duplicati?
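For reference, you can see whether Object Lock (and any default retention rule) is active on the bucket with a short boto3 call. Again, the bucket name is a placeholder.

```python
# Inspect the bucket's Object Lock configuration and default retention, if any.
# "my-archive-bucket" is a placeholder; assumes configured AWS credentials.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

try:
    cfg = s3.get_object_lock_configuration(Bucket="my-archive-bucket")
    print(cfg["ObjectLockConfiguration"])
    # e.g. {'ObjectLockEnabled': 'Enabled',
    #       'Rule': {'DefaultRetention': {'Mode': 'COMPLIANCE', 'Days': 365}}}
except ClientError as e:
    print("Object Lock not configured:", e.response["Error"]["Code"])
```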

The table looks like it has about 55 other things that could give you the 400, but good luck finding the one. Probably some logging (e.g. About → Show log → Live → Information) or testing is in order to know what operation was being attempted at the time; then try doing it in Duplicati.CommandLine.BackendTool.exe. Export As Command-line can give you a URL to use, but you might want to edit it to point at a different test folder.
Duplicati.CommandLine.BackendTester.exe can do a slower but automatic test if you prefer that method.
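Another way to narrow down which of those ~55 causes applies is to reproduce a plain upload against the same bucket outside Duplicati and look at the S3 error code behind the 400. A rough boto3 sketch (bucket name and key are placeholders, not anything Duplicati uses):

```python
# Try a plain PUT against the same bucket to surface the S3 error code behind the 400.
# Bucket name and key are placeholders; assumes configured AWS credentials.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

try:
    s3.put_object(Bucket="my-archive-bucket", Key="duplicati-test/probe.txt",
                  Body=b"connectivity probe")
    print("PUT succeeded")
except ClientError as e:
    err = e.response["Error"]
    # A 400 from S3 carries a specific code, e.g. AccessControlListNotSupported
    # or InvalidRequest, which is more informative than the bare WebException.
    print(err["Code"], "-", err["Message"])
```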

There is no special support, but isn't Object Lock designed so that your bucket settings (which are what?) control it?
The program does need to live within the immutability provided. Do you intend to let the backup grow forever?
The minimum would be to keep all versions and set no-auto-compact, but maintenance and upload-error handling might sometimes want to clean things up, and that can require deletion, if a window can be had.

Here is an example where the contract prevented disabling immutability, but is it administratively possible?

Duplicati is based on the idea of a 'dumb backend'. From the whitepaper:

It supports only 4 operations:

it uses an abstraction to storage, that allows only four commands to be issued: PUT, GET, LIST and DELETE

so there is no provision for querying the backend for the retention period, or for fetching old data that needs to be re-uploaded to the backend (to preserve deduplication) and saved in new blocks. It would be very serious work to add this capability.
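To make that concrete, the storage abstraction is roughly this narrow. The following is a minimal illustrative sketch in Python, not Duplicati's actual C# interface, just to show there is nowhere to hang a "what is this object's retention?" query:

```python
# Minimal sketch of a "dumb backend" abstraction, assuming only the four
# operations the whitepaper names. Illustrative only, not Duplicati's real
# backend interface.
from abc import ABC, abstractmethod
from typing import List


class DumbBackend(ABC):
    @abstractmethod
    def put(self, remote_name: str, local_path: str) -> None: ...

    @abstractmethod
    def get(self, remote_name: str, local_path: str) -> None: ...

    @abstractmethod
    def list(self) -> List[str]: ...

    @abstractmethod
    def delete(self, remote_name: str) -> None: ...

    # Note what is *not* here: no way to ask the storage about retention
    # periods, lock state, or lifecycle rules, so compacting against an
    # immutable bucket has no API to negotiate with.
```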

So if you enable immutability on a backend, it will never be compacted and will grow without limit; and if the backend decides by itself to delete old files, it will break Duplicati backups because of the deduplicated data.


Hopefully that means deletion based on a configuration. It looks like S3 lifecycle rules can do that, so don't tell them to. :wink:
I'm not sure how it connects to the object lock question, but I agree we need to live with what we have.
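If in doubt, it's easy to verify that no lifecycle rules exist on the bucket that could expire objects behind Duplicati's back. A small boto3 check (bucket name is a placeholder):

```python
# Confirm the bucket has no lifecycle rules that could expire Duplicati's files.
# "my-archive-bucket" is a placeholder; assumes configured AWS credentials.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

try:
    cfg = s3.get_bucket_lifecycle_configuration(Bucket="my-archive-bucket")
    for rule in cfg["Rules"]:
        print(rule["ID"], rule["Status"], rule.get("Expiration"))
except ClientError as e:
    if e.response["Error"]["Code"] == "NoSuchLifecycleConfiguration":
        print("No lifecycle rules on this bucket")
    else:
        raise
```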

Bucket configuration makes me ask: what mode and period? Governance mode seems a little looser.
Setting the lock period near the Duplicati retention period might create some benefits, but I don't know if it's enough.
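For illustration, a default retention in governance mode (which privileged users can bypass, unlike compliance mode) is set per bucket roughly like the sketch below. The bucket name and the 30-day period are assumptions, not a recommendation.

```python
# Sketch: set a default Object Lock retention in GOVERNANCE mode on a bucket
# that already has Object Lock enabled. Bucket name and the 30-day period are
# placeholders, not a recommendation.
import boto3

s3 = boto3.client("s3")

s3.put_object_lock_configuration(
    Bucket="my-archive-bucket",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "GOVERNANCE", "Days": 30}},
    },
)
```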

Any backup will upload a dlist file and some dblock files with changed data, plus dindex files for the dblocks.
Duplicati retention directly affects deletion of dlist files corresponding to backup versions, and indirectly affects deletion of dblock and dindex files, because their blocks stay in use as long as any dlist file references them.
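If it helps to see the mix on the remote end, something like the following can tally the three file types. The ".dlist."/".dindex."/".dblock." name parts match how the remote volumes normally appear, but treat the exact key layout as an assumption and verify it against your bucket.

```python
# Count Duplicati's remote file types (dlist / dindex / dblock) in the bucket.
# Bucket name is a placeholder; assumes configured AWS credentials.
import boto3
from collections import Counter

s3 = boto3.client("s3")
counts = Counter()

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="my-archive-bucket"):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        for kind in ("dlist", "dindex", "dblock"):
            if f".{kind}." in key:
                counts[kind] += 1

print(dict(counts))
```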

That case was trying to avoid an extra fee from early deletion. With Object Lock, you're avoiding early delete failures.
This is far from a solid path, and you can expect some bumps. Ideally set up an AWS escape hatch too.

In addition to trying to find out from logs or tests which type of operation is failing, you can simplify.

Have you tried a small backup to a bucket without immutability and with a near-default AWS config?
If that works, then some setting in between is what breaks it, so further experiments are in order…
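As a sketch of that experiment, creating a throwaway near-default bucket (no Object Lock) takes one call; the bucket name and region are placeholders.

```python
# Create a throwaway test bucket with near-default settings (no Object Lock).
# Bucket name and region are placeholders; bucket names must be globally unique.
import boto3

region = "us-west-2"
s3 = boto3.client("s3", region_name=region)

s3.create_bucket(
    Bucket="duplicati-test-bucket-example",
    CreateBucketConfiguration={"LocationConstraint": region},
)
# Then point a small Duplicati test job at this bucket and compare results.
```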

While I think it would be good for someone to pioneer living with Object Lock, maybe get a basic backup going first?