Error 400 - bad_request

The error is as follows:

System.Exception: 400 - bad_request: Sha1 did not match data received
   at Duplicati.Library.Main.Operation.BackupHandler.HandleFilesystemEntry(ISnapshotService snapshot, BackendManager backend, String path, FileAttributes attributes)
   at Duplicati.Library.Main.Operation.BackupHandler.RunMainOperation(ISnapshotService snapshot, BackendManager backend)
   at Duplicati.Library.Main.Operation.BackupHandler.Run(String[] sources, IFilter filter)
   at Duplicati.Library.Main.Controller.<>c__DisplayClass16_0.<Backup>b__0(BackupResults result)
   at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Action`1 method)
   at Duplicati.Library.Main.Controller.Backup(String[] inputsources, IFilter filter)
   at Duplicati.Server.Runner.Run(IRunnerData data, Boolean fromQueue)

Version: 2.0.2.1_beta_2017-08-01
Running as a service
Destination is BackBlaze B2

Any thoughts?

Are you running Duplicati on a Mac?

If so, then even though it’s not exactly the same error, you may find that moving to a canary version newer than 2.0.2.8 OR setting the FH_DISABLE_APPLECC=1 environment variable before starting Duplicati might help (all as described here):

Oh, and I edited your post by putting “~~~” before and after the error message to make it easier to read.

This is not running on a Mac; it’s running on either Server 2008 R2 or Server 2012 (I have this error on approximately six machines). It sounds like FH_DISABLE_APPLECC=1 is a Mac-only thing?

Yep, I believe you are correct. I incorrectly assumed that since the only other user I found with a similar error was on a Mac, you were as well; sorry about that.

Unfortunately, what I’m seeing in the code makes it look like some problem with your destination (in this case, the SHA1 error from B2) is causing the HandleFilesystemEntry() code to abort, assuming the connection to the backend has been lost.

The way the code is written, I believe the error is actually happening elsewhere but being reported as coming from the HandleFilesystemEntry() code block. @kenkendk, assuming I’m reading that correctly, is there a reason a 400 - bad_request error like that doesn’t trigger a retry?

You can disable the FasterHashing stuff on Windows (and all other OSes) with:

export FH_LIBRARY=Managed

But on Windows it should not cause problems, because it would otherwise use the native CryptoNG system calls (part of .NET, so not likely to fail).
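For reference, that export syntax is for Unix-style shells; in a Windows cmd.exe session the equivalent is a plain set, which only affects that console session and the processes started from it:

rem cmd.exe equivalent of the export above (session-scoped only)
set FH_LIBRARY=Managed
rem verify it took effect
echo %FH_LIBRARY%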

I have put up a test tool to verify that FasterHashing actually works:

Where do you see that? The backend.HasDied exception is only thrown after the retries are depleted.
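If you want to rule the retry count out, it is also configurable per job; a rough sketch from the command line (the bucket, folder, and source path are placeholders, and --number-of-retries / --retry-delay are Duplicati’s standard advanced options):

rem hypothetical invocation - substitute your own B2 bucket, folder, and source path
Duplicati.CommandLine.exe backup b2://mybucket/myfolder C:\data --number-of-retries=10 --retry-delay=30s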

How do I export FH_LIBRARY=Managed on Windows when Duplicati is run as a service? I don’t see any documentation on that. Thank you so much for all of your help!

:blush: I didn’t see that anywhere - I was being lazy and didn’t trace through all the places where HandleFilesystemEntry might have been used.


Normally I’d say use a --run-script-before parameter to call a batch file that just does a set FH_LIBRARY=Managed, but in this case I’m not sure if that will do the trick.
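For what it’s worth, a minimal sketch of that batch file (the filename is made up), including the reason I’m unsure it will work:

@echo off
rem hypothetical set-fh-library.bat, called via --run-script-before
rem caveat: "set" only changes the environment of this batch process,
rem so the variable may never propagate up to the Duplicati service itself
set FH_LIBRARY=Managed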

If it doesn’t, you can run sysdm.cpl from the command line to open the standard Windows “System Properties” window, select the “Advanced” tab, click the “Environment Variables…” button in the lower right, and add FH_LIBRARY to the “System variables” section.
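Alternatively, if you’d rather skip the GUI, setx from an elevated prompt writes the same machine-wide value (restart the Duplicati service afterward so it picks the variable up):

rem run from an elevated (Administrator) command prompt;
rem /M writes to the machine-wide "System variables" scope
setx FH_LIBRARY Managed /M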


Did you try running the FasterHashingTest.zip file?

Here’s what it said about my machine:

Optimal implementation is: CNG
Performing basic tests
Performing tests with non-zero offsets
Testing performance with 64b block
                 CNG:  1010941 hashes/second
             Managed:  901949 hashes/second
Testing performance with 100kb blocks
                 CNG:  1716639 hashes/second
             Managed:  820466 hashes/second
Testing performance with 64b blocks and 5 byte offset
                 CNG:  1083937 hashes/second
             Managed:  690961 hashes/second
Testing performance with 100kb blocks and 5 byte offset
                 CNG:  1696017 hashes/second
             Managed:  863182 hashes/second
Testing multithreadded execution to wiggle out any shared state problems

Note that on my machine my Avast antivirus flagged it as “a very rare file” and stopped the execution with the following message:
[screenshot of the Avast warning]

Note that even after flagging the file as “trusted” I still couldn’t run it without waiting for Avast to get back to me OR disabling my AV.

These are the results of FasterHashingTest:

Optimal implementation is: CNG
Performing basic tests
Performing tests with non-zero offsets
Testing performance with 64b block
                 CNG:  490450 hashes/second
             Managed:  409254 hashes/second
Testing performance with 100kb blocks
                 CNG:  561544 hashes/second
             Managed:  402171 hashes/second
Testing performance with 64b blocks and 5 byte offset
                 CNG:  497450 hashes/second
             Managed:  392568 hashes/second
Testing performance with 100kb blocks and 5 byte offset
                 CNG:  586881 hashes/second
             Managed:  429487 hashes/second
Testing multithreadded execution to wiggle out any shared state problems

I am trying that environment variable now as well.

Thanks for the FasterHashingTest results (by the way, I edited your post by putting “~~~” before and after the results to make them a bit easier to read).

I’m not seeing anything that would indicate an error, but it’s @kenkendk’s tool so he’d know better than I would.

Is it possible that this is the same issue that the program rclone was having?