Restore fails - errors while downloading files

I’m using a custom S3 endpoint with encryption enabled. I’ve set my block size to 200 MB. The Duplicati service is running on my Debian Buster server.

I start a restore of something, could be anything - even a 10 KB file. But in this case it’s a rather large collection, where it needs to download 32 remote files. Here is a snippet of the error it throws for each of the 32 files: https://pastebin.com/raw/fvjXGwbr

I’ve tried removing the bandwidth limit and increasing the HTTP timeout to 10 minutes, but it doesn’t make any difference. I tried a direct restore as well as restoring “from” the backup job.

My guess is that it fails to download the files? There’s nothing special about the S3 endpoint; it’s hosted elsewhere and all that. I tried installing Duplicati on my local computer - on a completely different network than the server - and it works fine there.

Welcome to the forum @kocane

Your log does indeed imply problems downloading the files, but these one-line messages don’t say much.

Could you check the live log at About → Show log → Live → Retry to see if you can see more, for example:

2019-07-15 12:33:53 -04 - [Retry-Duplicati.Library.Main.BackendManager-RetryGet]: Operation Get with file duplicati-b3c0cc94f1294434c88ebf0296077a1ab.dblock.zip attempt 5 of 5 failed with message: The file duplicati-b3c0cc94f1294434c88ebf0296077a1ab.dblock.zip was downloaded and had size 683 but the size was expected to be 679
System.Exception: The file duplicati-b3c0cc94f1294434c88ebf0296077a1ab.dblock.zip was downloaded and had size 683 but the size was expected to be 679
   at Duplicati.Library.Main.BackendManager.DoGet(FileEntryItem item)
   at Duplicati.Library.Main.BackendManager.ThreadRun()

You made mention later of “hosted elsewhere”. If this isn’t on amazonaws.com, can you say what it’s on?

Amazon S3 Server Side Encryption discusses that, if it’s what you’re using. What does “encryption enabled” mean here - server-side encryption at the endpoint, or Duplicati’s own AES encryption of the files?

The s3 command of the AWS Command Line Interface might be one way to try downloading files independently, using cp and whatever encryption options fit. I don’t have S3, so I can’t help in much detail.
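Something like this, if the CLI is set up with your keys - the endpoint, bucket, prefix, and file name below are placeholders for your own values:

aws --endpoint-url https://s3.example.com s3 ls s3://my-bucket/my-prefix/
aws --endpoint-url https://s3.example.com s3 cp s3://my-bucket/my-prefix/duplicati-example.dblock.zip.aes .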

Duplicati.CommandLine.BackendTool.exe is another way to do a manual download. The file format should be AES File Format, so basically if it doesn’t start with AES, something’s wrong. A dlist or dindex will probably be a shorter file than a dblock, so it will be a bit easier on whatever viewer you use to inspect the file.
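Roughly like this on Linux, run through mono - the install path, URL, and file name are placeholders, and the safest way to get the target URL right is to copy it from the job’s Export → As Command-line:

mono /usr/lib/duplicati/Duplicati.CommandLine.BackendTool.exe LIST "s3://my-bucket/my-prefix?s3-server-name=s3.example.com&auth-username=ACCESS_KEY&auth-password=SECRET_KEY"
mono /usr/lib/duplicati/Duplicati.CommandLine.BackendTool.exe GET "s3://my-bucket/my-prefix?s3-server-name=s3.example.com&auth-username=ACCESS_KEY&auth-password=SECRET_KEY" duplicati-example.dlist.zip.aes
head -c 3 duplicati-example.dlist.zip.aes    # a valid AES File Format file starts with the three bytes AES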

What seems odd is that you’re having trouble with restore, yet your backup works. Ordinarily a backup does Verifying backend files before it considers the job done; however, there are options available to prevent that.
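If verification is being skipped, it would typically be via one of these advanced options on the job - either would let a backup finish without ever downloading a sample for testing, so it’s worth checking whether either is set:

--no-backend-verification=true
--backup-test-samples=0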

Is this with the same “custom S3 endpoint with encryption enabled”, with only difference being the network and the computer? Do you mean “Direct restore” of Debian Buster server backup to it works? If not, what? Need clarification of “works fine there”, plus any potentially relevant information on differences from server.

Thanks for the reply :slight_smile:

I tried to look at the “live” log like you describe, but the dropdown just shows “Disabled” when I try to select from it - do I need to enable something? My default options are like this:

--http-operation-timeout=10m
--log-level=Verbose
--log-file=/var/log/duplicati_global.log

I set the logging to Verbose on my backup and looked in that specific log file, but it doesn’t seem to provide any more information:

2019-09-04 21:55:58 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Retrying: duplicati-ba31a4874bdea49c0b20a109844ca866e.dblock.zip.aes (199.96 MB)
2019-09-04 21:56:08 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-ba31a4874bdea49c0b20a109844ca866e.dblock.zip.aes (199.96 MB)
2019-09-04 21:56:17 +02 - [Retry-Duplicati.Library.Main.BackendManager-RetryGet]: Operation Get with file duplicati-ba31a4874bdea49c0b20a109844ca866e.dblock.zip.aes attempt 3 of 5 failed with message: Failed to decrypt data (invalid passphrase?): Invalid password or corrupted data
System.Security.Cryptography.CryptographicException: Failed to decrypt data (invalid passphrase?): Invalid password or corrupted data ---> SharpAESCrypt.SharpAESCrypt+WrongPasswordException: Invalid password or corrupted data
  at SharpAESCrypt.SharpAESCrypt.ReadEncryptionHeader (System.String password, System.Boolean skipFileSizeCheck) [0x00217] in <5e494c161b2e4b968957d9a34bd81877>:0
  at SharpAESCrypt.SharpAESCrypt..ctor (System.String password, System.IO.Stream stream, SharpAESCrypt.OperationMode mode, System.Boolean skipFileSizeCheck) [0x001af] in <5e494c161b2e4b968957d9a34bd81877>:0
  at (wrapper remoting-invoke-with-check) SharpAESCrypt.SharpAESCrypt..ctor(string,System.IO.Stream,SharpAESCrypt.OperationMode,bool)
  at Duplicati.Library.Encryption.AESEncryption.Decrypt (System.IO.Stream input) [0x00000] in <38f40f254bb94cb3afc644103e1e7581>:0
  at Duplicati.Library.Encryption.EncryptionBase.Decrypt (System.IO.Stream input, System.IO.Stream output) [0x00000] in <38f40f254bb94cb3afc644103e1e7581>:0
   --- End of inner exception stack trace ---
  at Duplicati.Library.Main.BackendManager.coreDoGetPiping (Duplicati.Library.Main.BackendManager+FileEntryItem item, Duplicati.Library.Interface.IEncryption useDecrypter, System.Int64& retDownloadSize, System.String& retHashcode) [0x002ba] in <d13d696d40bb4d4da88c121875e81b80>:0
  at Duplicati.Library.Main.BackendManager.DoGet (Duplicati.Library.Main.BackendManager+FileEntryItem item) [0x002fd] in <d13d696d40bb4d4da88c121875e81b80>:0
  at Duplicati.Library.Main.BackendManager.ThreadRun () [0x000ff] in <d13d696d40bb4d4da88c121875e81b80>:0

You made mention later of “hosted elsewhere”. If this isn’t on amazonaws.com, can you say what it’s on?

It’s some self-hosted Ceph-based S3 storage.

Is this with the same “custom S3 endpoint with encryption enabled”, with only difference being the network and the computer? Do you mean “Direct restore” of Debian Buster server backup to it works? If not, what? Need clarification of “works fine there”, plus any potentially relevant information on differences from server.

Yep, exact same place I’m trying to restore from. It’s the direct restore that works, where I input the backup S3 URL, keys, etc. I do this from my Windows laptop. What difference on my Debian server could cause this, I don’t know. I mean, it’s just regular HTTPS traffic? I guess we won’t know before I get some proper logs.

That’s the initial value, but there should be a downward-facing triangle at the far right that you can click to expand the list.
The Verbose log should be good though. That catches more than Retry does (but less than higher levels).

I think I’ve seen this before, where it heads straight to the decrypt error without first stopping on a hash error. Not sure why.

You can maybe find what the hash was supposed to be on Job → Show log → Remote, then click on the file.
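If you manage to download a copy of the file by one of the manual routes above, you can compute the matching value yourself - as far as I know the stored hash is a Base64-encoded SHA-256, so for the file from your log:

openssl dgst -sha256 -binary duplicati-ba31a4874bdea49c0b20a109844ca866e.dblock.zip.aes | base64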

Other ways to get “should-be” information are upload-verification-file, then either download it along with a file to compare, or, if a Python script has direct access to the backup files, run utility-scripts/DuplicatiVerify.py against them.
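A rough sketch of that second route - the path is a placeholder for wherever the backend files are reachable, and the script needs the duplicati-verification.json that the option uploads alongside them:

# on the job: set --upload-verification-file=true, then run a backup
python DuplicatiVerify.py /path/to/backend/files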

You got me. One thing you could do as a general precaution is to run mono --version to see if it’s old.
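Older mono releases have had TLS/HTTPS trouble that can show up as exactly this kind of flaky download, so it’s worth confirming you’re on something reasonably current:

mono --version    # the first line reports the Mono JIT compiler version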