Error: Failed to decrypt data

I’m trying to “adopt” a larger backup archive which I uploaded earlier this year and then abandoned (i.e. deleted the backup job and database because - as @kenkendk might remember - I had been trying for weeks to repair the database and it just wouldn’t work).

But I’m getting this error:

[screenshot of the error message]

I am sure the passphrase is correct (I have used it to access the same archive via “Direct restore from backup files”, selected a file for restore, and successfully restored it), so I guess my archive is corrupted?

What to do?

If you cannot decrypt it, there is nothing you can do, so that needs to be fixed first.

There is a tool called SharpAESCrypt.exe included with Duplicati that you can use on the command line to try different passphrases. It is equivalent to the original implementation:
https://www.aescrypt.com/

You seem to be assuming that the passphrase is wrong. But:

In other words: the archive works when I access it via “Direct restore”, but when I want to use it to create a new backup job, it fails.

The error message looks quite clear. There are 2 options:

  • The data is corrupted.
  • The passphrase is incorrect.

If you’re 100% sure that you typed the correct passphrase, then your data is corrupt. However, I suggest following @kenkendk’s advice and trying to decrypt a few random dblock files.
Even if some data is corrupted, you should still be able to decrypt at least some remote files. If none of your remote files can be decrypted using the SharpAESCrypt.exe tool, either all your backup files are corrupted (which is not very likely) or you have misremembered the passphrase.
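One cheap sanity check before downloading and decrypting everything: each remote file Duplicati encrypts is an AES Crypt container, and that format starts with the three ASCII bytes `AES`. A file that fails this check is damaged (e.g. truncated or overwritten) no matter what passphrase you use. A minimal sketch in Python (the path you pass is, of course, your own downloaded file):

```python
def looks_like_aescrypt(path):
    """Return True if the file starts with the AES Crypt magic bytes b'AES'.

    A corrupted or truncated download will usually fail this check; a valid
    header with a failing decrypt points more toward a wrong passphrase or
    damage later in the file.
    """
    with open(path, "rb") as f:
        return f.read(3) == b"AES"
```

A bad header means the passphrase was never the problem for that file.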

Can you test again if the passphrase works with a “Direct restore from files” operation?

Are there any special characters in the passphrase that could be parsed incorrectly (%, ")?

I don’t get it. If I have successfully restored some files from the archive, what’s the added value of doing the same with another tool?

What’s the purpose? Just to increase the number of tries? If so, what’s the advantage of having more tries? Let’s say I try 5 times; what does it mean if (i) all 5 are successful, (ii) 2 are successful and 3 fail, (iii) 4 are successful and 1 fails?

I’m asking because every restore takes ages (I think the rebuilding of the database takes at least an hour every time)…

There are several special characters in the passphrase but none of the three you mention.

Would it be possible to try manually decrypting the oldest and newest dblock files (or would the smaller dindex files be enough?) just to see whether your password has somehow changed (and presumably not changed back)?

When you say you’re trying to “adopt” an archive, what do you mean by that? In CrashPlan terms that would imply the backup was created from files on one computer and now you’re trying to continue using that backup on a different computer.

Yes, coming from CrashPlan, that’s exactly what I mean. I don’t want to go through the process of uploading those terabytes again for weeks until I have an actually functioning backup setup (because the unfinished backup blocks all others).

It’s certainly possible, but I don’t know when I can find the time. It’s a completely new procedure to me, and I’m already spending way too much time on Duplicati when I should be doing other things. So the expected benefits of such a procedure would have to be very clear to me…

Very unlikely. I have exactly two passwords for duplicati archives in my password manager, one of which was for a first test archive a long time ago. The only reason I’ve not deleted it is: You never know. So I’m pretty sure I’ve always been using the other one. And, from what I understand, duplicati doesn’t allow me to change passwords by default.

But okay, since I’m not 110% sure, I can see why such a test might make sense. But if we’re trying such rather unlikely things, does this mean there are no other, more likely reasons for this failure? Why are we focusing on the passphrase part of the error when the “files corrupted” part seems much more plausible? What do you do if (some of) your files are corrupted?

In general, I try to get what I can out of the corrupted files and then delete them. If the Duplicati files are corrupted, then Duplicati.CommandLine.BackendTool.exe might be useful for getting out what it can (though I couldn’t tell you how to use it, as I haven’t tried it myself yet).

What would be useful is if we could get a specific file name of a reported corrupt file that we could then try and do a manual decrypt as well as restore-from-destination against.

My assumption is that if an archive can’t be 100% uncompressed, it’s flagged as corrupted - but during a direct restore you might only need to fetch a single block from that same file, and if it’s not in the corrupted portion it might not cause the error?


I’m pretty sure this is unrelated, but is it safe to assume the new system is using the same OS as the one the old backup was made from? I seem to recall an issue with file path types when trying to move a backup set between Linux/MacOS and Windows systems.

In my case, there is no point in rescuing any data out of the corrupted files because I have all the data locally. I just want to build on whatever I have in the cloud when backing up.

Absolutely.

That would be a bit disappointing.

It’s not only the same OS, it’s the same system. However, if paths can be a problem already at this stage, then this might be relevant:

When I look at the archive via Direct Restore, I see that it includes files from four drives: a local one (D:) and three mapped network drives:

[screenshot: restore tree showing the four drives]

The drive letters are unchanged on the current system, but for unknown reasons, duplicati no longer sees two of the three mapped drives (but it can still access these shares via the UNC path).

So that means I should download some dblock files and then decrypt them locally using SharpAESCrypt.exe?

I have downloaded 10 dblock files and they seem to decrypt fine. I say “seem to” because when I open the zip file, all I can see is more encrypted-looking files. But since I get those instead of nothing, I suppose that is correct?

So now that I have confirmed, independently of Duplicati, that my backup archive seems to be intact, what do I do next?


Just a few hints to anyone trying to use SharpAESCrypt.exe:

  1. You’ll find it in C:\Program Files\Duplicati 2
  2. It doesn’t seem to accept any wildcards, so I have to run it manually for each file :dizzy_face:
  3. You have to specify the full output path including the file name (make it end in .zip); it’s not sufficient to just give it the path to the folder where you want the file to be. If you do that, it will tell you “access denied”, which is confusing because it looks like it doesn’t have access to the folder, but what it’s trying to say is that it can’t write a file with the same name as the folder.

Wrong:

C:\Program Files\Duplicati 2>SharpAESCrypt.exe d *********** c:\decrypt\duplicati-b1005acf0b0b84037ac9a6377e3f25ceb.dblock.zip.aes c:\decrypted
Error: Access to the path 'c:\decrypted' is denied.

Right:

C:\Program Files\Duplicati 2>SharpAESCrypt.exe d ********* c:\decrypt\duplicati-b1005acf0b0b84037ac9a6377e3f25ceb.dblock.zip.aes c:\decrypted\5ceb.zip

(The asterisks stand for the archive password)
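Since SharpAESCrypt.exe takes one file at a time, a small wrapper can loop over a folder of downloaded .aes files. A sketch in Python (assumes SharpAESCrypt.exe is on your PATH or you substitute its full path; the output-name rule simply strips the trailing .aes, so duplicati-b….dblock.zip.aes becomes duplicati-b….dblock.zip):

```python
import os
import subprocess

def output_path(aes_path, out_dir):
    """Map '.../duplicati-b....dblock.zip.aes' to '<out_dir>/duplicati-b....dblock.zip'."""
    name = os.path.basename(aes_path)
    if not name.endswith(".aes"):
        raise ValueError("not an .aes file: " + name)
    return os.path.join(out_dir, name[: -len(".aes")])

def decrypt_all(passphrase, in_dir, out_dir):
    """Run 'SharpAESCrypt.exe d <passphrase> <input> <output>' for every
    .aes file in in_dir; return the names of files that failed to decrypt."""
    failed = []
    for entry in sorted(os.listdir(in_dir)):
        if entry.endswith(".aes"):
            src = os.path.join(in_dir, entry)
            result = subprocess.run(
                ["SharpAESCrypt.exe", "d", passphrase, src, output_path(src, out_dir)]
            )
            if result.returncode != 0:
                failed.append(entry)
    return failed
```

Usage would be something like `decrypt_all("my-passphrase", r"c:\decrypt", r"c:\decrypted")`; an empty return value means every file decrypted without error.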

If there are no errors, your files can probably be decrypted successfully. You will not see any names of source files. Instead, you will see a list of files with long filenames ending with an =. Each file has the same size (defined by the block size, 100 KB by default) and represents a block of data from one (or more) of your source files. The hash of that block is the filename.

Alternatively, you can use AES Crypt to decrypt your files. This tool also has an easy-to-use GUI.
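To go a step beyond “the zip opens and lists hash-named files”, you can check each block against its own filename. The sketch below assumes the default block hash (SHA-256, encoded as standard Base64, hence the trailing =) and skips metadata entries such as the manifest; treat both assumptions as things to double-check against your own configuration:

```python
import base64
import hashlib
import zipfile

def mismatched_blocks(decrypted_zip):
    """Return the entries of a decrypted dblock zip whose Base64(SHA-256(contents))
    does not match their own filename.

    Assumes Duplicati's default block hash (SHA-256); entries not ending in '='
    (e.g. the manifest) are treated as metadata and skipped.
    """
    bad = []
    with zipfile.ZipFile(decrypted_zip) as z:
        for name in z.namelist():
            if not name.endswith("="):
                continue
            digest = hashlib.sha256(z.read(name)).digest()
            if base64.b64encode(digest).decode("ascii") != name:
                bad.append(name)
    return bad
```

An empty result means every block’s contents still match its recorded hash.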

OK, so we’re back to

Then I’m afraid I don’t have any clue what’s wrong except that you’re pointing to a remote folder that contains files from another backup. I assume that’s not the case.

Well, it depends what you mean by that: as I said in the OP, I am trying to “adopt” this backup archive. If you mean that there are multiple archives in the same folder: no, that is not the case. But I just noticed that there is actually a subfolder in that folder with an unencrypted folder name, which means that I must have created it, which means that it may indeed be an archive that was meant to be one folder higher… Let me take a closer look.

I meant that you could accidentally have typed a wrong server or path name that points to files from another backup that uses a different passphrase. I assume this is not the case, but it doesn’t hurt to double (or triple) check.
Apart from that, unfortunately I don’t have any other suggestions.

Could you give some more detail about how you “adopt” that backup? Am I correct in assuming that you create a new backup job and supply server url, destination path and passphrase of the backup job that you deleted and point to the remote files that still exist at the remote side?

OK, removing that folder did not change anything. Still getting the same error. Plus, it seems that the Duplicati server has crashed or something: the last thing it did was issue the error message, and when I tried to “show” it in the logs, the browser just kept “waiting for localhost” :slight_smile:

However, the process is still running, though with no activity:

[screenshot of the running Duplicati process]

When I restarted the browser, it gave me the “Missing XSRF Token” error. Opening the GUI from the trayicon then gave me a “connection lost” message, which seems to confirm that the server is non-responsive. But why did I not get that message before I restarted the browser?

Anyway, as I have learned that the “Missing XSRF Token” error means that I should restart the browser, I did so a second time. This time no error (except for the “failed to decrypt” error above) but I still can neither “dismiss” nor “show” the error message.

:dizzy_face:

I also came across one (edit: now two) block file that is not decryptable. This is a brand-new backup set (not adopted or anything) which has only ever had one passphrase. Only one block so far appears to have this issue. I was running --full-remote-verification on a 250 GB backup set to see if any 7z files had issues, though this error appears not to be related to 7z. I did manually test decryption on random blocks, including ones directly before/after this block, and all decrypted fine, with the exception of the one with this error message.

Operation Get with file duplicati-.dblock.7z.aes attempt 15 of 15 failed with message: Failed to decrypt data (invalid passphrase?): Message has been altered, do not trust content => Failed to decrypt data (invalid passphrase?): Message has been altered, do not trust content

Anything I can provide to help root cause this? I have abruptly closed duplicati dozens of times while testing things, if that could have any effect, though I am using google drive which doesn’t seem to keep partially uploaded files, and no other files have issues so far. Will duplicati do anything with the knowledge of a corrupted file and re-upload a block file to replace it?

The dates for the dblock files with errors are:

  • 2017/12/10 - 9:34 am
  • 2017/12/12 - 6:30 am

Also, this particular backup set is static with no files changing and locally available, so I don’t need any actual data recovery help. This backup set is also just photos so I can upload files if desired. I do also have some dev experience if I need to look into the local database or dindex files nearby in time.

Version: 2.0.2.13_canary_2017-11-22
OS: Ubuntu 16.04 64-bit

How do you know it’s one? In my case, it does not give me that information…

I used the CLI to run the --full-remote-verification. For reference:

Some extra parameters are required, but the easiest way is to just export your backup configuration as a command line, and replace the “backup $storage-url $location” part with “verify $storage-url all”. Warning: your encryption password, and perhaps other secrets, will appear in the command.
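To make that substitution concrete, here is a toy Python helper that rewrites an exported command line. The example command, URL and options in the test below are placeholders, and real exported commands may need quoting adjustments (shlex assumes POSIX-style quoting):

```python
import shlex

def backup_to_verify(exported_cmd):
    """Rewrite an exported 'backup <storage-url> <source>' command line into
    'verify <storage-url> all', keeping all other options (e.g. --passphrase).
    Append --full-remote-verification yourself for full content checks."""
    parts = shlex.split(exported_cmd)
    i = parts.index("backup")
    # parts[i + 1] is the storage URL, parts[i + 2] the source location
    return " ".join(parts[:i] + ["verify", parts[i + 1], "all"] + parts[i + 3:])
```

This only automates the text substitution described above; the resulting command still has to be run with Duplicati’s own command-line tool.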


Note that --full-remote-verification does not verify everything at the backend; it checks the entire content of each sampled remote file instead of just verifying the file hash.
The default number of remote files to verify after each backup is one (!). By supplying all as the number of samples to the verify command, you make Duplicati download and verify all remote files.

@enviouselitist’s comment is 100% true, but at least for me --full-remote-verification caused some confusion. In summary: verify <storage url> all will download all remote files to check their integrity; adding --full-remote-verification will verify the entire contents instead of just checking the file hash.
