Failed to decrypt data

I’ve been having a lot of trouble recently with backups to Google Drive. It worked fine for months, then in the last few weeks I started getting intermittent failures. Today I have been unable to complete a successful backup even after repeated attempts.

Details:

  • Duplicati 2.0.5.1 beta on Windows 10.
  • Error message: “Failed to decrypt data (invalid passphrase?): Message has been altered, do not trust content”, occurring near the end of the backup job when the status shows “Deleting unwanted files”.
  • Compact Now fails with the same error.
  • Repair and Verify operations still succeed.
  • Redoing OAuth signin to Google didn’t help.
  • Updating to the latest canary version (2.0.5.111) didn’t help; it still gives the same error.
  • When a backup or compact job fails, it does not create an entry in the log.

Any thoughts?

After further searching and experimenting, I don’t think this is related to Google Drive.

I found some threads saying that password managers can cause this issue by autofilling fields that they shouldn’t. I tried disabling my password manager, but that didn’t help.

I found other threads that talked about using list-broken-files and purge-broken-files to fix issues. For me, list-broken-files doesn’t report any broken files, and purge-broken-files finds nothing to purge.
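In case it helps anyone else, this is roughly how they can be run from the command line; the destination URL, database path, and passphrase below are just placeholders for the real values:

    :: list source files whose backed-up data sits in remote volumes that are missing
    Duplicati.CommandLine.exe list-broken-files "googledrive://Backups?authid=<authid>" --dbpath="C:\Users\<me>\AppData\Local\Duplicati\<job>.sqlite" --passphrase=<passphrase>

    :: remove those broken entries from the backup (reported nothing to purge in my case)
    Duplicati.CommandLine.exe purge-broken-files "googledrive://Backups?authid=<authid>" --dbpath="C:\Users\<me>\AppData\Local\Duplicati\<job>.sqlite" --passphrase=<passphrase>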

I found another thread that talked about being able to do at least one successful backup after doing “delete and repair” for the database. I tried this, but the delete and repair failed:

"Warnings": [
"2020-11-11 01:38:05 -08 - [Warning-Duplicati.Library.Main.Database.LocalRecreateDatabase-MissingVolumesDetected]: Found 1 missing volumes; attempting to replace blocks from existing volumes",
"2020-11-11 04:28:34 -08 - [Warning-Duplicati.Library.Main.Database.LocalRecreateDatabase-MissingVolumesDetected]: Found 1 missing volumes; attempting to replace blocks from existing volumes"
],
"Errors": [
"2020-11-11 01:33:14 -08 - [Error-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-MissingFileDetected]: Remote file referenced as duplicati-b133c6475d0bd46e785218f839d359aae.dblock.zip.aes by duplicati-i6424f872039e40728351d263c0f4fbc0.dindex.zip.aes, but not found in list, registering a missing remote file"
],
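
(For anyone else following along: the command-line equivalent of the GUI’s delete-and-repair is roughly the command below. The URL, database path, and passphrase are placeholders, and as far as I understand it, repair rebuilds the database when the file given by --dbpath doesn’t exist.)

    :: recreate the local job database from the remote dlist/dindex files
    :: (delete or rename the old .sqlite first so repair has to rebuild it)
    Duplicati.CommandLine.exe repair "googledrive://Backups?authid=<authid>" --dbpath="C:\Users\<me>\AppData\Local\Duplicati\<job>.sqlite" --passphrase=<passphrase>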

So, I haven’t found a solution yet.

After the failed database repair my daily backups started succeeding again for some reason. I still didn’t trust it though. I ran a command-line test using the full-remote-verification option, and many blocks failed to decrypt, so the remote data was clearly still corrupted.
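
For reference, the test looked something like the command below; the URL, database path, and passphrase are placeholders, and a smaller sample count can be used instead of “all” to limit how much gets downloaded:

    :: download remote volumes and verify their decrypted contents, not just the listed sizes and file hashes
    Duplicati.CommandLine.exe test "googledrive://Backups?authid=<authid>" all --full-remote-verification=true --dbpath="C:\Users\<me>\AppData\Local\Duplicati\<job>.sqlite" --passphrase=<passphrase>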

By this point I was getting pretty frustrated and tired of messing around with it, so I just took a sledgehammer to it: I deleted all local and remote data and started over. 26 hours later my backup of about 40 GB finally finished, which was painfully slow. At least I’m back up and running for now, though my trust is a bit shaken.

You might want to raise your backup-test-samples to see if the files stay healthy.
There’s also a backup-test-percentage option if you’d rather watch files that way.

  --backup-test-percentage (Integer): The percentage of samples to test after
    a backup
    After a backup is completed, some (dblock, dindex, dlist) files from the
    remote backend are selected for verification. Use this option to specify
    the percentage (between 0 and 100) of files to test. If the
    backup-test-samples option is also provided, the number of samples tested
    is the maximum implied by the two options. If the no-backend-verification
    option is provided, no remote files are verified.
    * default value: 0
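
As a rough sketch, either option can be added to the job as an advanced option (Options → Advanced options in the GUI), or on a command-line backup along these lines, with the URL and paths as placeholders:

    :: verify 10% of the remote volumes after each backup...
    Duplicati.CommandLine.exe backup "googledrive://Backups?authid=<authid>" "C:\Data" --backup-test-percentage=10 --passphrase=<passphrase>

    :: ...or a fixed number of samples instead of the default of 1
    Duplicati.CommandLine.exe backup "googledrive://Backups?authid=<authid>" "C:\Data" --backup-test-samples=5 --passphrase=<passphrase>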

I assume you have Google Drive as your Storage type, as below, so you don’t rely on its file sync:

[screenshot: Destination screen with Storage type set to Google Drive]

If you used upload throttling on versions before 2.0.5.1, that could damage files, but I’d have thought you would have spotted it sooner.

Thanks @ts678.

First of all, I apologize if my last post seemed “complainy”. I’m fully aware that Duplicati is free open-source software developed by volunteers. I meant no disrespect or entitlement; I was just venting a bit.

Now, back on topic:

Yes, I have Google Drive set up as my storage type. I haven’t used any throttling.

Good idea about increasing backup-test-samples. I have a capped internet connection though, so I’d like to understand how this would affect data usage. The documentation doesn’t say whether it tests index, list, or block files. I noticed that the Test command by default tests one of each, and it looks like backup does likewise with the default backup-test-samples value of 1. So if I set backup-test-samples to 10, I presume it will test 10 of each. I’ll give it a try and check the logs.
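
My rough estimate of the extra download per backup, assuming the default 50 MB remote volume (dblock) size and that dindex and dlist files are comparatively tiny:

    10 samples × (1 dblock ≈ 50 MB + 1 small dindex + 1 small dlist)
    ≈ 500 MB downloaded after each backup just for verification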

Lastly, it seems like error logs should have been created when I was experiencing the error. I opened an issue on GitHub for that:

Details are documented here. The after-backup verification is the same:

The TEST command

Verifies integrity of a backup. A random sample of dlist, dindex, dblock files is downloaded, decrypted and the content is checked against recorded size values and data hashes. <samples> specifies the number of samples to be tested. If “all” is specified, all files in the backup will be tested. This is a rolling check, i.e. when executed another time different samples are verified than in the first run. A sample consists of 1 dlist, 1 dindex, 1 dblock.

I suspect it’s smart enough not to repeat a file, so some types of files may end up fully tested before others.
Regardless, this level of testing is going to be rough on your capped internet connection. Short of actually examining file contents, there’s always the file listing check (with sizes), but that clearly wasn’t sufficient before…

Unless you deleted your entire Duplicati databases area, look in About → Show log → Stored for the error.
Not very helpfully, I think backups that produce no result statistics (because they didn’t finish) don’t create the usual entry at <backup> → Show log (which is where the GUI’s error “Show” link always seems to go).

There’s nothing special about your particular backup error, so your log issue is probably the usual one.
How the error came about originally is a good question, but it can’t be studied now that the backup is deleted; even if it still existed, it would probably take a closer look at its history, and ideally debug logs…

If it can be reproduced, then it can be looked at closer. At this point, there’s probably not much to look at.
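
If it does come back, a log captured at the time is the most useful thing. Adding something like the following advanced options to the job (the path is just an example) would record what the backend was doing:

    --log-file=C:\Duplicati\logs\backup-job.log
    --log-file-log-level=Verbose

Retry is a lighter level if Verbose turns out to be too chatty.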

Thanks, I had missed that definition of “sample” in the Test documentation. That makes sense, and does match what I see in the logs.

I did delete everything, including the local database, because I wanted to start with a clean slate, so I have no way to diagnose anything at this point. The backup data was old enough that I don’t think diagnosing the cause was really possible anyway. It just would have been nice to have some logs to help understand and troubleshoot the problem.

I’m not sure what you mean by “your log issue is probably the usual one”. If it’s normal for a failed backup job not to create a log entry under “Show log” in the GUI, then that seems like something that should be fixed.

Obviously the command line is capable of providing more info, so I’ll probably just use that if I need to troubleshoot something in the future. I hadn’t used the command line before this issue occurred, so I was only looking at the GUI initially.

Note that the job logs are in the randomly named databases whose paths are shown on the job’s Database screen.
Server settings for the backup jobs, and job errors that don’t go into the job logs, are in Duplicati-server.sqlite. Possibly you deleted the whole folder; if you simply deleted the job, then you still have the server errors.
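
On a typical per-user Windows install they’re all in one folder (a service install keeps them under the service account’s profile instead), so a quick directory listing shows what is still there, roughly:

    :: default per-user database location on Windows; holds Duplicati-server.sqlite and the job databases
    dir "%LOCALAPPDATA%\Duplicati"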

The reasons the errors got split this way are probably historical, but I’d favor a more integrated system…