Restore failing on 3 separate servers on B2

Hi @ts678 and thanks for answering

Just to clarify then.

  1. The OS being backed up in each case is Ubuntu. One server is 18.04 and two are 16.04

  2. The Recreate (Delete and repair) was done on my development server as it wasn’t important. I wasn’t entirely clear what this was going to do, but I did it in case the issue was a local database one.

  3. I downloaded that file and was able to decrypt it using the passphrase without issue.

The failure to download things seems the most likely then. I have a home server running CentOS that also has Duplicati on it, and it also backs up to B2. I tried a restore on there and it was successful. I then tried a restore using an import file from one of the servers that failed, and it too failed, so if it’s an access problem to that file, it’s affecting my home server too.

I’m completely stuck. To have the same issue on 3 separate servers all going to different buckets, all with different data and all pointing at B2 screams something fundamentally wrong.

As a further update, I deleted the bucket and deleted the backup job on the dev server. I then created a new bucket and a new job replicating the same settings but with less data. I also reduced the chunk size to 50MB from 100MB

I then did a backup and immediately tried to restore a single file and I got the exact same error.

It’s looking very much like B2 and Duplicati aren’t playing well together.

Next step in the maybe-download-issues direction is probably to get better information than the one-liner in the job summary. In a fuller log there’s usually some detail below that (but still not at the actual HTTP level).

Easiest path is probably from the home screen: About → Show log → Live → Retry. From the times seen in your prior summary, it seems to take 11 seconds pretty consistently to fail, which seems a little short for a partial download (unless it was very partial, or your link is fast and steady in how far it gets before the fail).

If you prefer, or if you want to have longer-term monitoring, use --log-level and optionally --log-file-log-level.

How to use B2 data caps & alerts would be worth a peek to make sure you haven’t hit any sort of B2 limit.

Looks that way. Let’s track it down. There’s not a widespread issue: https://usage-reporter.duplicati.com/ shows around 200,000 B2 backups per month, and the only recent issue I recall was about upload speed.

OK, looking in the Live → Retry log on the last one-file restore I attempted, I’m seeing first this:

{"ClassName":"System.Security.Cryptography.CryptographicException","Message":"Failed to decrypt data (invalid passphrase?): Invalid password or corrupted data","Data":null,"InnerException":{"ClassName":"SharpAESCrypt.SharpAESCrypt+WrongPasswordException","Message":"Invalid password or corrupted data","Data":null,"InnerException":null,"HelpURL":null,"StackTraceString":" at SharpAESCrypt.SharpAESCrypt.ReadEncryptionHeader (System.String password, System.Boolean skipFileSizeCheck) [0x00217] in <5e494c161b2e4b968957d9a34bd81877>:0 \n at SharpAESCrypt.SharpAESCrypt..ctor (System.String password, System.IO.Stream stream, SharpAESCrypt.OperationMode mode, System.Boolean skipFileSizeCheck) [0x001af] in <5e494c161b2e4b968957d9a34bd81877>:0 \n at (wrapper remoting-invoke-with-check) SharpAESCrypt.SharpAESCrypt:.ctor (string,System.IO.Stream,SharpAESCrypt.OperationMode,bool)\n at Duplicati.Library.Encryption.AESEncryption.Decrypt (System.IO.Stream input) [0x00000] in <95ae0702173a408a887772f8c0bf1afd>:0 \n at Duplicati.Library.Encryption.EncryptionBase.Decrypt (System.IO.Stream input, System.IO.Stream output) [0x00000] in <95ae0702173a408a887772f8c0bf1afd>:0 ","RemoteStackTraceString":null,"RemoteStackIndex":0,"ExceptionMethod":null,"HResult":-2146233296,"Source":"SharpAESCrypt"},"HelpURL":null,"StackTraceString":" at Duplicati.Library.Main.AsyncDownloader+AsyncDownloaderEnumerator+AsyncDownloadedFile.get_TempFile () [0x00008] in <7207c9f70eaa44febb0fb56ebee39d96>:0 \n at Duplicati.Library.Main.Operation.RestoreHandler.DoRun (Duplicati.Library.Main.Database.LocalDatabase dbparent, Duplicati.Library.Utility.IFilter filter, Duplicati.Library.Main.RestoreResults result) [0x003bb] in <7207c9f70eaa44febb0fb56ebee39d96>:0 ","RemoteStackTraceString":null,"RemoteStackIndex":0,"ExceptionMethod":null,"HResult":-2146233296,"Source":"Duplicati.Library.Main"}

And then straight after, a message about the file I am trying to restore being missing:

{"ClassName":"System.IO.FileNotFoundException","Message":"Could not find file \"/home/runcloud/webapps/oxy-vovi/license.txt\".","Data":null,"InnerException":null,"HelpURL":null,"StackTraceString":" at System.IO.FileStream..ctor (System.String path, System.IO.FileMode mode, System.IO.FileAccess access, System.IO.FileShare share, System.Int32 bufferSize, System.Boolean anonymous, System.IO.FileOptions options) [0x0021a] in <8f2c484307284b51944a1a13a14c0266>:0 \n at System.IO.FileStream..ctor (System.String path, System.IO.FileMode mode, System.IO.FileAccess access, System.IO.FileShare share) [0x00000] in <8f2c484307284b51944a1a13a14c0266>:0 \n at (wrapper remoting-invoke-with-check) System.IO.FileStream:.ctor (string,System.IO.FileMode,System.IO.FileAccess,System.IO.FileShare)\n at System.IO.File.OpenRead (System.String path) [0x00000] in <8f2c484307284b51944a1a13a14c0266>:0 \n at Duplicati.Library.Common.IO.SystemIOLinux.FileOpenRead (System.String path) [0x00000] in <83d881c0b0ef488cab5a56c4c84065d6>:0 \n at Duplicati.Library.Main.Operation.RestoreHandler.DoRun (Duplicati.Library.Main.Database.LocalDatabase dbparent, Duplicati.Library.Utility.IFilter filter, Duplicati.Library.Main.RestoreResults result) [0x005ca] in <7207c9f70eaa44febb0fb56ebee39d96>:0 ","RemoteStackTraceString":null,"RemoteStackIndex":0,"ExceptionMethod":null,"HResult":-2147024894,"Source":"mscorlib","FileNotFound_FileName":"/home/runcloud/webapps/oxy-vovi/license.txt","FileNotFound_FusionLog":null}

Does that help at all?

BTW I did a file transfer speed test from that server and it can wget files at about 890 Mbits rock solid so I’m guessing the 11 seconds is the complete time it takes to get the 100MB chunk
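For scale, the arithmetic behind that guess can be sketched in a few lines (plain Python; the 100 MB and 890 Mbit/s figures come from the posts above, decimal units and steady throughput are assumptions):

```python
# Back-of-envelope check of the 11-second figure, using the numbers
# quoted above (100 MB volume, 890 Mbit/s measured wget speed).
volume_bits = 100 * 1000 * 1000 * 8   # 100 MB volume, in bits
local_rate = 890 * 1000 * 1000        # measured wget speed, bits/second

transfer_at_full_speed = volume_bits / local_rate   # roughly 0.9 seconds
implied_rate = volume_bits / 11                     # bits/second, if 11 s were all transfer
print(round(transfer_at_full_speed, 2), round(implied_rate / 1e6, 1))
```

So at full local link speed the volume would move in under a second; 11 seconds would instead imply a sustained ~73 Mbit/s from B2, which is plausible for a remote download even on a fast link.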

As a follow-up to this, I tried doing a backup on the same server but to Mega.nz as a test. The backup was fine, but when I went to restore I got the same error. I then tried Google Drive and it also failed.

What could be going on? So it’s not B2 that’s the problem, but my servers perhaps?

Odd, especially with it seemingly happening on many systems and different storage types. The relevant code may be worth a look.

AES File Format is what it’s processing, starting at the IV line and finishing with HMAC (but getting upset). Getting an empty or totally wrong file would probably have choked by this point, e.g. on the header marker.

There are conflicting clues here. When you did the test decrypt, was it SharpAESCrypt? If not, try that too just in case it behaves differently than AES Crypt. I think it’s also closer to what Duplicati is actually using.

Other suggestion is to keep trimming, or start from scratch with a new simple-as-possible backup (which probably still has to have encryption though), and it could even be something like 1 file to local file storage.

Speaking of storage, I did wonder if your tmp is full, but how would all your systems do that, and wouldn’t it break something else? I think Duplicati uses tmp quite a bit; --tempdir explains how to place temporary files.
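As a quick way to rule out the full-tmp theory, here is a small standard-library sketch (generic path logic, not Duplicati’s own code) that reports free space in the temp directory where downloaded volumes get staged:

```python
import shutil
import tempfile

# Duplicati stages downloaded volumes in the temp directory, so if it
# were (nearly) full, decrypt/verify steps could fail in odd ways.
# Report how much space is free there.
tmp_dir = tempfile.gettempdir()
usage = shutil.disk_usage(tmp_dir)
free_mib = usage.free // (1024 * 1024)
total_mib = usage.total // (1024 * 1024)
print(f"{tmp_dir}: {free_mib} MiB free of {total_mib} MiB")
```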

OK, I’ve fixed it on one of the servers. I had already tried completely fresh backup jobs with an encryption key of 123, backing up just one file and trying to restore, with the same result.

So out of desperation, on the dev box for now, I uninstalled completely and reinstalled 2.0.4.5 beta rather than the canary build. I did a delete and repair to rebuild the local database, and all my existing backups were there. First I did a new backup/restore and it was fine, no errors. I then tried to restore a file from a week ago that would have been backed up using the canary build, and this too restored OK.

As I said in my original post, the only previous restores I had done were a few months ago, and I think it extremely likely that I was on the beta build at that time. Having read a post on here I decided I was safe to start using canary, but foolishly I may not have done a trial restore.

With decrypting, I used AES Crypt on Windows. I will have another go with the one you are referring to on one of the other two servers I’m yet to downgrade…

I will let you know.

And as another update, when I try to decrypt using SharpAESCrypt I’m getting the error

Error: Invalid header marker

Not sure if I’m doing something wrong. I did the same as I did with AES Crypt, which worked no problem.

Canary can be variable in whether the known bug fixes or the unknown new bugs dominate; however, if this problem somehow affects only canary (I’m not hearing other reports, though), it would be good to fix it quickly. Your usage sounds production enough that canary is probably not advised, as it may break unexpectedly.

This is an example of what I meant here referring to header marker. See earlier in the code starting here. Please double-check usage, try encrypting a file then decrypting the same file. Looking at the start of the file in hex (e.g. with hexdump -C) might help, if there actually seems to be a problem with the file content.
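As a sketch of what that hex check should find, assuming the standard AES Crypt container layout (the ASCII bytes AES followed by a one-byte format version), something like this can inspect the header before blaming the passphrase; the function name is made up for illustration:

```python
def looks_like_aescrypt(path):
    """Return True if the file starts with the AES Crypt header marker.

    The AES Crypt format begins with the ASCII bytes 'AES' followed by
    a one-byte format version (0, 1 or 2). An 'Invalid header marker'
    error implies this check would fail for the file.
    """
    with open(path, "rb") as f:
        header = f.read(4)
    return len(header) == 4 and header[:3] == b"AES" and header[3] in (0, 1, 2)
```

If this returns False for a .aes volume, the file really is damaged (or isn’t an AES Crypt container at all), and no passphrase will help.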

I guess I’ll continue under the assumption that there’s an encryption issue, while waiting for confirmation.

The SharpAESCrypt project looks like it hasn’t had a change in a year, so a regression seems unlikely. I’m neither the author nor hugely familiar with the crypto area; however, if you’re willing to play with this more, maybe you could back up a non-private file with a non-private passphrase, verify the bug, and post the file somehow, either with a link to your cloud storage, or maybe in a GitHub issue if these directions work out.

How the backup process works explains what should be in it after you decrypt .zip.aes but here an issue possibly prevents decrypt. I suppose you could decrypt with AES Crypt and then make sure it looks safe.

If the Duplicati encryption code ever makes files that it can’t decrypt, that’s bad. The counter-point is that Duplicati decrypts files all the time doing backup in the verification that follows unless you turn it off/down.

Although I don’t know why it should matter, there’s always the “have you tried restarting it?” plan, which is possible somewhat independently for restore tests if you export the job for command line use, then use it. This will not only make it very visible what options are run, but get you a clean set of processes started…

Yes, that is working.

When I try to decrypt one of the Duplicati files though, I can’t when using SharpAESCrypt, but it could just be me!

I have backed up a single small image and put it on Google Drive. The passphrase is Duplicati. When I try to restore, it fails as usual. This was backed up with 2.0.4.15_canary_2019-02-06

The link to the backup location is - https://drive.google.com/drive/folders/14jPbGawqd2Uo5rcAVGZk7ooOy6wQufEU

This is affecting 3 separate servers running different versions and in different locations, so it’s not a localised issue.

Hope that helps!

All three decrypt here (and to the same content all ways, according to md5 sums) whether I use AES Crypt or SharpAESCrypt.exe on Windows 10, or SharpAESCrypt.exe on Linux Mint 18.1. Surprisingly, I don’t even need to say mono before the .exe path. It’s possible to configure that, but I didn’t know mine was set by default.

$ mono --version
Mono JIT compiler version 5.18.1.0 (tarball Fri Mar 15 20:45:47 UTC 2019)
Copyright (C) 2002-2014 Novell, Inc, Xamarin Inc and Contributors. www.mono-project.com
	TLS:           __thread
	SIGSEGV:       altstack
	Notifications: epoll
	Architecture:  amd64
	Disabled:      none
	Misc:          softdebug 
	Interpreter:   yes
	LLVM:          yes(600)
	Suspend:       preemptive
	GC:            sgen (concurrent by default)
$ 

and this worries me a bit, because 5.18 has a bug: Duplicati does not start up #3572. Ugh, mine’s broken. I possibly picked up the too-new mono inadvertently when I recently had Update Manager update software.

$ /usr/lib/duplicati/SharpAESCrypt.exe d Duplicati duplicati-20190410T145933Z.dlist.zip.aes duplicati-20190410T145933Z.dlist.sharp.zip
$ /usr/lib/duplicati/SharpAESCrypt.exe d Duplicati duplicati-bb6c7345f5ef44a2495c646ccc897a34f.dblock.zip.aes duplicati-bb6c7345f5ef44a2495c646ccc897a34f.dblock.sharp.zip
$ /usr/lib/duplicati/SharpAESCrypt.exe d Duplicati duplicati-i2784feb77d794b89b82b5fedb57b03f1.dindex.zip.aes duplicati-i2784feb77d794b89b82b5fedb57b03f1.dindex.sharp.zip
$ file /usr/lib/duplicati/SharpAESCrypt.exe
/usr/lib/duplicati/SharpAESCrypt.exe: PE32 executable (console) Intel 80386 Mono/.Net assembly, for MS Windows
$ 

What I was hoping for was a failure that I could look at from just the non-decryptable files, but they decrypt. Trying direct restore on Windows with 2.0.4.5 beta gives me dates and contents, and the .jpg restores fine.

Don’t have 2.0.4.16 anywhere, but might have put it on Linux if mono hadn’t hit me. Need to try downgrade. Meanwhile, you have several versions, so might test if direct local file restore from those three destination files only fails on 2.0.4.16 for you (if that’s not the sort of restore you tried already). That’ll narrow it down…

Trying the export and run from command line would also be possible. Syntax needs adjusting from the export format, which is for backup, into what restore expects. Possibly just adjust paths, and pick a --restore-path.

A user has reported a similar issue below. Restore from the command line works, but restore from the UI results in an error:

Yes, that seems like the same problem. I’ve set up a VM and am currently testing every version to see at what point I am unable to restore.

Good to know that I could restore via the CLI … Thanks!

Probably still worth trying that (at least from your small test backup) because this is still rather mysterious. 2.0.4.16 canary wedged (in a contained way) onto my same Windows 10 as 2.0.4.5 beta still restores fine from your 3 remote files copied locally here. Didn’t try CLI because GUI had no oddities except for warning

  • 2019-04-11 10:40:28 -04 - [Warning-Duplicati.Library.Main.Operation.RestoreHandler-NoFilesRestored]: Restore completed without errors but no files were restored

which I think may be due to my restore area already having (from 2.0.4.5 test) the only file I was asking for.

The GitHub issue was also not reproducible. What would help a fix would be related issues the developers can readily reproduce and examine without special equipment (e.g. backends) or impacts on your privacy.

For a specific lingering question, if SharpAESCrypt run the way I ran it fails for you, that would be an issue.

The GitHub issue mentions that a Duplicati downgrade fixed it (as it did for you, at least once), and that Duplicati was running as a service, which I’m not doing on my cobbled-up setup. Do you have a non-service install? I’m not sure why it would matter, but if a service is critical, a CLI success was probably not run as a service.

I’m seeing the exact same issue on the latest canary (2.0.4.22) running on Windows 7 x64. I can easily recreate it by creating a local backup from C:\Folder1 to C:\Folder2 and then trying to restore anything. I see the Failed to decrypt data (invalid passphrase?): Invalid password or corrupted data message immediately in the live log.

This happens with both AES and GPG encryption, even using a passphrase as simple as “test”. I’ve completely uninstalled Duplicati and started from a fresh configuration with the same result.

I started moving back versions to see where this started and it seems like 2.0.4.5 is the last version to work OK, so it looks like 2.0.4.6 introduced this… I can’t believe more people haven’t reported it, unless it’s something specific to my setup. Note that verifying the backup or manually decrypting with SharpAESCrypt works fine, so it looks to be something wrong with the way it’s attempting to decrypt a file during a restore operation.

Any suggestions? I’ve had my backups running on 2.0.4.22 now so I’m very wary of moving back as far as 2.0.4.5.

I just did a couple tests on my PC running 2.0.4.22. Restores completed fine.

I restored from the most recent backup set… first test was a file modified recently and a second test of a file that was not modified for over a year.

No issues… strange…

Please note that the run-up to 2.0.4.5 looks like it began at v2.0.4.1-2.0.4.1_experimental_2018-11-08. While experimental was turning into beta, canary kept moving, so 2.0.4.3 and 2.0.4.4 might have this. Did you test those? Yes, I’d also be surprised if something broke and it wasn’t noticed, but I suppose there’s always a possibility. Please don’t move your whole backup yet, but your simple test of a few more old versions might be helpful…

Some other things that can be tried when doing testing (that might not be advisable for regular-data backups):

If you leave encryption off entirely, does the problem show up at the zip level? If so, maybe the download failed, although I use the term loosely when it’s a local-file backup. Still, I think there’s a step that transfers to tmp, which for Windows might be C:\Users\<user>\AppData\Local\Temp. File names are typically dup-<GUID>, for example dup-e45b067d-caa5-48e8-b0b5-968437d01853. You can maybe glimpse them flowing through (more obvious if coming from a slow network) in File Explorer, or in Command Prompt with dir dup-* /od

Next step would be to see if one can be grabbed out of Temp for inspection. I’ve previously used a .bat file simply to run a loop of copy commands to a different folder. Eventually the Temp file leaves, but the copy remains.
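A rough cross-platform equivalent of that copy loop (names and timing are my own invention; it just polls the temp directory and keeps copies of anything matching dup-* before Duplicati deletes it):

```python
import glob
import os
import shutil
import tempfile
import time

def snapshot_temp_volumes(dest, seconds=30, interval=0.5):
    """Poll the temp directory, copying dup-* files to dest before they
    are deleted. A sketch for inspection, not Duplicati's own tooling."""
    os.makedirs(dest, exist_ok=True)
    tmp_dir = tempfile.gettempdir()
    deadline = time.time() + seconds
    while time.time() < deadline:
        for path in glob.glob(os.path.join(tmp_dir, "dup-*")):
            target = os.path.join(dest, os.path.basename(path))
            if not os.path.exists(target):
                try:
                    shutil.copy2(path, target)
                except OSError:
                    pass  # file may vanish mid-copy; catch it next pass
        time.sleep(interval)
```

Run it in a second terminal while the restore is in progress, then inspect whatever landed in the destination folder.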

As a side note (although maybe very relevant), throttling via either --throttle-upload, --throttle-download, or the GUI control at the top of the screen (to the far right of the status box) should be avoided, as it has problems.

That was it! I had a global upload throttle set… turned it off and restores work perfectly! Very strange how that affected downloads, but it looks like it might have been corrupting them somehow.

--throttle-upload affects restore download performance #3272 is the open issue, which seems to suspect a parameters-are-backwards problem, but it is hung up in some confusion. One reason is that the throttle code does Read and Write, so how that translates to upload and download might not be obvious.

Hash mismatch download corruption, likely from throttle buffer offset being reused #3782 covers a Write bug corrupting download. My B2 test seemed to be doing OK but OneDrive had issues. I suppose YMMV.

The defect itself is real old, but appears subtle enough that conditions must be right for it to cause issues.
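For anyone curious, the bug class named in that issue title can be illustrated in a few lines (a simplified Python sketch of the failure mode, not Duplicati’s actual C# code): a chunked write that forgets to advance its offset repeats the first chunk, so the output has the right length but the wrong bytes, exactly the kind of subtle corruption a hash mismatch would catch.

```python
import io

CHUNK = 4  # tiny chunk size so the bug is easy to see

def buggy_chunked_write(dst, data):
    """Writes len(data) bytes, but always reads from offset 0:
    the first chunk is repeated and the output is corrupted."""
    written = 0
    while written < len(data):
        n = min(CHUNK, len(data) - written)
        dst.write(data[0:n])                   # BUG: offset never advances
        written += n

def fixed_chunked_write(dst, data):
    """Same loop, but the read offset tracks what was written."""
    written = 0
    while written < len(data):
        n = min(CHUNK, len(data) - written)
        dst.write(data[written:written + n])   # correct: slice moves along
        written += n

payload = b"ABCDEFGHIJKL"
buggy, fixed = io.BytesIO(), io.BytesIO()
buggy_chunked_write(buggy, payload)
fixed_chunked_write(fixed, payload)
print(buggy.getvalue())  # b'ABCDABCDABCD' - right length, wrong bytes
print(fixed.getvalue())  # b'ABCDEFGHIJKL'
```

Because the length comes out right, nothing fails until a hash or decrypt check looks at the actual bytes, which fits how long the defect went unnoticed.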

Fix implementation of ThrottledStream.Write #3811 didn’t catch v2.0.4.22-2.0.4.22_canary_2019-06-30, so it will presumably be out in whatever the next canary is, and I hope that it can then flow into beta. The plan is unknown; however, since you’re already on 2.0.4.22, you might be the best person to try the next canary to see if the fix works.