Help with oauth failure

I’m a little confused, I don’t see any mention of removing mono or a spelling problem in your previous post.

You mentioned trying to run the backup again, but I thought you were trying to do a restore - did you try using the main menu Restore item with “Direct restore from backup files …” (which means you don’t need to create a backup job if all you want to do is a restore)?

Federico Poloni responded to my support request on this thread:

I’m not sure how the discussion of my issue got shifted over there, but I responded to what he said.

I meant ‘restore’, not backup. And yes, I chose ‘direct restore’.

OK, now I’m following things.

Focusing specifically on your restore needs, I’m not sure if there’s an “oauth log” that @kenkendk can look at that might give us more detail.

Alternatively, if your goal is to restore pretty much everything from your backup, and you have the space, you could pull the Duplicati files from your Google Drive to a local drive and do the restore from there.

Again, IF you’re going to restore “everything” then (assuming the oauth issue is resolved) Duplicati would just be downloading most of those files from Google Drive anyway. (“Most” applies if you have few versions of files; if you have a LOT of versions, or are only planning to restore a few files, then Duplicati would only download the archive files it needs for those versions / files.)
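For anyone else following along, the manual approach might look something like this. The paths, the passphrase, and the exact CLI name are assumptions (on Linux the command-line client may be `duplicati-cli` or `mono Duplicati.CommandLine.exe` depending on how it was installed) — treat this as a sketch, not a recipe:

```shell
# Step 1: copy the entire Duplicati backup folder from Google Drive to a
# local disk (via the Drive web UI or a sync tool of your choice), e.g. into:
#   /mnt/restore/gdrive-backup

# Step 2: restore everything from the local copy -- no Google / oauth involved.
# "file://" tells Duplicati to read the backup files from the local path.
duplicati-cli restore "file:///mnt/restore/gdrive-backup" "*" \
  --restore-path="/mnt/restore/files" \
  --passphrase="your-backup-passphrase"
```

The same thing can be done from the GUI by pointing “Direct restore from backup files …” at the local folder instead of Google Drive.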

I want to do a complete restore. If there’s an oauth log, I’m not sure where I’d find that. I’ll try downloading the whole backup folder and see how it goes.

I’m not sure if this is related to the oauth problem, but I keep getting security alert messages from Google saying a new Linux device has signed in to my account and asking for confirmation that it’s me. I do the confirmation every time, but oauth still fails.

[Sigh.] It didn’t work. I got an error message:
failed to connect: No filesets found on remote target

IT WORKED! I hadn’t copied all the files before. Thanks JMIV!

Now I have to try backing it up again to Google Drive.


Yes and no. There is a log for the Duplicati side of the OAuth handshake, but it just reveals if the client request actually made it to the server or not. Any error messages should also be reported to the client and displayed locally.

Those who reported errors before did make a successful request, but the request then failed later when contacting Google’s servers, and I have no logs for that.

For this problem, it clearly states that it is a certificate error, so that would need to be fixed before the request goes through.

But it seems that the OP has solved the problem somehow.

That’s likely just a normal Google thing - especially if you have a frequently changing IP address.

Glad to hear the “manual” restore worked, good luck with the new backup! Note that if you can keep using your existing destination, you can back up your new computer to it, and Duplicati should recognize most of the files you restored as already backed up (even if the paths have changed). This means your new-machine backup wouldn’t have to re-upload everything.

I can’t back up any more on Ubuntu and have the same errors as you mentioned. I am trying to understand this thread, as it seems you have found the solution.

I am thinking of switching to Duplicity if I can’t get this resolved. Can you please tell a fellow Linux user what you did to get it running again?

I have not found a solution to the oauth failure problem. I needed to do a restore, and at JonMikeIV’s suggestion, I was able to do so by manually downloading all the backup files from Google Drive onto my hard drive and then doing the restore locally. But now I want to back up my new machine, and I can’t get through to my Google Drive space with Duplicati.

Oh sorry, I misunderstood you then. I suppose it’s a Linux or Mono thing.

I have just set up a few Duplicity test folders to Drive with GPG. So far it has been pretty easy to do, even though I’m no expert. I’m going to practice first before switching to it. I just don’t like that I can’t make any online backups; it makes me nervous.

Still waiting for a solution…

Is this for the “Failed to connect: The server certificate had the error RemoteCertificateChainErrors” message and the hash error?

I think kenkendk got it right when he said:

What ServerVersionName and MonoVersion are shown on your About -> “System Info” page?

How is the certificate error to be fixed? And if it’s a certificate error, why does the process still fail if I opt to ‘accept any ssl certificate’?

I believe you need to use the --accept-specified-ssl-hash parameter, as mentioned in the first post.

If that isn’t working, did you try cert-sync?
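For reference, the cert-sync route on a Debian/Ubuntu-family system usually goes something like this (the certificate bundle path is the Debian/Ubuntu default — verify it on your distro):

```shell
# Install Mono's certificate-store sync tool (pulls in cert-sync).
sudo apt-get install ca-certificates-mono

# Import the system CA bundle into Mono's machine certificate store so
# Mono (and therefore Duplicati) can validate Google's TLS certificate.
sudo cert-sync /etc/ssl/certs/ca-certificates.crt

# Alternatively, import into the per-user store instead:
cert-sync --user /etc/ssl/certs/ca-certificates.crt
```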

I tried --accept-specified-ssl-hash before, and tried it again just to be sure, but it didn’t solve the problem. I then installed ca-certificates-mono and ran cert-sync, and tried Duplicati again. This time I got a ‘GetResponse timed out’ error.

That you’re progressing to a new error is a good sign that the original issue has been resolved (likely by the ca-certificates-mono and cert-sync steps).

I think I saw that you’re running Duplicati 2.0, but can you provide the rest of the version number? I suspect you’re running into something like the issue below, but how to deal with it can vary depending on if you’re using beta or a later canary:

It’s the beta, from the gdebi package for Debian/Ubuntu on the main Duplicati download site. My OS is Linux Mint Mate 18.3. I’ve been using a 50 MB block size on the backup.

Thanks for the details. So the oauth issue seems to be resolved but now you’re seeing the GetResponse Timed Out message.

If I follow correctly you’re setting up a new computer to use a pre-existing (for a year and a half) backup set. (I guess what CrashPlan would have called “adopting”?)

If that’s correct, I expect you’re running into having so many existing backup files that the “see what files are at the destination” call is timing out. Does the timeout error happen about 10 minutes into the backup process or near the end?

Some things to try include:

  • turn on --no-backend-verification (this disables the automatic checking of some of your backup files, so it’s not a good thing to leave on)
  • upgrade to an experimental version so you can try setting the new --http-operation-timeout parameter to something like 20m (it defaults to 10m)
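Put together, those two workarounds might be applied to a CLI backup run roughly like this. The destination URL, source path, and passphrase are placeholders, and --http-operation-timeout is only available in newer (experimental/canary) builds, so this is a sketch rather than a drop-in command:

```shell
# Hypothetical backup invocation illustrating the two workarounds above.
duplicati-cli backup "googledrive://duplicati-backup?authid=..." /home/me \
  --passphrase="your-backup-passphrase" \
  --no-backend-verification=true \
  --http-operation-timeout=20m
```

Remember to turn --no-backend-verification back off once things are working, since it skips the automatic destination-file checks.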