I’m a little confused; I don’t see any mention of removing mono or a spelling problem in your previous post.
You mentioned trying to run the backup again, but I thought you were trying to do a restore. Did you try the main menu Restore item with “Direct restore from backup files …”? (That way you don’t need to create a backup job if all you want to do is a restore.)
Focusing specifically on your restore needs, I’m not sure if there’s an “oauth log” that @kenkendk can look at that might give us more detail.
Alternatively, if your goal is to restore pretty much everything from your backup, then if you have the space you could pull the Duplicati files from your Google Drive to a local drive and do the restore from there.
Again, IF you’re going to restore “everything” then (assuming the oauth issue is resolved) Duplicati would just be downloading most of those files from Google Drive anyway. (That holds if you have few versions of files; if you have a LOT of versions, or are only planning to restore a few files, then Duplicati would only download the archive files it needs for those versions / files.)
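If it helps, a local restore like that can also be done from the command line once the files are copied down. This is just a rough sketch, not your exact setup: the install path, source folder, passphrase, and restore path are all placeholders you’d need to adjust, and on Linux the command is typically run through mono.

```shell
# Hypothetical sketch: restore everything from backup files
# copied to a local folder. All paths and the passphrase are
# placeholders -- adjust them for your own machine.
mono /usr/lib/duplicati/Duplicati.CommandLine.exe restore \
  "file:///home/me/gdrive-copy" "*" \
  --passphrase="my-backup-passphrase" \
  --restore-path="/home/me/restored"
```

The same thing can of course be done in the web UI by pointing “Direct restore from backup files …” at the local folder.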
I’m not sure if this is related to the oauth problem but I keep getting security alert messages from Google saying a new Linux device has signed in to my account and asking for confirmation that it’s me. I do the confirmation every time, but oauth still fails.
Yes and no. There is a log for the Duplicati side of the OAuth handshake, but it just reveals if the client request actually made it to the server or not. Any error messages should also be reported to the client and displayed locally.
Those who reported errors before did make a successful request, but then the request failed later when contacting Google’s servers, and I have no logs for that.
For this problem, it clearly states that it is a certificate error, so that would need to be fixed before the request goes through.
But it seems that the OP has solved the problem somehow.
That’s likely just a normal Google thing - especially if you have a frequently changing IP address.
Glad to hear the “manual” restore worked, good luck with the new backup! Note that if it’s possible to keep having your existing destination you can back up your new computer to it and it should find most of the files you restored from the destination as already backed up (even if the paths have changed). This means your new-machine backup wouldn’t have to re-upload everything.
I have not found a solution to the oauth failure problem. I needed to do a restore, and at JonMikeIV’s suggestion, I was able to do so by manually downloading all the backup files from Google Drive onto my hard drive and then doing the restore locally. But now I want to back up my new machine, and I can’t get through to my Google Drive space with Duplicati.
Oh sorry, I misunderstood you then. I suppose it’s a Linux or Mono thing.
I have just set up a few Duplicity test folders to Drive with GPG. So far it has been pretty easy to do, even though I’m no expert. I’m going to practice first before switching to it. I just don’t like that I can’t make any online backups; it makes me nervous.
I had tried --accept-specified-ssl-hash before, and tried it again just to be sure, but it didn’t solve the problem. I then installed ca-certificates-mono and ran cert-sync, and tried Duplicati again. This time I got a ‘GetResponse timed out’ error.
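For anyone else hitting the certificate error, the steps above look roughly like this on a Debian/Ubuntu-style system (package manager and CA bundle path are assumptions; other distros differ):

```shell
# Install Mono's CA certificate integration package
# (package name as on Debian/Ubuntu; may differ elsewhere):
sudo apt-get install ca-certificates-mono

# Sync the system CA bundle into Mono's certificate store.
# /etc/ssl/certs/ca-certificates.crt is the usual Debian/Ubuntu
# bundle location -- adjust if your distro keeps it elsewhere.
sudo cert-sync /etc/ssl/certs/ca-certificates.crt
```

After that, restart Duplicati so it picks up the updated certificate store.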
That you’re progressing to a new error is a good sign that the original issue has been resolved (likely by the ca-certificates-mono and cert-sync steps).
I think I saw that you’re running Duplicati 2.0, but can you provide the rest of the version number? I suspect you’re running into something like the issue below, but how to deal with it can vary depending on if you’re using 18.104.22.168 beta or a later canary:
Thanks for the details. So the oauth issue seems to be resolved but now you’re seeing the GetResponse Timed Out message.
If I follow correctly you’re setting up a new computer to use a pre-existing (for a year and a half) backup set. (I guess what CrashPlan would have called “adopting”?)
If that’s correct, I expect you’re running into having so many existing backup files that the “see what files are at the destination” call is timing out. Does the timeout error happen about 10 minutes into the backup process, or near the end?
Some things to try include:
turn on --no-backend-verification (this disables the automatic checking of some of your backup files, so it’s not a good thing to leave on)
upgrade to an experimental version so you can try setting the new --http-operation-timeout parameter to something like 20m (it defaults to 10m)