File delete fails on Google Drive

I was running my daily backup when it failed with the error below.

Diagnostics:

Nov 3, 2017 11:26 AM: delete duplicati-20170801T050000Z.dlist.zip.aes
System.Net.WebException: GetResponse timed out ---> System.Net.WebException: Aborted.
at System.Net.HttpWebRequest.EndGetResponse (IAsyncResult asyncResult) [0x00000] in <filename unknown>:0
at Duplicati.Library.Utility.AsyncHttpRequest+AsyncWrapper.OnAsync (IAsyncResult r) [0x00000] in <filename unknown>:0
--- End of inner exception stack trace ---
at Duplicati.Library.Utility.AsyncHttpRequest+AsyncWrapper.GetResponseOrStream () [0x00000] in <filename unknown>:0
at Duplicati.Library.Utility.AsyncHttpRequest.GetResponse () [0x00000] in <filename unknown>:0
at Duplicati.Library.JSONWebHelper.GetResponse (Duplicati.Library.Utility.AsyncHttpRequest req, System.Object requestdata) [0x00000] in <filename unknown>:0

It looks like it can’t find the file on Google Drive, but when I look, I can see the file it is trying to delete.
Suggestions appreciated.

Additional info:
Duplicati: Duplicati.Server, Version=2.0.2.1, Culture=neutral, PublicKeyToken=null (Duplicati.Library.Main, Version=2.0.2.1, Culture=neutral, PublicKeyToken=null)
Autoupdate urls: https://updates.duplicati.com/beta/latest.manifest;https://alt.updates.duplicati.com/beta/latest.manifest
Update folder: /usr/share/Duplicati/updates
Base install folder: /usr/lib/duplicati
Version name: “2.0.2.1_beta_2017-08-01” (2.0.2.1)
Current Version folder: /usr/lib/duplicati
OS: Unix 4.9.0.0
Uname: Linux omv 4.9.0-0.bpo.3-amd64 #1 SMP Debian 4.9.30-2+deb9u5~bpo8+1 (2017-09-28) x86_64 GNU/Linux
64bit: True (True)
Machinename: omv
Processors: 4
.Net Version: 4.0.30319.17020
Mono: True (3.2.8) (3.2.8 (Debian 3.2.8+dfsg-10))
Locale: en-US, en-US, en-US
Date/time strings: dddd, MMMM d, yyyy - h:mm:ss tt
Tempdir: /tmp/
SQLite: 3.8.7.1 - Mono.Data.Sqlite.SqliteConnection
SQLite assembly: /usr/lib/mono/gac/Mono.Data.Sqlite/4.0.0.0__0738eb9f132ed756/Mono.Data.Sqlite.dll

If it couldn’t find the file I think you’d see a “file not found” type error, but yes - it looks like Duplicati asked Google to delete a file and never got a response back, so the request aborted.

Did you try running the backup again? My guess is it’s a fluke (internet hiccup?) and likely isn’t repeatable - however, it might make sense to add a retry loop and/or a more sensible error message for destination timeouts.
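For reference, the sort of retry loop I mean is simple enough to sketch. This is Python rather than Duplicati’s actual C#, and purely illustrative - `operation` stands in for whatever destination call is timing out:

```python
import time

def with_retries(operation, attempts=5, base_delay=10):
    """Run operation(), retrying on timeouts with a growing delay."""
    for attempt in range(1, attempts + 1):
        try:
            return operation()
        except TimeoutError as err:
            if attempt == attempts:
                raise  # out of retries; surface the real error
            wait = base_delay * attempt  # back off a little more each time
            print(f"Attempt {attempt} failed ({err}); retrying in {wait}s")
            time.sleep(wait)
```

Duplicati may well already do something like this internally; the point is just that a transient destination timeout shouldn’t have to fail the whole backup on the first try.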

@JonMikelV
Tried running the backup multiple times. Same result each time.
Would it make sense to “move the file to trash” on the gdrive side and try again?
I have 3 backups defined and all three were working until the 1am backup on 11/2.
After that, 2 worked, this one did not. This failed backup was the only one of the 3 backups that had files to delete on the backend.

OK, then it sounds like for whatever reason Google is having trouble doing the delete requested by Duplicati.

If you feel like testing, rather than deleting the file I’d suggest moving it to a different folder. That way if things don’t improve you can put the destination back into the original failure state.

@JonMikelV,

I moved the file to a different directory and the backup completed successfully.
Is this going to be an ongoing problem for this particular backup?
Any suggestions on diagnostics I can provide the Duplicati team?

I’m glad you were able to resume your backups, though to be honest I have no idea what caused the issue in the first place.

A response timeout generally means the destination (in this case Google Drive) never got back to Duplicati about something, implying the issue is at their end - and there’s not much Duplicati can do when it gets no response from whatever it’s talking to (other than throw an error). :frowning:

However, somebody more familiar with Google Drive than I am might have a suggestion of diagnostics you could provide…

Is anyone else storing backups on Google Drive? Am I the only person having this “GetResponse timed out” issue?
This seemed to work ok when I was using the linuxserver.io Docker image, which I believe is Ubuntu 16.04 LTS based.
The OS version I’m running here is Debian Jessie 8.9. Duplicati is running headless per the Headless installation on Debian or Ubuntu guide.

I wanted to run headless on Jessie rather than run a relatively heavy Docker container (>500 MB).
Suggestions?

I’m not currently running Google Drive backups, so I can’t say whether or not it’s a general issue, but if you have a very large backup you may be running into the issue covered in this topic:

Does that sound like it applies to your situation and, if so, are you open to trying a canary version of Duplicati?

@JonMikelV,

I guess it depends on what is considered a “large backup”. The smallest backup experiencing the problem is Source: 5.37 GB / Backup: 4.69 GB. An example file it is trying to delete is duplicati-20170805T050143Z.dlist.zip.aes, which is only 297 KB. Possible Google API issue?

Unfortunately, when it comes to timeout issues like this one seems to be, the definition of “large backup” can vary by provider. But with the source and backup sizes you mentioned, I wouldn’t think that’s the issue unless you have a very small block size.

If the timeout error message is being returned in less than 10 minutes, then this is likely a different issue than the one I linked to.
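If you want to rule Duplicati out entirely, you could also try issuing the same delete directly against the Google Drive API. Here’s a rough sketch (assuming the google-api-python-client package and a creds.json holding valid OAuth credentials with Drive scope - both assumptions on my part; also note this really deletes the file, so test against a copy if you want to preserve the failure state):

```python
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

# Assumes creds.json contains authorized-user OAuth credentials.
creds = Credentials.from_authorized_user_file("creds.json")
service = build("drive", "v3", credentials=creds)

# Look up the stuck file by name, then try the same delete Duplicati does.
results = service.files().list(
    q="name = 'duplicati-20170805T050143Z.dlist.zip.aes'",
    fields="files(id, name)",
).execute()
for f in results.get("files", []):
    service.files().delete(fileId=f["id"]).execute()
    print("deleted", f["name"])
```

If that call also hangs, the problem is clearly between your box and Google rather than inside Duplicati.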

@JonMikelV,

Duplicati attempts to delete the file 5 times (within 10 minutes total) before the backup fails.
Upload volume size is 50 MB.

As an experiment, I:

1. Created a container from linuxserver.io’s Docker image on Docker Hub.
2. Added a backup by importing from an external file.
3. Modified the backup to point to the correct source.
4. Copied the file (dated Aug 5) that would be deleted when the backup runs (retention is 3 months).
5. Rebuilt the local database so that it includes all the files on Google Drive.
6. Ran the backup.

The backup was successful and the Aug 5 dlist.zip.aes file was deleted.

So, the question is: what is different between the Docker image, which includes everything, and the headless installation, which includes only what is needed on a headless server?

Both versions (Docker and bare metal) are the beta version of Duplicati (2.0.2.1_beta_2017-08-01).
Full disclosure: the Jessie barebones install is on an OpenMediaVault server. Could something be missing in the OMV headless Duplicati install that would affect Google Drive backends? [This is really weird.]

That is in line with the default settings of 5 retries and a 10 minute total timeout.

But with a backup size of 4.69 GB and a dblock (upload volume) size of 50 MB, you should only be seeing 100 or so dblock files (plus some dlist and dindex files), so there shouldn’t be any timeout-type issues with listing such a small file set.

Is it possible your OMV has some firewall or other security setup that is blocking requests to (or responses from) Google Drive?
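One quick way to check that from the OMV box itself is a bare TLS connection test - this only proves the network/OS layer can reach and handshake with the Drive API endpoint, nothing Duplicati- or Mono-specific:

```python
import socket
import ssl

# Open a verified TLS connection to Google's API endpoint.
ctx = ssl.create_default_context()
with socket.create_connection(("www.googleapis.com", 443), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname="www.googleapis.com") as tls:
        print("Connected:", tls.version(), tls.getpeercert()["subject"])
```

If that prints a certificate subject, nothing on the box is blocking the connection outright.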

Personally, I use an unRAID Docker for my Duplicati - but that’s mostly out of laziness; your idea of a lighter memory footprint than my 440M Docker does sound appealing…

Sometimes it’s the simple things that bite you in the butt.
I went back and reviewed the Problems getting started with duplicati thread and saw @kenkendk’s comments about Mono at …support-in-Mono:

Debian Jessie (8.0)

The default Mono version is 3.2.8, which can run Duplicati, but lacks the cert-sync tools. Uninstall any Mono packages and then use the Mono-supplied Debian packages, which will give you the latest version of Mono and the ca-certificates-mono package which fixes SSL.

After removing Mono 3.2.8 and installing Mono from mono-project.com (5.4.0.201 at this time), Google Drive file delete works! Life is good.
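For anyone who hits this later: the confusing part is that Mono keeps its own certificate store (populated by cert-sync, or the older mozroots) instead of reading the system CA bundle, so OS-level tools could talk to Google just fine while Mono-based Duplicati could not - presumably the delete’s HTTPS request was failing certificate validation and only surfacing as a timeout. As a rough illustration of that failure mode (a Python analogy using an empty trust store, not what Mono literally does):

```python
import socket
import ssl

# A TLS client context that requires a valid chain but has loaded no CA
# certificates - analogous to Mono 3.2.8's empty certificate store.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)

try:
    with socket.create_connection(("www.googleapis.com", 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname="www.googleapis.com"):
            pass
except ssl.SSLCertVerificationError as err:
    # Fails even though the network path is fine - the trust store is empty.
    print("Verification failed:", err.verify_message)
```

Running cert-sync (or installing ca-certificates-mono, as above) fills that store, which is why the newer Mono fixed it.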


Glad you figured it out!

I went ahead and flagged your post as the solution, please let me know if you disagree. :slight_smile: