Failed: The remote server returned an error: (403) Forbidden Google Drive

Sorry, I should have been explicit. Yes, it is the User Interface password under the settings.

This sounds even stranger, then… I don’t use one, but still get intermittent 403 errors from Google Drive.
I had one for a short time. I guess I can set one and see whether I can make the 403 errors worse, to help debug.

One known problem with the user interface password is that browsers used to autofill it into bad places.
This typically showed up on Restore, though, because the GUI password got put into the encryption password field.

2.0.5.112_canary_2021-01-20 put a fix in Canary, and it should be in the 2.0.6.x Beta releases.
It’s possible that there are some other files to make autofill-proof, if that turns out to be the issue.

Fixed password manager autofill, thanks @drwtsn32x

Very similar issue here.
I have four backups and one of them suddenly stopped working. For me it doesn’t make any difference whether it’s started automatically or manually. All four backups use the same AuthID and back up to the same Google Team Drive. I already renewed the AuthID (with full access, even), but it didn’t help.

The backup that’s failing is by far the biggest (~800GB, currently at 500GB uploaded as it didn’t have a complete run yet). Second biggest is 85GB.

One thing I noticed is that the “check connection” function takes like 15 minutes to tell me that the check was successful, while before, when I initially set the backup up, it took like 5 seconds. But it takes that long for all backups, not just the failing one.

I just set the number of retries to 20. I’ll check if that helps.

Log:

System.Net.WebException: The remote server returned an error: (403) Forbidden.
  at System.Net.HttpWebRequest.GetResponseFromData (System.Net.WebResponseStream stream, System.Threading.CancellationToken cancellationToken) [0x00146] in <9c6e2cb7ddd8473fa420642ddcf7ce48>:0 
  at System.Net.HttpWebRequest.RunWithTimeoutWorker[T] (System.Threading.Tasks.Task`1[TResult] workerTask, System.Int32 timeout, System.Action abort, System.Func`1[TResult] aborted, System.Threading.CancellationTokenSource cts) [0x000f8] in <9c6e2cb7ddd8473fa420642ddcf7ce48>:0 
  at Duplicati.Library.Main.BackendManager.List () [0x00049] in <e60bc008dd1b454d861cfacbdd3760b9>:0 
  at Duplicati.Library.Main.Operation.FilelistProcessor.RemoteListAnalysis (Duplicati.Library.Main.BackendManager backend, Duplicati.Library.Main.Options options, Duplicati.Library.Main.Database.LocalDatabase database, Duplicati.Library.Main.IBackendWriter log, System.Collections.Generic.IEnumerable`1[T] protectedFiles) [0x0000d] in <e60bc008dd1b454d861cfacbdd3760b9>:0 
  at Duplicati.Library.Main.Operation.FilelistProcessor.VerifyRemoteList (Duplicati.Library.Main.BackendManager backend, Duplicati.Library.Main.Options options, Duplicati.Library.Main.Database.LocalDatabase database, Duplicati.Library.Main.IBackendWriter log, System.Collections.Generic.IEnumerable`1[T] protectedFiles) [0x00000] in <e60bc008dd1b454d861cfacbdd3760b9>:0 
  at Duplicati.Library.Main.Operation.BackupHandler.PreBackupVerify (Duplicati.Library.Main.BackendManager backend, System.String protectedfile) [0x0011d] in <e60bc008dd1b454d861cfacbdd3760b9>:0 
  at Duplicati.Library.Main.Operation.BackupHandler.RunAsync (System.String[] sources, Duplicati.Library.Utility.IFilter filter, System.Threading.CancellationToken token) [0x01048] in <e60bc008dd1b454d861cfacbdd3760b9>:0 
  at CoCoL.ChannelExtensions.WaitForTaskOrThrow (System.Threading.Tasks.Task task) [0x00050] in <9a758ff4db6c48d6b3d4d0e5c2adf6d1>:0 
  at Duplicati.Library.Main.Operation.BackupHandler.Run (System.String[] sources, Duplicati.Library.Utility.IFilter filter, System.Threading.CancellationToken token) [0x00009] in <e60bc008dd1b454d861cfacbdd3760b9>:0 
  at Duplicati.Library.Main.Controller+<>c__DisplayClass14_0.<Backup>b__0 (Duplicati.Library.Main.BackupResults result) [0x0004b] in <e60bc008dd1b454d861cfacbdd3760b9>:0 
  at Duplicati.Library.Main.Controller.RunAction[T] (T result, System.String[]& paths, Duplicati.Library.Utility.IFilter& filter, System.Action`1[T] method) [0x0026f] in <e60bc008dd1b454d861cfacbdd3760b9>:0 
  at Duplicati.Library.Main.Controller.Backup (System.String[] inputsources, Duplicati.Library.Utility.IFilter filter) [0x00074] in <e60bc008dd1b454d861cfacbdd3760b9>:0 
  at Duplicati.Server.Runner.Run (Duplicati.Server.Runner+IRunnerData data, System.Boolean fromQueue) [0x00349] in <156011ea63b34859b4073abdbf0b1573>:0

I’m running Duplicati - 2.0.6.3_beta_2021-06-17 on Ubuntu Server 20.04.

I haven’t personally had one never come back, but it has sometimes exceeded the retries I chose…

Implement exponential backoff for backend errors #4661 was recently done by a volunteer. Thanks!
Scheduled Backup Failure: Google Drive: (403) Forbidden was cited, and Google suggests backoff.
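
The idea is to double the wait after each failure instead of retrying at a fixed pace, and to add some randomness. A minimal C# sketch of the general technique Google describes (my illustration, not Duplicati’s actual code; the operation delegate and retry cap are placeholders):

    using System;
    using System.Net;
    using System.Threading;

    static class BackoffExample
    {
        // Retry a flaky backend operation with exponential backoff plus jitter.
        public static void RunWithBackoff(Action operation, int maxRetries)
        {
            var rng = new Random();
            for (int attempt = 0; ; attempt++)
            {
                try { operation(); return; }
                catch (WebException) when (attempt < maxRetries)
                {
                    // Wait 1s, 2s, 4s, ... capped at 64s, plus up to 1s of random
                    // jitter so clients that failed together don't all retry together.
                    var seconds = Math.Min(Math.Pow(2, attempt), 64);
                    Thread.Sleep(TimeSpan.FromSeconds(seconds) + TimeSpan.FromMilliseconds(rng.Next(1000)));
                }
            }
        }
    }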

If you have a fast system and upload, it’s possible your 800 GB helped push past the 750 GB daily limit (750 GB in 24 hours works out to roughly 70 Mbit/s sustained).
That will mean even 20 retries won’t get past it, and you’ll have to wait for Google to unlock things…

That’s unlikely. My internet at home isn’t fast enough to upload 750GB in one day. It might be that my Google account is also used on another server with more upload speed, but I’m pretty sure it isn’t. Also, my other three backups to the same Google Drive never had a problem.

I suppose it’s possible for Google to have localized problems, which might explain why your backups are behaving differently. What appears like a unified folder structure to you is probably spread all over their data centers.

If it doesn’t come back by itself shortly, trying to troubleshoot at a slightly lower level is possible but somewhat complex.

Export As Command-line can get you a URL to use with Duplicati.CommandLine.BackendTool.exe to see whether a simple operation like list or get fares any better or gives any further information about the 403.
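
On Ubuntu that would look something like the line below (the target URL, including its authid, comes from your own Export As Command-line output; the install path and folder name here are just my guesses):

    mono /usr/lib/duplicati/Duplicati.CommandLine.BackendTool.exe list "googledrive://backup-folder?authid=..."

A plain list either succeeds or shows the 403 without the rest of the backup machinery in the way.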

Being on Ubuntu, you can’t use .NET Framework network tracing, which is hard but gives the best visibility.

Looks like setting the number of retries to 20 helped. The backup started the next time I tried.

BTW I am also using Google Drive as a backend (now backing up 4.4 TB), and I also faced this error last December during my initial measurements. For me, using 2.0.6.3_beta_2021-06-17, the following combination was already sufficient:
--number-of-retries 10
--retry-delay 20

Once there is a stable beta that includes the backoff mechanism instead of a constant --retry-delay, switching to that should make setting --number-of-retries this high unnecessary.

I have just enhanced error reporting for Google Drive errors by unpacking and displaying the HTTP response. I was receiving 403 errors, and the detail relating to them was “This file has been identified as malware or spam and cannot be downloaded”. It appears that Google checks files before allowing them to be downloaded. To bypass this error, Google Drive has a query string parameter called “acknowledgeAbuse” that must be set if a file is detected as having spam/malware (the flag can’t be set generally; it can only be set for files detected as having spam/malware). I have modified the code to add this flag when that error is detected, but only if a new Google Drive Duplicati option called “acknowledge-abuse” is also enabled.
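
The underlying Drive v3 request looks roughly like this (a simplified C# sketch for illustration, not the actual patch; the file ID and access token handling are placeholders):

    using System.Net;

    static class DriveDownloadExample
    {
        // Download file content, optionally acknowledging Google's abuse flag.
        // Drive only accepts acknowledgeAbuse=true for files it has flagged.
        public static WebResponse Download(string fileId, string accessToken, bool acknowledgeAbuse)
        {
            var url = "https://www.googleapis.com/drive/v3/files/" + fileId + "?alt=media";
            if (acknowledgeAbuse)
                url += "&acknowledgeAbuse=true";
            var req = (HttpWebRequest)WebRequest.Create(url);
            req.Headers["Authorization"] = "Bearer " + accessToken;
            return req.GetResponse();
        }
    }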

The code is complete… if I haven’t fixed the issue above for everyone, then at least the improved error reporting will get them closer.

Where do I find detail on how to put this code into an early release branch?

This sounds like the better display of returned errors (beyond status code) that I’ve long been wanting. Although you’re after Google Drive errors (and 403 has annoyed me too), is it a general HTTP helper?

It’s done using a GitHub pull request, but I don’t do them, so I can mostly refer you to public information.
How to join in the development of Duplicati?
Proposing changes to your work with pull requests (GitHub’s documentation, but other info is around)

Maybe someone who is more familiar with this will stop by to help, but that’s the basics of how it goes.
After the pull request gets put into the master branch by somebody, the next step is a Canary release.

These tend to happen when enough changes build up or an emergency happens (like fixing Dropbox).
Ideally they would happen more often, but there is a need for someone to volunteer to do the releases.
Duplicati seeks volunteers in all areas such as forum, manual, test, and most importantly development.

Regardless, thanks for the code!

EDIT:

Actually, my 403 errors were on uploads, so possibly that’s a different problem. Regardless, any extra information that can be displayed would be helpful (and hopefully not too long or containing private info).

It’s definitely a better display of errors, as it returns the error information that Google Drive responds with, so you have some hope of correcting the issue. It’s not a general HTTP helper, as it is applied slightly differently in different circumstances. Also, once errors have been detected, developers will be able to code solutions to mitigate them, as I did when Google Drive told me that a particular file contains spam or malware.
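
In spirit it’s just reading the response body out of the WebException instead of throwing it away; something along these lines (a generic sketch, not the exact change):

    using System.IO;
    using System.Net;

    static class ErrorDetailExample
    {
        // Report Google's JSON error body (reason, message) rather than
        // only the bare status code.
        public static string Describe(WebException wex)
        {
            if (wex.Response is HttpWebResponse resp)
                using (var reader = new StreamReader(resp.GetResponseStream()))
                    return (int)resp.StatusCode + " " + resp.StatusDescription + ": " + reader.ReadToEnd();
            return wex.Message;
        }
    }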

Thank you for the information on how to get the files back to Duplicati. On reading the doco, they’d like me to fork the master branch; I can then just comment, commit, and push my changes into the fork for someone to include in a Canary release when they get to it. FYI, a Canary release was created yesterday; hopefully another will be created soon after I push my changes.

Before I went to bed last night I thought I should probably include my work across all web methods. I have now been through all the web methods of the Google services and added the improved logging. That should give you the information you need when you next get an error. Once you get an error message, we can work together to determine the cause and fix the code if needed.

That was kind of an odd one, noted as a “preliminary build”, unsigned, with just a few files, and no autoupdate (probably the best plan given the other omissions). The long version of its backstory is here.

Plans for next release? was another contributor (thank you both) hoping for a Canary soon. The problem currently is that the release manager has been unavailable, but there’s an opening for a volunteer.
You saw the style of a typical release note. Beyond that, some amount of Git expertise is required.

So yes, I’m hoping we can keep releases flowing, but the future of them is kind of murky right now.
I see your pull request (thanks!). Maybe someone can get it into master before the next “real” Canary.

FWIW, I had the same issue trying to back up two servers, where each would work manually but one would fail when running as a scheduled backup. I solved the problem by not running the backups concurrently.
Evidently, running multiple backups that start at the same time from two different hosts is something Google finds particularly annoying. As soon as I changed the timing so that there was no overlap, everything worked.

cheers
