Sorry I should have been explicit. Yes it is the User Interface password under the settings.
This sounds even stranger then… I don't use one, but still get intermittent 403 errors from Google Drive.
I had one for a short time. I guess I can set one and see whether that makes the 403 errors worse, to help debug.
One known problem with the user interface password is that browsers used to autofill it into bad places.
This typically showed up on Restore, though, because the GUI password got put into the encryption password field.
2.0.5.112_canary_2021-01-20 put the fix in Canary, and it should be in the 2.0.6.x Beta releases.
It's possible that there are some other files to make autofill-proof, if that turns out to be the issue.
Fixed password manager autofill, thanks @drwtsn32x
Very similar issue here.
I have four backups and one of them suddenly stopped working. For me it doesn't make any difference whether it's started automatically or manually. All four backups use the same AuthID and back up to the same Google team drive. I already renewed the AuthID (with full access, even), but it didn't help.
The backup that's failing is by far the biggest (~800 GB, currently at 500 GB uploaded, as it hasn't had a complete run yet). The second biggest is 85 GB.
One thing I noticed is that the "check connection" function takes about 15 minutes to tell me that the check was successful, whereas it took about 5 seconds when I initially set the backup up. It takes that long for all backups, though, not just the failing one.
I just set the number of retries to 20. I'll check whether that helps.
Log:
System.Net.WebException: The remote server returned an error: (403) Forbidden.
at System.Net.HttpWebRequest.GetResponseFromData (System.Net.WebResponseStream stream, System.Threading.CancellationToken cancellationToken) [0x00146] in <9c6e2cb7ddd8473fa420642ddcf7ce48>:0
at System.Net.HttpWebRequest.RunWithTimeoutWorker[T] (System.Threading.Tasks.Task`1[TResult] workerTask, System.Int32 timeout, System.Action abort, System.Func`1[TResult] aborted, System.Threading.CancellationTokenSource cts) [0x000f8] in <9c6e2cb7ddd8473fa420642ddcf7ce48>:0
at Duplicati.Library.Main.BackendManager.List () [0x00049] in <e60bc008dd1b454d861cfacbdd3760b9>:0
at Duplicati.Library.Main.Operation.FilelistProcessor.RemoteListAnalysis (Duplicati.Library.Main.BackendManager backend, Duplicati.Library.Main.Options options, Duplicati.Library.Main.Database.LocalDatabase database, Duplicati.Library.Main.IBackendWriter log, System.Collections.Generic.IEnumerable`1[T] protectedFiles) [0x0000d] in <e60bc008dd1b454d861cfacbdd3760b9>:0
at Duplicati.Library.Main.Operation.FilelistProcessor.VerifyRemoteList (Duplicati.Library.Main.BackendManager backend, Duplicati.Library.Main.Options options, Duplicati.Library.Main.Database.LocalDatabase database, Duplicati.Library.Main.IBackendWriter log, System.Collections.Generic.IEnumerable`1[T] protectedFiles) [0x00000] in <e60bc008dd1b454d861cfacbdd3760b9>:0
at Duplicati.Library.Main.Operation.BackupHandler.PreBackupVerify (Duplicati.Library.Main.BackendManager backend, System.String protectedfile) [0x0011d] in <e60bc008dd1b454d861cfacbdd3760b9>:0
at Duplicati.Library.Main.Operation.BackupHandler.RunAsync (System.String[] sources, Duplicati.Library.Utility.IFilter filter, System.Threading.CancellationToken token) [0x01048] in <e60bc008dd1b454d861cfacbdd3760b9>:0
at CoCoL.ChannelExtensions.WaitForTaskOrThrow (System.Threading.Tasks.Task task) [0x00050] in <9a758ff4db6c48d6b3d4d0e5c2adf6d1>:0
at Duplicati.Library.Main.Operation.BackupHandler.Run (System.String[] sources, Duplicati.Library.Utility.IFilter filter, System.Threading.CancellationToken token) [0x00009] in <e60bc008dd1b454d861cfacbdd3760b9>:0
at Duplicati.Library.Main.Controller+<>c__DisplayClass14_0.<Backup>b__0 (Duplicati.Library.Main.BackupResults result) [0x0004b] in <e60bc008dd1b454d861cfacbdd3760b9>:0
at Duplicati.Library.Main.Controller.RunAction[T] (T result, System.String[]& paths, Duplicati.Library.Utility.IFilter& filter, System.Action`1[T] method) [0x0026f] in <e60bc008dd1b454d861cfacbdd3760b9>:0
at Duplicati.Library.Main.Controller.Backup (System.String[] inputsources, Duplicati.Library.Utility.IFilter filter) [0x00074] in <e60bc008dd1b454d861cfacbdd3760b9>:0
at Duplicati.Server.Runner.Run (Duplicati.Server.Runner+IRunnerData data, System.Boolean fromQueue) [0x00349] in <156011ea63b34859b4073abdbf0b1573>:0
I'm running Duplicati - 2.0.6.3_beta_2021-06-17 on Ubuntu Server 20.04.
I haven't personally had one never come back, but it has sometimes exceeded the retries I chose…
Implement exponential backoff for backend errors #4661 was recently done by a volunteer. Thanks!
Scheduled Backup Failure: Google Drive: (403) Forbidden was cited, and Google suggests backoff.
If you have a fast system and a fast upload, it's possible your 800 GB backup helped push past Google's 750 GB daily upload limit.
That would mean even 20 retries won't get past it, and you'll have to wait for Google to unlock things…
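If you want to see what "backoff" amounts to, here is a minimal sketch of the idea in C# (not the actual #4661 code, and the numbers are invented): wait after a failure, add a little jitter, and double the wait before the next attempt, which is roughly what Google's API guidelines recommend.

using System;
using System.Net;
using System.Threading;

static class BackoffSketch
{
    // Run an operation, retrying on WebException with an exponentially growing delay.
    public static void RunWithBackoff(Action operation, int maxRetries = 10)
    {
        var rng = new Random();
        var delay = TimeSpan.FromSeconds(1);

        for (int attempt = 0; ; attempt++)
        {
            try
            {
                operation();
                return;
            }
            catch (WebException) when (attempt < maxRetries)
            {
                // Sleep, add jitter, then double the delay (capped at 5 minutes) for the next try.
                Thread.Sleep(delay + TimeSpan.FromMilliseconds(rng.Next(0, 1000)));
                delay = TimeSpan.FromSeconds(Math.Min(delay.TotalSeconds * 2, 300));
            }
        }
    }
}

With a fixed --retry-delay every retry hits Google again at the same pace; with backoff, a long streak of 403s gets progressively longer pauses rather than hammering the server at a constant rate.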
That's unlikely. My internet at home isn't fast enough to upload 750 GB in one day. It could be that my Google account is also used on another server with more upload speed, but I'm pretty sure it isn't. Also, my other three backups to the same Google Drive never had a problem.
I suppose it's possible for Google to have localized problems, which might explain why some of your backups are behaving differently. What appears to you as a unified folder structure is probably spread all over their data centers.
If it doesn't come back by itself shortly, trying to troubleshoot a bit lower is possible but somewhat complex.
Export As Command-line can get you a URL to use with Duplicati.CommandLine.BackendTool.exe, to see whether a simple operation like list or get fares any better or gives any further information about the 403.
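For example, on Linux through mono, something along these lines (the destination URL and file name below are just placeholders, and I'm writing the syntax from memory, so run the tool without arguments to see its built-in help):

mono Duplicati.CommandLine.BackendTool.exe LIST "googledrive://backups/myjob?authid=XXXX"
mono Duplicati.CommandLine.BackendTool.exe GET "googledrive://backups/myjob?authid=XXXX" duplicati-20210617T000000Z.dlist.zip.aes

The URL in quotes is whatever Export As Command-line shows as the destination, and the GET file name would be one of the remote dlist or dblock files.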
Being on Ubuntu, you can't use .NET Framework network tracing, which is hard but gives the best visibility.
Looks like setting the number of retries to 20 helped. The backup started the next time I tried it.
BTW, I am also using Google Drive as a backend (now backing up 4.4 TB), and I also hit this error last December during my initial runs. For me, using 2.0.6.3_beta_2021-06-17, the following combination was already sufficient:
--number-of-retries 10
--retry-delay 20
Once there is a stable Beta that includes the backoff mechanism instead of a constant --retry-delay, switching to it should make such a high --number-of-retries unnecessary.
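For anyone finding this later: these are ordinary advanced options, so the same values can be set on a job's Advanced options screen or appended to an exported command line, e.g.

--number-of-retries=10 --retry-delay=20s

(--retry-delay takes a timespan, so an explicit unit like "20s" is the unambiguous way to write it).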
I have just enhanced error reporting for Google Drive errors by unpacking and displaying the HTTP response. I was receiving 403 errors, and the detail relating to them was "This file has been identified as malware or spam and cannot be downloaded". It appears that Google checks files before allowing them to be downloaded. To bypass this error, Google Drive has a query string parameter called "acknowledgeAbuse" that must be set if a file is detected as containing spam/malware (the flag can't be set generally; it can only be set for files detected as containing spam/malware). I have modified the code to add this flag when this error is detected, but only if a new Google Drive Duplicati option called "acknowledge-abuse" is also enabled.
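To make the shape of the change a bit more concrete for anyone following along, it is roughly along these lines. This is an illustrative sketch, not the actual code I wrote: the method names, the option plumbing, and the error-text matching are invented, although acknowledgeAbuse itself is a real Google Drive API query parameter.

using System;
using System.IO;
using System.Net;

static class AcknowledgeAbuseSketch
{
    // Download a Google Drive file; if Google refuses with the abuse/malware 403 and the
    // user enabled the (proposed) acknowledge-abuse option, retry once with acknowledgeAbuse=true.
    public static Stream Download(string fileId, string accessToken, bool acknowledgeAbuse)
    {
        var url = "https://www.googleapis.com/drive/v3/files/" + fileId + "?alt=media";
        try
        {
            return Open(url, accessToken);
        }
        catch (WebException wex) when (acknowledgeAbuse && IsAbuseError(wex))
        {
            // Google only accepts this flag for files it actually flagged as spam/malware.
            return Open(url + "&acknowledgeAbuse=true", accessToken);
        }
    }

    private static Stream Open(string url, string accessToken)
    {
        var req = (HttpWebRequest)WebRequest.Create(url);
        req.Headers["Authorization"] = "Bearer " + accessToken;
        return req.GetResponse().GetResponseStream();
    }

    private static bool IsAbuseError(WebException wex)
    {
        var resp = wex.Response as HttpWebResponse;
        if (resp == null || resp.StatusCode != HttpStatusCode.Forbidden)
            return false;
        using (var reader = new StreamReader(resp.GetResponseStream()))
        {
            // The JSON error body is where the "malware or spam" detail appears.
            var body = reader.ReadToEnd();
            return body.Contains("cannotDownloadAbusiveFile") || body.Contains("malware or spam");
        }
    }
}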
The code is complete… if I haven't fixed the issue above for everyone, then at least the improved error reporting will get them closer.
Where do I find details on how to put this code into an early release branch?
This sounds like the better display of returned errors (beyond the status code) that I've long been wanting. Although you're after Google Drive errors (and 403 has annoyed me too), is it a general HTTP helper?
It's done using a GitHub pull request, but I don't do them myself, so I can mostly refer you to public information.
How to join in the development of Duplicati?
Proposing changes to your work with pull requests (GitHub's documentation, but other info is around)
Maybe someone who is more familiar with this will stop by to help, but that's the basics of how it goes.
After the pull request gets put into the master branch by somebody, the next step is a Canary release.
These tend to happen when enough changes build up or an emergency happens (like fixing Dropbox).
Ideally they would happen more often, but there is a need for someone to volunteer to do the releases.
Duplicati seeks volunteers in all areas such as forum, manual, test, and most importantly development.
Regardless, thanks for the code!
EDIT:
Actually, my 403 errors were on uploads, so possibly that's a different problem. Regardless, any extra information that can be displayed would be helpful (and hopefully not be too long or contain private info).
It's definitely a better display of errors, as it returns the error information that Google Drive responds with, so you have some hope of correcting the issue. It's not a general HTTP helper, as it is applied slightly differently in different circumstances. Also, once errors are surfaced, developers will be able to code solutions to mitigate them, as I did when Google Drive told me that a particular file contains spam or malware.
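For readers following along, "unpacking the response" boils down to something like this minimal sketch (the helper name is invented and this is not the code in my change): read the body Google attached to the failed response and put it into the exception message, so the log shows the real reason instead of just "(403) Forbidden".

using System;
using System.IO;
using System.Net;

static class VerboseErrors
{
    // Run a web call; on failure, re-throw with the error body the server returned appended.
    public static T WithErrorDetail<T>(Func<T> call)
    {
        try
        {
            return call();
        }
        catch (WebException wex) when (wex.Response != null)
        {
            string detail;
            using (var reader = new StreamReader(wex.Response.GetResponseStream()))
                detail = reader.ReadToEnd();
            throw new WebException(wex.Message + " - " + detail, wex, wex.Status, wex.Response);
        }
    }
}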
Thank you for the information on how to get the changes back into Duplicati. On reading the doco, they'd like me to fork from the master branch; I can then comment, commit and push my changes into the fork for someone to include in a Canary release when they get to it. FYI, a Canary release was created yesterday; hopefully another will be created soon after I push my changes.
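For anyone else who wants to do the same, the flow from a fork is roughly this (the branch name and commit message here are just examples):

git clone https://github.com/<your-user>/duplicati.git
cd duplicati
git checkout -b googledrive-error-detail
# edit, build and test, then:
git commit -am "Improve Google Drive error reporting"
git push origin googledrive-error-detail

and then open the pull request against duplicati/duplicati on GitHub.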
Before I went to bed last night I thought I should probably apply my work across all web methods. I have now been through all the web methods of the Google services and added the improved logging. That should give you the information you need when you next get an error. Once you get an error message, we can work together to determine the cause and fix the code if needed.
That was kind of an odd one, noted as a "preliminary build": unsigned, with just a few files, and no autoupdate (probably the best plan, given the other omissions). The long version of its backstory is here.
Plans for next release? was another contributor (thank you both) hoping for a Canary soon. The current problem is that the release manager has been unavailable, but there's an opening for a volunteer.
You saw the style of a typical release note. Beyond that, some amount of Git expertise is required.
So yes, I'm hoping we can keep releases flowing, but the future of them is kind of murky right now.
I see your pull request (thanks!). Maybe someone can get it into master before the next "real" Canary.
FWIW, I had the same issue trying to back up 2 servers, where each would work manually but one would fail when running as a scheduled backup. I solved the problem by not running the backups concurrently.
Evidently, running multiple backups that start at the same time from two different hosts is something Google finds particularly annoying. As soon as I changed the timing so that there was no overlap, everything worked.
cheers