Is there a workaround for Google Drive 403?

Hello, I keep getting error 403 when I back up to Google Drive. I read a few posts online, and they all seem to say Duplicati blames Google. That seems odd to me, because the same library I'm trying to back up always worked fine on my QNAP with HBS3, so I really think it's on Duplicati's side rather than Google's.

Does anyone know a workaround?

It might not be easy to tell which side the problem is on. Whether there's another log entry that helps further, I'm not sure either. You can try enabling more logging, or look at the other logs Duplicati keeps, depending on what you're looking at and whether Duplicati logs anything else that might help.

In my own code, I've seen problems that people blame on, e.g., Google Drive, yet reworking the code solves them; even we developers can misunderstand things. Sometimes a rework also works around a genuine issue in Google Drive. Actually figuring out the true cause can get really time-consuming.

So I'm not confident enough to guess either way beyond 50%, or even to say "likely", when people think it's a Drive issue. Drive also seems to use 403 for a number of different conditions, which doesn't help.

There's also advice like the following link, which suggests a 403 can sometimes be fixed by signing out and signing back in, and I believe that applies to official Drive use: This is how you can fix HTTP 403 error on Google Drive. Though I should mention these help sites sometimes repeat advice that was never going to fix anything.

It can be a real pain lol. I've even had an app crash for 2 years with Google's Docs app on Android. They never fix it, even though I've sent them the crash log. Nothing. Paste and randomly crash :slight_smile:

You might have seen this while looking at posts, but try raising number-of-retries and maybe retry-delay.

What Google actually calls for on some of its 403 errors is exponential backoff between retry attempts. The change below actually added that, but Canary releases are too unknown for many.
If you want to venture into the latest fixes (and latest bugs), it's available, but not via autoupdate.
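For anyone curious what "exponential backoff" means in practice, here is a minimal sketch in Python (purely illustrative, not Duplicati's actual C# code): the wait between attempts doubles each time, capped at a maximum, with random jitter added so many clients don't retry in lockstep.

```python
import random
import time

def backoff_retry(operation, max_retries=5, base_delay=1.0, max_delay=60.0):
    """Retry an operation with exponential backoff plus jitter, the
    general pattern Google recommends for retryable errors."""
    for attempt in range(max_retries):
        try:
            return operation()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries, let the error surface
            # Double the wait each attempt, cap it, and add jitter.
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(delay + random.uniform(0, delay))
```

With base_delay=1 the waits grow roughly 1s, 2s, 4s, 8s… which is why a transient rate-limit 403 often clears before the retries run out.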

Use exponential backoff for retries #2399

refers to a new pull request which has not been reviewed yet. Any volunteers to review these?
Although volunteers are needed in all areas (including the forum), we have been getting new pull requests.

I'm also not quite sure whether the rework will tell us the details of a 403 on upload. Also, is it on an upload?
The original fix was for download, but my Google Drive 403s have been on uploads, when I check.
A simple long-term record is log-file=&lt;path&gt; with log-file-log-level=retry. A put is an upload, a get a download.
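To see whether your retries are on puts or gets, you can tally the retry lines from such a log file. This little Python sketch is illustrative only; the exact wording of Duplicati's retry log lines is an assumption here, so check your own log-file output and adjust the matching.

```python
def tally_retries(lines):
    """Count retry-level log lines by operation. The keywords matched
    here ('put'/'get', 'uploading'/'downloading') are assumptions about
    the log format, for illustration only."""
    counts = {"put": 0, "get": 0, "other": 0}
    for line in lines:
        if "Retry" not in line:
            continue
        lowered = line.lower()
        if " put " in lowered or "uploading" in lowered:
            counts["put"] += 1
        elif " get " in lowered or "downloading" in lowered:
            counts["get"] += 1
        else:
            counts["other"] += 1
    return counts
```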

It's difficult to get a view of the network, because it's encrypted. Windows network tracing is one way; however, it's a bit hard to set up, and you have to be careful not to accidentally post your secrets…

But what I find weird is that HBS3 doesn't suffer from this problem.

Sadly, Duplicati doesn't upload the same way as HBS3.
That would be so much easier: it would just have to re-scan the files and, for any that are missing or broken, fix and re-upload them.

I never had any errors with HBS3, but I moved to Unraid because my NAS died, so now I have to re-upload 28 TB because Duplicati uploads in a different way.

I find Duplicati a nice alternative to HBS3. I'm not much of a command-line user and prefer GUIs.

That’s a good point. Another way of looking into it. I like it :slight_smile:

It could be more precise to say that it doesn't for the time being. Again, it's not always black and white. It definitely could be Duplicati, though, so don't get me wrong; it's not as if Duplicati has no issues.

It's just not an easy one to call either way without (possibly) many hours spent looking into it with heavy debugging of the code, or maybe the idea ts678 mentioned, if inspecting the HTTPS connection goes well.

The fix handles all HTTP methods, including put and post (I assume by "pull" you're referring to "put"). I've only tested it on a get, in the sense that a get is the only operation I've had issues with to test against. It will report the 403 details returned from Google Drive (if it works correctly for a put) :-).
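For a rough idea of what "the 403 details" could look like: Google's APIs generally return a JSON error body with a code, message, and reason codes. Here's an illustrative Python sketch of pulling those out; the JSON shape is an assumption based on Google's documented error format, not on Duplicati's code, and it falls back to the raw body if parsing fails.

```python
import json

def describe_403(body):
    """Extract a readable summary from a Google-API-style error body.
    The {"error": {"code", "message", "errors": [{"reason"}]}} shape is
    an assumption; unparseable bodies are returned as-is."""
    try:
        err = json.loads(body)["error"]
        reasons = [e.get("reason", "?") for e in err.get("errors", [])]
        return f"{err.get('code')}: {err.get('message')} ({', '.join(reasons)})"
    except (ValueError, KeyError):
        return body
```

The reason code (e.g. rateLimitExceeded vs. insufficientPermissions) is exactly the detail that would tell us which kind of 403 this is.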

BTW… since implementing the fix, Duplicati has successfully performed 539 backups (I have it backing up every 15 minutes for testing purposes).


Five thumbs up if I had five hands :smiley:

For the automated testing. I love automated testing.

The only "pull" that I can find is "pull request", but I think your pull request answer gives me hope that…

Care to do some? What Duplicati needs is more people testing more things, maybe even exceptions, possibly including network errors (real or artificial), random hard kills during backup, all sorts of fun. :wink:
To be most helpful, testers should be willing to collect and provide logs, database info, etc. for debugging.

Because I'm a bit low on spare systems, my automated test is a production backup plus after-backup tests. It started with a test all using full-remote-verification. I tracked a bug down and wrote it up:
test all with full-remote-verification shows “Extra” hashes from error in compact #4693
This would have been extremely hard to dig into without the debug data the test collects.

Goal now is to find some developer to take it from steps-to-reproduce into a fix. We need developers.
Ignoring that for a moment, the next best thing is good test cases with data, and maybe an analysis…

So what can I do for an easy fix?
There are so many things I don't understand.
I was expecting an easy-to-use backup program.

I have a full, time-consuming list of important things coming up for maybe most of the year. I keep trying to get to the end of it and never seem to lol. For a moment right now I'm taking a short break.

If I run across issues during the automated backups (similar to automated testing, but a lot slower and less varied than real programmatic automated tests), I'll probably be quite vocal about it :slight_smile:

True, it's not easy to understand.

You could try updating Duplicati to the latest Canary pre-release, maybe? @ts678 or @PaulHop might provide clarity there. I think that's what was suggested above.

Besides that, I think you’d have to wait or work on a puzzle and gain understanding.

To boil it down to the simplest way of saying it: at least that's what I see here, though it may not be fully correct, since I'm not re-reading everything.

Try what was suggested: edit the job, go to Options (screen 5), choose "Add advanced option", raise the values, and save the job.

Sorry, that was just me misreading the forum post in my hurry.

I agree with your sentiment here, but I'm time-poor. I see that Duplicati uses NUnit for a few tests; NUnit is old now, and XUnit has largely replaced it. Unit tests are probably only useful for a small subset of Duplicati's functionality. I suspect we'd be better off with integration tests… but can we add integration tests to Duplicati?

One of the systems I'm currently working on has around 200 XUnit integration tests; they take about 90 seconds to run for over 1,000 operations hitting the database… nice! The integration tests target the back end of the system, since they call a RESTful API. I don't have time to look through the code at the moment… do you know how the UI talks to the back end, and could we drive the back end from XUnit?

The next issue is that the area I just modified is the Google Drive handler… we'd need to work out how to mock the connection to Google Drive. What's more, we don't know how Google Drive will behave in certain circumstances… I suppose we can only write tests for what we do know.

So we'd only be establishing automated testing to confirm that a code change doesn't break something (which is OK). We couldn't use automated testing to check something like the new code I've just added, because we don't know (as yet) how Google Drive behaves when a 403 error occurs for, say, a PUT operation, until we actually get one.
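As a concrete picture of the "mock the remote" idea, here's a tiny sketch in Python rather than XUnit/C# (purely illustrative; the class and function names are made up, not Duplicati's): a fake backend fails a fixed number of puts with a simulated 403, and a retry loop is tested against it. This covers the behavior we do know (retries), while leaving Google's real 403 behavior unmocked, as discussed.

```python
class Fake403Backend:
    """Stand-in for a remote like Google Drive: fails a fixed number
    of times with a 403-style error, then succeeds. Illustrative only."""
    def __init__(self, failures):
        self.failures = failures
        self.calls = 0

    def put(self, name, data):
        self.calls += 1
        if self.calls <= self.failures:
            raise RuntimeError("403: rate limit exceeded (simulated)")
        return "ok"

def upload_with_retries(backend, name, data, retries=5):
    """Hypothetical retry wrapper under test."""
    for attempt in range(retries):
        try:
            return backend.put(name, data)
        except RuntimeError:
            if attempt == retries - 1:
                raise
```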

Side note: I'm hoping that, with all the backups I'm doing, my Google Drive will run out of space… it's getting close, and hopefully it will fall over on a PUT or POST call, which would exercise the error-reporting change I made in that area.

@Hinako This could be possible, as all of the handlers are projects in their own right and can be compiled separately. I could probably hand you the Google Drive handler with the enhanced error-reporting code in it. From memory, I think it's about six files that you'd update; if it doesn't work, you can just revert to the originals. Let me know if you're interested and I'll compile the libraries for you and make them available from my website.

Thanks @PaulHop for the pull request. We’re quite short on volunteers, so it may take some time before someone is able to review it.

Duplicati can certainly be improved to handle these errors better, but there’s an interesting restic discussion here that seems to hint that Google Drive itself is not entirely blameless.

The graph from ncw is interesting. I wonder if Duplicati is teetering on a rate limit, with 403 as a hint?
Resolve a 403 error: Project rate limit exceeded (if the new error detail shows that) may be configurable; however, if that were the one hurting me, I'd expect other complaints at the same time, and I find none.

@warwickmm @ts678 I doubt Google Drive is having problems with rate limiting. I'm hitting Google Drive every 15 minutes from a server connected directly to an internet backbone, so I'm hitting it pretty hard. The 403 error that I received doesn't appear in Google's 403 error list (here), so we're just guessing until the enhanced error handling is in place.

Thanks, I'll give it a try:

retry-delay = 1min
number-of-retries = 10

I hope this will solve it, because man, I think I've been busy for 2 weeks now uploading 2 TB.

The delays and such seem to have worked.

Another question: is it possible for Duplicati to start the next upload as soon as the previous one is finished?
Like a chain of uploads or something.

Uploads (up to 4 at a time by default) start as soon as a finished volume is ready to go.

  --asynchronous-concurrent-upload-limit (Integer): The number of concurrent
    uploads allowed
    When performing asynchronous uploads, the maximum number of concurrent
    uploads allowed. Set to zero to disable the limit.
    * default value: 4

The default Remote volume size (Options screen) is 50 MB, so it may take a while.
The initial backup will find that everything needs uploading; later ones only upload changes.
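As a rough picture of what that concurrent-upload limit does, here's a minimal sketch in Python (not Duplicati's actual C# code; the upload function is a placeholder): a semaphore caps how many volume uploads run at once, matching the default of 4, and each finished upload frees a slot for the next queued volume.

```python
import threading

# Cap concurrent uploads, like asynchronous-concurrent-upload-limit=4.
limit = threading.Semaphore(4)

def upload_volume(volume, results):
    with limit:  # at most 4 threads inside this block at once
        results.append(volume)  # placeholder for the real network upload

volumes = [f"dblock-{i}" for i in range(10)]
results = []
threads = [threading.Thread(target=upload_volume, args=(v, results))
           for v in volumes]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

So the chaining you're asking about already happens automatically: volumes queue up, and uploads proceed as slots free up.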

What I mean is this: