Is there a workaround for Google Drive 403?

After some web searching for context, I guess you’re talking about QNAP’s Creating a Backup Job. For Duplicati, Using Duplicati from the Command Line will let you chain a list of jobs in a script you write; however, for scheduling you would have to use Task Scheduler, cron, or whatever your system has.
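If you ever do go the script route, here is a minimal sketch of what chaining jobs could look like, assuming duplicati-cli is on the PATH; the destination URLs, source paths, and passphrase are placeholders you would replace with your own job settings, and the script itself would then be scheduled with cron or Task Scheduler:

```python
#!/usr/bin/env python3
"""Minimal sketch: run several Duplicati CLI backup jobs back to back."""
import subprocess

# Each entry is (destination URL, source path); the values are placeholders.
JOBS = [
    ("googledrive://backups/photos?authid=REPLACE_ME", "/mnt/user/photos"),
    ("googledrive://backups/videos?authid=REPLACE_ME", "/mnt/user/videos"),
]

COMMON_OPTIONS = ["--passphrase=REPLACE_ME"]

for url, source in JOBS:
    cmd = ["duplicati-cli", "backup", url, source] + COMMON_OPTIONS
    print("Running:", " ".join(cmd))
    result = subprocess.run(cmd)
    if result.returncode != 0:
        # Log the failure but keep going so the later jobs still run.
        print(f"Job for {source} exited with code {result.returncode}; continuing.")
```

Each job only starts after the previous one has exited, which gives the back-to-back behaviour without any overlap on the upload link.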

Duplicati from the GUI can do scheduling per job without any direct linkage between them; however, a later job cannot start until the current one ends. Effectively you can run jobs back-to-back (why do you need to?), but I’m not sure how well you can control the order (if it matters) of the queued-up jobs.

I use Unraid, and yes it’s a QNAP thing. It is nice that jobs follow each other, because sometimes I update large files, so a fixed time schedule is kind of bad. My upload is only 40 Mbit, so running two or more jobs at once will bottleneck my internet or the current job. That’s why I ask if Duplicati can do chained jobs: when one finishes, the next one starts. I use the web interface; I don’t know the command line.

I have about 30 jobs that always ran monthly on my QNAP.

And since I have to re-upload about 40 TB, this will take a lot of time if I have to run the jobs manually.

That’s a lot of stuff. Let’s pull it apart.

Why? Duplicati only runs one job at a time. Are you trying to start a backup right after a large-file update?
Are you trying to keep Duplicati from being the second job while a QNAP job is running (that will be hard)?

Current job from which backup program? Duplicati’s Internet usage can also be reduced with the asynchronous-concurrent-upload-limit option mentioned above, and you can use throttle-upload to limit it further. Controls on CPU and disk activity also exist, although you don’t say where you’re running Duplicati. Generally it’s best to run Duplicati on the source system; access across SMB is also less reliable.
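As an illustration only (not a recommendation; tune the values to your own link), those two advanced options could look something like this in a job’s Advanced options:

```
--asynchronous-concurrent-upload-limit=2
--throttle-upload=2MB
```

The first caps how many volume uploads run in parallel, the second caps the upload rate itself.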

If the goal is to limit Internet strain, see above, and note that the Duplicati GUI will do implicit chaining of delayed scheduled jobs. If you don’t care about run order, you could schedule them all at once and let them chain.

I don’t know how many jobs you plan for Duplicati, or whether there’s a need to run all of them in a row.

For job size, try to set the blocksize so that you have about a million or fewer blocks in a backup. It’s faster.
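As a rough worked example (the 40 TB and roughly 30 jobs come from the posts above; the even split is just an assumption): 40 TB across 30 jobs is about 1.3 TB per job, and 1.3 TB divided by 1,000,000 blocks is about 1.3 MB, so a blocksize of a few MB (say 2 to 5 MB) would keep each job around or under the million-block mark.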

For job timing, maybe you could schedule them on different days of the month to spread backup load.

If the above was enough to get you going, great. If not, please say specifically which of these points you’d like more detail on.

I have two servers running Duplicati. The second server threw a 403 a couple of days ago, and it was the Google “Project Rate Limit Exceeded” issue. The enhanced error reporting worked and the issue was captured. It’s not an issue that happens regularly for me, but I have just implemented exponential backoff retry logic in three areas where I feel this issue may occur. The number of retries is configurable. I’ve also added logging to the main public Google Drive and Google Services operations to provide us with more information moving forward. I’m in testing now.
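For anyone following along, the pattern being described is roughly the following. This is only a language-agnostic Python sketch of exponential backoff with jitter, not the actual change (which lives in Duplicati’s C# Google Drive code), and the parameter names are illustrative:

```python
import random
import time

def with_backoff(operation, max_retries=5, base_delay=1.0):
    """Retry `operation` with exponential backoff plus jitter.

    max_retries and base_delay stand in for the configurable retry
    settings mentioned above; this is an illustration of the pattern only.
    """
    for attempt in range(max_retries + 1):
        try:
            return operation()
        except Exception:
            if attempt == max_retries:
                raise
            # Wait 1s, 2s, 4s, 8s, ... plus up to 1s of jitter so that
            # parallel clients do not all retry at the same moment.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 1))
```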

Just in case you’re about to duplicate efforts, similar functionality was already implemented here for all backends. If that doesn’t provide what you need, perhaps there’s a way to generalize your solution so that all backends may benefit?

Thank you, I like this other solution as it handles all backends. It only handles puts, which is the most likely area to be affected by rate-limit errors, but they can occur on any transfer. The solution also implements retry logic on any exception rather than limiting itself to retry-able exceptions, so I’ll have a chat with the dev to see if he minds if I make a couple of changes. The tricky part is that retry-able exceptions present differently for each backend, but I have an idea that might suit this problem.
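As a sketch of that idea (not Duplicati’s actual code), one approach is to let each backend supply its own test for whether an error is worth retrying; the status codes and Google error reasons below are the usual rate-limit suspects, but the exact values are an assumption:

```python
# Sketch only: each backend decides which of its own failures are transient.
TRANSIENT_STATUSES = {429, 500, 502, 503, 504}
RATE_LIMIT_REASONS = {"rateLimitExceeded", "userRateLimitExceeded"}

def is_retryable(status_code: int, reason: str = "") -> bool:
    """Hypothetical predicate for a Google Drive style backend.

    403 is only retry-able when the response says it is a rate limit;
    otherwise it is a real permission error and retrying wastes time.
    """
    if status_code in TRANSIENT_STATUSES:
        return True
    return status_code == 403 and reason in RATE_LIMIT_REASONS
```

A generic retry wrapper could then call whatever predicate the active backend provides, keeping the backoff logic shared while the backend-specific knowledge stays in the backend.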

Off topic:

Guys, I have a new problem: I keep getting an OAuth error.
And I don’t know why; maybe someone can tell me?

Agreed. Look over the forum posts, right at the top. There are many reports, but it needs admin action.