I am also affected by the Google Drive bug where a Google Drive (Limited access) AuthID “forgets” its access. To work around this, I created a dedicated Google account for the backups and have been using it with a Google Drive “Full Access” AuthID for a few years.
I am now performing a DRP test (simulating that I lost everything, including the Duplicati database and settings such as the AuthID, and trying to restore), and I found that when I use https://duplicati-oauth-handler.appspot.com/ there is no longer an option to generate a Google Drive “Full Access” AuthID.
Is this a bug in Duplicati’s OAuth server, or did Google finally decide to close the door on this type of ID?
The slightly good news is that Canary releases (with an Experimental coming next) include a new tool:
Description:
Remote Synchronization Tool
This tool synchronizes two remote backends. The tool assumes that the intent is
to have the destination match the source.
If the destination has files that are not in the source, they will be deleted
(or renamed if the retention option is set).
If the destination has files that are also present in the source, but the files
differ in size, or if the source files have a newer (more recent) timestamp,
the destination files will be overwritten by the source files. Given that some
backends do not allow for metadata or timestamp modification, and that the tool
is run after backup, the destination files should always have a timestamp that
is newer (or the same if run promptly) compared to the source files.
If the force option is set, the destination will be overwritten by the source,
regardless of the state of the files. It will also skip the initial comparison,
and delete (or rename) all files in the destination.
If the verify option is set, the files will be downloaded and compared after
uploading to ensure that the files are correct. Files that already exist in the
destination will be verified before being overwritten (if they seemingly match).
Usage:
Duplicati.CommandLine.SyncTool <backend_src> <backend_dst> [options]
Arguments:
<backend_src> The source backend string
<backend_dst> The destination backend string
Options:
-y, --confirm, --yes Automatically confirm the operation [default:
False]
-d, --dry-run Do not actually write or delete files. If not set
here, the global options will be checked [default:
False]
--dst-options <dst-options> Options for the destination backend. Each option
is a key-value pair separated by an equals sign,
e.g. --dst-options key1=value1 key2=value2
[default: empty] []
-f, --force Force the synchronization [default: False]
--global-options <global-options> Global options all backends. May be overridden by
backend specific options (src-options,
dst-options). Each option is a key-value pair
separated by an equals sign, e.g. --global-options
key1=value1 key2=value2 [default: empty] []
--log-file <log-file> The log file to write to. If not set here, global
options will be checked [default: ""] []
--log-level <log-level> The log level to use. If not set here, global
options will be checked [default: Information]
--parse-arguments-only Only parse the arguments and then exit [default:
False]
--progress Print progress to STDOUT [default: False]
--retention Toggles whether to keep old files. Any deletes
will be renames instead [default: False]
--retry <retry> Number of times to retry on errors [default: 3]
--src-options <src-options> Options for the source backend. Each option is a
key-value pair separated by an equals sign, e.g.
--src-options key1=value1 key2=value2 [default:
empty] []
--verify-contents Verify the contents of the files to decide whether
the pre-existing destination files should be
overwritten [default: False]
--verify-get-after-put Verify the files after uploading them to ensure
that they were uploaded correctly [default: False]
--version Show version information
-?, -h, --help Show help and usage information
It should be able to reupload your files with proper ownership info for Duplicati to use.
There was talk of more documentation for the tool (maybe some usage examples), but it’s not out yet.
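In the meantime, here is a rough sketch of an invocation, based purely on the usage text above; the backend URLs and AuthIDs are placeholders, not tested values:

# Hypothetical invocation: sync a Google Drive source to a GCS destination.
Duplicati.CommandLine.SyncTool \
  "googledrive://Backups?authid=SRC_AUTHID" \
  "gcs://my-bucket/backups?authid=DST_AUTHID" \
  --progress --retry 5

The two arguments are ordinary Duplicati backend strings, so backend-specific settings can be passed with --src-options and --dst-options as described above.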
EDIT 1:
The Windows name for this tool is Duplicati.CommandLine.SyncTool.exe; the Linux name is different.
Google Drive (Limited access) does not work well due to a Google bug. You’ll likely discover this problem only when you have actually lost your data and need to restore it (I fortunately discovered it during my DRP test). The workaround is extremely time- and resource-consuming (re-uploading all the files, etc.).
There’s not enough support from Google to solve these issues in the foreseeable future.
Assuming I got my conclusions right, that looks to me like a big nail in the coffin of Google Drive as a storage backend…?
Speaking for myself, I’ll now migrate to something else (and downsize or cancel my Google One storage plan).
And shouldn’t this backend support be removed, or at least decorated with a big red-letter disclaimer? I think it’s going to cause a lot of trouble, which may have the side effect of tarnishing Duplicati’s image, even though Duplicati is not to blame.
Yes, this was closed down after we tried to get Google to approve it. They did not acknowledge the problem with losing access and repeatedly suggested using a JavaScript API that does not work for background backups.
I have spent a lot of time on it, and have not found a way to reset permissions after they are broken, except re-uploading the files.
At least for personal use, it can be problematic. If you do want Google Drive, you can host a local version of the OAuth server, either as a simple process or as a Docker image.
With a non-public OAuth server, you can have a limited number of full-access accounts (1,000, IIRC). I have written a guide on how to set up a self-hosted OAuth server, using Google Drive as an example.
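As a rough sketch of the Docker route (the image name, port, and environment variable names here are assumptions for illustration; the guide has the authoritative setup):

# Hypothetical: run a self-hosted Duplicati OAuth server in Docker.
# Image name, port, and variable names are assumptions; consult the guide.
docker run -d \
  --name duplicati-oauth \
  -p 8080:8080 \
  -e GOOGLE_CLIENT_ID="your-client-id.apps.googleusercontent.com" \
  -e GOOGLE_CLIENT_SECRET="your-client-secret" \
  duplicati/oauth-server

You would then point Duplicati’s --oauth-url advanced option at the self-hosted server and generate AuthIDs there instead of on the public server.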
Running a self-hosted OAuth server is a bit bothersome, so I’ve decided to migrate to a GCS bucket using Duplicati.CommandLine.SyncTool.exe, as suggested by ts678.
Lots of data to migrate, and I think I’m hitting a bug in SyncTool. After copying a lot of files, it eventually starts failing with this error:
Error copying duplicati-xxxxxxx.dindex.zip.aes: Failed to authorize using the OAuth service: Authentication provider for googledocs gave error: { "error": { "code": 400, "message": "Invalid JSON payload received. Unexpected token.\nclient_id=xxxxxxxxxx\n^", "status": "INVALID_ARGUMENT" }}. If the problem persists, try generating a new authid token from: https://duplicati-oauth-handler.appspot.com?type=googledrive
After a number of retries, it aborts. Just relaunching the same command line picks up where it left off.
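Since relaunching resumes from where it stopped, a crude workaround is to loop the command until it exits cleanly. A sketch for a POSIX shell, with placeholder backend URLs (the same idea works in PowerShell):

# Keep relaunching SyncTool until it exits with status 0.
until Duplicati.CommandLine.SyncTool \
    "googledrive://Backups?authid=SRC_AUTHID" \
    "gcs://my-bucket/backups?authid=DST_AUTHID" \
    --confirm --retry 5; do
  echo "SyncTool failed; retrying in 30 seconds..."
  sleep 30
done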
My educated guess is that the OAuth token is expiring, and if so, the solution would be for SyncTool to re-authenticate and retry?
The sync tool uses the same backend implementation as Duplicati. If something fails on a request, it will dispose the backend and create a fresh instance, precisely to prevent stale data as you suggest.
For that reason, it should not reuse a stale token, but instead grab a new token on each retry. Also, the error message is not from Duplicati; it is from the Google auth server. So the local process is requesting a new token, this request is sent to Google, and Google is rejecting it.
The error message itself is something I have seen in the logs but do not have a clear understanding of. It looks like somewhere between the OAuth code (running on Google App Engine) and the Google auth servers, part of the request is cut off. The cut-off appears to be random, giving different error messages. Most requests are fine, but there seems to be some correlation among the failures.
In your example you can see that the request is rejected by the Google auth server because it received invalid JSON. This is entirely unexpected, because the code simply sends a JSON string (which I have tried logging, and it is complete), and yet it is randomly rejected.
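For context, here is a hedged sketch of the kind of token-refresh request involved, assuming the standard Google OAuth 2.0 token endpoint (all field values are placeholders). The error above suggests Google receives only a bare client_id=... fragment instead of a complete body like this:

# Hypothetical token-refresh request (standard OAuth 2.0, JSON body).
curl -s https://oauth2.googleapis.com/token \
  -H "Content-Type: application/json" \
  -d '{
        "client_id": "your-client-id.apps.googleusercontent.com",
        "client_secret": "your-client-secret",
        "refresh_token": "your-refresh-token",
        "grant_type": "refresh_token"
      }'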
If you have time to try, we have a new OAuth server running that is not using GAE, and it would be interesting to know if it shows the same behavior:

--oauth-url=https://oauth-service.duplicati.com

This will make the auth requests go to the new server. You then need to go to that address, make a new AuthID, and replace the current one. After that you can continue as normal.
For now, the two systems are fully isolated, so a token for one system does not work on the other. Long-term, the plan is to make a transparent switchover, but for now you need to take care not to mix the two. You can always revoke the tokens that you do not need.
Thanks. I tried the new OAuth server, but it does not seem to be working.
After I click “Continue” on that screen, I can see the animated progress bar at the top for a while, but it never completes.
Oops… is it possible that SyncTool.exe does not yet support this parameter? (I tried it on 2.1.0.119_canary_2025-05-29.)
Unrecognized command or argument '--oauth-url=https://oauth-service.duplicati.com'.
Description:
Remote Synchronization Tool
[... the rest of the output is identical to the usage listing above; --oauth-url does not appear among the options ...]
BTW, strictly speaking, I think it would need to support two --oauth-url parameters: one for the source and another for the destination.
In my particular case, I would need that since I am copying from Google Drive and, as we know, if I generate a new AuthID on the new OAuth server, I will not be able to access the existing data.
Oh, yes, you are right. This is an issue with the way the OAuth URL is passed around: it expects “something else” to handle the OAuth URL, and the sync tool does not do this.
I have made a PR that fixes this problem, so you can provide a different OAuth URL for each backend via the backend URL.
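Once that lands, per-backend OAuth URLs would presumably look something like this (the oauth-url query parameter is an assumption based on how Duplicati backend URLs usually carry options; check the PR for the actual syntax):

# Hypothetical: old OAuth server for the Google Drive source,
# new OAuth server for the GCS destination.
Duplicati.CommandLine.SyncTool \
  "googledrive://Backups?authid=OLD_AUTHID&oauth-url=https://duplicati-oauth-handler.appspot.com" \
  "gcs://my-bucket/backups?authid=NEW_AUTHID&oauth-url=https://oauth-service.duplicati.com" \
  --progress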
Yes, exactly.
That should not be needed if it is the same Google account. From Google’s perspective, the two OAuth servers belong to the same app (despite being two different servers), and the same app with the same account should be granted the same permissions.
The Google token itself is stored encrypted on the server, so you cannot use the AuthID with a different server, but you should be able to create an AuthID on the new server, and then both will work.
But that does not fix the problem that you cannot specify a different OAuth server.
The source data is on Google Drive. If I understand correctly, I cannot use my existing AuthID with the new OAuth server, so I need to generate a new one. That new AuthID will be “Google Drive (Limited Access)”, whereas the old AuthID was generated as “Google Drive (Full Access)” in order to solve the “file can’t be read” issue. That’s why I think we need an additional --oauth-url parameter for the source, allowing me to keep using my current AuthID. Is my reasoning correct?
When is a release containing this PR planned? (Or maybe there’s a nightly build I can download and test? I tried to find one on GitHub but couldn’t.)