Jottacloud Error 401 (Unauthorized)

Unfortunately things are not sounding that solid even on new code, and even rclone breaks sometimes.
Still, there seems to be a bit of improvement. Does that sound like an accurate summary of posts here?

Yes, that seems accurate. Backups with only a small amount of data to upload seem to be fine. Backups with a large amount of data still seem to fail, and when they do, they fail at whatever the last steps of the backup are (usually around the part where it says: Deleting unwanted files …). Let me know if you need more specific logs to help pin down the cause.


I'm not sure if this helps, but most of the time it fails I have to repair 1 file. I've posted the logs here just in case they are of any help:

Failed: Found 1 files that are missing from the remote storage, please run repair
Details: Duplicati.Library.Interface.RemoteListVerificationException: Found 1 files that are missing from the remote storage, please run repair
at Duplicati.Library.Main.Operation.FilelistProcessor.VerifyRemoteList(BackendManager backend, Options options, LocalDatabase database, IBackendWriter log, IEnumerable`1 protectedFiles)
at Duplicati.Library.Main.Operation.BackupHandler.PreBackupVerify(BackendManager backend, String protectedfile)
at Duplicati.Library.Main.Operation.BackupHandler.<RunAsync>d__20.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at CoCoL.ChannelExtensions.WaitForTaskOrThrow(Task task)
at Duplicati.Library.Main.Controller.<>c__DisplayClass14_0.<Backup>b__0(BackupResults result)
at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Action`1 method)

Log data:
2022-09-06 04:15:59 -07 - [Warning-Duplicati.Library.Main.Operation.FilelistProcessor-MissingFile]: Missing file: duplicati-i7d9fc80e7cd0424e8f89f50847ffb618.dindex.zip.aes
2022-09-06 04:15:59 -07 - [Error-Duplicati.Library.Main.Operation.FilelistProcessor-MissingRemoteFiles]: Found 1 files that are missing from the remote storage, please run repair
2022-09-06 04:15:59 -07 - [Error-Duplicati.Library.Main.Operation.BackupHandler-FatalError]: Fatal error
Duplicati.Library.Interface.RemoteListVerificationException: Found 1 files that are missing from the remote storage, please run repair
at Duplicati.Library.Main.Operation.FilelistProcessor.VerifyRemoteList(BackendManager backend, Options options, LocalDatabase database, IBackendWriter log, IEnumerable`1 protectedFiles)
at Duplicati.Library.Main.Operation.BackupHandler.PreBackupVerify(BackendManager backend, String protectedfile)
at Duplicati.Library.Main.Operation.BackupHandler.<RunAsync>d__20.MoveNext()

What I'm realizing is that this topic on the 401 error has drifted into the 500 error (roughly 50/50 based on searching posts for mentions), and this latest post doesn't mention any HTTP error at all.

Disable automatic use of v2 authid for Jottacloud #4779, which was committed Aug 7 and is awaiting a Canary release (with some unofficial builds for testing), mentions 500 and 400, and people saying the fix didn't work completely seem to be seeing 500 even after the fix (although which build they ran is sometimes unstated).

So at the moment Jottacloud issues seem rather numerous, but I wonder if some are just generic ones such as missing remote dindex files. Known causes for that include failures during a compact, which is part of "Deleting unwanted files". You'd have to either remember the error history or set up a log file to find out.

Error during compact forgot a dindex file deletion, getting Missing file error next run. #4129
With log-file=<path> and log-file-log-level=retry set, the clue for this is whether the delete of the named missing file was logged.
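
As a concrete example (the path is just an illustration, adjust it for your system), these would go in the job's advanced options or on the command line:

--log-file=C:\Duplicati\logs\backup-job.log
--log-file-log-level=Retry

After the next "Missing file" error you can then search that log for the named dindex file and see whether its delete was recorded.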


That’s the error I got yesterday.
I use the latest canary with the .dlls which include the fixes from above.
Everything went smoothly for several weeks until yesterday.
The backup job completed, but a lot of files had to be deleted, which means download, repack, upload. After ~12 hours the job failed.

Duplicati.Library.Interface.UserInformationException: Failed to authorize using the OAuth service: Server error. If the problem persists, try generating a new authid token from: Duplicati OAuth Handler ---> System.Net.WebException: The remote server returned an error: (500) Internal Server Error.

I created a new token, restarted the job and it failed again after ~4 hours.
On the third try the job finished properly.

Could it be that too many requests to Jottacloud are causing the token to fail?


I had the same behavior over the last few days.
Because the token sometimes expired, I had to replace it with a new one in 7 jobs. Since that is no fun, I wanted to try a single job for everything and deleted some old versions to free up enough cloud space. With regard to the token lifetime, that was a big mistake!
Uploading large backups, deleting versions (a huge amount of data) and repairing databases all failed several times with OAuth service errors until they finally finished.
Now the smaller follow-up backups are working fine. At least it seems so.
(with 2.0.6.104_canary_2022-06-15 for Windows)
(Of course I went back to separate backup jobs!)


Because that does not have the latest code fix attempt, did you install the unofficial DLLs as well?

Let me attempt a more thorough review of who the patch seemed to help. Anyone can update their status:
(but I'll update the list below from the posts – it seems to be looking better as people pick up the patch)

Y @wmrch
Y @FreaQ
Y @Duplicaterr
Y @dukrat (partial)
Y @waLIEN (partial, --asynchronous-concurrent-upload-limit=1 helped some, big backups worse)
N @SirTerrific (unknown Duplicati)
Y @jankkm
Y @Dominic-Schaefer
Y @SulfurRacoon (but saw at least one different error)
? @shalmirane (unknown Duplicati presumably without latest changes because release is sought)

@waLIEN did an informal summary of the above here, but there’s uncertainty about some versions.

You don't have the latest fix unless you did an install from file (so not a Duplicati autoupdate) to 2.0.6.104 and then replaced the unofficial DLLs. In an autoupdate situation the first install is just a launcher that searches for installed updates and runs them. Patching that launcher won't help the Jottacloud code, and patching the update will be rejected because its files are validated. You must run a patched base version.
About → System info can show BaseVersionName and ServerVersionName, if you need that detail.

It would be nice if the people who saw no improvement above could report on a 2.0.6.104 + patched base.
The better we feel about the latest code fix, the better the chances the release manager will make a release.
I'd have to ask, because he doesn't follow the forum much, but I want some good stats to support the request.
If anybody wants to take on the (vaguely defined) release manager role, there's an opening available.

Now I did.
So I’m using 2.0.6.104_canary_2022-06-15 for Windows (updated via GUI) with the 3 DLLs (originals in “C:\Program Files\Duplicati 2” overwritten) and the option asynchronous-concurrent-upload-limit=1.
I have no issues for the moment. Big thanks!

I’ll post, when/if the next error appears.

If autoupdated means updated to 2.0.6.104 without an install from the .msi file, the patch isn't in use.
You need to get your "C:\Program Files\Duplicati 2" install itself onto 2.0.6.104 plus the unofficial patches.
In addition to checking About → System info to find your versions, you can read changelog.txt.
Please make sure your base install (in the Program Files folder) is 2.0.6.104 plus the unofficial patches.
I think an autoupdate to 2.0.6.104 (if present) won't pose a problem, but the base needs to be as above.

I'm sorry, my fault: I meant that I updated it via the GUI under "About".
System info says the base version is still 2.0.5.111_canary_2020-09-26.

I just installed duplicati-2.0.6.104_canary_2022-06-15-x64.msi now and in addition overwrote the 3 DLLs again.
Now both show the same version number:

  • ServerVersionName: 2.0.6.104_canary_2022-06-15
  • BaseVersionName: 2.0.6.104_canary_2022-06-15

Good news at least for me:
Today I had 6 backup jobs finish in less than 1 h each and a very big one finish in 14 h (exactly 13:59:50) without any errors (even though there was an automatic router reboot during the night)!
The upload speed seems to be limited by Jottacloud, I think; I only get 1.2 to 1.5 MB/s. (My connection usually supports 51 Mbit/s = 6.4 MB/s.) That means the big backup was about 60-75 GB.
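
As a rough cross-check: 1.2 MB/s × 14 h × 3600 s/h ≈ 60 GB and 1.5 MB/s × 14 h × 3600 s/h ≈ 75 GB, so the 60-75 GB estimate matches the observed speed.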

Since the login changes at Jottacloud I had never been able to upload this much data without errors.
Thank you all very much!

Correct, I’m running the latest canary available.

I was never able to get that dev virtual environment running in VirtualBox. I guess it was never meant to run on a Home edition of Windows as the host; no matter what I disabled in Hyper-V, it never finished booting to the login screen.

Just for information:
It is not the same error, but since it also seems to be caused by Jottacloud, this log may be useful here:

System.AggregateException: One or more errors occurred. ---> System.AggregateException: Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host. (...)

(...) ---> System.IO.IOException: Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host. ---> System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host
at System.Net.Sockets.Socket.EndSend(IAsyncResult asyncResult)
at System.Net.Sockets.NetworkStream.EndWrite(IAsyncResult asyncResult)
--- End of inner exception stack trace ---
at CoCoL.AutomationExtensions.<RunTask>d__10`1.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Duplicati.Library.Main.Operation.BackupHandler.<FlushBackend>d__19.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Duplicati.Library.Main.Operation.BackupHandler.<RunAsync>d__20.MoveNext()
--- End of inner exception stack trace ---
at Duplicati.Library.Main.Operation.BackupHandler.<RunAsync>d__20.MoveNext()
--- End of inner exception stack trace ---
at CoCoL.ChannelExtensions.WaitForTaskOrThrow(Task task)
at Duplicati.Library.Main.Controller.<>c__DisplayClass14_0.<Backup>b__0(BackupResults result)
at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Action`1 method)
at Duplicati.Library.Main.Controller.Backup(String[] inputsources, IFilter filter)
at Duplicati.Server.Runner.Run(IRunnerData data, Boolean fromQueue)
---> (Inner Exception #0) System.AggregateException: Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host. ---> System.IO.IOException: Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host. ---> System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host
at System.Net.Sockets.Socket.EndSend(IAsyncResult asyncResult)
at System.Net.Sockets.NetworkStream.EndWrite(IAsyncResult asyncResult)
--- End of inner exception stack trace ---
(...) [same CoCoL.AutomationExtensions / TaskAwaiter / BackupHandler frames as above]
<---
---> (Inner Exception #1) System.AggregateException: One or more errors occurred. ---> System.IO.IOException: Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host. ---> System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host
at System.Net.Sockets.Socket.EndSend(IAsyncResult asyncResult)
at System.Net.Sockets.NetworkStream.EndWrite(IAsyncResult asyncResult)
--- End of inner exception stack trace ---
(...) [same CoCoL.AutomationExtensions / TaskAwaiter / BackupHandler frames as above]
<---
<---

The error appeared with Duplicati 2.0.6.104_canary_2022-06-15 (installed MSI + 3 DLLs) while doing a scheduled backup.
The OAuth token is still valid.
Other, later scheduled backups had no errors.
A manual retry worked fine.

To add to the fun, I just deployed the code in PR/10 from duplicati-oauth-handler to https://duplicati-oauth-handler-beta.appspot.com/ again. I have not yet tried the latest Duplicati code because I did not know of this problem when I started my failed backup again.
One puzzling question I have, and maybe @Albertony can chime in here: why are we disabling oauth v2? Without reading through the code, but instead reading through the comments, it seems a quick short-term fix was to ignore the V2 tokens that come back from Jotta and instead continue to use the V1 tokens. This seems like a ticking time bomb for when Jotta flat-out refuses V1 tokens and/or refuses to issue new CLI tokens that support V1. What am I missing here?
Now I am running the unlikely combination of an old Duplicati that does not have the code fix (presumably the one in the 3 DLLs mentioned) and the OAuth handler fixes. This combination would presumably force my Jotta to use V1 tokens because the OAuth handler is refusing to hand out V2 tokens. Is that roughly right?
Unless there is some horrible reason why we cannot use the V2 tokens, I'd say we should try to implement V2 before Jotta removes support for V1.

Quick answer, off the top of my head (I may add/correct more later when I have time):

Correct. With the oauth fix the Duplicati fix is not necessary, i.e. the fix which is in master (no release yet) but is also in the 3 DLLs for simple patching. They both basically do the same thing, just from separate sides: the oauth service not sending v2 tokens vs Duplicati not accepting v2 tokens.

V2 is simply a concept in the Duplicati OAuth handler where it returns the actual refresh token back to Duplicati instead of storing the refresh token internally and returning a lookup key to it (the authid). So it basically has nothing to do with Jottacloud or OAuth at all.
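
As a minimal sketch of that difference, assuming the behaviour described above (Python with made-up names; not the actual duplicati-oauth-handler code):

import secrets

stored_refresh_tokens = {}   # server-side storage, only used by the v1 scheme

def issue_v1_authid(refresh_token):
    authid = secrets.token_hex(16)                  # opaque lookup key handed to Duplicati
    stored_refresh_tokens[authid] = refresh_token   # refresh token stays in the oauth service
    return authid

def issue_v2_authid(refresh_token):
    return "v2:" + refresh_token                    # the refresh token itself becomes the authid

def access_token_for(authid, refresh_with_provider):
    if authid.startswith("v2:"):
        refresh_token = authid[len("v2:"):]         # client supplied the refresh token directly
    else:
        refresh_token = stored_refresh_tokens[authid]   # service looks up its stored copy
    return refresh_with_provider(refresh_token)

Either way the oauth service does the actual provider request; the only question is which side keeps the refresh token between sessions.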

OK, so the OAuth code remembers and refreshes the refresh token when the refresh token rotates? I haven't heard of a refresh token rotating, only the access token. What is the problem with Duplicati receiving the refresh token? The reports as I understand them are that tokens, including refresh tokens, rotate every 12 hours or so, and then we do not keep the new token, so we end up using a stale refresh token. When we try to refresh the access token, we get an error because of the stale refresh token. How does the problem manifest itself?
It is almost as if the conditions are that you have to use the service continuously for 12 hours, then something causes your tokens to become invalid, and then you have to generate a new token through the website. Can we clearly articulate the circumstances required to lose access? I think that will help with categorizing the various issues we are seeing.

Just to add on this: It is the authid that is labelled v2. Duplicati code refers to "v1 authid" and "v2 authid", so it has nothing to do with OAuth versions. With a v1 authid the refresh token (and access token) is stored in the oauth service, and the authid is a unique key the oauth service generates and presents to you so that you can configure your Duplicati backend with it. Duplicati will then later use this authid when contacting the oauth service, and the oauth service loads the stored tokens, refreshes them with a request to Jottacloud if necessary, and returns a valid access token back to Duplicati that it can use for authenticated API requests. A v2 authid, in contrast, is the actual refresh token, so the refresh token is stored directly in your Duplicati backend configuration and not in the oauth service. It is still the oauth service that is responsible for the token refreshing, but Duplicati now supplies the refresh token, aka the v2 authid, to the oauth service, which will request/refresh a token from Jottacloud and return an access token back to Duplicati.

There is no explicit refresh token refreshing, but Jottacloud, as one of few providers, generates a new refresh token with every regular access token refresh, i.e. it returns a new refresh token with every new access token. After such a refresh the old refresh token is invalidated. Access tokens expire after 1 hour; refresh tokens do not have a separate expiration time. Now this does not work well with the v2 authid approach, since then you store the refresh token in your backend configuration (authid property). After 1 token refresh (e.g. 1 hour) this refresh token will be invalidated, and your backend configuration is effectively broken. So with Jottacloud we need to use the v1 authid.

Now the problem with this is (was) that a v1 authid will be automatically upgraded to a v2 authid: the oauth service piggy-backs a v2 authid onto the access token response, and Duplicati automatically detects this and, in the current session, swaps its v1 authid for the v2 authid. Throughout this session the v2 authid is used, but this means the refresh token stored in the oauth service (v1 authid mode) will not be updated whenever there is a token refresh (since the session is now in v2 authid mode). The next time you run a backup (new session) it will start out with the v1 authid stored in the backend configuration again, but then the oauth service will load the saved token, which is now an old and invalidated refresh token, and when it sends a refresh request to Jottacloud with it, Jottacloud marks your tokens as stale, and within maybe 12 hours authentication will fail.

The original problem is about this v2 authid auto-upgrade, I think. But the confusing factor is the timing and the sequence of events (sketched in code after the list):

  1. At the start of a session, Duplicati asks the oauth service for a token based on the v1 authid and gets a v2 authid back, which it keeps for the remainder of that session.
  2. Duplicati must run the session long enough that a token refresh is needed, i.e. 1 hour, so that it will perform a token refresh using its received v2 authid. This makes the refresh token stored in the oauth service for the v1 authid invalid.
  3. Duplicati must run a new session starting with the v1 authid again, to trigger a token refresh request against Jottacloud using an invalidated refresh token.
  4. Finally, it takes 12 (or is it 24?) hours from a token refresh with an invalid refresh token before Jottacloud marks the token (all instances of it) as stale.
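
A compressed sketch of that sequence, with everything reduced to plain Python and all names made up (it just restates the four steps above in code form, under the assumptions described):

stored = {"v1-authid": "refresh-token-A"}      # oauth service's copy (v1 scheme)
backend_config_authid = "v1-authid"            # what Duplicati has saved in the job

# 1. The session starts with the v1 authid; the service piggy-backs a v2 authid
#    (the refresh token itself) and Duplicati keeps it for the rest of the session.
session_authid = "v2:" + stored["v1-authid"]

# 2. The session outlives the 1-hour access token, so Duplicati refreshes using the
#    v2 authid. Jottacloud rotates the refresh token: A is invalidated, B replaces it,
#    but only the running session learns about B; the service's stored copy stays at A.
session_authid = "v2:refresh-token-B"

# 3. The next session starts from the saved v1 authid, so the oauth service sends a
#    refresh request to Jottacloud using the stale token A it still has stored.
stale_token = stored[backend_config_authid]    # "refresh-token-A", already invalidated

# 4. Within roughly 12-24 hours of such a request Jottacloud marks the whole token
#    family as stale, and authentication fails until a new CLI token is generated.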

I don't know if it was possible to follow my ramblings above, and I don't know if it is 100% correct. Worse, I have not been able to grasp whether it explains everything, i.e. whether this is the actual problem and the only problem. From previous reports, it seems not. But it has proven difficult to verify the reports from users against the sequence described. And this thread has also grown into a mix of users on the latest canary, users with the 3 patched DLLs preventing the v2 authid auto-upgrade, and users who have other problems with Jottacloud that are not really related to the above…

If this is possible, that would be very helpful, yes.

I have been running 2.0.6.104_canary_2022-06-15 (with the DLL fix) since July and have never had to create a new token.
I'm running 6 different jobs, starting at midnight and running until about 6 a.m. (depending on the amount of data to back up, it can of course take longer). They run daily.
What is maybe worth mentioning: I have a static IP address, so my WAN IP never changes.

This is a fine explanation once I reread it a couple of times.

At the start of a session, Duplicati asks the oauth service for a token based on the v1 authid and gets a v2 authid back, which it keeps for the remainder of that session.

From this explanation, it is as if the OAuth handler ignores all this and is never aware of the new tokens being exchanged between Duplicati and Jottacloud. It would never get the new refresh and access tokens to store in its database.

Duplicati must run the session long enough that a token refresh is needed, i.e. 1 hour, so that it will perform a token refresh using its received v2 authid. This makes the refresh token stored in the oauth service for the v1 authid invalid.

I seem to remember memcache calls in the V2 code, however. It looked to me like the only difference between V1 and V2 was the bits that go into the authid in order to marry the request with the memcache record.

Duplicati must run a new session starting with the v1 authid again, to trigger a token refresh request against Jottacloud using an invalidated refresh token.

That would immediately result in a 500, since neither the access token nor the refresh token is valid.

Finally, it takes 12 (or is it 24?) hours from a token refresh with an invalid refresh token before Jottacloud marks the token (all instances of it) as stale.

When I was originally working with this a long time ago (back in April or May?) I had a long-lived backup I was trying to finish. At that time I had theorized something similar to this explanation, but without Duplicati switching from V1 to V2. My symptoms were that I would obtain a valid token from Jottacloud, perform the test to verify it worked, save it in the config and attempt to run the backup.

At the time, it appeared as if the OAuth handler would hand out valid tokens to Duplicati. It had a counter that would tick down to 0 before sending a request to Jottacloud to refresh things. I don't remember whether the received refresh token was stored or not.

Next, the tokens seemed to be getting properly refreshed as long as one of the writer threads accessed Jottacloud within an hour. If it took more than an hour between calls to Jottacloud, i.e. a single upload took more than an hour and nothing else was going faster, then I started getting 500s from Jottacloud. I'm not fully certain, but I believe that discussion happened on PR #7 at GitHub for the oauth handler. Somehow we convinced ourselves it was resolved because the PR made it into mainline.

But yes, I agree: once Jotta gets mad about the tokens, it invalidates all the tokens and you have to register a new CLI token to make peace.

This time when I ran a backup, the delta was quite large because I hadn't run one in a while. I fell asleep during the backup, and when I awoke it appeared to have completed the file transfer and then gotten stuck in some metadata handling. I say this because when the backup was restarted, Duplicati appeared to race through all the new files and deltas since the last successful backup but transferred hardly any of it to Jottacloud, because it was already in the storage. Any deltas were picked up and the backup finished without any backend errors.

Between the first and second attempts at running the backup, I deployed the "merge" from PR #10 for the OAuth handler and pointed my Duplicati at it. In all the instances where the tokens got invalidated, it appears the backup had to run for at least 12 hours before things broke.

I'll try running another backup tomorrow and see if the existing tokens are still good. This means I am running unpatched Duplicati against the beta OAuth handler. From the last post, it seems this is an acceptable configuration, because fixing it in the OAuth handler makes the fixes in Duplicati irrelevant.

BTW, anyone can use the beta OAuth handler if they want to try this and are having trouble getting Duplicati to take the DLLs, or don't want to bother with them. I would be interested in people's results after running any of the code against the beta OAuth handler.
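
As far as I remember, pointing a backend at the beta handler only takes the oauth-url advanced option, something like the line below (I'm going from memory, so check the exact option name and the path format of its default value in your build):

--oauth-url=https://duplicati-oauth-handler-beta.appspot.com/refresh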

Oh, I seem to remember there was a rate limiting thing put in place on the OAuth handler. I increased it to some large value and the errors went away. To be honest, it was a while ago and several projects have occupied my brain between then and now.

I think most of this summation is right, but I'm curious whether authid V1 and V2 have been around since PR #7 was implemented or not. Is this some weird red herring, or is our own housekeeping causing us to send expired tokens in some rare circumstance?

I think if we can come up with that, we can start to unravel some of this. Again, I think this summary you provided was extremely helpful.

There is a memcache, yes. I think for the v2 authid it only caches the access token, so that when Duplicati supplies its refresh token (aka v2 authid), the oauth service can send a cached access token back if it is still valid, instead of having to send a token refresh request to Jottacloud every time.
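
A rough sketch of what that caching presumably looks like (hypothetical names, not the actual handler code):

import time

access_token_cache = {}   # refresh_token -> (access_token, expiry timestamp)

def cached_access_token(refresh_token, refresh_with_provider):
    hit = access_token_cache.get(refresh_token)
    if hit and hit[1] > time.time():
        return hit[0]        # still valid, so no request to Jottacloud at all
    # With Jottacloud the refresh below also rotates the refresh token, which is
    # exactly what makes keeping that token in the backend config fragile.
    access_token, lifetime_seconds = refresh_with_provider(refresh_token)
    access_token_cache[refresh_token] = (access_token, time.time() + lifetime_seconds)
    return access_token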

I don't think this is true. The "invalidated" refresh token can still be used, until the 12/24 hour stale token event.

Correct. I consider the OAuth handler change the "correct" fix. The Duplicati fix is to some degree more of a workaround, partly because I thought it would get released much quicker (but it seems not…). They both do the same thing with regard to this issue: prevent the authid auto-upgrade from v1 to v2. I still think it is best to include both, for completeness, robustness, etc. But for testing you only need one: either run Duplicati from a master build (or use the DLL patch), or run a release version of Duplicati against the beta OAuth service.

The v2 authid has been around in the oauth service since 2016, and from the start it sent back the v2 authid together with a v1 response: Support for v2 tokens that are just the refresh tokens · duplicati/oauth-handler@07c3061 · GitHub

That is a good question!

I’ve been using rclone while waiting for the fix to make it to release, and in the last few days it has stopped working.

It gets stuck deleting unwanted files on my daily backups, and when I try to do a full verify with download of all remote files, it hangs on the first or second file over a few megabytes in size.

I think I will try regenerating the token next, but it appears that something has changed and rclone no longer works. I’ll do some research to see if any other rclone users are having this issue.