This is a fine explanation once I reread it a couple times.
At the start of a session, Duplicati asks the OAuth service for a token based on the v1 authid and gets a v2 authid back, which it keeps for the remainder of the session.
From this explanation, it is as if the OAuth Handler ignores all this
and is never aware of the new tokens being exchanged between Duplicati
and Jottacloud. It would never get the new refresh and access tokens
to store in the database.
Duplicati must run the session long enough that a token refresh is needed (about 1 hour), so that it performs the refresh using its received v2 authid. This invalidates the refresh token stored in the OAuth service for the v1 authid.
I seem to remember memcache calls in the V2 code, however. It looked
to me like the only difference between V1 and V2 was the bits that go
into the authid in order to marry the request with the memcache record.
Duplicati must run a new session starting with the v1 authid again, to trigger a token refresh request against Jottacloud using an invalidated refresh token.
This would immediately result in a 500, since neither the access token
nor the refresh token is valid.
Finally, it takes 12 (or is it 24?) hours from a token refresh with an invalid refresh token before Jottacloud marks the token (all instances of it) as stale.
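The failure sequence above can be sketched as a tiny simulation. To be clear, all the names and structures here are invented for illustration; this is not the actual handler or Jottacloud code, just the invalidation logic as I understand it:

```python
class FakeOAuthService:
    """Toy stand-in for the OAuth handler's token store."""

    def __init__(self):
        self.refresh_tokens = {}  # authid -> refresh token

    def issue(self, authid, token):
        self.refresh_tokens[authid] = token

    def refresh(self, authid, new_token):
        # Refreshing rotates the token for this authid only; any copy of
        # the old token stored under another authid is now invalid.
        self.refresh_tokens[authid] = new_token


service = FakeOAuthService()
service.issue("v1", "rt-original")

# Session start: Duplicati exchanges the v1 authid for a v2 authid that
# initially shares the same underlying refresh token.
service.issue("v2", service.refresh_tokens["v1"])

# After ~1 hour the session refreshes under the v2 authid; the provider
# rotates the refresh token, so the copy stored under v1 goes stale.
service.refresh("v2", "rt-rotated")

# A new session starting from the v1 authid now presents "rt-original",
# which the provider no longer accepts -> the 500s described above.
v1_is_stale = service.refresh_tokens["v1"] != service.refresh_tokens["v2"]
print(v1_is_stale)  # True: v1 still holds the invalidated refresh token
```

The point the toy model makes is that the two authids silently diverge after the first refresh, which matches the "v1 refresh token invalidated" step in the explanation.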
When I was originally working with this a long time ago (back in April
or May?) I had a long-lived backup I was trying to finish. At that
time I had theorized something similar to this explanation but without
Duplicati switching from V1 to V2. My symptoms were: I would obtain a
valid token from Jottacloud, perform the test to verify it worked,
save it in the config, and attempt to run the backup.
At the time, it appeared as if the OAuth handler would hand out valid
tokens to Duplicati. It had a counter that would tick down to 0 before
sending a request to Jottacloud to refresh things. I don’t remember
if the received refresh token was stored or not.
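For what it’s worth, that countdown behaviour could look roughly like this. This is a sketch with invented names and numbers, not the handler’s actual code; the point is that the refresh only happens when a caller actually asks for the token:

```python
ACCESS_TOKEN_LIFETIME = 3600  # Jottacloud access tokens last about an hour
REFRESH_MARGIN = 300          # refresh a bit before the actual expiry


class TokenCache:
    """Hands out a cached access token, refreshing it when it nears expiry."""

    def __init__(self, access_token, expires_at):
        self.access_token = access_token
        self.expires_at = expires_at

    def get(self, now, refresh_fn):
        # The "counter ticking down": once we are within the margin of
        # expiry, ask the provider for a fresh token before handing one out.
        if now >= self.expires_at - REFRESH_MARGIN:
            self.access_token, lifetime = refresh_fn()
            self.expires_at = now + lifetime
        return self.access_token


def fake_refresh():
    # Stand-in for the real request to the provider's token endpoint.
    return "at-new", ACCESS_TOKEN_LIFETIME


cache = TokenCache("at-old", expires_at=1000)
print(cache.get(now=100, refresh_fn=fake_refresh))  # at-old (plenty of time)
print(cache.get(now=900, refresh_fn=fake_refresh))  # at-new (within the margin)
```

Note that nothing in this sketch refreshes on its own: if no caller touches the cache for longer than the token lifetime, the next call presents an already-expired token, which would line up with the "more than an hour between calls" symptom below.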
Next, the tokens seemed to be getting properly refreshed as long as one
of the writer threads accessed Jottacloud within an hour. If more than
an hour passed between calls to Jottacloud, e.g. a single upload took
more than an hour and no other thread finished sooner, then I started
getting 500s from Jottacloud. I’m not fully certain, but I believe
that discussion happened on PR #7 at GitHub for the OAuth handler.
Somehow we convinced ourselves it was resolved because the PR made it in.
But yes I agree, once Jotta gets mad about the tokens, it invalidates
all the tokens and you have to register a new CLI token to make peace.
This time when I ran a backup, the delta was quite large because I
hadn’t run one in a while. I fell asleep during the backup and when I
awoke it appeared to have completed the file transfer, then got stuck
in some metadata handling. I say this because when the backup was
restarted, Duplicati appeared to be racing through all the new files
and deltas since the last successful backup but transferred hardly any
of it to Jottacloud, because it was already on the storage. Any deltas
were picked up and the backup finished without any backend errors.
Between the first and second attempts at running the backup, I
deployed the “merge” from PR #10 for the OAuth Handler and pointed my
Duplicati at it. In all the instances where the tokens get
invalidated, it appears the backup had to run for at least 12 hours
before things break.
I’ll try running another backup tomorrow and see if the existing
tokens are still good. This means I am running unpatched Duplicati
against the beta oauth handler. From the last post, it seems this is
an acceptable configuration because fixing it in the OAuth handler
makes the fixes in Duplicati irrelevant.
BTW anyone can use the beta OAuth handler if they want to try this and
are having trouble getting Duplicati to take the DLLs or don’t want to
bother with them. I would be interested in people’s results after
running any of the code against the beta OAuth handler.
Oh, I seem to remember there was a rate limit put in place on the
OAuth handler. I increased it to some large value and the errors
went away. To be honest, it was a while ago and several projects have
occupied my brain between then and now.
I think most of this summation is right, but I’m curious whether Authid
V1 and V2 have been around since PR #7 was implemented or not. Is this
some weird red herring, or is our own housekeeping causing us to send
expired tokens in some rare circumstance?
I think if we can come up with that, we can start to unravel some of
this. Again, I think this summary you provided was extremely helpful.