Jottacloud Error 401 (Unauthorized)

Ok, so the OAuth code remembers and refreshes the refresh token when the refresh token rotates? I haven't heard of a refresh token rotating, only the access token. What is the problem with Duplicati receiving the refresh token? The reports, as I understand them, are that tokens, including refresh tokens, rotate every 12 hours or so, and that we do not keep the new token, so we end up using a stale refresh token. When we try to refresh the access token, we get an error because of the stale refresh token. How does the problem manifest itself?
It is almost as if the conditions are that you have to constantly use
the service for 12 hours, then something causes your tokens to be
invalid, then you have to generate a new token through the website.
Can we clearly articulate the circumstances required to lose access? I
think that will help with categorizing the various issues we are
seeing.

Just to add on this: It is the authid that is labelled v2. Duplicati code refers to "v1 authid" and "v2 authid", so it has nothing to do with OAuth versions.

With a v1 authid, the refresh token (and access token) is stored in the oauth service, and the authid is a unique key the oauth service generates and presents to you so that you can configure your Duplicati backend with it. Duplicati later uses this authid when contacting the oauth service; the oauth service then loads the stored tokens, refreshes them with a request to Jottacloud if necessary, and returns a valid access token to Duplicati that it can use for authenticated API requests.

A v2 authid, in contrast, is the actual refresh token, so the refresh token is stored directly in your Duplicati backend configuration and not in the oauth service. It is still the oauth service that is responsible for the token refreshing, but Duplicati now supplies the refresh token, a.k.a. the v2 authid, to the oauth service, which requests/refreshes a token from Jottacloud and returns an access token back to Duplicati.
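
To make the two modes concrete, here is a minimal Python sketch of how an oauth service along these lines could resolve the two authid kinds. The names, prefix marker, and storage layout are made up for illustration; this is not the real oauth-handler code.

```python
import time
import secrets

# Toy in-memory store standing in for the oauth service's server-side storage.
# Maps a v1 authid to the tokens the service keeps for it.
_stored = {}  # v1 authid -> {"access_token": ..., "refresh_token": ..., "expires": ...}

AUTHID_V2_PREFIX = "v2:"  # assumed marker; the real encoding may differ


def refresh_with_jottacloud(refresh_token):
    """Stand-in for the real token refresh request to Jottacloud.
    Jottacloud rotates the refresh token, so a new one comes back and
    the one just used is invalidated."""
    return {
        "access_token": secrets.token_hex(8),
        "refresh_token": secrets.token_hex(8),
        "expires": time.time() + 3600,  # access token lives about 1 hour
    }


def get_access_token(authid):
    if authid.startswith(AUTHID_V2_PREFIX):
        # v2 authid: the authid *is* the refresh token; nothing is stored in
        # the service, so the rotated refresh token only exists in the response.
        return refresh_with_jottacloud(authid[len(AUTHID_V2_PREFIX):])["access_token"]
    # v1 authid: just a lookup key; the refresh token lives in the oauth service.
    rec = _stored[authid]
    if rec["expires"] < time.time():
        rec.update(refresh_with_jottacloud(rec["refresh_token"]))  # rotation kept server-side
    return rec["access_token"]
```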

There is no explicit refresh token refreshing, but Jottacloud, as one of few providers, generates a new refresh token with every regular access token refresh, i.e. it returns a new refresh token with every new access token. After performing such a refresh, the old refresh token is invalidated. The access token expires after 1 hour; refresh tokens do not have a separate expiration time.

This does not work well with the v2 authid approach, since there you store the refresh token in your backend configuration (the authid property). After one token refresh (e.g. after 1 hour) this refresh token will be invalidated, and your backend configuration is effectively broken. So with Jottacloud we need to use a v1 authid.

The problem with this is (was) that a v1 authid will be automatically upgraded to a v2 authid: the oauth service piggy-backs a v2 authid onto the access token response, and Duplicati automatically detects this and, in the current session, swaps its v1 authid for a v2 authid. Throughout this session the v2 authid is used, which means the refresh token stored in the oauth service (v1 authid mode) will not be updated whenever there is a token refresh (since the session is now in v2 authid mode). Next time you run a backup (a new session), it starts out with the v1 authid stored in the backend configuration again, but now the oauth service loads the saved token, which is an old and invalidated refresh token. When it sends a refresh request to Jottacloud with it, Jottacloud marks your tokens as stale, and within maybe 12 hours authentication will fail.
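
For reference, a token refresh against an OAuth endpoint with rotating refresh tokens looks roughly like the sketch below. The endpoint URL and client_id are assumptions (roughly what the Jottacloud CLI login flow appears to use), not something confirmed in this thread.

```python
import requests

# Assumed values; they are not taken from this thread, so treat them as placeholders.
TOKEN_URL = "https://id.jottacloud.com/auth/realms/jottacloud/protocol/openid-connect/token"
CLIENT_ID = "jottacli"


def refresh(refresh_token):
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "client_id": CLIENT_ID,
    })
    resp.raise_for_status()
    tokens = resp.json()
    # Jottacloud rotates the refresh token: the one we just sent is now invalid,
    # so whoever owns the token must persist tokens["refresh_token"] right away.
    # With a v2 authid that owner is the static backend configuration, which is
    # exactly what breaks after the first refresh.
    return tokens["access_token"], tokens["refresh_token"]
```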

The original problem is about this v2 authid auto-upgrade, I think. But the confusing factor is the timing and the sequence of events:

  1. At the start of a session, Duplicati asks the oauth service for a token based on the v1 authid and gets a v2 authid back, which it keeps for the remainder of the session.
  2. Duplicati must run the session long enough that a token refresh is needed, i.e. 1 hour, so that it will perform a token refresh using its received v2 authid. This makes the refresh token stored in the oauth service for the v1 authid invalid.
  3. Duplicati must run a new session starting with the v1 authid again, to trigger a token refresh request against Jottacloud using an invalidated refresh token.
  4. Finally, it takes 12 (or is it 24?) hours from a token refresh with an invalid refresh token before Jottacloud marks the token (all instances of it) as stale.
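
If it helps, here is a toy walk-through of that sequence as code, just to show the state at each step; purely illustrative, not actual Duplicati or oauth-handler logic.

```python
# State: the refresh token the oauth service stores for the v1 authid, and the
# set of refresh tokens Jottacloud still accepts.
service_store = {"v1-abc": "RT1"}
jotta_valid = {"RT1"}

# Step 1: a session starts from the v1 authid; a v2 authid (= RT1) is
# piggy-backed onto the response and the session keeps using it.
session_refresh_token = service_store["v1-abc"]

# Step 2: the session runs past ~1 hour and refreshes with RT1; Jottacloud
# rotates to RT2, which only this session ever sees.
jotta_valid.discard(session_refresh_token)
jotta_valid.add("RT2")
session_refresh_token = "RT2"
assert service_store["v1-abc"] == "RT1"  # the service still holds the old token

# Step 3: a new session starts from the v1 authid, so the service refreshes
# with RT1, which Jottacloud no longer accepts.
print("refresh attempted with invalidated token:",
      service_store["v1-abc"] not in jotta_valid)  # True

# Step 4: after roughly 12-24 hours of this, Jottacloud marks the whole token
# set as stale and a new CLI token has to be generated.
```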

I don't know if it was possible to follow my ramblings above, and I don't know if it is 100% correct. Worse, I have not been able to determine whether it explains everything, i.e. whether this is the actual problem and the only problem. From previous reports, it seems not. But it has proven difficult to verify the reports from users against the sequence described. And then this thread has grown into a mix of users with the latest canary, users with the 3 patched DLLs preventing the v2 authid auto-upgrade, and users who have other problems with Jottacloud that are not really related to the above…

If this is possible, that would be very helpful, yes.

I have been running 2.0.6.104_canary_2022-06-15 (with DLL fix) since July and never had to create a new token.
I'm running 6 different jobs daily, starting at midnight and finishing around 6 a.m. (depending on the amount of data to back up, it can of course take longer).
What is maybe worth mentioning: I have a static IP address, my WAN IP never changes.

This is a fine explanation once I reread it a couple times.

At the start of a session, Duplicati asks the oauth service for a token based on the v1 authid and gets a v2 authid back, which it keeps for the remainder of the session.

From this explanation, it is as if the OAuth Handler ignores all this
and is never aware of the new tokens being exchanged between Duplicati
and Jottacloud. It would never get the new refresh and access tokens
to store in the database.

Duplicati must run the session long enough that a token refresh is needed, i.e. 1 hour, so that it will perform a token refresh using its received v2 authid. This makes the refresh token stored in the oauth service for the v1 authid invalid.

I seem to remember memcache calls in the V2 code, however. It looked to me like the only difference between V1 and V2 was the bits that go into the authid in order to marry the request with the memcache record.

Duplicati must run a new session starting with the v1 authid again, to trigger a token refresh request against Jottacloud using an invalidated refresh token.

This would immediately result in a 500, since neither the access token nor the refresh token is valid.

Finally, it takes 12 (or is it 24?) hours from a token refresh with an invalid refresh token before Jottacloud marks the token (all instances of it) as stale.

When I was originally working with this a long time ago (back in April
or May?) I had a long-lived backup I was trying to finish. At that
time I had theorized something similar to this explanation but without
Duplicati switching from V1 to V2. My symptoms were: I would obtain a valid token from Jottacloud, perform the test to verify it worked, save it in the config, and attempt to run the backup.

At the time, it appeared as if the OAuth handler would hand out valid
tokens to Duplicati. It had a counter that would tick down to 0 before
sending a request to Jottacloud to refresh things. I don't remember if the received refresh token was stored or not.

Next the tokens seemed to be getting properly refreshed as long as one
of the writer threads accessed Jottacloud within an hour. If it took
more than an hour between calls to Jottacloud, i.e. a single upload took more than an hour and nobody was running faster, then we started getting 500s from Jottacloud. I'm not fully certain, but I believe that discussion happened in PR #7 on GitHub for the oauth handler.
Somehow we convinced ourselves it was resolved because the PR made it
into mainline.

But yes I agree, once Jotta gets mad about the tokens, it invalidates
all the tokens and you have to register a new CLI token to make peace.

This time when I ran a backup, the delta was quite large because I
hadn’t run one in a while. I fell asleep during the backup and when I
awoke it appeared to have completed the file transfer, then got stuck
in some metadata handling. I say this because when the backup was
restarted, Duplicati appeared to be racing through all the new files and deltas since the last successful backup but transferred hardly any of it to Jottacloud, because it was already on the storage.
Any deltas were picked up and the backup finished without any backend
errors.

Between the first and second attempts at running the backup, I
deployed the “merge” from PR #10 for the OAuth Handler and pointed my
Duplicati at it. In all the instances where the tokens get
invalidated, it appears the backup had to run for at least 12 hours
before things break.

I’ll try running another backup tomorrow and see if the existing
tokens are still good. This means I am running unpatched Duplicati
against the beta oauth handler. From the last post, it seems this is
an acceptable configuration, because fixing it in the OAuth handler makes the fixes in Duplicati irrelevant.

BTW anyone can use the beta OAuth handler if they want to try this and
are having trouble getting Duplicati to take the DLLs or don’t want to
bother with them. I would be interested in people’s results after
running any of the code against the beta OAuth handler.

Oh I seem to remember there was a rate limiting thing put in place on
the OAuth handler. I increased it to some large value and the errors
went away. To be honest, it was a while ago and several projects have
occupied my brain between then and now.

I think most of this summation is right, but I'm curious whether authid V1 and V2 have been around since PR #7 was implemented or not. Is this some weird red herring, or is our own housekeeping causing us to send expired tokens in some rare circumstance?

I think if we can come up with that, we can start to unravel some of
this. Again, I think this summary you provided was extremely helpful.

There is a memcache, yes. I think for a v2 authid it only caches the access token, so that when Duplicati supplies its refresh token (a.k.a. v2 authid), the oauth service can send a cached access token back if it is still valid, instead of having to send a request (token refresh request) to Jottacloud every time.
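
Something like this minimal sketch, I imagine; the real handler runs on App Engine memcache and uses different names, this is just to illustrate the caching idea.

```python
import time

# v2 authid (the refresh token itself) -> (access_token, expiry timestamp)
_cache = {}


def access_token_for(v2_authid, do_refresh, margin=60):
    hit = _cache.get(v2_authid)
    if hit and hit[1] - margin > time.time():
        return hit[0]  # still valid: answer from cache, no request to Jottacloud
    access_token, expires_in = do_refresh(v2_authid)  # token refresh via Jottacloud
    _cache[v2_authid] = (access_token, time.time() + expires_in)
    return access_token
```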

I don’t think this is true. The “invalidated” refresh token can still be used, until the 12/24 hours stale token event.

Correct. I consider the OAuth handler change the “correct” fix. The Duplicati fix is to some degree more of a workaround, partially because I thought it would get released much quicker (but it seems not…). They both do the same thing with regards to this issue: Prevent authid auto-upgrade from v1 to v2. I still think it is best to include both, for completeness, robustness, etc. But for testing, you only need one: Either run Duplicati from master build (or use the dll patch), or run release version of Duplicati against beta OAuth service.
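
For illustration, the handler-side change could look something like the sketch below: simply don't piggy-back the v2 authid for providers with rotating refresh tokens. The field and variable names here are assumptions, not the actual oauth-handler code.

```python
# Providers whose refresh tokens rotate, so handing out a v2 authid would
# leave the client with a refresh token that soon becomes invalid.
PROVIDERS_WITHOUT_V2_UPGRADE = {"jottacloud"}


def build_token_response(provider, access_token, expires_in, refresh_token):
    resp = {"access_token": access_token, "expires": expires_in}
    if provider not in PROVIDERS_WITHOUT_V2_UPGRADE:
        # For other providers the v2 authid (the refresh token itself) can still
        # be piggy-backed, so clients may store tokens locally if they want to.
        resp["v2_authid"] = "v2:" + refresh_token
    return resp
```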

The v2 authid has been around in the oauth service since 2016, and from the start it sent back the v2 authid together with a v1 response: Support for v2 tokens that are just the refresh tokens · duplicati/oauth-handler@07c3061 · GitHub

That is a good question!

I’ve been using rclone while waiting for the fix to make it to release, and in the last few days it has stopped working.

It gets stuck deleting unwanted files on my daily backups, and when I try to do a full verify with download of all remote files, that hangs on the first or second file over a few megabytes in size.

I think I will try regenerating the token next, but it appears that something has changed and rclone no longer works. I’ll do some research to see if any other rclone users are having this issue.

I’m using rclone with Jottacloud myself, and have not seen any problems with it. But I’ve not used rclone as a backend for Duplicati.

Sorry for the radio silence.
It is now working for me:
I am running 2.0.6.104_canary_2022-06-15 on FreeBSD (as I was before), but I did not realise I had to use the unofficial DLLs as well.
Since I did, I have not had a single issue - even with a backup running for days with millions of files (before using the DLLs it would invalidate the token with every backup that ran longer than a couple of hours).

Thanks a lot for the support & patience!


The solution is still working for me here, but I have noticed that Duplicati uses much less bandwidth now: only about 10-20 Mbps, while I have about twice as much available. Has anyone else seen similar behaviour?

  • I have well under 5TB uploaded to JC
  • jotta-cli uploads as fast as expected on the same machine
  • Throttling is turned off.

Don't know the reason for that, but just mentioning that I have a PR to change the Jottacloud backend to use a newer upload API, the same as currently used by rclone and probably jotta-cli: Jottacloud upload implementation changed to use new api by albertony · Pull Request #4715 · duplicati/duplicati · GitHub. I've left it as a draft for quite some time, because I don't want to complete it until the authentication issues are out of the way (and also until I see that it is worth investing more of my time in it, i.e. that other completed PRs actually get merged and released).

Hi! After a semi-thorough market inventory I just settled on Jottacloud + Duplicati to replace Crashplan. Running into this immediately was a bit of a cold shower. :joy: I’ve read through this whole thread, switched to the Canary build (2.0.6.104_canary_2022-06-15) and patched with the three DLLs linked from here (which I’m really not very comfortable with, but a bit desperate to get it working). I still get error 500 after a couple of hours and have to generate a new CLI key. Did I miss some step?

Since I like JottaCloud photo sync better than Qfile sync, I want to sync the other way, from JottaCloud to my Qnap NAS. Does anyone know of a way to do this?

  • Windows MSI 2.0.6.104 fresh install
  • albertony/duplicati at jottacloud-disable-v2-authid (github.com) (now merged into duplicati/duplicati), 3 patched DLLs
  • –asynchronous-concurrent-upload-limit unset
  • https://duplicati-oauth-handler-beta.appspot.com/ (duplicati-oauth-handler PR/10 code)

The beta handler helped; my backups are usually fine now. But I still very rarely get tokens invalidated during deletion of large file sets; it could be unrelated since it is so rare.

I often fail during the deletion of large file sets if there is anything significant being uploaded (>2GB).

And now it’s working fine. The last 1.5 TB uploaded without issues. Don’t know what the problem was. I initially missed that I had to install the complete 2022-06-15 build and not just use autoupdate to it, but even after remedying that I still had the OAuth problems. Also couldn’t install the canary .deb file - had to download the zip archive and extract on top of the installation. Plus patched files, so it’s a bastard of an install. Hope we can get this mainlined at some point.

For what it’s worth, rclone still works very well. Not ideal but it does mean you can use the normal stable version.

Any chance for those changes to make it into the released canary?

Probably slightly above zero, but the release manager had no time last month and there is no forecast on timing.
You can watch the Releases category, where questions of this sort come up. The last reply was yesterday.

Thanks to the volunteers (including on this topic) who keep Duplicati going, but more are needed…

Developers at all levels and areas are encouraged, but release management is kind of a large role, depending on how it's defined. Releasing what's already committed (all 4 changes) might be easier than ongoing pull request work. Maybe I'll ask about time again later if no release emerges.


Still not working? :confused::thinking:

Please try the latest canary… :crossed_fingers: