Replacing the OAuth service

Are you talking about manual registration using the provider’s web interface?
I don’t recall having done that, and the rclone directions don’t seem to require it.

Specifically, if you mean registering the redirect URL, theirs might be fixed at:

http://127.0.0.1:53682/

as documented in their manual. I’m not sure a fixed redirect URL is considered best practice.

My thinking on “trust no one” is that whatever admin or malware controls the
server potentially has access to the refresh token when it’s decrypted for use.

This is the same argument I use for Duplicati, on a fully compromised system.
Ultimately the secret must be obtained in raw form to pass to a provider’s API.

It’s also the argument I used earlier for a corporate IT department wanting the
refresh tokens kept in-house somehow, although they have other issues then.

We face the same issues. I don’t know if GAE does more handholding for high
availability through machine failures, updates, etc. I would hope that complexity is well hidden.

Responsibility for doing backups and restores of stored tokens may also move.
Maybe GAE helps there. I’m not sure what the C# design is, or even if it’s local.
https://github.com/kenkendk/kvpsbutter seems to be in use, but I didn’t look far.

Potentially we’re also heading into a multiple-server future if the Internet keeps fragmenting.
If it gets bad enough, it might end up like the corporate IT model, except per region.

If the issue is “encryption on the new service is different”, C# can call into Python code
as an AuthID comes in. Do the usual decrypt for use, but also re-encrypt in the new way.
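
To sketch the migrate-on-use path I have in mind (every helper name here is a placeholder, not actual server code):

```csharp
using System;

// Minimal sketch of migrate-on-use; every helper passed in is a placeholder,
// not the real server API.
static class TokenMigration
{
    public static string GetRefreshToken(
        string authId,
        string passphrase,
        Func<string, byte[]> load,                  // fetch the encrypted blob for an AuthID
        Action<string, byte[]> save,                // store a re-encrypted blob
        Func<byte[], bool> isLegacyFormat,          // e.g. a header or recorded-flag check
        Func<byte[], string, string> legacyDecrypt, // old Python-compatible scheme
        Func<string, string, byte[]> newEncrypt,    // new scheme
        Func<byte[], string, string> newDecrypt)
    {
        var blob = load(authId);

        if (isLegacyFormat(blob))
        {
            // Decrypt with the old scheme for this request...
            var refreshToken = legacyDecrypt(blob, passphrase);

            // ...and immediately re-encrypt with the new scheme, so the next
            // request no longer needs the legacy code path.
            save(authId, newEncrypt(refreshToken, passphrase));
            return refreshToken;
        }

        return newDecrypt(blob, passphrase);
    }
}
```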

The question is whether the passphrase length increase is needed, or was just done because
it was known the rest of the encryption didn’t match up anyway, so why not update more?

I don’t know what logs exist, but you’d probably want to make sure that the incoming
request rate doesn’t exceed the migration capacity. If it does, just defer the migration.
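
As an illustration of deferring (the capacity number below is just an assumption), a small gate could skip re-encryption whenever too many migrations are already in flight:

```csharp
using System;
using System.Threading;

// Illustrative only: cap how many re-encryptions run at once so migration
// never competes with normal request handling. If the gate is full, serve
// the token via the legacy path and leave migration to a later request.
static class MigrationGate
{
    private static readonly SemaphoreSlim Slots = new SemaphoreSlim(4); // assumed capacity

    public static void MigrateIfIdle(Action migrate)
    {
        if (!Slots.Wait(0))   // don't block the incoming request; just defer
            return;

        try { migrate(); }
        finally { Slots.Release(); }
    }
}
```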

Is there a big advantage to the new encryption code, or is it just what was handy?

The docs suggest it is optional. I am guessing that there is a flow using their tokens. They also mention the issue with opening the firewall to allow the request.

Yes, at some point the encryption must be removed. The encryption is just to avoid storing a bunch of readily usable data in case there is a leak. Since the keys are not present outside the request, a compromise is gradual rather than total.

This is used to support backups and storage of the tokens outside the local filesystem. This allows multi-server setups for load balancing.

The default is to store encrypted tokens on the local filesystem.

Yes, that is exactly the plan. But the current code does not allow access to the refresh token, so this needs to be added in a secure manner.

This was a somewhat arbitrary choice on my part and can easily be changed. It does change the AuthID, so the client needs to change with it. I think I chose a longer length here because brute-force capabilities have increased.

A bit of both, to be honest. The encryption in GAE is based on PBKDF2 and a somewhat custom MAC. For the C# version I opted to use AESCrypt, as it is already a key component of Duplicati.
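
Roughly like this (treat it as a sketch; the exact signatures differ between SharpAESCrypt versions):

```csharp
using System.IO;
using System.Text;

// Sketch only: assumes the stream-based static helpers of the classic
// SharpAESCrypt package; exact signatures differ between library versions.
static class TokenBlob
{
    public static byte[] Encrypt(string refreshToken, string passphrase)
    {
        using var input = new MemoryStream(Encoding.UTF8.GetBytes(refreshToken));
        using var output = new MemoryStream();
        SharpAESCrypt.SharpAESCrypt.Encrypt(passphrase, input, output);
        return output.ToArray();   // AES Crypt formatted blob
    }

    public static string Decrypt(byte[] blob, string passphrase)
    {
        using var input = new MemoryStream(blob);
        using var output = new MemoryStream();
        SharpAESCrypt.SharpAESCrypt.Decrypt(passphrase, input, output);
        return Encoding.UTF8.GetString(output.ToArray());
    }
}
```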

I misunderstood the following plan then:

whereas my plan (which may or may not work) just needs a minimal amount of code for decryption.

Are you talking about doing the migration with the help of something similar to the Python 3 port just done?
There could be server-to-server communication, but the plan I had just has an enhanced C# server.

Ideally, the migration should be available whenever a user fires up some ancient AuthID to be used.
Timing would need to be thought through more; a gradual cutover has less potential risk than an instant one.

I’d guess we lose the appspot.com domain name as we depart, so that has to be handled somehow.

I’m not sure which code this means, but I certainly am not claiming that the code has been written yet.
We already agree that all the server varieties have a refresh token at some point during actual usage.

Specifically, I’m talking about things like Python.NET or IronPython bringing pycryptodome to C#.
Python and .NET - An Ongoing Saga is one evaluation of the differences. I haven’t run either one.
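
As a rough sketch of what that embedding could look like (assuming simplecrypt.py exposes a decrypt(passphrase, data) function returning a string; the real signature would need checking):

```csharp
using Python.Runtime;   // pythonnet: embeds CPython, so pycryptodome-based code stays usable from C#

// Sketch only: the module name and decrypt(passphrase, data) signature are assumptions.
static class LegacyCrypto
{
    public static string Decrypt(string passphrase, string data)
    {
        PythonEngine.Initialize();      // one-time setup in a real server, not per call
        using (Py.GIL())                // calls into Python must hold the GIL
        {
            dynamic simplecrypt = Py.Import("simplecrypt");
            dynamic result = simplecrypt.decrypt(passphrase, data);
            return result.ToString();   // assumes a Python str comes back
        }
    }
}
```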

Another thing I haven’t used, but it sounds like you can still run things somehow to read the database?

Part of the what-when question. Running two independent servers will lead to synchronization pain.

It seems a little odd that there’s no upgrade path. So the new account is totally distinct from the old one, and refresh tokens become no good despite all our other efforts at trying to provide a nice migration?

As a general question:

Would GAE still run if one got to it via a redirect from https://oauth-service.duplicati.com? Perhaps the next Canary could default to that, and then at some point the actual service comes from there?
Gradual uptake through Canary and then Beta would provide a slower rollout, to reduce some risks.

I sure hope this new server stays up well. The update server has been flaky again, not too long ago.

Yes, that hinges on having direct access to the storage. I am not sure if that is possible, but then it would work without any changes to the Python code.

I think that does not solve the issue that the Duplicati devs are mostly C# people, and having Python stuff in there makes it harder to maintain. Despite there being multiple Python implementations, many libraries only work with CPython anyway.

Yes, it is not a nice solution. Previously, the OAuth was handled by a separate service, and you needed to set it up there. This solution is now deprecated (but still works), and with it you can only have one domain registered. So I can change it to the new domain, but that breaks all existing tokens. Adding a domain is only possible with the new version, which is based on Azure.

Yes, that should be possible. I think the default is to allow redirects for the OAuth process, so it should just work. We could initially just forward in case the token is not known by the new service.
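
The forwarding could be roughly this simple (the base address and route below are placeholders, not the real endpoints):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

// Illustrative fallback: if an AuthID is not in the new service's store yet,
// proxy the call to the legacy GAE service. Base address and route are
// placeholders, not the real endpoints.
static class LegacyFallback
{
    private static readonly HttpClient Legacy = new HttpClient
    {
        BaseAddress = new Uri("https://example.appspot.com/")
    };

    public static async Task<string> GetTokenAsync(string authId, Func<string, string> lookup)
    {
        var local = lookup(authId);
        if (local != null)
            return local;   // token known locally: handle the refresh here

        // Unknown AuthID: let the old service answer until migration catches up.
        return await Legacy.GetStringAsync($"refresh?authid={authId}");
    }
}
```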

This is caused by Ceenhttpd, which has some issue with injecting a timeout. I had hoped to fix it, but it seems the managed Kestrel server is working nicely, so the update server should use that.

The OAuth implementation uses ASP.NET, and thus Kestrel, so I would not expect flakiness.

gcloud datastore export is a batch export. Other options are described in Exporting and Importing Entities.

Automatic Upgrade to Firestore presumably happened a while ago without any real choice.

Multiple services can share the datastore, I think, but I’m not sure if you can deploy anything further.

App Engine might block re-deployments, but that leaves questions about what’s permitted.

Enabling deployments for legacy runtimes reaching end of support exists, if that helps any.

The supplied links claim Python.NET is an integration with CPython, so that is what gets used if it’s needed.

There might not be much to maintain, and not for long. The current server seems to have code to expire AuthIDs that have gone unused for 365 days. If that policy continues, everything will be migrated or gone within a year.
The current encryption looks nicely limited to two spots (encrypt and decrypt) in simplecrypt.py.

I had held off suggesting a C# rewrite of the Python encryption; that seems even harder to maintain. Newly issued tokens should probably use the new encryption, as it sounds a bit better in quality.

It’s just proposed as a potential method, but any implementation that can migrate well is fine with me.

Yes, that was one possible approach: side-loading access to the store.

Yes, that was also part of my reason for choosing AESCrypt, but I could add just the decryption part for legacy upgrades, if I can get access to the data store from the new service.

Yes, but it refreshes the timer if it is used, so it could potentially last “forever”.

If it’s used, they get migrated. If it’s not used, they expire.

EDIT 1:

This is the reason I said migration may be a temporary problem; however, the old encryption will eventually be insufficient. We can encourage people to upgrade their AuthID. If they don’t, they remain operational until a firmer retirement (for their own safety) happens. Recognizing upgrades can be simplified if a version number is built into the AuthID, using a string that fits the required formats.
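
As a hypothetical example of what that recognition could look like (the “v2:” prefix is just an assumption, not an agreed format):

```csharp
using System;

// Illustrative only: embed a version marker in newly issued AuthIDs so both
// server and client can recognize which encryption scheme applies.
// The "v2:" prefix is an assumption, not the current format.
static class AuthIdVersion
{
    public static (int Version, string Passphrase) Parse(string authId) =>
        authId.StartsWith("v2:", StringComparison.Ordinal)
            ? (2, authId.Substring(3))   // new scheme (e.g. SharpAESCrypt, longer passphrase)
            : (1, authId);               // anything unprefixed is treated as legacy
}
```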

EDIT 2:

To be specific, I think the old plan has a 21-hexit passphrase, and the new one has a 32-hexit one.
The old plan uses the Python encryption; the new plan uses SharpAESCrypt. The migration proposal is the current AuthID (21 hexits) with SharpAESCrypt encryption, just in case it’s faster or more secure.

At the OAuth server, the encryption scheme can probably be told apart by some recognizable header, or recorded; to the end user there is no change, which is what one wants, as a change requires changing…
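
For example, relying on the fact that an AES Crypt stream starts with the ASCII bytes “AES” (how the legacy format starts would still need checking):

```csharp
// Sketch of telling the stored blob formats apart server-side: an AES Crypt
// stream starts with the ASCII bytes "AES", so anything else can be treated
// as the legacy Python format. The AuthID itself stays unchanged for the user.
static class BlobFormat
{
    public static bool IsSharpAesCrypt(byte[] blob) =>
        blob.Length >= 3 && blob[0] == (byte)'A' && blob[1] == (byte)'E' && blob[2] == (byte)'S';
}
```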