Battle plan for migrating to .NET 8

FreeBSD just got .NET 8 into its ports system and work on it is ongoing. Maybe this will help at some point.

How so? The SQLite Encryption Extension (SEE) is not used AFAIK, but we do face the issue of the weak RC4 encryption being removed from the public-domain System.Data.SQLite. One proposed solution is

other links at

but I’m not sure either of these issues has to be solved for some initial builds to move the port forwards.

I hope test accounts (perhaps shared, somehow) will be available for internal testing at some point.
If all else fails, maybe we could track backend uses in the usage report to see what a test release gets.
This might need more detail than the web interface shows, but I think that more raw data is around.

I’m more worried about CPU flavors, especially for backends like Storj that have native code for only a limited set of CPUs.
uplink.NET talks about platform-specific libraries, but seems to have less CPU ambition than .NET 8.
The current base is uplink-c. I don’t know if uplink would be feasible. Go tends to allow broad support.

EDIT:

So the issue exists now, and I’m not sure if .NET 8 makes it worse, but the proposed idea might make it better.

What is the plan for GitHub branch usage and continued releases of current Duplicati?

We have a pretty good Canary that needs at least one fix, and a few more fixes were underway
before going Beta. Plans may have changed, but I’m not sure a wait-for-.NET-8 approach will do.

Every time we’ve talked to developers about fancy branching, there’s been some reluctance.

I see a new draft PR aiming at master, so I thought it was time to start talking about this.

What is the fix that is needed? OAuth?

I suggest we just move on the canary and get to beta, so there is less holding back.

My personal preference is to use something similar to the current setup, where there is a master branch that is used to pick pre-releases. Pre-releases then become releases.

I think this approach is transparent, in that PRs in general target the master branch. Releases are traceable via their branch, and changes can be cherry-picked by the release manager.

For instance, if we consider the current commit stable, there is no problem merging more to master, as the canary branch can be picked from another commit than the current head.

But if there is support for another branch structure, I am open to change :slight_smile:

I think plans have unfortunately changed with the maintainer exit. But I do not see a need to wait for the .NET 8 fix. I am just following my Suggested work plan, and will be looking at the tray icons today.

My original plan for tray icons was outlined above, but I am looking at @tsuckow's idea of using Avalonia instead. I also looked at Tauri.

Hey guys, a bit late to the party, but it is great to see something happening on this topic.
I was wondering why you need to have GUIs, tray icons, etc. Or rather, I do understand the need, but I do not understand why it is a priority?
Why is SQLite a priority?

I understand that you want to keep supporting current users and their usage, but would it not make more sense to change the architecture a bit?
I.e. have a single API serve as the server, with a GUI to change administration settings, but have the actual clients just use this API to get new info (scheduled backups starting, stopping, failing) and send files to back up to a single storage account (S3, Dropbox, Mega, etc.) for your entire home set of devices?
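
To illustrate what I mean, here is a rough sketch of such a thin client. The endpoint names, fields, and the BackupStatus type are made up for illustration; they are not Duplicati's real API.

```csharp
// Hypothetical thin client talking to a central backup server over its API.
// Base address, endpoints and JSON shape are illustrative, not the real Duplicati API.
using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

public record BackupStatus(string Id, string Name, string State, DateTime? LastRun);

public class ThinBackupClient
{
    private readonly HttpClient _http;

    public ThinBackupClient(Uri serverUrl, string apiToken)
    {
        _http = new HttpClient { BaseAddress = serverUrl };
        _http.DefaultRequestHeaders.Add("Authorization", $"Bearer {apiToken}");
    }

    // Ask the central server which backups exist and what state they are in.
    public Task<BackupStatus[]?> GetStatusesAsync() =>
        _http.GetFromJsonAsync<BackupStatus[]>("/api/v1/backups");

    // Trigger a backup on demand; the server does the actual work.
    public async Task StartBackupAsync(string backupId)
    {
        var response = await _http.PostAsync($"/api/v1/backups/{backupId}/start", content: null);
        response.EnsureSuccessStatusCode();
    }
}
```

A mobile app, a tray icon, or a web page could all be built on top of a small surface like this, each with its own release cycle.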

That way you do not need one-size-fits-all deployments and update mechanisms, because each client would be updated separately (why can’t I have, say, a mobile application to manage my Duplicati? Do I need a tray icon there?).

This way you could set up the Duplicati server somewhere remote and configure it there. Installation using Docker should not be a problem, unless someone wants to install it on a Raspberry Pi Zero or something similar. And clients could be custom implementations, the way third-party apps can connect to your Jellyfin instance. Sometimes people even reimplement the server side (e.g. Vaultwarden on Cloudron is a Rust reimplementation of Bitwarden, because some people complained that the original stack is a bit too heavy). That seems like a better architecture to me, but maybe I am missing something. Ah, and with Docker usage on the server side only, it would be much easier to drop SQLite entirely and just go with something much better, like PostgreSQL?

This would be a really cool project to work on, by the way. :slight_smile:
Just give me a shout if you need any help!

Another fix that is getting kind of urgent, but should maybe go into some Canary for more exposure, is

which probably fixes several other things, including the support mentioned below for the 2021-09-06 OpenSSH

This was also in progress, as I think there are some known problems with the current Storj code:

The worry was whether a single-master design will allow pre-.NET-8 releases after .NET 8 goes in.
I am not familiar with GitHub capabilities, but I thought this was one reason for a fancier branch plan.
Maybe you plan to use conditionals to control what is built from master? I haven’t gone looking.

I think this is sort of what we are aiming for. The UI is important for most users, because they would not be comfortable with the command-line approach. Unfortunately, the GUI libraries are also one of the places where the operating systems have very little in common at the library level.

At the core I think Duplicati is a “single machine” type application, so I would like to have the option of “download and run” without needing a separate server. On the other hand, the projects are already divided into server and GUI parts. I think the future could have some kind of aggregated control panel for multiple (headless) clients, but we are not there yet.

The implementation relies heavily on relational data and some constraints. It is not a problem to use another database, but there are many roundtrips to the database in a backup run, so having something out-of-process would likely slow it down. For most RDBMSs there is setup and maintenance involved, which we can skip with SQLite.

I would personally rather rewrite the logic to avoid using relational database and have some simple key-value store with denormalized data. But that is off-topic.

Yes, I think that sounds like a very nice architecture, but at the moment the core product is single-user, single-machine. I did, however, mention the non-UI Docker images; I think these could be the CLI and/or the server component. That would allow you to start the server on a machine, and connect to it via the WebUI over HTTP.

Not entirely what you suggest, but at least some of the way.

The current release build system is branch based, so we can simply make a branch (from a pre-.NET8 commit) and build a release from that.

Perhaps off-topic, but I have merged the two other PRs. The Storj one is a bit hairy, as it includes management of new native libraries, so I have postponed including that for a bit.

It works at least once. After that, maybe the branch becomes the long-term maintenance branch?
That might differ from the current setup. The planned cherry-picking of changes from .NET 8 seems OK.

I don’t know what the release cycle will be. I was thinking of two Canary releases, with the big risks in the first.
The second Canary cleans up any breakage from the big risks, adds the smaller risks, and has a shorter testing period.

Plans (including version numbers) will need rework if pre-.NET-8 goes stable or .NET 8 ships.

So now, maybe either a short Canary for SSH, or hope that 2023.0.1 cleaned up after 2023.0.0?
Sometimes the small number of Canary users doesn’t hit all the problems that Beta does anyway.

You’ve got the stats (apparently Experimental has even fewer users), so please go as you think best…
I think the Beta brings some good things to more people, but it got delayed by the OAuth work.

Yes, and this brings a lot of other headaches, as you stated in the first post: 26 different packages, or even more.

This does make sense for a browser or a video player, as you mentioned (Firefox, VLC). But for a backup solution, mostly you just want it to run and do its work - with minimal maintenance. So basically a login, a few API endpoints for backup statuses, starting backups on demand and restoring backups, and notifications if there is some problem. That is it for most users, I believe. A pretty simple API.

Doing an all-in-one solution would be very hard to achieve even as a commercial project for an entire team or company.
It is even harder if you are doing this by yourself, or with some other folks in an open-source project.
Decoupling the applications would make much more sense:

  • less complicated builds
  • more frequent bugfixes (because you do not need to run all the builds and tests just to fix a typo)
  • easier start for newcomers (someone wants to start Android app development? “Sure, just use this API”)
  • easier maintainability of the whole solution (imagine changing the Duplicati pipeline if someone wanted to switch the Linux app to Qt or Flutter)

But I am not trying to change your mind about it.
Still, I would like to get involved if you need help, regardless.

BTW: what will happen if you lose the machine running Duplicati,
since it is single-machine and the whole DB regarding the backup is stored in SQLite?

I don’t really see how you can avoid the multi-platform builds, but I would love for them to go away. Previously, they were not needed because .NET used the Java-like idea that there is a runtime installed on the machine, and the byte-code can just run on that. For various reasons, the .NET build system now targets the CPU and OS type, causing many of the builds.

But since Duplicati is meant to run on these operating systems, I do not see another solution. Even if each machine were just running the CLI backup, it would still need a special build.

Regardless of how you slice it, each of the UI kits is also separate. I am testing whether Avalonia can mask this for us, so there is just one “UI”, but unless we stop having an entry application, I also do not see how we can avoid this. If there were no UI application, just the web page, that would work, but I think an end user would be confused if there are no notifications and no status icon integrated in the OS.

Maybe I misunderstand the idea about the API, but essentially, Duplicati runs the “server” (aka. Duplicati.Server.exe) which is launched by the TrayIcon, Windows Service, or other service runner. This is the API that everything communicates through, and this is what hosts the HTML/CSS/JS content. Sure, the API could be better documented (OpenAPI, Swagger, etc), but it was built before that was the norm, and it has not been maintained.
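
To make the “better documented” part concrete: if the API were hosted on a modern ASP.NET Core stack, exposing OpenAPI docs is mostly a matter of wiring up Swashbuckle, roughly like the sketch below. None of this reflects the current Duplicati.Server.exe code, and the endpoint shown is only an example.

```csharp
// Sketch only: how an ASP.NET Core-hosted API could expose OpenAPI/Swagger docs.
// This is not the current Duplicati.Server.exe implementation.
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddEndpointsApiExplorer();   // discovers minimal-API endpoints
builder.Services.AddSwaggerGen();             // requires the Swashbuckle.AspNetCore package

var app = builder.Build();
app.UseSwagger();      // serves the generated OpenAPI document
app.UseSwaggerUI();    // interactive documentation UI at /swagger

// Example endpoint only; the real server exposes many more routes.
app.MapGet("/api/v1/serverstate", () => new { ProgramState = "Running" });

app.Run();
```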

Please do :slight_smile: , I think alternate ideas are the way forward.

The database is really just a performance thing. Since everything remote is encrypted, it is super slow to figure out if a block is already backed up, or what blocks to use for restore. The dindex files are actually just redundant metadata to support recreating the database without downloading all the content files, but the recreate can work even without them.
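
To give a feel for why the local database matters: during a backup, essentially every block hash ends up in a lookup along the lines of the sketch below (table and column names are simplified for illustration, not the exact schema). Doing millions of these against anything remote or encrypted would be very slow.

```csharp
// Simplified illustration of the per-block dedup check against the local database;
// table and column names are illustrative, not the exact Duplicati schema.
using Microsoft.Data.Sqlite;

public static class BlockLookup
{
    // Returns true if a block with this hash and size is already stored,
    // meaning it does not need to be uploaded again.
    public static bool BlockExists(SqliteConnection connection, string blockHash, long size)
    {
        using var cmd = connection.CreateCommand();
        cmd.CommandText = "SELECT 1 FROM Block WHERE Hash = $hash AND Size = $size LIMIT 1";
        cmd.Parameters.AddWithValue("$hash", blockHash);
        cmd.Parameters.AddWithValue("$size", size);
        return cmd.ExecuteScalar() != null;
    }
}
```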

Small update is that I got it up and running with Avalonia, and it looks good. I really like the idea of having a single UI codebase. There are some quirks with cross-thread communication crashing Avalonia, and I will see if I can make a good fix for that.

There is a problem with the application showing the Dock icon on MacOS, but I think this can be handled in the packaging step.

I do not think that this will make those builds go away, but it will surely limit the variety and, by extension, the maintenance needed to keep it afloat.

I can’t say I have tons of experience building portable .NET apps - I did it only once, for a single application, but it was working for a few months while I was there.
On that project (a Web API) we had two versions, for Windows and Linux.
The binary was built from a single codebase using dotnet self-contained, single-file publishing: Create a single file for application deployment - .NET | Microsoft Learn
Probably this would have to be done 3 times for Duplicati (Linux, Windows and macOS) and doubled for ARM and x64: .NET Runtime Identifier (RID) catalog - .NET | Microsoft Learn
For 32-bit I am not sure if this should be a priority (what devices run 32-bit OSes in 2024?).
Still a few, but 6 is better than 26, or 34.
And of course those would only be the binaries for the API.

  • For the Linux UI I would rather use Snap, AppImage, or Flatpak - the big open-source projects I am familiar with tend to use those, for a reason: being able to package all the dependencies makes your application more portable in the very fragmented world of Unix/Linux. This way the application could be bundled with the API, UI, tray icon, and all their dependencies.
  • For Windows it could be an .msi that bundles all of it with the necessary dependencies, installs it, and runs it.
  • For macOS I can’t really say, since I have not used it.

What is the difference? Mainly the distribution model, I would say, since you are not bundling everything into one codebase and one single build pipeline.
I would rather create:

  • a build, test, publish pipeline for the server (API)
  • a build, test, publish pipeline for the UI (probably for each client separately)
  • and a publish pipeline for the bundles (.msi, AppImage, others)

That way, if a need for change arises (another UI framework, client, or platform), you do not need to change the whole process, just part of it. If someone would like to use another packaging/distribution system, they can either grab an existing artifact or add a new build target, and then use it in another publish/bundle pipeline.

So the most important difference is a smaller learning curve for someone who just tries to jump in and fix some issue with the UI or a backup target extension. One codebase is nice when you are working alone on some project, but if some front-end developer wants to jump in and do some work on the client side, it would probably be better not to force them to install the whole .NET development toolchain just to be able to work on the front end. I remember the pain of just trying to build Duplicati. :wink:

Probably it would be a good idea to just write the WebUI and then package it in some Node.js application like https://www.electronjs.org/ - or maybe not, since you are saying that Avalonia looks good. :slight_smile:

Sorry if you were already considering this solution and disregarded it for some reason.

The database is really just a performance thing. Since everything remote is encrypted, it is super slow to figure out if a block is already backed up, or what blocks to use for restore. The dindex files are actually just redundant metadata to support recreating the database without downloading all the content files, but the recreate can work even without them.

Maybe it would be better, then, to just store some really simple files on the backup storage and load them when you are trying to read the backup. Kopia works like that, and it is much, much faster. SQLite is not exactly the fastest DB out there.

True! I will leave that out for starters and see if there is a request.
We could consider dropping the ARM builds initially, and let macOS use Rosetta.

Yes, I think for Duplicati we need to do some tricks to get multiple executables to use the same libraries, or we will have to publish multiple heavy binaries, but other than that, this is my idea.

My main obstacle in this area has been signing. There are many good pipelines, but I have not found a great way to handle signing the libraries and packages, so I have opted to build and sign locally.

But good input, I will re-evaluate the build pipeline here.

I looked at Electron some time ago; my main issue was that the binaries are very large, since they ship the entire browser and then some. This is problematic for update package sizes IMO. I also looked at Tauri, which is much smaller, but since the Avalonia work was already done, I think this is the current easiest path.

The local database can be quite large, and as it stores hashes it does not compress well. I think it would be problematic to store the lookup data remotely, as it would have to be downloaded and decrypted before each run.

I have considered storing more data, but mostly to assist in faster restores of the local database.

So you were running release builds on your local machine?

but since the Avalonia work was already done, I think this is the current easiest path.

Sure np!

Yes. That is the setup. All builds are done locally and signed locally.
If you build remotely and sign locally, then the build servers could be compromised, and you would essentially be signing malware.

Cool, I can’t say what the real process was there or why it was solved this way.
So do you need some help with the code? Where is the .NET 8 branch now?

Always looking for help!

The most recent work is in #5112, and it includes an upgrade to SDK-style projects, an upgrade to .NET 8, a switch to Avalonia, and a mostly complete removal of the old HttpServer in favor of Kestrel.

There are some issues with the webserver invoking the Avalonia UI from another thread, and this crashes the app. I am looking into this today, and hopefully it will be fixed.
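
The likely fix is to marshal such calls onto the UI thread instead of touching UI objects directly, along the lines of the sketch below. The TrayIconUpdater and UpdateTooltip names are placeholders, not the actual code in the PR.

```csharp
// Sketch: marshal a call from a webserver/background thread onto the Avalonia UI thread.
// TrayIconUpdater and UpdateTooltip are placeholder names, not the real PR code.
using Avalonia.Threading;

public static class TrayIconUpdater
{
    public static void ReportProgress(string message)
    {
        if (Dispatcher.UIThread.CheckAccess())
        {
            UpdateTooltip(message);                                  // already on the UI thread
        }
        else
        {
            Dispatcher.UIThread.Post(() => UpdateTooltip(message));  // queue onto the UI thread
        }
    }

    private static void UpdateTooltip(string message)
    {
        // ... update tray icon / window state here ...
    }
}
```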

Some of the things in this PR I could use help with are:

  • The HttpServer should be fully removed, but I am not looking at it right now.
  • I am not sure the VSS snapshots are working with the update; this needs testing and potentially fixing.
  • The Storj backend relies on some native libraries that I am not sure are working correctly on all platforms.
  • The Azure backend library was changed; some testing is required to see if it still works as expected.
  • Update of the build pipelines on GitHub (unsigned builds for now).

I would suggest checking out the branch feature/kestrel-avalonia-upgrade and making a PR for merging into this branch.

In case you want to jump into the build pipelines, there is a setup as posted above that looks like what you envision.

Cool, I will look into the HttpServer removal and the Azure backend.
Storj - I think someone here mentioned that they worked on that one. I do not even have an idea of what that Storj thing is. :slight_smile:
Updates to the pipeline can wait, I guess, since the code needs to be working in order to be released.