Kind of off-topic for Kestrel, but it’d be nice to start thinking about how to protect secrets…
I mean… It is arguably a pre-req to getting to Kestrel.
What all is sensitive in the database? Is it just the connection info for the backup destinations?
Could we store the sensitive bits outside SQLite in an encrypted JSON file or something and not have to worry about it anymore?
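For illustration, here's a minimal sketch of that idea in Python (assuming the `cryptography` package; the file name and field names are hypothetical) — only the secrets live in the encrypted file, everything else stays in the database:

```python
# Hypothetical sketch: keep secrets in an encrypted JSON file instead of
# the SQLite database. Key is derived from a passphrase via PBKDF2.
import json, os, base64
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_key(passphrase: str, salt: bytes) -> bytes:
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase.encode()))

def save_secrets(path: str, secrets: dict, passphrase: str) -> None:
    salt = os.urandom(16)  # fresh salt stored alongside the ciphertext
    token = Fernet(derive_key(passphrase, salt)).encrypt(
        json.dumps(secrets).encode())
    with open(path, "wb") as f:
        f.write(salt + token)

def load_secrets(path: str, passphrase: str) -> dict:
    blob = open(path, "rb").read()
    salt, token = blob[:16], blob[16:]
    return json.loads(Fernet(derive_key(passphrase, salt)).decrypt(token))

# Example: store just the sensitive bits; non-secret config stays in SQLite.
save_secrets("secrets.json.enc", {"sftp_password": "hunter2"}, "master-pass")
```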
As for service vs. not: at some point, if you want backups to work on boot, then the decryption key has to be stored somewhere, and any sufficiently privileged malware can get it.
I don’t do Kestrel. I thought it was a web server replacement. Regardless, I’m happy to discuss more.
Duplicati-server.sqlite may hold roughly any setting that a command line client takes at invocation.
Connection info is a good example: the backup destination is one case, and an SMTP server could be another.
Maybe you’d also want to guard the encryption passphrase, not so much for this system, but in case it got reused elsewhere.
Super-sophisticated malware or a live attacker can probably bypass anything, because Duplicati has to use the decrypted results at some point, and those can be stolen. So basically the aim is to add difficulty…
One could certainly use some sort of Duplicati-side scrambling, and the data could stay in the database if desired. SQLite encryption just used to make it kind of easy, and also easy to turn off for development/debug use.
Duplicati.Server.exe’s --unencrypted-database and --server-encryption-key are the current controls.
Fix not revealing stored passwords from the UI [$100] #2024 has some thoughts about protection. A while back I looked at system keychain styles, and IIRC some OSes seemed better than others. Ultimately a keychain has to hand data to something at some point, so how easy or hard is it to subvert?
Unfortunately this discussion goes off the rails rather quickly, and we never learn exactly how Linux is using the system libraries.
It’s really important to pin that down, because the performance differences can be huge. Notably, my attempt at optimizing folder listing was not tested correctly on Windows: IIRC, when you tested it you did not find any great impact. That’s because on the same computer running Linux, I got consistent results of more than 40 s to click on a huge directory in the restore browsing window, and 5 s with my patch, while the same test (same database) on a Win10 VM without the patch runs in 8 s on the same hardware! So the impact of better SQL looks underwhelming on Windows but really significant on Linux.
This could explain in part why this was never properly fixed: the impact was so different between systems that it could not be handled properly through half-baked bug reports. Possibly on Mac the performance was much better as well. I have seen that on Mac the system library could be used, but it’s all very murky. I intend to research this topic more and take a dive into the SQLite interop layer; I’m beginning to suspect that a world of horrors is hiding there.
I’m not sure I fully follow that thought. If variability is the worry, wouldn’t the question be whether we can supply our own SQLite, provided the system doesn’t get into trouble by using it? Of course, with the many Linux versions, the question would be which SQLite to supply, and then maybe its own library dependencies to worry about.
https://repology.org/project/sqlite/versions forecasts 3.22, assuming Linux Mint 19 uses Ubuntu 18.04.
Windows tends to lie about fsync by default, so SQLite will behave as if there were no journal. That’s my general experience with Windows NTFS vs Linux, and it makes for a massive performance difference whenever writes (including temp tables) are involved.
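To make the knobs concrete, here’s a small sketch using Python’s built-in sqlite3 module. The pragmas are standard SQLite, but whether the flush actually reaches the disk is up to the OS honoring fsync, which is exactly the point above; this is an illustration of the settings in play, not Duplicati’s actual configuration:

```python
# Sketch of the SQLite settings that interact with fsync behavior.
# Even with synchronous=FULL, durability depends on the OS/drive actually
# honoring the flush; if the platform "lies" about fsync, FULL degrades
# toward OFF in practice.
import sqlite3

con = sqlite3.connect("Duplicati-server.sqlite")
con.execute("PRAGMA journal_mode=WAL;")   # write-ahead log instead of rollback journal
con.execute("PRAGMA synchronous=FULL;")   # fsync at each transaction commit
print(con.execute("PRAGMA journal_mode;").fetchone())  # ('wal',)
print(con.execute("PRAGMA synchronous;").fetchone())   # (2,) == FULL
con.close()
```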
The .NET SQLite module can install its own version of the library. I didn’t get to testing it, as I believe getting onto .NET Standard and Core is a better start, and then changing this dependency. I don’t even mean all projects, just the direct parents of this dependency.
Security is important, but as @gpatel-fr has stated, resources are low and stability wins over features. I can’t see a cheap option that isn’t dropping encrypted DB support.
If it’s secret protection, manually supplying the password is the best achievable option I can see.
Although I’m not sure how well the timeframe fits, there’s already the idea that migrating off old .NET can (maybe should) involve a manual install, at least at first. That gives us the chance to write directions on how encryption should be removed before leaving the old setup, and to inform people of the risks this could increase.
This is not just about encryption, but general: it should get a deliberate acceptance that it’s a lot of new code.
That was an amusing phrase once used, but hopefully we can have a little more confidence before we ship.
Regarding the timeframe, if the wish is to improve SQLite on the current .NET prior to Stable, that’s a different matter.
I’m not advocating a particular direction, just discussing some of the options that might help to decide.
This is worrisome because it impacts not just performance but reliable writes, and thus database integrity. Attempting to find further discussion, I found this, which is mostly about macOS but gets into what one may legitimately assume about fsync (on Windows maybe it’s FlushFileBuffers), and what’s not promised.
I think most of Duplicati’s database integrity problems are its own fault, but occasionally we do get the “database disk image is malformed” error that says things are wrecked below whatever the SQL tables contain.
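For reference, a minimal sketch of asking SQLite itself whether the file is wrecked below the table level — this is the standard integrity_check pragma, shown here via Python’s built-in sqlite3:

```python
# "PRAGMA integrity_check" returns the single row ('ok',) on a healthy
# database, or a list of corruption findings otherwise.
import sqlite3

con = sqlite3.connect("Duplicati-server.sqlite")
rows = con.execute("PRAGMA integrity_check;").fetchall()
con.close()

if rows == [("ok",)]:
    print("database structure looks intact")
else:
    print("corruption found:", rows)
```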
I don’t follow this. After dropping encrypted DB, what is this supplied to, and what effects does it have?
The (config) database is supposed to be located on the machine that has access, so if you can get the database, you probably have everything (except the passwords).
So, yes, IMO only the passwords should be protected. The reason it is not fixed is that there is no clear marking of which fields can contain passwords. But the connection info and backup passphrases would be a great place to start.
Since most Linux distros ship SQLite without the weak RC4 cipher, there is no protection on Linux by default. I think it would be a better solution to simply encrypt the fields in the database (done by the application) instead of using poor encryption on the whole file.
This would increase the cryptographic strength, increase system compatibility, simplify working with the DB in a UI tool, and keep all OSes the same.
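A rough sketch of what application-side field encryption could look like (Python with the `cryptography` package; the table and column names are made up for the example, and key management is hand-waved):

```python
# Illustrative sketch of field-level encryption: only the secret values are
# encrypted by the application before they hit the (plain) SQLite file.
import sqlite3
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice derived from / unlocked by a master secret
f = Fernet(key)

con = sqlite3.connect("config.sqlite")
con.execute("CREATE TABLE IF NOT EXISTS backend_option (name TEXT, value BLOB)")

# Encrypt just the sensitive field; non-secret settings stay readable in any DB tool.
con.execute("INSERT INTO backend_option VALUES (?, ?)",
            ("sftp-password", f.encrypt(b"hunter2")))
con.commit()

ciphertext = con.execute(
    "SELECT value FROM backend_option WHERE name = ?",
    ("sftp-password",)).fetchone()[0]
print(f.decrypt(ciphertext))  # b'hunter2'
con.close()
```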
It is a bit like a physical keyring: it really keeps everything together, but that also makes it likely that you will lose all keys at once.
The benefit of using the system keychain is that it is the best protection the system has to offer for storing secrets encrypted at rest. Generally the keychain will use whatever hardware features are present to prevent leaking the master key, and it already provides a method for unlocking the key. At least on macOS there is a check for which processes can access a certain secret, which makes it significantly more secure than anything that can be done without OS support.
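As a taste of what delegating to the OS looks like, the cross-platform `keyring` Python package hands secrets to the macOS Keychain, Windows Credential Manager, or the Secret Service API / KWallet on Linux; the service and account names below are illustrative:

```python
# Minimal sketch: store and retrieve a secret via the platform keychain.
# The OS decides how the secret is encrypted at rest and who may read it.
import keyring

keyring.set_password("duplicati", "backup-passphrase", "hunter2")
secret = keyring.get_password("duplicati", "backup-passphrase")
print(secret is not None)  # True if the platform backend accepted the secret
```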
Sounds amazing; the HTTP server that is currently being used is ridiculously outdated, and it is a small wonder that it still works with modern browsers.