I don’t think you’ll find any disagreement that this is important, but keep in mind you have an unfinished beta product. How robust it needs to be before it’s called stable has been a topic of discussion, and getting there requires substantial effort, which is difficult when all the available volunteers are fully occupied.
Other really important things include: it should install, should not fail by itself, should not fail from a network error, should restore, etc. All of those have had either general hard-to-find issues or unusual cases solved; however, some of the fixes are only in Canary, which is rather bleeding-edge. At some point, a Beta will come out.
For a while the “focus” (a word I’m not fond of unless somebody also says what not to do) was on shipping 188.8.131.52, which had a lot of reliability fixes in it. That release came 14 months after the prior Beta, and I hope we can do better.
That is one statement of the situation. It’s sometimes very hard to figure out how to break things, especially in a way that allows a fix to be made. Ideally we would have more volunteers doing negative testing, with enough control and logging that the developers can understand what’s going on and resolve the issue.
Early on, user-reported problems were easy to reproduce, and therefore easy to fix. That is less and less the case now.
If you believe this is really important enough that you will help, there are probably various holes to be found, and they’re not all huge holes. I just killed my Duplicati process three times in mid-backup and got no failures; in fact, the recovery on the next backup did what it should: it marked the backup as partial and uploaded the synthetic filelist, which is the previous backup’s file list plus whatever updates completed before the interruption.
So in that case the design worked as intended (and I think that’s one of those Canary fixes not yet in a Beta). Things get progressively harder as the problems get harder to reproduce. Even SQLite has its limitations, and so does hardware: I’ve seen data sheets for current disk drives that won’t guarantee data integrity unless the prescribed shutdown sequence from the OS is followed. Good luck with that during a crash or power loss.
Best practice with backups is always to have multiple backups, done differently, in the hope that misfortune doesn’t hit all of them at the same time. Software is never completely perfect, and that includes backup software.
While I would be thrilled if you test carefully and submit quality reports, you can also just try doing what you originally wanted to do and see how well it holds up. Surprisingly, I can’t find any prior discussions quite like what you wanted. I think most users use the GUI, and fixes are generally driven by the issues the users file.
I didn’t focus on this in my earlier research into kill issues, and I know there have been some mysterious locking issues (possibly related to the incomplete kill that you saw; a similar issue happens on GUI Quit). Please feel free to file a GitHub issue with steps that allow anyone to reproduce it. That’s the key to getting a fix.