Hi Joe, thanks for posting your thoughts!
There are some backends that do not fully honor the timeout settings. Which backend was stuck here?
That is a general issue with omissions: Duplicati can only report that a backup has run, not that it did not run. This is one of the motivations for creating the Duplicati Console.
There is no overall timeout, because there is no way to pick a sensible value; the backup size could change from 1GiB to 1TiB between runs, for example.
The logic is that the backends are currently responsible for handling stalls by detecting periods with no transfer progress. This strategy is not fully implemented, and it will later be extended so the upload manager monitors and handles stalls as well.
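To make the idea concrete, here is a rough sketch of the kind of stall detection I mean. This is Python with invented names, not Duplicati's actual C# code: a watchdog that tracks the last time any bytes moved and cancels the transfer once nothing has moved for a configurable window.

```python
import threading
import time


class StallWatchdog:
    """Cancel a transfer when no progress is reported for `stall_timeout` seconds.

    Illustrative sketch only; the real implementation lives in the backends
    and the names here are made up.
    """

    def __init__(self, stall_timeout: float, cancel_transfer) -> None:
        self.stall_timeout = stall_timeout
        self.cancel_transfer = cancel_transfer   # callback that aborts the upload
        self._last_progress = time.monotonic()
        self._stop = threading.Event()

    def report_progress(self, bytes_transferred: int) -> None:
        # Called by the backend each time a chunk is actually sent or received.
        if bytes_transferred > 0:
            self._last_progress = time.monotonic()

    def _monitor(self) -> None:
        # Wake up once a second; trigger the cancel callback if progress has stalled.
        while not self._stop.wait(timeout=1.0):
            if time.monotonic() - self._last_progress > self.stall_timeout:
                self.cancel_transfer()
                return

    def start(self) -> None:
        threading.Thread(target=self._monitor, daemon=True).start()

    def stop(self) -> None:
        self._stop.set()
```

The key point is that the timeout applies to progress, not to the transfer as a whole, so it works the same whether the backup is 1GiB or 1TiB.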
The only thing that prevents that from happening is that the UI is not designed to handle multiple running backups. For that reason, the server component is constrained to queue operations and only run one backup at a time.
We do plan to introduce separate processes for each operation, and with that in place it would be trivial to run multiple backups in parallel.
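For illustration only (again a Python sketch with invented names, not the actual server code), the current behaviour amounts to a single worker draining an operation queue:

```python
import queue


def run_operations_serially(op_queue: "queue.Queue", run_operation) -> None:
    """Pull queued operations one at a time and run each to completion.

    Minimal sketch of the single-runner behaviour; the real server keeps
    richer state (progress, scheduling, priorities).
    """
    while True:
        operation = op_queue.get()       # blocks until something is queued
        if operation is None:            # sentinel to shut the worker down
            break
        try:
            run_operation(operation)     # only one backup runs at any moment
        finally:
            op_queue.task_done()
```

Running backups in parallel would then amount to starting one such worker (or, with the planned change, one process) per backup.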
The issue here is that the backup has “started” as far as Duplicati is concerned (because it has been queued). Because we do not know how long a backup should take, it is not an error to have a backup in the queue for days.
That said, I understand the problem, and I think the fix is to ensure that backups properly handle upload stalls, which points back to fixing the backend.
We could add a feature that sends a report of sorts if a backup has been in the queue for more than some pre-defined time, but there is a good chance that it will send false-positive messages. If you would like that feature, feel free to register it on GitHub.
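As a rough illustration of what I have in mind (a Python sketch with hypothetical names, not an actual feature), such a check could periodically compare each queued backup's enqueue time against a user-configured threshold and emit a warning:

```python
import time
from dataclasses import dataclass


@dataclass
class QueuedBackup:
    name: str
    enqueued_at: float   # time.monotonic() timestamp when it entered the queue


def warn_on_stale_queue(queued: list[QueuedBackup], max_queue_seconds: float, send_report) -> None:
    """Send a warning for every backup that has waited longer than the threshold.

    The threshold is user-configured; a slow-but-healthy queue will still
    trigger this, which is where the false positives come from.
    """
    now = time.monotonic()
    for backup in queued:
        waited = now - backup.enqueued_at
        if waited > max_queue_seconds:
            send_report(f"Backup '{backup.name}' has been queued for {waited / 3600:.1f} hours")
```

The false-positive risk is exactly this: a long-running backup ahead in the queue makes everything behind it look stuck, even though the system is working as intended.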