What about a time-run-out option?

A long time ago I wrote (or read, I don’t remember exactly) about a request to add a time-run-out function, to avoid an endless hang during a backup job.
Today I am in the situation of asking for it again. My mega.nz destination backup jobs don’t work anymore; they hang indefinitely. Sometimes I don’t realize it, and this prevents my other daily backup jobs from running. It’s a dangerous situation, at least for me.
So, is it possible to force Duplicati to stop the current backup job if it has lasted more than xxx hours?
Thank you.

There are probably some previous requests like this in the forum and in GitHub Issues. It might be possible to search for them, but

I don’t know whether it’s better or worse to just ask again rather than continue a discussion that already has background.

Because I don’t have time to find the previous ones (a good search engine can help) and there’s new information about the hang situation, I’m just going to present the new information and make some other comments.

Very little is possible without volunteers. They are strongly encouraged in all areas, yet they are very rare.

You’re almost talking about some sort of long-term failsafe where the user would have to guess the backup time, which seems very awkward to me, especially for initial backups, which can take weeks for a large backup.

Backup job hangs in the middle of the work #3683

putting a timer on the WhenAll() where its being blocked (they have about 8 WhenAll()'s but probably only one of them should be the issue) with a time longer than your longest backup.

was a (perhaps temporary) solution by @Xavron from Duplicati lock up ongoing (still?) investigations…
Whether or not it’s a feasible permanent solution, someone needs to actually make a pull request and test it.
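To make that idea concrete, “putting a timer on the WhenAll()” generally means racing the combined task against a delay. The sketch below is only an illustration of the concept, with a made-up helper name and made-up times; it’s not Duplicati’s actual code or the change proposed in that topic.

```csharp
using System;
using System.Threading.Tasks;

class WhenAllTimeoutSketch
{
    // Hypothetical helper (name made up): wait for all tasks, but give up
    // after a deadline instead of blocking forever.
    static async Task WhenAllWithTimeout(TimeSpan timeout, params Task[] tasks)
    {
        var all = Task.WhenAll(tasks);
        var finished = await Task.WhenAny(all, Task.Delay(timeout));
        if (finished != all)
            throw new TimeoutException($"Work did not finish within {timeout}.");
        await all; // propagate exceptions from the original tasks, if any
    }

    static async Task Main()
    {
        var quick = Task.Delay(TimeSpan.FromSeconds(1)); // normal work
        var stuck = Task.Delay(TimeSpan.FromHours(10));  // simulates a hang
        try
        {
            await WhenAllWithTimeout(TimeSpan.FromSeconds(5), quick, stuck);
        }
        catch (TimeoutException ex)
        {
            Console.WriteLine(ex.Message); // a backup could be aborted/cleaned up here
        }
    }
}
```

One catch with this approach is that the tasks that ran out of time keep running in the background, which leads straight into the next concern.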

Simply letting Duplicati continue on its way, despite having unfinished work, seems like it may be unsafe.
I would far prefer that someone figure out why the hang sometimes happens, but that takes skills and time. Those are very much the limiting factors on all progress, though “asks” do help a little to set the priorities.

If the point of the hang can be narrowed down some (sometimes it seems to be in backend operations), shorter-term timeouts in local areas (maybe some shared backend code, if the backend is the worry) may do. Small operations “should” be a little more time-predictable because there’s a maximum size on dblocks.
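As a rough sketch of that (a hypothetical shared wrapper, not anything that currently exists in the backend code), a per-operation timeout could be as simple as a CancellationTokenSource with a deadline:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

static class BackendTimeoutSketch
{
    // Hypothetical shared wrapper for a single backend operation (e.g. one
    // dblock upload). Since dblocks have a maximum size, a fixed per-operation
    // timeout is more predictable than one timeout for a whole backup.
    // Note: this only helps if the operation actually observes the token.
    public static async Task<T> RunWithTimeout<T>(
        Func<CancellationToken, Task<T>> operation, TimeSpan perFileTimeout)
    {
        using var cts = new CancellationTokenSource(perFileTimeout);
        try
        {
            return await operation(cts.Token);
        }
        catch (OperationCanceledException) when (cts.IsCancellationRequested)
        {
            throw new TimeoutException(
                $"Backend operation exceeded {perFileTimeout}; retry or fail the backup.");
        }
    }

    // Example usage, simulating a stuck upload.
    static async Task Main()
    {
        try
        {
            await RunWithTimeout(
                async ct => { await Task.Delay(TimeSpan.FromHours(1), ct); return true; },
                TimeSpan.FromSeconds(2));
        }
        catch (TimeoutException ex)
        {
            Console.WriteLine(ex.Message);
        }
    }
}
```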

The whole backend timeout area could use some work to see what the per-file timeouts are (if they exist…).
The one that I’m personally aware of is that OneDrive may time out if a transfer goes beyond 100 seconds.
That turns out to be the low-level HttpClient.Timeout property, but I’m thinking about looking for a higher-level location.
There are too many different low-level storage implementations, and many are third-party, so they aren’t easily changed.
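For reference, HttpClient.Timeout defaults to 100 seconds, which lines up with that symptom. The snippet below only shows the .NET setting itself; where (or whether) a given Duplicati backend or its third-party library lets you change it is exactly the open question.

```csharp
using System;
using System.Net.Http;

class HttpClientTimeoutDefault
{
    static void Main()
    {
        // HttpClient.Timeout defaults to 100 seconds, matching the "transfer
        // beyond 100 seconds" symptom. It is a per-client setting; the value
        // below is purely illustrative, not a recommendation.
        var client = new HttpClient
        {
            Timeout = TimeSpan.FromMinutes(30)
        };
        Console.WriteLine(client.Timeout); // 00:30:00
    }
}
```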

Or maybe the hangs have another source (which might be a worse thing). Someone needs to look for trends among the reports (volunteers?), although sometimes the surface symptom isn’t the same as the underlying cause.