Yes, this is a support problem: the log isn't there when you need it. If it keeps happening, heavy logging to a log file (using a lot of drive space) can be set up, but few people do that. Email on error is somewhat more feasible for constant use and captures a little more, but still less than ideal; most reports are just one-line summaries…
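If you do want to go that route, these are the sort of Advanced options to set on the job. A minimal sketch only: the path and address are placeholders, and the mail options also need your SMTP details (send-mail-url and credentials) to actually send anything:

```
--log-file=/path/to/duplicati-job.log
--log-file-log-level=Profiling
--send-mail-to=you@example.com
--send-mail-level=Error
```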
This is, I believe, a Duplicati bug involving a bad lookup during the attempt to upload a synthetic filelist for a backup that was interrupted by something (a fatal error may do it). That filelist shows the previous backup plus whatever new files made it in. More here, where there's talk about whether the synthetic filelist is still needed now that manually stopping a backup is getting more possible.
Your experience points out that a proper stop isn't always possible, so how should things go then?
The good news is you're also confirming that the lack of a synthetic filelist for an interrupted day is not a big problem (relatively speaking), though you probably have a one-day gap instead of a synthetic backup for that day.
There would probably be some amount of inspection and repair, but extra work for the backup itself is likely too… Comparing that job log to a typical one might show some differences beyond what you've posted. "BytesUploaded" under "BackendStatistics" in the "Complete log" might give an idea of whether you're bandwidth-limited.
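If it helps, here's a rough sketch of turning that number into an average rate. The "BackendStatistics" and "BytesUploaded" names are the log fields mentioned above; the filename and the hand-entered duration are just my placeholders:

```python
import json

# Rough sketch, not an official tool: read "BytesUploaded" from an exported
# "Complete log" JSON and turn it into an average upload rate.

with open("job-complete-log.json") as f:      # paste the Complete log JSON here
    stats = json.load(f)["BackendStatistics"]

uploaded_bytes = stats["BytesUploaded"]
duration_seconds = 4 * 3600                   # fill in the job's wall-clock time

print(f"{uploaded_bytes / 1e9:.2f} GB uploaded, "
      f"about {uploaded_bytes * 8 / duration_seconds / 1e6:.1f} Mbit/s average")
```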
I'm not sure this is going to get solved easily. Even with logs, there's little to no logging of tmp file use, and tmp files are used for many things. I think another report had a stack trace that narrowed the scope a bit, to SpillCollector, which collects partially filled files of blocks to finish off the backup, and which might suffer from extreme skew: a "long pole" file runs long enough that the tmp files from the early finishers get deleted. Your backup does not seem so hugely long that a tmp file cleanup would be the cause, but it's not totally ruled out.
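To make the skew idea concrete, here's a toy sketch (not Duplicati's actual code; every name and number in it is made up) of several producers leaving partially filled volumes behind while one long-running file keeps the collection step waiting:

```python
# Toy model of the spill idea, NOT Duplicati's implementation: parallel file
# processors each leave a partially filled block volume behind, and the
# collector can only repack those leftovers after the slowest one (the
# "long pole") is done.

VOLUME_SIZE = 50 * 1024 * 1024  # assume a 50 MiB remote volume size

def spill_wait_and_repack(leftover_bytes, finish_seconds):
    """leftover_bytes: partial-volume size each worker leaves behind.
    finish_seconds: when each worker finishes its last file."""
    long_pole = max(finish_seconds)
    # Early finishers' partial volumes sit in tmp this long before collection,
    # which is the window where an aggressive tmp cleaner could remove them.
    idle = [long_pole - t for t in finish_seconds]
    # The collector then repacks all leftovers into as few full volumes as possible.
    volumes = -(-sum(leftover_bytes) // VOLUME_SIZE)  # ceiling division
    return idle, volumes

idle, volumes = spill_wait_and_repack(
    leftover_bytes=[10_000_000, 20_000_000, 5_000_000, 30_000_000],
    finish_seconds=[120, 150, 130, 7200],  # one file runs two hours longer
)
print("seconds each leftover waits in tmp:", idle)
print("volumes the collector writes:", volumes)
```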
For the "403" question, you could check whether About → Show log → Stored shows it. If so, on which backups? Your experience matches mine, which is that the 403 is transient. I'm just not sure whether it's related here.