A backup is generally more vulnerable until the first run has completed and the dlist file reflecting its contents has been uploaded. Without a dlist, some operations that would normally succeed may not work.
Putting down at least a stake with a small initial backup would ensure an initial dlist gets uploaded.
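If you’re using the command line, a minimal sketch of such a starter run might look like this; the destination URL, source folder, passphrase, and database path are all placeholders, so substitute whatever your job actually uses:

```sh
# Hypothetical starter backup: a small source so the first dlist uploads quickly.
# <destination-url>, the source folder, the passphrase, and --dbpath are placeholders.
duplicati-cli backup "<destination-url>" ~/Documents/small-but-important \
  --passphrase="..." \
  --dbpath="$HOME/duplicati/starter-job.sqlite"
```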
Beyond that, I can’t say much, knowing nothing about what sort of mangling you saw. Reporting issues, especially on canary releases, which tend to have new code and associated issues, can be very helpful. There’s been worry about how well the new parallel upload code handles errors (but there are old bugs too).
Reproducing the problem, e.g. by deliberately causing errors or setting --number-of-retries low, can help in figuring out how to resolve bugs. Logs and maybe database bug reports (currently somewhat broken…) would help show the steps Duplicati took to get to the problem, and you can describe the manual steps you took.
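As a sketch rather than a recipe, a reproduction run from the command line might look roughly like this; the destination URL and source path are placeholders, and the log path and level are just one reasonable choice:

```sh
# Hypothetical reproduction run: give up on upload errors immediately and keep a
# log detailed enough to show each attempt, retry, and failure.
duplicati-cli backup "<destination-url>" /path/to/source \
  --number-of-retries=0 \
  --log-file=/tmp/duplicati-repro.log \
  --log-file-log-level=Retry
```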
But getting back to your upload: if you think you might have a broken database again, you can back the database up manually after completing some increment of the backup. Especially if you set --no-auto-compact, a backup won’t change files already uploaded; it will just add more. So if disaster strikes, you can roll everything back…
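A rough sketch of that manual database copy, assuming a Linux install with the default per-user database location (the real path and randomly generated filename for your job are shown on its Database screen, so check there first):

```sh
# Hypothetical snapshot of the job's local database after a successful run.
# ABCDEFGHIJ.sqlite stands in for the job's actual randomly named database file.
mkdir -p ~/duplicati-db-snapshots
cp ~/.config/Duplicati/ABCDEFGHIJ.sqlite \
   ~/duplicati-db-snapshots/job-$(date +%Y%m%d).sqlite
```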
I think just putting the old database back can cause it to delete the newer backend files it doesn’t know about, which hugely offends people who didn’t want that, e.g. those who restored an older DB from an image backup or something:
Repair command deletes remote files for new backups #3416
I don’t know of a good way to study that; however, you might see other errors, as my backup was. To increase your chance of success, make sure your network connection seems to be working well, for example by checking that an Internet speed test runs as fast as you expect. Then raise --number-of-retries, perhaps also raising --retry-delay to try to approximate where exponential backoff would go, if it existed…
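For example, something along these lines; the numbers are only illustrative, and since retries happen at a fixed interval rather than backing off, a longer --retry-delay is the closest approximation:

```sh
# Hypothetical retry settings: more attempts with a longer fixed delay between them.
duplicati-cli backup "<destination-url>" /path/to/source \
  --number-of-retries=10 \
  --retry-delay=30s
```

The same two options can also be set as Advanced options on the job if you run backups from the web UI.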