My host ran out of disk space, which resulted in Duplicati just stopping without any obvious reason. After clearing out some space, it didn’t appear to restart. I tried repairing the DB, but that failed because the file it needs is apparently only uploaded once the first backup completes? Not realizing what would happen, I deleted the DB and tried to rebuild it.
Is there no way to protect against this? A periodic upload of the relevant info before the first full backup finishes? Some warning to indicate the database isn’t in a state to be regenerated?
I’ve blown everything away and restarted from scratch, which, given how much I had already uploaded, is a bit painful. Is there something I could have done better?
Hello @trapexit and welcome to the forum!
In 20/20 hindsight, starting with a small initial backup (for example, just the most important data) might have gotten an initial file list uploaded quickly; then you could add additional folders. Adding in chunks might still be best, because it gives you exactly that periodic upload of the relevant info, basically forced by the upload plan…
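The chunked approach could be scripted, so each pass re-runs the backup with one more folder and every completed pass leaves a usable file list on the destination. A minimal sketch; the destination URL and the folder list are placeholders, and here the commands are only printed rather than executed, so adapt them to your own setup before running:

```shell
#!/bin/sh
# Grow the initial backup in chunks so a file list exists early.
# TARGET and the folders are hypothetical; order them most-important-first.
TARGET="file:///mnt/backup"

SOURCES=""
PLAN=""
for DIR in "$HOME/documents" "$HOME/photos" "$HOME/videos"; do
    SOURCES="$SOURCES $DIR"
    # Each completed pass covers everything backed up so far, so an
    # interruption in a later pass still leaves an earlier file list.
    PLAN="$PLAN
duplicati-cli backup $TARGET$SOURCES"
done
printf '%s\n' "$PLAN"
```

Once the first small pass has finished, the remote destination has a filelist that database repair can work from, which is the point of splitting the upload.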
In theory, a synthetic filelist is uploaded when resuming an interrupted backup; it creates a file list based on the previous backup plus whatever progress was made before the interruption. I suspect it has a bug, explained here, and I’m not certain even a bug-free version would handle the special case of an interrupted initial backup, but that seems like a reasonable place to start if we want to make recovery from interruptions better.