Manual compact ran, now unexpected difference in fileset


#1

I updated the title here; my previous message is below. I thought the compact operation had run fine, but when the next scheduled backup ran, I got this fatal error:

Failed: Unexpected difference in fileset version 29: 12/15/2018 7:20:43 AM (database id: 284), found 10 entries, but expected 12

It seems Duplicati may have deleted something it now thinks it needs. I didn’t touch the CLI at all; this was all standard GUI operation, and running a manual compact shouldn’t break backups.


I have a backup job with Dropbox as the destination. For some reason I had set --no-auto-compact on this job, and I realized I had nearly 300GB of files on the destination. I ran a manual compact operation from the GUI today, and I watched as it started deleting old files - all good.

Then suddenly, about 2 hours ago, I noticed Dropbox was no longer reporting any change in the backup destination folder size. I checked the logs in Duplicati, and sure enough the last delete operation was about 2 hours ago. The status bar says “starting backup…”, which it has said from the beginning. There doesn’t appear to be any network activity between my system and Dropbox, and the logs haven’t changed since the last successful delete. Is there something else I can look at to see if anything is happening? Is it doing something locally I can keep an eye on?

quick addition: according to top, mono is using 105% CPU (anything over 100% just means it’s using more than one core), so it does seem to be doing something. It’s just not logging anything yet.
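In case it helps anyone watching a similarly "stuck" job: here's roughly what I checked by hand to confirm it was working locally. The /tmp path and the "mono" process name are from my setup, so adjust for yours.

```shell
#!/bin/sh
# Sketch: confirm Duplicati is busy locally even when nothing is logged.
# Assumes a Linux box where Duplicati runs under mono and uses /tmp.

# CPU usage of the mono process (GNU ps; prints nothing if mono isn't running)
ps -C mono -o pid,pcpu,etime,cmd --no-headers

# Size and count of Duplicati temp files -- if these keep changing,
# something is happening locally
du -sh /tmp/dup* 2>/dev/null | tail -n 1
ls /tmp/dup* 2>/dev/null | wc -l
```

If the dup* file count or size keeps moving between runs, the job is still grinding away locally.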

edit: it eventually finished; there is too much info in the remote log to see what happened - I paged through about 25 pages but couldn’t get back to the timestamp when things seemed to hang up.


#2

Solved: it was definitely the tempdir filling up. After the job finished I still have 2.4GB worth of dup* files in my tempdir, but this time the log says the compact operation completed.
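In case anyone else ends up with the same leftovers: here's roughly how I cleared mine. Only do this while Duplicati is stopped, since a running job may still need its temp files. The path and the one-day age cutoff are my own choices.

```shell
#!/bin/sh
# Sketch: clear leftover Duplicati temp files while Duplicati is stopped.
# TMPDIR_PATH and the age threshold (-mtime +0 = older than 24h) are assumptions.
TMPDIR_PATH=${TMPDIR_PATH:-/tmp}

# See how much space the leftovers take before deleting anything
du -ch "$TMPDIR_PATH"/dup* 2>/dev/null | tail -n 1

# Delete dup* temp files older than a day
find "$TMPDIR_PATH" -maxdepth 1 -name 'dup*' -type f -mtime +0 -delete
```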


I’m not sure about this, but it looks like the issue was /tmp filling up. The compact job is so large that even though I made my /tmp (ramdrive) 3GB, I saw a disk full error on some later backups. I set up a shell script that runs “df -h” every minute and records a timestamp, and I can watch the 3GB slowly filling up.
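For reference, my logger was roughly this one-shot sampler, called from cron every minute (the log path is from my setup):

```shell
#!/bin/sh
# Sketch: append a timestamped snapshot of /tmp usage to a log file.
# Call it from cron every minute, e.g.:
#   * * * * * /usr/local/bin/tmp-usage.sh
# LOGFILE is an assumption -- point it anywhere writable.
LOGFILE=${LOGFILE:-$HOME/tmp-usage.log}
printf '%s %s\n' "$(date '+%F %T')" "$(df -h /tmp | tail -n 1)" >> "$LOGFILE"
```

Watching the “Use%” column climb over time is what made the disk-full theory obvious.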

I’ll try moving /tmp back to a large disk for this operation and see if it works. I still think there’s something in 2.0.4.5_beta that isn’t removing temp files properly, but I’ll see if I can at least get the compact job working for now.
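For anyone wanting to redirect temp files without remounting /tmp: this is the sort of thing I mean. The /srv path is just a placeholder for any directory on a large volume; Duplicati also has its own --tempdir advanced option you can set per job in the GUI.

```shell
#!/bin/sh
# Sketch: point temp files at a big disk instead of a small ramdisk.
# BIGTMP is a placeholder path -- use any directory on a large volume.
BIGTMP=${BIGTMP:-/srv/duplicati-tmp}
mkdir -p "$BIGTMP"

# Option 1: set the standard TMPDIR variable before starting Duplicati
export TMPDIR="$BIGTMP"

# Option 2: use Duplicati's own advanced option instead (per job or global):
#   --tempdir=/srv/duplicati-tmp
```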