Previous volume not finished, call FinishVolume before starting a new volume


#21

Since you’re still using 2.0.3.3 beta you may run into this error.

Unfortunately, the fix for it currently only exists in 2.0.3.6 canary which is not a version I’d recommend you update to at this time.

So I guess the best we can do is try to come up with a process to allow pre-2.0.3.6 users to recover from the error.


#22

I deleted the backup set, deleted the databases, deleted the files on B2, restarted Duplicati, and created a new backup by hand, and the backup failed with the “Previous volume…” error the very first time through. So we can be sure, at least, that the issue isn’t corruption or inconsistency from a previous backup.

Can you say what causes this error?


#23

I believe this is a bit of a catch-all error: something else goes bad but that earlier error isn’t caught, which causes THIS step to go bad, where the error IS caught.

Are you still using the 50MB upload volume size? It shouldn’t be needed (I know some people have gone up to 1GB) but it seemed to resolve the issue for you last time…


In the past, it sounded like one of the sources of this error was a “dinosaur” file. That should have been fixed long before 2.0.3.3 beta, though it’s possible this specific bug snuck back in…


#24

I’m observing the same “call FinishVolume” issue here: after a downgrade (including database schema changes) from canary to beta, every backup produces these four errors (or similar):

30 Jun 2018 06:14: Message
Expected there to be a temporary fileset for synthetic filelist (106, duplicati-b00913e73e805478e8ae9e019c14f67c7.dblock.zip.aes), but none was found?
30 Jun 2018 06:13: Message
scheduling missing file for deletion, currently listed as Uploading: duplicati-ie8bc4ccd0031487aa7807fe78e42c81d.dindex.zip.aes
30 Jun 2018 06:13: Message
scheduling missing file for deletion, currently listed as Uploading: duplicati-b2772ab46f2ea4fe49c87efc604c4f236.dblock.zip.aes
30 Jun 2018 06:13: Message
removing file listed as Temporary: duplicati-20180629T132605Z.dlist.zip.aes

While it’s running I frequently can’t view the logs because “the database is locked” (or similar).

Oddly, it also reports the backup as successful until quite some time after it has finished. For example, on today’s run, while it was still going, Duplicati said that the last successful backup was yesterday’s (which actually failed); while it was tidying up, I think it said today’s was the last successful backup; but it’s now saying that the last successful run was in May. Is there anything I could do to help dig into the cause of this?


#25

I’d suggest you first try a restore and see if any of the “failed” backups are actually available for restoring. I suspect what’s happening is that the backup part of the job runs just fine, but then something catastrophic happens during a later step like retention or compacting.

Beyond that, since this involves a database downgrade I’m going to see if maybe @Pectojin has any thoughts.


#26

Only the most recent failed backup ever seems to be available to restore, but there is always very little material contained within it. Therein lies a clue: a while back, as a possible solution to problems I was encountering, I had moved my server directory away from a too-small partition, and the only contents of these ephemeral failed backups were a couple of files from the Duplicati server directory. As it turns out, I’d foolishly moved the server directory onto a partition which was itself being backed up. Backing up the 9GB backup database, with all its churn (and perhaps also file locks?), might be the problem. Retrying with revised configuration now.

Given that the consensus elsewhere seems to be against including the server directory in backups, this might be worth warning about/protecting against?
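To make the kind of guard I have in mind concrete, here is a rough sketch (hypothetical, not Duplicati’s actual code): before a backup starts, check whether the server data directory, which holds the local SQLite databases, sits underneath any configured source path, and warn if so.

```python
import os

def server_dir_in_sources(server_dir, sources):
    """Return True if server_dir would be captured by any backup source path.

    Hypothetical helper: Duplicati itself does not expose this function;
    this only illustrates the containment check.
    """
    server_dir = os.path.abspath(server_dir)
    for src in sources:
        src = os.path.abspath(src)
        # server_dir is equal to, or nested inside, the source path
        if os.path.commonpath([server_dir, src]) == src:
            return True
    return False

if __name__ == "__main__":
    # Example paths, mirroring my misconfiguration
    sources = ["/home/user/photos", "/var/lib"]
    if server_dir_in_sources("/var/lib/duplicati", sources):
        print("warning: backup sources include the Duplicati server directory")
```

A real implementation would presumably turn the warning into an automatic exclusion filter rather than just a message, but even a warning would have saved me some debugging time.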

Will report back on progress of retry.

EDIT: Retry #1 appears to have produced a complete error-free backup…


#27

Thanks for taking the time to let us know what actions resolved the issue for you.

I do recall some discussion about automatically excluding the sqlite database from backups, but I’m not sure anything got decided or implemented. :frowning:


#28

Heh; three more successful backups do suggest that excluding the server directory has solved my problem, so +1 on that automatic exclusion! :slight_smile:


#29

@conrad Glad to hear that you sorted it out. Is this the beta build? I would like to believe that the problem is fixed in the canary, but if you see the problem there, I need to do more digging…


#30

Hi Ken, no, that’s the April beta still, so not canary & nothing to worry about.

I did try running canary on this backup, but it seemed to have extreme database performance issues with my 9GB database: my backups on canary back in March were taking ~6h; on April’s beta they’re now ~13h; but canary (would’ve been April-era, with the version 5 database) failed to complete a single backup in three weeks, backing up new files (mainly photos around 10MB) at a rate of only a few per day.