Previous volume not finished, call FinishVolume before starting a new volume


#1

I see there are a few other posts on this topic, but it doesn’t look like any of the answers are directly applicable. (My /tmp volume has 17GB free, and there are no Windows service issues, of course.)

I’m running a backup on my Mac to Backblaze B2 using the beta (2.0.2.1) version of Duplicati. The backup is about 200GB and 400k files. My upload volume size is 250MB.

Every time, whether on the nightly schedule or manually, I get the error above. I’m running two other backup configurations on this same machine (one to B2, and another to SFTP), and those both complete every night, so it doesn’t seem to be a basic configuration issue. I’ve tried repairing the db, rebuilding the db, and everything I can think of besides installing a canary or blowing away the whole backup and starting over.

I see elsewhere this error is likely not the root cause, but I’m not sure how to debug this. Do you have suggestions on what additional logs I should collect? Thanks.

Here are the logs I have:

Apr 1, 2018 3:31 PM: Message
Expected there to be a temporary fileset for synthetic filelist (4, duplicati-b00091ba58de54eafae421bdf140fe753.dblock.zip.aes), but none was found?

Apr 1, 2018 3:30 PM: Message
scheduling missing file for deletion, currently listed as Uploading: duplicati-i86a457542c3c449d90056363bcc85bf8.dindex.zip.aes

Apr 1, 2018 3:30 PM: Message
scheduling missing file for deletion, currently listed as Uploading: duplicati-bdcb6e3df7be342959d78df7917635afd.dblock.zip.aes

Apr 1, 2018 3:30 PM: Message
removing file listed as Temporary: duplicati-20180401T214210Z.dlist.zip.aes

Apr 1, 2018 3:57 PM: Failed while executing "Backup" with id: 1
System.InvalidOperationException: Previous volume not finished, call FinishVolume before starting a new volume
  at Duplicati.Library.Main.Volumes.IndexVolumeWriter.StartVolume (System.String filename) [0x00010] in <118ad25945a24a3991f7b65e7a45ea1e>:0 
  at Duplicati.Library.Main.Operation.BackupHandler.UpdateIndexVolume () [0x00043] in <118ad25945a24a3991f7b65e7a45ea1e>:0 
  at Duplicati.Library.Main.Operation.BackupHandler.FinalizeRemoteVolumes (Duplicati.Library.Main.BackendManager backend) [0x00169] in <118ad25945a24a3991f7b65e7a45ea1e>:0 
  at Duplicati.Library.Main.Operation.BackupHandler.Run (System.String[] sources, Duplicati.Library.Utility.IFilter filter) [0x00860] in <118ad25945a24a3991f7b65e7a45ea1e>:0 
  at Duplicati.Library.Main.Controller+<>c__DisplayClass16_0.<Backup>b__0 (Duplicati.Library.Main.BackupResults result) [0x0030f] in <118ad25945a24a3991f7b65e7a45ea1e>:0 
  at Duplicati.Library.Main.Controller.RunAction[T] (T result, System.String[]& paths, Duplicati.Library.Utility.IFilter& filter, System.Action`1[T] method) [0x0014b] in <118ad25945a24a3991f7b65e7a45ea1e>:0 
  at Duplicati.Library.Main.Controller.Backup (System.String[] inputsources, Duplicati.Library.Utility.IFilter filter) [0x00068] in <118ad25945a24a3991f7b65e7a45ea1e>:0 
  at Duplicati.Server.Runner.Run (Duplicati.Server.Runner+IRunnerData data, System.Boolean fromQueue) [0x00454] in <a6c0c2089b9a44ec9be5057a44f12116>:0

#2

Are you willing to update and see if the issue has been resolved in the 2.0.3.3 beta version?


#3

I updated hoping it would, but it did not. I don’t have access to the logs right now, but I got a different error about a corrupt file, and it suggested I run repair. I did, and I got the same error again, so repair didn’t seem to help. I rebuilt the database, and the corrupt file error went away, but then I got the FinishVolume error again. If it would help, I can get the exact corrupt file error later tonight.


#4

As mentioned here, this error is likely caused by something that happened earlier in the process but wasn’t correctly caught. That’s probably why you’re not seeing a “definitive” fix; there could be multiple error points that occur before Duplicati detects the issue as part of StartVolume:


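To make the message itself a bit less mysterious, here’s a toy sketch of the invariant that’s being violated (this is NOT Duplicati’s actual code, just an illustration loosely modeled on the real IndexVolumeWriter): a volume writer keeps at most one volume open at a time, and StartVolume is simply where a violation gets noticed, not where it was introduced.

class ToyIndexVolumeWriter:
    def __init__(self):
        self._open_volume = None

    def start_volume(self, filename):
        # Toy illustration, not Duplicati's code: the check only fires here,
        # even though whatever left the previous volume "unfinished"
        # happened earlier in the backup.
        if self._open_volume is not None:
            raise RuntimeError(
                "Previous volume not finished, call FinishVolume "
                "before starting a new volume")
        self._open_volume = filename

    def finish_volume(self):
        self._open_volume = None

So the exception tells you where the problem was detected, not what caused it.
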
Not that it’s a fix, but as a test, did you try changing your dblock / Upload Volume size? It seems to have worked here:


Otherwise, for some people it seems the issue is related to running out of temp space:


This issue does appear to be long-standing, so there must be some parameters or environment specifics causing it; otherwise I would expect it to have shown up far more often than it already has (Search results for '"Previous volume not finished"' - Duplicati):


#5

This problem popped up for me recently on 2.0.3.1.

Fortunately it was just my secondary backup whose only purpose is to back up the database of the primary backup. I was planning on recreating this job entirely, but then it magically cleared itself up after 5 failures. Weird.


#6

Sorry for the delay replying, I had other things going on for a while.

I didn’t try changing my dblock / Upload Volume size. What exactly would this do? What happens to old data already sent? In the case you pointed to, the reporter said “on this PARTICULAR backup that would never actually start shipping data to the remote Backblaze server”, but I have 200GB in B2 for this backup set, so it seems different.

I’ve got 11GB free, so unless Duplicati is using a lot of temp space, that’s not the issue.

I took a look again tonight. I first reported the “Previous volume not finished” error on April 1. The logs show the same error every night when the backup ran through April 6. Since then, every night, it logs this:

System.IO.InvalidDataException: Found inconsistency in the following files while validating database: 
<elided>, actual size 263478, dbsize 0, blocksetid: 513378
. Run repair to fix it.
  at Duplicati.Library.Main.Database.LocalDatabase.VerifyConsistency (System.Data.IDbTransaction transaction, System.Int64 blocksize, System.Int64 hashsize, System.Boolean verifyfilelists) [0x000e9] in <ae134c5a9abb455eb7f06c134d211773>:0 
  at Duplicati.Library.Main.Operation.BackupHandler.Run (System.String[] sources, Duplicati.Library.Utility.IFilter filter) [0x00238] in <ae134c5a9abb455eb7f06c134d211773>:0 
  at Duplicati.Library.Main.Controller+<>c__DisplayClass17_0.<Backup>b__0 (Duplicati.Library.Main.BackupResults result) [0x00036] in <ae134c5a9abb455eb7f06c134d211773>:0 
  at Duplicati.Library.Main.Controller.RunAction[T] (T result, System.String[]& paths, Duplicati.Library.Utility.IFilter& filter, System.Action`1[T] method) [0x0014b] in <ae134c5a9abb455eb7f06c134d211773>:0 
  at Duplicati.Library.Main.Controller.Backup (System.String[] inputsources, Duplicati.Library.Utility.IFilter filter) [0x00068] in <ae134c5a9abb455eb7f06c134d211773>:0 
  at Duplicati.Server.Runner.Run (Duplicati.Server.Runner+IRunnerData data, System.Boolean fromQueue) [0x00494] in <670b0e4a9ae144208688fcb192d20146>:0 

The filename and other data appear the same every night. So perhaps this is the “real” underlying error. I tried running repair, then a backup, but the backup failed in the same way. I did a recreate (delete and repair) on the database, but then I got the “Previous volume not finished” error again.

This feels to me like some sort of corruption in the backup set’s files that gets detected during backup but has no mechanism to fix it. At the very least, “Run repair to fix it” is very misleading advice.

At this point, I’m probably just going to nuke the backup set and start over from scratch over the weekend.


#7

This would just create smaller zip files. In your case I think you’re using a 250MB dblock / Upload volume size. Since Duplicati defaults to queuing up 4 of those files for upload, you’d be using about 1GB of temp space.
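As a rough back-of-the-envelope check (a minimal sketch; the queue depth of 4 is just the default mentioned above, and the sizes are your settings plugged in):

def temp_space_mb(dblock_size_mb, queued_uploads=4):
    # Space needed to hold the dblock files that are queued for upload.
    return dblock_size_mb * queued_uploads

print(temp_space_mb(250))  # current setting: 1000 MB, i.e. about 1 GB
print(temp_space_mb(50))   # a smaller test setting: 200 MB

Dropping the size just shrinks that footprint, which is why it’s a cheap way to rule temp space in or out.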

Changing to a smaller size is just an easy way to see if things start working again (which would suggest it’s a temp space issue). It won’t do anything to already backed-up data - those files will stay at the old setting (250MB in your case) UNTIL they get cleaned up as part of the normal compacting process (assuming you’re not using “Keep all versions”). That’s the process that finds small or partially used zip files and re-compresses them into new full-sized / fully utilized ones (then deletes the old ones).

You can safely change the dblock size pretty much any time you want. :slight_smile:

This could be caused by things including:

  • a side effect of whatever is causing the “Previous volume not finished” issue (that’s my guess)
  • an intermittent communications issue causing the wrong size to be stored in the database (there’s a sketch after this list for checking what actually got stored)
  • a destination bug such that the size was incorrectly reported
  • a destination timing issue such that Duplicati requested the file size BEFORE the file was done being written by the destination (since the file couldn’t be opened to get the size, 0 is returned)
  • an actual bug in the Duplicati SQL (though this is unlikely as it would probably appear much more frequently)
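For the “wrong size stored in the database” angle, here’s a minimal sketch of how you could peek at what the local job database actually recorded for the blockset in your error message. The database path is a placeholder, and the Blockset / BlocksetEntry / Block table names are my recollection of the local database schema, so they may differ in your version:

import sqlite3

DB_PATH = "/path/to/your/job-database.sqlite"  # placeholder: point this at your job's local database
BLOCKSET_ID = 513378                           # the blocksetid from the error message

con = sqlite3.connect(DB_PATH)

# The length recorded for the blockset itself.
print(con.execute(
    "SELECT Length FROM Blockset WHERE ID = ?", (BLOCKSET_ID,)).fetchone())

# The sum of the sizes of the individual blocks recorded for that blockset.
# These are roughly the two numbers the consistency check compares.
print(con.execute(
    """SELECT SUM(b.Size)
         FROM BlocksetEntry e
         JOIN Block b ON b.ID = e.BlockID
        WHERE e.BlocksetID = ?""", (BLOCKSET_ID,)).fetchone())

con.close()

(Make a copy of the .sqlite file first and only read from it; this is purely for looking, not fixing.)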

If you have the space, I’d suggest testing a new backup before nuking the old one. Even though you’re having issues adding to the existing backup, it should still be usable for restores.

I’m going to flag @kenkendk and @Pectojin here just in case they have any input, because I know this has been an open (but infrequent) issue for a while now. Even if we can’t pinpoint the cause, having a method of resolving it would be nice.