Previous volume not finished, call FinishVolume before starting a new volume

I see there are a few other posts on this topic, but it doesn’t look like any of the answers are directly applicable. (My /tmp volume has 17 GB free, and there are no Windows service issues, of course.)

I’m running a backup on my Mac to Backblaze B2 using the beta (2.0.2.1) version of Duplicati. The backup is about 200 GB and 400k files. My upload volume size is 250 MB.

Every time, whether on the nightly schedule or manually, I get the error above. I’m running two other backup configurations on this same machine (one to b2, and another to sftp), and those both complete every night, so it doesn’t seem to be a basic configuration issue. I’ve tried repairing the db, rebuilding the db, and everything I can think of besides installing a canary or blowing away the whole backup and starting over.

I see elsewhere that this error is likely not the root cause, but I’m not sure how to debug this. Do you have suggestions on what additional logs I should collect? Thanks.

Here are the logs I have:

Apr 1, 2018 3:31 PM: Message
Expected there to be a temporary fileset for synthetic filelist (4, duplicati-b00091ba58de54eafae421bdf140fe753.dblock.zip.aes), but none was found?

Apr 1, 2018 3:30 PM: Message
scheduling missing file for deletion, currently listed as Uploading: duplicati-i86a457542c3c449d90056363bcc85bf8.dindex.zip.aes

Apr 1, 2018 3:30 PM: Message
scheduling missing file for deletion, currently listed as Uploading: duplicati-bdcb6e3df7be342959d78df7917635afd.dblock.zip.aes

Apr 1, 2018 3:30 PM: Message
removing file listed as Temporary: duplicati-20180401T214210Z.dlist.zip.aes

Apr 1, 2018 3:57 PM: Failed while executing "Backup" with id: 1
System.InvalidOperationException: Previous volume not finished, call FinishVolume before starting a new volume
  at Duplicati.Library.Main.Volumes.IndexVolumeWriter.StartVolume (System.String filename) [0x00010] in <118ad25945a24a3991f7b65e7a45ea1e>:0 
  at Duplicati.Library.Main.Operation.BackupHandler.UpdateIndexVolume () [0x00043] in <118ad25945a24a3991f7b65e7a45ea1e>:0 
  at Duplicati.Library.Main.Operation.BackupHandler.FinalizeRemoteVolumes (Duplicati.Library.Main.BackendManager backend) [0x00169] in <118ad25945a24a3991f7b65e7a45ea1e>:0 
  at Duplicati.Library.Main.Operation.BackupHandler.Run (System.String[] sources, Duplicati.Library.Utility.IFilter filter) [0x00860] in <118ad25945a24a3991f7b65e7a45ea1e>:0 
  at Duplicati.Library.Main.Controller+<>c__DisplayClass16_0.<Backup>b__0 (Duplicati.Library.Main.BackupResults result) [0x0030f] in <118ad25945a24a3991f7b65e7a45ea1e>:0 
  at Duplicati.Library.Main.Controller.RunAction[T] (T result, System.String[]& paths, Duplicati.Library.Utility.IFilter& filter, System.Action`1[T] method) [0x0014b] in <118ad25945a24a3991f7b65e7a45ea1e>:0 
  at Duplicati.Library.Main.Controller.Backup (System.String[] inputsources, Duplicati.Library.Utility.IFilter filter) [0x00068] in <118ad25945a24a3991f7b65e7a45ea1e>:0 
  at Duplicati.Server.Runner.Run (Duplicati.Server.Runner+IRunnerData data, System.Boolean fromQueue) [0x00454] in <a6c0c2089b9a44ec9be5057a44f12116>:0

Are you willing to update and see if the issue has been resolved in the 2.0.3.3 beta version?

I updated hoping it would, but it did not. I don’t have access to the logs right now, but I got a different error about a corrupt file, and it suggested I run repair. I did, and I got the same error again, so repair didn’t seem to help. I rebuilt the database, and the corrupt file error went away, but then I got the FinishVolume error again. If it would help, I can get the exact corrupt file error later tonight.

As mentioned here, this error is likely caused by something that happened earlier in the process but wasn’t correctly caught. That’s probably why you’re not seeing a “definitive” fix, as there could be multiple error points that happen before Duplicati detects the issue as part of StartVolume:


Not that it’s a fix, but as a test, did you try changing your dblock / Upload Volume size? It seems to have worked here:


Otherwise, for some people it seems the issue is related to running out of temp space:


This issue does appear to be long-standing, so there must be some parameters or environment specifics causing it to happen; otherwise I would expect it to have shown up far more often than it already has (Search results for '"Previous volume not finished"' - Duplicati):

This problem popped up for me recently on 2.0.3.1.

Fortunately it was just my secondary backup whose only purpose is to back up the database of the primary backup. I was planning on recreating this job entirely, but then it magically cleared itself up after 5 failures. Weird.

Sorry for the delay replying, I had other things going on for a while.

I didn’t try changing my dblock / Upload Volume size. What exactly would this do? What happens to old data already sent? In the case you pointed to, the reporter said “on this PARTICULAR backup that would never actually start shipping data to the remote Backblaze server”, but I have 200 GB in B2 for this backup set, so it seems different.

I’ve got 11 GB free, so unless Duplicati is using a lot of temp space, that’s not the issue.

I took a look again tonight. I first reported the “Previous volume not finished” error on April 1. The logs show the same error every night when the backup ran through April 6. Since then, every night, it logs this:

System.IO.InvalidDataException: Found inconsistency in the following files while validating database: 
<elided>, actual size 263478, dbsize 0, blocksetid: 513378
. Run repair to fix it.
  at Duplicati.Library.Main.Database.LocalDatabase.VerifyConsistency (System.Data.IDbTransaction transaction, System.Int64 blocksize, System.Int64 hashsize, System.Boolean verifyfilelists) [0x000e9] in <ae134c5a9abb455eb7f06c134d211773>:0 
  at Duplicati.Library.Main.Operation.BackupHandler.Run (System.String[] sources, Duplicati.Library.Utility.IFilter filter) [0x00238] in <ae134c5a9abb455eb7f06c134d211773>:0 
  at Duplicati.Library.Main.Controller+<>c__DisplayClass17_0.<Backup>b__0 (Duplicati.Library.Main.BackupResults result) [0x00036] in <ae134c5a9abb455eb7f06c134d211773>:0 
  at Duplicati.Library.Main.Controller.RunAction[T] (T result, System.String[]& paths, Duplicati.Library.Utility.IFilter& filter, System.Action`1[T] method) [0x0014b] in <ae134c5a9abb455eb7f06c134d211773>:0 
  at Duplicati.Library.Main.Controller.Backup (System.String[] inputsources, Duplicati.Library.Utility.IFilter filter) [0x00068] in <ae134c5a9abb455eb7f06c134d211773>:0 
  at Duplicati.Server.Runner.Run (Duplicati.Server.Runner+IRunnerData data, System.Boolean fromQueue) [0x00494] in <670b0e4a9ae144208688fcb192d20146>:0 

The filename and other data appear the same every night. So perhaps this is the “real” underlying error. I tried running repair, then a backup, but the backup failed in the same way. I did a recreate (delete and repair) on the database, but then I got the “Previous volume not finished” error again.

This feels to me like some sort of corruption in the files in the backup set that is detected during backup, but without any mechanism to fix it. At the very least, “Run repair to fix it” is very misleading advice.

At this point, I’m probably just going to nuke the backup set and start over from scratch over the weekend.

This would just create smaller zip files. In your case I think you’re using a 250 MB dblock / Upload volume size. Since Duplicati defaults to queuing up 4 of those files for upload, you’d be using about 1 GB of temp space.

Changing to a smaller size is just an easy way to see if things start working again (suggesting it’s a temp space issue). It won’t do anything to already backed up data - those files will still stay at the old setting (250 MB in your case) UNTIL they get cleaned up as part of the normal compacting process (assuming you’re not using “Keep all versions”). That’s the process that finds small or partially used zip files and re-compresses them into new full-sized / fully utilized ones (then deletes the old ones).

You can safely change the dblock size pretty much any time you want. :slight_smile:
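
If you want to test it outside the web UI, the volume size is just an advanced option on the command line - something like this (the B2 URL and source path here are only placeholders for whatever your job actually uses):

duplicati-cli backup 'b2://bucket/folder' /path/to/source --dblock-size=50mb

In the web UI it’s the same volume size setting you’ve been changing.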

This could be caused by things including:

  • a side effect of whatever is causing the “Previous volume not finished” issue (that’s my guess)
  • an intermittent communications issue causing the wrong size to be stored in the database
  • a destination bug such that the size was incorrectly reported
  • a destination timing issue such that Duplicati requested the file size BEFORE the file was done being written by the destination (since the file couldn’t be opened to get the size, 0 is returned)
  • an actual bug in the Duplicati SQL (though this is unlikely as it would probably appear much more frequently)

If you have the space, I’d suggest testing a new backup before nuking the old one. Even though you’re having issues adding to the existing backup, it should still be usable for restoring purposes.

I’m going to flag @kenkendk and @Pectojin here just in case they have any input because I know this has been an open (but infrequent) issue for a while now and even if we can’t pinpoint the cause, having a method of resolving it would be nice.

Well, that was unexpected. I lowered the dblock size from 250 MB to 50 MB, and the backup succeeded! It did give a number of warnings and messages, though. Here’s the final log:

DeletedFiles: 5366
DeletedFolders: 1984
ModifiedFiles: 131466
ExaminedFiles: 438989
OpenedFiles: 139555
AddedFiles: 8089
SizeOfModifiedFiles: 432141008
SizeOfAddedFiles: 7320056855
SizeOfExaminedFiles: 216992089916
SizeOfOpenedFiles: 21841350286
NotProcessedFiles: 0
AddedFolders: 2883
TooLargeFiles: 0
FilesWithError: 0
ModifiedFolders: 0
ModifiedSymlinks: 0
AddedSymlinks: 258
DeletedSymlinks: 19
PartialBackup: False
Dryrun: False
MainOperation: Backup
CompactResults: null
DeleteResults:
    DeletedSets: []
    Dryrun: False
    MainOperation: Delete
    CompactResults: null
    ParsedResult: Success
    EndTime: 4/28/2018 3:49:04 PM (1524955744)
    BeginTime: 4/28/2018 3:46:08 PM (1524955568)
    Duration: 00:02:56.4425330
    BackendStatistics:
        RemoteCalls: 29
        BytesUploaded: 581116075
        BytesDownloaded: 359526964
        FilesUploaded: 23
        FilesDownloaded: 4
        FilesDeleted: 0
        FoldersCreated: 0
        RetryAttempts: 0
        UnknownFileSize: 0
        UnknownFileCount: 0
        KnownFileCount: 1422
        KnownFileSize: 183244722678
        LastBackupDate: 4/28/2018 3:21:53 PM (1524954113)
        BackupListCount: 4
        TotalQuotaSpace: 0
        FreeQuotaSpace: 0
        AssignedQuotaSpace: -1
        ReportedQuotaError: False
        ReportedQuotaWarning: False
        ParsedResult: Success
RepairResults: null
TestResults:
    MainOperation: Test
    Verifications: [
        Key: duplicati-bc050fd0058074d4b9463aa7bf9c84b42.dblock.zip.aes
        Value: [
            Key: Extra
            Value: +/u4gSbSdsF0gFbPPKI8ISMII55jF38j+o9suZn0kdE=,
            Key: Extra
            Value: +0LeLKQbdvgPwCRROwpVq/wqKmE2bk25hPrVlLSe0LE=,
            Key: Extra
            Value: +37BXnPuXV4WtKVKkUlDDz74BcWBgd5g0qKSzrrAy1Y=,
            Key: Extra
            Value: +3gyzD1vuCGFXaP2lxpihBUojSplb7HvZYjWI7cvsRI=,
            Key: Extra
            Value: +4kOVT05mmucHIcOMixQSM/KAb5dBM20s0vxITs+q4I=,
...
        ],
        Key: duplicati-bc050fd0058074d4b9463aa7bf9c84b42.dblock.zip.aes
        Value: [],
        Key: duplicati-20180327T134006Z.dlist.zip.aes
        Value: [],
        Key: duplicati-i234b4e83503641e5947952956e45aacd.dindex.zip.aes
        Value: [],
        Key: duplicati-b3ad2f3a6a32d4700854685b6ae3ab943.dblock.zip.aes
        Value: []
    ]
    ParsedResult: Success
    EndTime: 4/28/2018 3:49:31 PM (1524955771)
    BeginTime: 4/28/2018 3:49:05 PM (1524955745)
    Duration: 00:00:25.5104210
ParsedResult: Warning
EndTime: 4/28/2018 3:49:31 PM (1524955771)
BeginTime: 4/28/2018 3:21:53 PM (1524954113)
Duration: 00:27:37.6345950
Messages: [
    removing file listed as Temporary: duplicati-20180428T120000Z.dlist.zip.aes,
    scheduling missing file for deletion, currently listed as Uploading: duplicati-b0a5554ece13c4607ae18611e043adbe9.dblock.zip.aes,
    scheduling missing file for deletion, currently listed as Uploading: duplicati-i1b6edb768d7f4262a2599ed7b0f86871.dindex.zip.aes,
    No remote filesets were deleted,
    No hash or size recorded for duplicati-bc050fd0058074d4b9463aa7bf9c84b42.dblock.zip.aes, performing full verification
]
Warnings: [
    Expected there to be a temporary fileset for synthetic filelist (4, duplicati-b00091ba58de54eafae421bdf140fe753.dblock.zip.aes), but none was found?,
    Error reported while accessing file: /private/tmp/dup-ac8b72b0-2d3b-46f8-9d84-1181d693d78b => Could not find file "/private/tmp/dup-ac8b72b0-2d3b-46f8-9d84-1181d693d78b",
    Failed to process path: /private/tmp/dup-ac8b72b0-2d3b-46f8-9d84-1181d693d78b => Path doesn't exist!
]
Errors: []

Maybe something in there will give a hint as to the real underlying cause. Also, free space on the local disk never went below 10 GB, so I still don’t think temp space is the issue.

I’m not seeing anything particularly unexpected in the log, though I wonder if the “scheduling missing file for deletion” lines are Duplicati getting out of its “Previous volume not finished” state.

If you feel like it, try going back to 250mb and see if the error comes back?

It’s possible the larger dblock size is causing something to time out, resulting in the errors you were getting…

Yeah, I was going to wait to see if tonight’s scheduled backup goes ok, then try that tomorrow to see if the problem recurs.

Without changing any settings (still at 50mb), last night’s scheduled backup failed:

System.InvalidOperationException: Previous volume not finished, call FinishVolume before starting a new volume
  at Duplicati.Library.Main.Volumes.IndexVolumeWriter.StartVolume (System.String filename) [0x00010] in <ae134c5a9abb455eb7f06c134d211773>:0 
  at Duplicati.Library.Main.Operation.BackupHandler.UpdateIndexVolume () [0x00043] in <ae134c5a9abb455eb7f06c134d211773>:0 
  at Duplicati.Library.Main.Operation.BackupHandler.FinalizeRemoteVolumes (Duplicati.Library.Main.BackendManager backend) [0x0016b] in <ae134c5a9abb455eb7f06c134d211773>:0 
  at Duplicati.Library.Main.Operation.BackupHandler.Run (System.String[] sources, Duplicati.Library.Utility.IFilter filter) [0x008e0] in <ae134c5a9abb455eb7f06c134d211773>:0 
  at Duplicati.Library.Main.Controller+<>c__DisplayClass17_0.<Backup>b__0 (Duplicati.Library.Main.BackupResults result) [0x00036] in <ae134c5a9abb455eb7f06c134d211773>:0 
  at Duplicati.Library.Main.Controller.RunAction[T] (T result, System.String[]& paths, Duplicati.Library.Utility.IFilter& filter, System.Action`1[T] method) [0x0014b] in <ae134c5a9abb455eb7f06c134d211773>:0 
  at Duplicati.Library.Main.Controller.Backup (System.String[] inputsources, Duplicati.Library.Utility.IFilter filter) [0x00068] in <ae134c5a9abb455eb7f06c134d211773>:0 
  at Duplicati.Server.Runner.Run (Duplicati.Server.Runner+IRunnerData data, System.Boolean fromQueue) [0x00494] in <670b0e4a9ae144208688fcb192d20146>:0

Darn.

It sounds like whatever is failing is less likely to fail on smaller volumes, but it still can happen.

At this point it’s running every night, and has failed four of the past five days. Is there additional logging or something I can turn on to diagnose this, so it can get fixed for real?

Probably the easiest way to approach this is to generate a log file (for your version I think you’d use --log-file=<path> and --log-level=profiling) then attach it to a post.
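
For example, something like this from the command line (the URL and paths are placeholders - adjust them for your job; the same two options can also be added under the job’s advanced options in the web UI):

duplicati-cli backup 'b2://bucket/folder' /path/to/source --log-file=/tmp/duplicati-profiling.log --log-level=profiling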

The profiling logs helped. The error (not in the logs, but in the generated email body):

C: Failed: Found inconsistency in the following files while validating database:
C: /private/var/folders/zz/zyxvpxvq6csfxvn_n00000s0000068/T/ProductMetadata.plist, actual size 1120007, dbsize 0, blocksetid: 490127
C: .. Run repair to fix it.

The query which seems to generate this, from the logs:

SELECT "CalcLen", "Length", "A"."BlocksetID", "File"."Path" FROM (
SELECT
    "A"."ID" AS "BlocksetID",
    IFNULL("B"."CalcLen", 0) AS "CalcLen",
    "A"."Length"
FROM
    "Blockset" A
LEFT OUTER JOIN
    (
        SELECT
            "BlocksetEntry"."BlocksetID",
            SUM("Block"."Size") AS "CalcLen"
        FROM
            "BlocksetEntry"
        LEFT OUTER JOIN
            "Block"
        ON
            "Block"."ID" = "BlocksetEntry"."BlockID"
        GROUP BY "BlocksetEntry"."BlocksetID"
    ) B
ON
    "A"."ID" = "B"."BlocksetID"

) A, "File" WHERE "A"."BlocksetID" = "File"."BlocksetID" AND "A"."CalcLen" != "A"."Length"  took 00:01:10.704

So I started poking into the sqlite db:

sqlite> select * from File where Path = '/private/var/folders/zz/zyxvpxvq6csfxvn_n00000s0000068/T/ProductMetadata.plist';
192852|/private/var/folders/zz/zyxvpxvq6csfxvn_n00000s0000068/T/ProductMetadata.plist|118275|30435
570675|/private/var/folders/zz/zyxvpxvq6csfxvn_n00000s0000068/T/ProductMetadata.plist|118275|97969
572932|/private/var/folders/zz/zyxvpxvq6csfxvn_n00000s0000068/T/ProductMetadata.plist|490127|98805

sqlite> select * from Blockset where ID in (118275, 490127);
118275|1120009|mM3nZjV9aw/nk2nAZkqn/aMT1kzzY6Z9rHpwK5O0u6Q=
490127|1120007|ucXqV2XWpYqD7bGTnI99H80oPIlqG9HS8EpPpCq3E+A=

sqlite> select * from BlocksetEntry where BlocksetID in (118275, 490127);
118275|0|211318
118275|1|211319
118275|2|211320
118275|3|211321
118275|4|211322
118275|5|211323
118275|6|211324
118275|7|211325
118275|8|211326
118275|9|211327
118275|10|211328

Oh look. There appear to be no blockset entries for BlocksetID 490127.
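
Just to double check, a simple count (my own follow-up query, nothing Duplicati itself runs) confirms there’s nothing there:

sqlite> select count(*) from BlocksetEntry where BlocksetID = 490127;
0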

I ran repair, as suggested. Afterwards, there are still no blockset entries for this BlocksetID. So the DB is corrupt, and repair doesn’t repair it.

That error is removed in the latest canary build AFAIK

If you have the original database (not the recreated one), can you perhaps see when the empty blockset was created?

It could happen if there was some weird hash collision, I guess, but more likely something else has happened.

If you just want to recover, you should be able to just use purge to remove the file, and then run the backup again, but I would really like it if we could zoom in and find the cause of the empty set.

I do have the original database, but I’m not sure how to see when the empty blockset was created. The Blockset table doesn’t seem to have any kind of timestamp column. What should I look for?

I tried to use purge to remove the broken file using the cli, but now I get this:

/Applications/Duplicati.app/Contents/MacOS/duplicati-cli purge 'b2://...' /private/var/folders/zz/zyxvpxvq6csfxvn_n00000s0000068/T/ProductMetadata.plist --dry-run --dbpath=/var/root/.config/Duplicati/WCTNTGAABX.sqlite

Enter encryption passphrase:
No certificates found, you can install some with one of these commands:
    cert-sync /etc/ssl/certs/ca-certificates.crt #for Debian based systems
    cert-sync /etc/pki/tls/certs/ca-bundle.crt #for RedHat derivatives
Read more: http://www.mono-project.com/docs/about-mono/releases/3.12.0/#cert-sync
Starting purge operation

Unable to start the purge process as there are 527 orphan file(s)
Update "2.0.3.3_beta_2018-04-02" detected

I searched the forum and found some very old reports about orphan files and the cleanup command, but there is no such command in my CLI. I tried adding --auto-cleanup=true to my command line, but that didn’t help.

I don’t know if it’s related, but I notice the output says there’s an update available. When I run duplicati-cli help, it says Version: - 2.0.2.1_beta_2017-08-01 at the end, which is surprising since the web UI says You are currently running Duplicati - 2.0.3.3_beta_2018-04-02. I ran duplicati-cli test --auto-update=true, which fetched the update, and coincidentally this happened:

  Listing remote folder ...
  Downloading file (8.48 MB) ...
  Downloading file (202.98 KB) ...
  Downloading file (249.92 MB) ...
duplicati-20180429T194526Z.dlist.zip.aes: 64 errors
	Missing: /private/tmp/
	Missing: /private/var/agentx/
	Missing: /private/var/at/
	Missing: /private/var/at/at.deny
	Missing: /private/var/at/spool/
	Missing: /private/var/at/tabs/
	Missing: /private/var/at/tmp/
	Missing: /private/var/audit/
	Missing: /private/var/audit/20180429201646.not_terminated
	Missing: /private/var/audit/current
	... and 54 more

So there seems to be a problem with the upstream files, too. This is all feeling a lot like a backup was interrupted at some point and Duplicati did not recover well at all.

What do you recommend I do next?

Since I figured I was likely to just have to start all over again, I took a look at the code. I think the sanity checks in the repair logic were keeping the code from actually repairing my damaged database, so I downloaded the dlist file above, deleted it from B2, moved my sqlite file out of the way, and then ran repair. I did it from the CLI, which gives a little more detailed status output. This forced a much longer repair process than I’ve observed before, different from what the UI “recreate” process seems to do.
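
For reference, the sequence was roughly this (a sketch rather than exact commands: the B2 URL is elided as in my earlier purge attempt, the .bak rename is just how I’ve written it here, and the dlist had already been copied locally and deleted from the bucket):

mv /var/root/.config/Duplicati/WCTNTGAABX.sqlite /var/root/.config/Duplicati/WCTNTGAABX.sqlite.bak
/Applications/Duplicati.app/Contents/MacOS/duplicati-cli repair 'b2://...' --dbpath=/var/root/.config/Duplicati/WCTNTGAABX.sqlite

Here’s the output from the repair: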

Rebuild database started, downloading 3 filelists
  Downloading file (8.48 MB) ...
  Downloading file (40.60 MB) ...
  Downloading file (40.61 MB) ...
Mismatching number of blocklist hashes detected on blockset 488284. Expected 0 blocklist hashes, but found 1
Mismatching number of blocklist hashes detected on blockset 488453. Expected 0 blocklist hashes, but found 1
Filelists restored, downloading 707 index files
  Downloading file (339.56 KB) ...
  Downloading file (255.95 KB) ...
  Downloading file (171.62 KB) ...
  Downloading file (140.06 KB) ...
...
Recreate completed, verifying the database consistency
Recreate completed, and consistency checks completed, marking database as complete

After this, I let the normal nightly backup run, and it completed successfully (with a few warnings) for the first time in almost a month. I’m hopeful it will complete tonight as well.

My takeaway is that the tools need a way to force the various kinds of checks and recovery, since not all of the consistency checks run at the same time or in the same places.

As one very concrete example, I was in a catch-22 where I couldn’t purge due to orphans, but I couldn’t clean up the orphans because of a validation failure due to the file I needed to purge. PreBackupVerify is responsible for cleaning up orphans, but the failure at VerifyConsistencyAsync meant I had no way to get there, which is why I forced the system to recreate the db from scratch.

Failed again :frowning:

Failed: Previous volume not finished, call FinishVolume before starting a new volume
Details: System.InvalidOperationException: Previous volume not finished, call FinishVolume before starting a new volume
  at Duplicati.Library.Main.Volumes.IndexVolumeWriter.StartVolume (System.String filename) [0x00010] in <ae134c5a9abb455eb7f06c134d211773>:0
  at Duplicati.Library.Main.Operation.BackupHandler.UpdateIndexVolume () [0x00043] in <ae134c5a9abb455eb7f06c134d211773>:0
  at Duplicati.Library.Main.Operation.BackupHandler.FinalizeRemoteVolumes (Duplicati.Library.Main.BackendManager backend) [0x0016b] in <ae134c5a9abb455eb7f06c134d211773>:0
  at Duplicati.Library.Main.Operation.BackupHandler.Run (System.String[] sources, Duplicati.Library.Utility.IFilter filter) [0x008e0] in <ae134c5a9abb455eb7f06c134d211773>:0
  at Duplicati.Library.Main.Controller+<>c__DisplayClass17_0.<Backup>b__0 (Duplicati.Library.Main.BackupResults result) [0x00036] in <ae134c5a9abb455eb7f06c134d211773>:0
  at Duplicati.Library.Main.Controller.RunAction[T] (T result, System.String[]& paths, Duplicati.Library.Utility.IFilter& filter, System.Action`1[T] method) [0x00072] in <ae134c5a9abb455eb7f06c134d211773>:0