Missing fileset IDs?

I’ve been stymied trying to repair this backup, which was interrupted nearly halfway through. Repair consistently flags two dblock files as missing 511 blocks each, but lists no fileset IDs (as seen below):

Repair cannot acquire 511 required blocks for volume duplicati-be27ecf833ff248f78182f2b9fedbeaf3.dblock.zip.aes, which are required by the following filesets: 
This may be fixed by deleting the filesets and running repair again
Failed to perform cleanup for missing file: duplicati-be27ecf833ff248f78182f2b9fedbeaf3.dblock.zip.aes, message: Repair not possible, missing 511 blocks.
If you want to continue working with the database, you can use the "list-broken-files" and "purge-broken-files" commands to purge the missing data from the database and the remote storage. => Repair not possible, missing 511 blocks.
If you want to continue working with the database, you can use the "list-broken-files" and "purge-broken-files" commands to purge the missing data from the database and the remote storage.

I suspect the lack of a fileset ID is what makes the subsequent commands (list-broken-files, purge-broken-files, and affected) less than useful:

list-broken-files
2017-10-30 05:13:07Z - Information: No broken filesets found in database, checking for missing remote files
2017-10-30 05:13:07Z - Information: Backend event: List - Started:  ()
2017-10-30 05:13:30Z - Information: Backend event: List - Completed:  (3.65 KB)
2017-10-30 05:13:30Z - Information: Marked 2 remote files for deletion
2017-10-30 05:13:33Z - Information: No broken filesets found

Is the lack of a numeric fileset ID expected in this situation? Thanks.

The initial backup to pCloud used WebDAV, but I migrated the config to a local OSXFUSE instance since it seems more robust.

APIVersion : 1
PasswordPlaceholder : **********
ServerVersion : 2.0.2.12
ServerVersionName : - 2.0.2.12_canary_2017-10-20
ServerVersionType : Canary
BaseVersionName : 2.0.2.12_canary_2017-10-20
DefaultUpdateChannel : Canary
DefaultUsageReportLevel : Information
ServerTime : 2017-10-29T23:56:08.835264-07:00
OSType : OSX
DirectorySeparator : /
PathSeparator : :
CaseSensitiveFilesystem : true
MonoVersion : 4.8.0
MachineName : FUJI.local
NewLine :
CLRVersion : 4.0.30319.42000
CLROSInfo : {"Platform":"Unix","ServicePack":"","Version":"16.7.0.0","VersionString":"Unix 16.7.0.0"}
ServerModules : []
UsingAlternateUpdateURLs : false
LogLevels : ["Profiling","Information","Warning","Error"]
SuppressDonationMessages : false
BrowserLocaleSupported : true
backendgroups : {"std":{"ftp":null,"ssh":null,"webdav":null,"openstack":"OpenStack Object Storage / Swift","s3":"S3 Compatible","aftp":"FTP (Alternative)"},"local":{"file":null},"prop":{"s3":null,"azure":null,"googledrive":null,"onedrive":null,"cloudfiles":null,"gcs":null,"openstack":null,"hubic":null,"amzcd":null,"b2":null,"mega":null,"box":null,"od4b":null,"mssp":null,"dropbox":null,"jottacloud":null}}
GroupTypes : ["Local storage","Standard protocols","Proprietary","Others"]
Backend modules: aftp amzcd azure b2 box cloudfiles dropbox ftp file googledrive gcs hubic jottacloud mega onedrive openstack s3 ssh od4b mssp sia tahoe webdav
Compression modules: zip 7z
Encryption modules: aes gpg

It’s Monday so maybe my brain isn’t working yet, but I’m not seeing the missing fileset ID about which you are asking.

The way I read the messages:

  1. File duplicati-be27ecf833ff248f78182f2b9fedbeaf3.dblock.zip.aes is corrupted and missing 511 blocks of file data (exactly how much data that is depends on your blocksize= parameter setting, but the default is 50KB per block, which would make it about 25,550 KB of missing data)
  2. The corrupt file can’t be fixed, so it must be replaced
  3. The list-broken-files command confirms the broken file (I expect the 2nd remote file marked for deletion is the dindex or dlist file associated with the dblock file)
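The arithmetic in step 1 can be sanity-checked directly (this assumes the 50KB-per-block figure quoted above; substitute your actual blocksize= setting if it differs):

```python
# Rough size of the missing data: block count * block size.
# 50 KB per block is the figure quoted above, not a verified
# setting for this particular backup.
missing_blocks = 511
block_size_kb = 50
missing_kb = missing_blocks * block_size_kb
print(missing_kb)  # 25550 KB, i.e. roughly 25 MB
```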

Did you try running the purge-broken-files command?

Also, you mentioned this was a moved backup - did it ever work correctly for you after the move, or did this issue start when the move happened?

It is supposed to list a sequence of filesets after the message, but instead it just shows the next output line, so I think the diagnosis is correct.

No, it is not expected, and honestly a bit confusing.

Can you make a bugreport of the database? Then I will see if I can figure out why there are blocks that are counted as “required” and at the same time “not used”.

Hmmm, the database is less than 500 MB, but create-bug-report did not complete (after 7 hours). I found the mono-sgen32 process running and chewing up CPU time, but it did not respond to stop signals sent from the Web UI, so I killed it from the command line.

I’ll try bugreport one more time after I recreate the database, otherwise I’ll write this one off and start over with a fresh installation. Thanks!

Apparently I was not patient enough the first time; it took 11 hours to complete the bugreport.
FWIW here’s the status report after recreating the database:

MainOperation: Repair
RecreateDatabaseResults:
    MainOperation: Repair
    ParsedResult: Success
    EndTime: 10/31/2017 3:42:16 PM
    BeginTime: 10/31/2017 3:23:07 PM
    Duration: 00:19:09.1506300
    BackendStatistics:
        RemoteCalls: 1853
        BytesUploaded: 0
        BytesDownloaded: 185995708
        FilesUploaded: 0
        FilesDownloaded: 1852
        FilesDeleted: 0
        FoldersCreated: 0
        RetryAttempts: 0
        UnknownFileSize: 0
        UnknownFileCount: 0
        KnownFileCount: 0
        KnownFileSize: 0
        LastBackupDate: 1/1/0001 12:00:00 AM
        BackupListCount: 0
        TotalQuotaSpace: 0
        FreeQuotaSpace: 0
        AssignedQuotaSpace: 0
        ParsedResult: Success
ParsedResult: Error
EndTime: 10/31/2017 3:42:17 PM
BeginTime: 10/31/2017 3:23:07 PM
Duration: 00:19:09.8411060
Messages: [
    Rebuild database started, downloading 1 filelists,
    Filelists restored, downloading 1851 index files,
    Recreate completed, verifying the database consistency,
    Recreate completed, and consistency checks completed, marking database as complete
]
Warnings: []
Errors: [
    Remote file referenced as duplicati-bcb8cb4d9a67042bea2c7b2a020b884ed.dblock.zip.aes, but not found in list, registering a missing remote file,
    Remote file referenced as duplicati-be27ecf833ff248f78182f2b9fedbeaf3.dblock.zip.aes, but not found in list, registering a missing remote file
]

For me, generating an 85 MB bugreport.zip file from a 143 MB sqlite file took ~60 seconds on an 8-processor 3.5 GHz Xeon Windows 7 x64 machine with 32 GB of memory.

Did it take 11 hours from the time you clicked “Create bug report…” or from the time you saw “Creating bug report…” in the status line? I’m wondering if a backup job was running so the bug report request got queued behind that…

That’s from the status line and the “scrubbing file names” message in the log file. I’ve got a 4-core i7 hackintosh with 16 GB of RAM; the bugreport was 245 MB. No backup jobs were running. Perhaps it correlates with the size of the backup job? This one is about 260 GB; if I maximize my 6 Mb/sec upload bandwidth, it looks like it will take about 5 days to complete.

@imnxnyer: Thanks for the bugreport.

I have looked through it, and the reported file really is unused, just as reported.
The problem is that there is another file duplicati-i8a12d5c9452a46f395af2f070e709cca.dindex.zip.aes that lists what is supposed to be inside duplicati-be27ecf833ff248f78182f2b9fedbeaf3.dblock.zip.aes.

Due to a logic issue in Duplicati, it records the blocks that it finds inside the dindex file. Then, at the check where you see the error, it figures out that these blocks are missing. That is expected, but it should remove unused blocks prior to checking for missing blocks.
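The ordering issue can be sketched in miniature with a toy two-table layout (this is NOT Duplicati’s actual schema, just an illustration): blocks registered from a dindex file but referenced by no file entry should be pruned before the missing-block check runs.

```python
import sqlite3

# Toy stand-in schema: Block records known blocks per volume,
# BlocksetEntry records which blocks are actually used by files.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Block (ID INTEGER PRIMARY KEY, VolumeID INTEGER);
    CREATE TABLE BlocksetEntry (BlockID INTEGER);
""")
# Volume 1's blocks are referenced by files; volume 2's block came
# only from a stray dindex file and is referenced by nothing.
con.executemany("INSERT INTO Block (ID, VolumeID) VALUES (?, ?)",
                [(1, 1), (2, 1), (3, 2)])
con.execute("INSERT INTO BlocksetEntry (BlockID) VALUES (1), (2)")

# Pruning unused blocks first means the later "missing blocks for
# volume 2" check has nothing left to complain about.
con.execute("""DELETE FROM Block
               WHERE ID NOT IN (SELECT BlockID FROM BlocksetEntry)""")
remaining = [r[0] for r in con.execute("SELECT ID FROM Block ORDER BY ID")]
print(remaining)  # [1, 2]
```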

For now, there are two ways to fix it.

Simple fix is to remove the file duplicati-i8a12d5c9452a46f395af2f070e709cca.dindex.zip.aes and then rebuild the database.

The other way is a bit more dangerous, but you can open the database with SQLite Browser, and use the “Execute SQL” tab to run this:

DELETE FROM "Block" WHERE "VolumeID" = 3703;               -- blocks attributed to the broken dblock volume
DELETE FROM "RemoteVolume" WHERE "ID" IN (3703, 2813);     -- the dblock record and its dindex record
DELETE FROM "IndexBlockLink" WHERE "IndexVolumeID" = 2813; -- the link between the two

Then remove the duplicati-i8a12d5c9452a46f395af2f070e709cca.dindex.zip.aes and all checks will pass.
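Before running those statements against the real .sqlite file, it can be reassuring to dry-run them against a minimal stand-in (the table shapes below are simplified, NOT Duplicati’s full schema, and the IDs 3703/2813 are specific to this bugreport):

```python
import sqlite3

# Dry-run of the three suggested DELETEs against a stand-in schema
# so the effect is visible before touching the real database.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Block (ID INTEGER, VolumeID INTEGER);
    CREATE TABLE RemoteVolume (ID INTEGER, Name TEXT);
    CREATE TABLE IndexBlockLink (IndexVolumeID INTEGER, BlockVolumeID INTEGER);
""")
con.execute("INSERT INTO Block VALUES (1, 3703)")
con.execute("INSERT INTO RemoteVolume VALUES (3703, 'dblock'), (2813, 'dindex'), (1, 'other')")
con.execute("INSERT INTO IndexBlockLink VALUES (2813, 3703)")

con.execute('DELETE FROM "Block" WHERE "VolumeID" = 3703')
con.execute('DELETE FROM "RemoteVolume" WHERE "ID" IN (3703, 2813)')
con.execute('DELETE FROM "IndexBlockLink" WHERE "IndexVolumeID" = 2813')

# Only the unrelated volume record should survive.
remaining = con.execute("SELECT COUNT(*) FROM RemoteVolume").fetchone()[0]
print(remaining)  # 1
```

On the real database, making a file-level copy of the .sqlite file first is a cheap safety net.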

Thanks for checking this out. I ran the SQL commands as directed and found two instances of the dindex file: duplicati-i8a12d5c9452a46f395af2f070e709cca.dindex.zip.aes and a duplicati-i8a12d5c9452a46f395af2f070e709cca.dindex.zip.aes copy, so I removed both. A subsequent repair resulted in the following output:

 Listing remote folder ...
promoting uploaded complete file from Uploading to Uploaded: duplicati-i8a12d5c9452a46f395af2f070e709cca.dindex.zip.aes copy
Repair cannot acquire 511 required blocks for volume duplicati-bcb8cb4d9a67042bea2c7b2a020b884ed.dblock.zip.aes, which are required by the following filesets: 
This may be fixed by deleting the filesets and running repair again
Failed to perform cleanup for missing file: duplicati-bcb8cb4d9a67042bea2c7b2a020b884ed.dblock.zip.aes, message: Repair not possible, missing 511 blocks.
If you want to continue working with the database, you can use the "list-broken-files" and "purge-broken-files" commands to purge the missing data from the database and the remote storage. => Repair not possible, missing 511 blocks.
If you want to continue working with the database, you can use the "list-broken-files" and "purge-broken-files" commands to purge the missing data from the database and the remote storage.
Return code: 0

I double-checked the database and confirmed that the deleted records are gone, and noticed a record for the ‘copy’ file adjacent to the ID=2813 record deleted from RemoteVolume:

"2814"	"30"	"duplicati-i8a12d5c9452a46f395af2f070e709cca.dindex.zip.aes copy"	"Index"	"541"	"YsW0h3s-(_obfuscated_)-mQ7sKL4TM="	"Uploading"	"0"	"0"

Do I need to remove another record?

That looks like a parsing problem in Duplicati; it should ignore such a file. Did you create this file yourself?

I think this can explain why things went bad. Duplicati removed the original file, but then it found this extra file, recreated the index file, and now it keeps hanging around.
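The parsing trouble is easy to reproduce in miniature. A strict pattern in the spirit of Duplicati’s naming scheme (this regex is an assumption for illustration, not Duplicati’s actual parser) matches the real file but not the one with the trailing “ copy”:

```python
import re

# Hypothetical filename pattern modeled on the names in this thread:
# duplicati-<b|i><32 hex chars>.<dblock|dindex>.zip.aes
# The real parser differs; this only illustrates why " copy" breaks matching.
pattern = re.compile(r"^duplicati-[bi][0-9a-f]{32}\.(dblock|dindex)\.zip\.aes$")

good = "duplicati-i8a12d5c9452a46f395af2f070e709cca.dindex.zip.aes"
bad = "duplicati-i8a12d5c9452a46f395af2f070e709cca.dindex.zip.aes copy"
print(bool(pattern.match(good)), bool(pattern.match(bad)))  # True False
```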

I should still fix the issue so that it does not complain about missing unused blocks, though.

In any case, that file should be deleted, something like:

DELETE FROM "RemoteVolume" WHERE "ID" = 2814;              -- the stray "copy" dindex record
DELETE FROM "IndexBlockLink" WHERE "IndexVolumeID" = 2814; -- any links pointing at it
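After a cleanup like this, it may be worth scanning RemoteVolume for any other stray “ copy” rows before deleting anything (stand-in table and made-up second row below; the same SELECT can be run against the real database in SQLite Browser first):

```python
import sqlite3

# Stand-in RemoteVolume table with one stray " copy" row and one
# normally named row (both rows are illustrative, not real data).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE RemoteVolume (ID INTEGER, Name TEXT)")
con.executemany("INSERT INTO RemoteVolume VALUES (?, ?)", [
    (2814, "duplicati-i8a12d5c9452a46f395af2f070e709cca.dindex.zip.aes copy"),
    (2815, "duplicati-i9b23e6da563b57f4a6bf3f181f810ddb.dindex.zip.aes"),
])
strays = con.execute(
    "SELECT ID FROM RemoteVolume WHERE Name LIKE '% copy'").fetchall()
print(strays)  # [(2814,)]
```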

Yes, I must have created it back when I was tinkering with removing files to repair the database.

OK, I think I’m getting the hang of this. That worked, and it got me back to what I seem to recall was the original problem: the file in this message failing the header check because it was truncated to 0 bytes:

Repair cannot acquire 511 required blocks for volume duplicati-bcb8cb4d9a67042bea2c7b2a020b884ed.dblock.zip.aes

Repair could not get around this, so I removed the records referencing the truncated file from the RemoteVolume and IndexBlockLink tables, and the database repair operation came back as valid.

Thanks so much for the excellent product and great support!