I’m trying to restore a Duplicati backup from Amazon Drive, but at the “Select file” step, after “Fetching path information”, I get the warning “Got 1 warning(s)”.
Duplicati - 2.0.1.73_experimental_2017-07-15
MainOperation: Repair
ParsedResult: Warning
EndTime: 06/09/2017 15:10:06
BeginTime: 06/09/2017 15:09:49
Duration: 00:00:16.9067528
Messages: [
Rebuild database started, downloading 1 filelists,
Recreate/path-update completed, not running consistency checks
]
Warnings: [
Failed to process file: duplicati-20170513T223540Z.dlist.zip.aes => Invalid manifest detected, the field Blocksize has value 512000 but the value 102400 was expected
]
Errors: []
BackendStatistics:
RemoteCalls: 2
BytesUploaded: 0
BytesDownloaded: 318541
FilesUploaded: 0
FilesDownloaded: 1
FilesDeleted: 0
FoldersCreated: 0
RetryAttempts: 0
UnknownFileSize: 0
UnknownFileCount: 0
KnownFileCount: 0
KnownFileSize: 0
LastBackupDate: 01/01/0001 00:00:00
BackupListCount: 0
TotalQuotaSpace: 0
FreeQuotaSpace: 0
AssignedQuotaSpace: 0
ParsedResult: Success
Recreate/path-update completed, not running consistency checks
Failed to process file: duplicati-20170513T223540Z.dlist.zip.aes
Duplicati.Library.Main.Volumes.InvalidManifestException: Invalid manifest detected, the field Blocksize has value 512000 but the value 102400 was expected
at Duplicati.Library.Main.Volumes.VolumeBase.ManifestData.VerifyManifest(String manifest, Int64 blocksize, String blockhash, String filehash)
at Duplicati.Library.Main.Volumes.VolumeReaderBase..ctor(ICompression compression, Options options)
at Duplicati.Library.Main.Operation.RecreateDatabaseHandler.DoRun(LocalDatabase dbparent, Boolean updating, IFilter filter, NumberedFilterFilelistDelegate filelistfilter, BlockVolumePostProcessor blockprocessor)
This means that the backup was made with --blocksize=512000, while the restore is attempting to run with the default --blocksize=100kb. I’m not sure why this is not picked up automatically, but you can fix it by setting --blocksize=512000 in the advanced options.
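For anyone doing this from the command line instead of the GUI advanced options, a direct restore with the blocksize forced to the backup’s value looks roughly like this; the amzcd URL, authid, passphrase, and restore path are placeholders, not values from this thread:

```
# Sketch of a direct restore that overrides the default 100KB blocksize.
# URL, authid, passphrase, and restore path are placeholders.
Duplicati.CommandLine.exe restore "amzcd://some-folder?authid=<your-authid>" "*" --passphrase=<your-passphrase> --blocksize=512000 --restore-path="C:\restore-target"
```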
Sorry to revive this topic, but I’d like to add this for future reference.
If you also set a non-default dblock size, you will have to provide --dblock-size as well; otherwise you will also get the error above when you try to restore without the local database. Example:
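Something along these lines; the 200MB dblock size here is only an illustration, and the URL, authid, passphrase, and restore path are placeholders:

```
# Sketch: direct restore of a backup that was made with non-default
# --blocksize and --dblock-size values (all values are placeholders).
Duplicati.CommandLine.exe restore "amzcd://some-folder?authid=<your-authid>" "*" --passphrase=<your-passphrase> --blocksize=512000 --dblock-size=200MB --restore-path="C:\restore-target"
```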
The workaround was from Dec 29, 2017. The fix in the same thread was Jun 30, 2018.
Are you saying that 2.0.4.5 has been tested and shows the --blocksize problem again?
What’s more concerning is that if --dblock-size must be given, how do you give it if the backup has several? Choosing sizes in Duplicati says that (unlike --blocksize) it can be changed at will for an existing backup:
Unlike the chunk size described above, it can be beneficial to both increase or decrease the volume size to fit your connection characteristics. Also, the volume size can be changed after a backup has been created.
A quick search of known recent issues didn’t find this, but if it’s well proven then maybe you should file one.
I just tested it again on the latest canary (duplicati-2.0.4.15_canary_2019-02-06) using random files, and it works just as the developer says in the above GitHub issue.
I was able to restore with just --blocksize and didn’t have to supply --dblock-size.
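For anyone who wants to repeat the test, it amounts to something like this; the local file:// paths and passphrase are placeholders, and I’m assuming --no-local-db to force a restore that ignores the local database:

```
# 1. Back up some random test files with a non-default blocksize.
Duplicati.CommandLine.exe backup "file://C:\duplicati-test-target" "C:\duplicati-test-source" --passphrase=test --blocksize=512000
# 2. Restore without the local database, supplying only --blocksize (no --dblock-size).
Duplicati.CommandLine.exe restore "file://C:\duplicati-test-target" "*" --passphrase=test --blocksize=512000 --no-local-db=true --restore-path="C:\duplicati-test-restore"
```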
The developer comment appears not to be that post, but this one from Jun 29, 2018, with a fix the next day for:
Something goes wrong when restoring from the GUI, where the parameter --blocksize=100kb is set, causing the auto-detection of the blocksize to be disabled (the logic is that the user has explicitly requested a specific value by setting the option).
and the fix code (linked earlier) appears to be for --blocksize, not --dblock-size. Do you need --blocksize?
From my reading (or perhaps misreading), this is all supposed to be automatic, even for a direct restore.