Inconsistency in file size on restore

While doing a restore, I get the following message:

Found inconsistency in the following files while validating database:
/home/robert/Documents/Linguistics_career/Papers/.FBCIndex, actual size 786432, dbsize 0, blocksetid: 288
/home/robert/Documents/Linguistics_career/Papers/Exemplar Theory in phonology extends exemplar models from speech perception and word recognition.doc, actual size 188928, dbsize 0, blocksetid: 293
/home/robert/Documents/Linguistics_career/Papers/Exemplar Theory in phonology extends exemplar models from speech perception and word recognition.docx, actual size 1016413, dbsize 0, blocksetid: 295
/home/robert/Documents/Linguistics_career/Papers/Full noun compound list.xls, actual size 860160, dbsize 0, blocksetid: 299
/home/robert/Documents/Linguistics_career/Papers/KIRCHNER_section_edits.doc, actual size 117760, dbsize 0, blocksetid: 303
… and 9447 more. Run repair to fix it.

How do I run a repair?

Under the job menu you should see an “Advanced” line with a “Database …” link.

On that page you should see a blue button for “Repair” that should do the trick.

Please note that some users are reporting long database recreate times, so please try the repair before going to Recreate.

Also note that with 9,447 more files reporting “actual size n vs. dbsize 0” issues, it sounds like there might be something else going on. So if these errors come back, please let us know.
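If the GUI route is hard to find, the same repair can also be run from the command line with Duplicati’s repair command. A sketch, assuming a local-folder destination; the storage URL, database path, and passphrase are placeholders you’d replace with your own job’s settings:

Duplicati.CommandLine.exe repair "file://D:\DuplicatiBackups" --dbpath="C:\path\to\job-database.sqlite" --passphrase=<your passphrase>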

Can you please be more explicit? I see no “Advanced” line with a “Database …” link. When I run ‘restore’ there is,
first, an Advanced Options dialogue that lets me choose options like ‘accept any ssl certificate’ or ‘alternate destination marker’;
second, under the encryption passphrase window, another Advanced Options dialogue box that says ‘Enter one option per line in command-line format, e.g. --prefix’;
but there is no database link nor blue button for repair.

It’s in the job menu, not restore or the main menu. The issue is that the local database thinks the files should be one size (in this case 0) but what is downloaded is a different size. Since Duplicati doesn’t know if the database is wrong or the download had a problem it ends up declaring the file “suspect”.
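The check is essentially a size comparison. A minimal sketch of the idea (hypothetical, not Duplicati’s actual source):

// Hypothetical sketch: compare the size the local database expects
// against the size of what was actually downloaded, and flag mismatches.
using System;
using System.Collections.Generic;

class ValidateSketch
{
    static void Main()
    {
        var dbSize = new Dictionary<string, long> { ["Papers/.FBCIndex"] = 0 };
        var actualSize = new Dictionary<string, long> { ["Papers/.FBCIndex"] = 786432 };

        foreach (var kv in actualSize)
            if (dbSize.TryGetValue(kv.Key, out var expected) && expected != kv.Value)
                Console.WriteLine($"{kv.Key}, actual size {kv.Value}, dbsize {expected}");
    }
}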

[screenshot: the backup job’s menu, with Advanced expanded to show the Database … link]


Which will bring you to:
[screenshot: the Database maintenance page, with the blue Repair button]

I get the following error:
Failed to authorize using the OAuth service: GetResponse timed out.

I’m not clear on why it even needs the OAuth service, since the database is on my local drive.

The database is a local record of everything that has been backed up to your destination. Since the backup can contain historical versions of files that are no longer the same (or available) locally, a database repair has to download files from the destination and use their contents for the rebuild.

Usually, the small dindex files are all that are needed for a repair - but if those aren’t available for some reason, then the actual dblock archive files would need to be downloaded. In both cases, the need to reference the destination files is why OAuth is required.
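For reference, the files at the destination typically have names along these lines (illustrative examples, not your actual file names):

duplicati-20190523T062208Z.dlist.zip.aes (the list of files in one backup version)
duplicati-b1a2b3….dblock.zip.aes (an archive of actual data blocks)
duplicati-i4c5d6….dindex.zip.aes (a small index into one dblock)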

If you use the main menu “Restore” option and choose “Direct restore from backup files …”, this will skip the local database altogether. It’s a slower process, but it should work even if you had no local database at all (such as after a disk failure). Are you able to restore any files that way that you aren’t able to restore the way you’re trying currently?

‘Direct restore from backup files’ is what I’ve been doing. That’s the process that gives me the ‘found inconsistency’ errors.

Oh, sorry - I didn’t realize that.

Unfortunately I’m not sure what else to suggest - perhaps @kenkendk has some ideas.

Still waiting for some help on this issue …

I was under the impression your restore issue was resolved here:

That reply was from another thread. I opened this thread because many of the files weren’t restored, due to this inconsistency in file size. If you examine your last response, it says
‘Unfortunately I’m not sure what else to suggest - perhaps @kenkendk has some ideas.’ If @kenkendk has some ideas, I’d like to hear them.

Hi,

I’m getting the same error, but I can’t run repair because the server is unbootable; I get the error when trying to restore directly from the files on another location. I’ve tried running the repair with different options (--no-backend-verification, --disable-filetime-check, and others) without luck… I’m running out of options, so I’d appreciate any help.

Thanks!

I’m a little confused about what you tried. It sounds like you have an unbootable server (can you boot it with anything else, such as a rescue CD, to retrieve the database?), and that you’re attempting a Direct restore from backup files on a different system. Despite the failure, did that leave behind a database to repair, or is Repair being used without any database present, in which case it would do a recreate (starting fresh, with a full recreate by default)?

Is this a command-line repair, or did you import the job into the second system (hazardous, because it may damage the remote files)? Also, the options you tested are intended for backup, and very likely to scramble the remote data.

Are you trying to get everything possible restored from the latest version, just a few files, old versions, etc.?

For the latest version of everything, you could try Duplicati.CommandLine.RecoveryTool.exe as described in Disaster Recovery. Being a CLI tool, the remoteurl format can take a little guessing, but Storage Providers has examples.
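The typical sequence is download, then index, then restore. A sketch, where <remoteurl> and the local paths are placeholders for your own destination and folders:

Duplicati.CommandLine.RecoveryTool.exe download <remoteurl> C:\BackendFiles --passphrase=<your passphrase>
Duplicati.CommandLine.RecoveryTool.exe index C:\BackendFiles
Duplicati.CommandLine.RecoveryTool.exe restore C:\BackendFiles 0 --targetpath="C:\Restore"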

How many of these errors are you getting, and do you know anything about the files’ importance or dates?

What version of Duplicati is this?

Yes, I am unable to boot the original server (I already tried everything; the HDD is gone, and that’s the only backup), and all the restore attempts are made on another system using “Direct restore from backup files”. It doesn’t leave any database to repair, even though the error message asks me to run a repair.

Sorry, I meant running the backup, not the repair. The backup is just a .bak file from an SQL Server, and I already tried to restore all the available versions, but had no luck.

I haven’t tried the RecoveryTool; I will, and I’ll get back with the results.

Only the “Inconsistency in file size” error.

Duplicati version: 2.0.4.19

Just tried the RecoveryTool.exe, and when I try to restore, it fails with:

Sorting index file … done!
Building lookup table for file hashes
Index file has 828142 hashes in total
Building lookup table with 2047 entries, giving increments of 404
Computing restore path
Restoring 1 files to C:\Restore
Removing common prefix C:\FILEPATH\FILEPATH\FILEPATH\FILETORESTORE.bak\ from files
error: System.ArgumentOutOfRangeException: startIndex cannot be larger than length of string.
Parameter name: startIndex
at System.String.Substring(Int32 startIndex, Int32 length)
at Duplicati.CommandLine.RecoveryTool.Restore.MapToRestorePath(String path, String prefixpath, String restorepath)
at Duplicati.CommandLine.RecoveryTool.Restore.Run(List`1 args, Dictionary`2 options, IFilter filter)

Command ran: Duplicati.CommandLine.RecoveryTool.exe restore C:\BackendFiles 0 --targetpath="C:\Restore"

So the .bak file is a SQL Server database backup, Duplicati backs that up, and this is the only file needed?

Still confused. The old server is dead and the new one can’t restore any old .bak, so what’s being backed up now?

Are you trying to replicate your old server situation, and trying to put new .bak files onto the old server’s backup?

As a general rule, you never want two active backups going to the same destination folder. Your situation is slightly different because I guess the old system is dead and you’re trying to continue on only the new one?

Still, it’s safer to use a new area for backups while trying to restore from the old. I hope files aren’t lost now.

Is that a privacy-edited original source path of the one .bak file you want to restore? It looks like there’s an attempt to remove the common folder prefix, given multiple files, and possibly it’s confused by a single file. From the FILETORESTORE.bak content, I’ll assume that’s indeed the needed file, and not a folder path.

I suspect it’s getting the error on that last return by trying to take the portion of the path after the prefix; when the prefix is the entire path, the starting index is past the end of the string, because there’s nothing left to take. A workaround may be to force the first return by not giving a --targetpath, assuming the drive letter is the same and there’s no conflict with a newer file. See my question just above trying to understand the .bak file usage on the new server.
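That suspicion can be illustrated with a small self-contained sketch. This is a hypothetical reconstruction of the failing logic, not Duplicati’s actual source; the paths mirror the redacted ones above:

// Hypothetical reconstruction (not Duplicati's actual code) of why a
// single-file restore can throw: the computed "common prefix" is the
// file's own path plus a trailing separator, so the Substring starts
// past the end of the string.
using System;

class PrefixSketch
{
    static string MapToRestorePath(string path, string prefixpath, string restorepath)
    {
        if (string.IsNullOrEmpty(restorepath))
            return path; // no --targetpath: restore to the original location

        // Throws ArgumentOutOfRangeException when prefixpath is longer than path.
        return System.IO.Path.Combine(restorepath, path.Substring(prefixpath.Length));
    }

    static void Main()
    {
        var path = @"C:\FILEPATH\FILETORESTORE.bak";
        var prefix = path + @"\"; // a single file: the prefix is the file itself
        try
        {
            Console.WriteLine(MapToRestorePath(path, prefix, @"C:\Restore"));
        }
        catch (ArgumentOutOfRangeException ex)
        {
            Console.WriteLine(ex.Message); // startIndex cannot be larger than length of string.
        }
    }
}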

I mis-wrote again: I meant running the restore process, not the backup. After running the command without --targetpath it started the restore process; I’ll get back to you with the result when it’s finished. Thank you very much!

It worked, and the restored file was valid! It would be nice to know why it happened so it can be fixed, but it’s a relief to have it restored. Thank you so much @ts678

Phew. I’m glad it worked, but I’m still puzzled by why it wasn’t working before (and what it means for your use). It almost looked like you had some missing dblocks; however, apparently you had all the blocks for that file.

If you still have the download of the remote files, you could see if a normal GUI restore from the local files works into some other location, also using the option --no-local-blocks=true to make sure it actually uses the backup. Although I think you’d hear about any download issues, this would be a way to rule them out, since no downloads would be involved.

I suspect your database’s .bak files vary enough from one to the next (assuming it’s a kind of serialization) that Duplicati can’t find much deduplication to do, so basically it must upload most of the blocks each time. This can work to your advantage if you want to do a from-clean test to see if you can reproduce the issues.

Keeping a --log-file at --log-file-log-level=Retry, or watching the live log at About --> Show log --> Live, may help. If any issue can be reproduced, there are higher levels of debugging possible if you’re interested in trying it.
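For example, adding these two advanced options to the job would write a retry-level log (the log path is a placeholder):

--log-file=C:\duplicati\job-log.txt
--log-file-log-level=Retry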