Repeated failure to repair


#1

Dear members,

For about a month now, a daily backup job has been failing and advising me to repair.
The error messages read:

16 nov. 2017 18:47: Message
    Processing all of the 9 volumes for blocklists
16 nov. 2017 18:42: Message
    Probing 5 candidate blocklist volumes
16 nov. 2017 18:42: Message
    Remote file referenced as duplicati-b1f2f6832508540e8848bbb64741f6c69.dblock.zip.aes, but not found in list, registering a missing remote file
16 nov. 2017 18:42: Message
    Remote file referenced as duplicati-bc944a56ca034406c92766048cb80c56e.dblock.zip.aes, but not found in list, registering a missing remote file
16 nov. 2017 18:42: Message
    Remote file referenced as duplicati-ba972d97fad9d41e1bd8e7e0078044c4c.dblock.zip.aes, but not found in list, registering a missing remote file
16 nov. 2017 18:42: Message
    Filelists restored, downloading 9 index files
16 nov. 2017 18:42: Message
    Rebuild database started, downloading 5 filelists

What is the best way to repair this? I am not shy of the CLI.

Please advise.


#2

Did you try this

[image]

and then Repair?


#3

Yes, yes, several times. But to no avail!

And the other option, Recreate (delete and repair), as well.



#4

In testing (when I was changing the destination / deleting the destination backup) I was having the same issues:

  1. Create a backup to the remote.
  2. Delete the remote backup.
  3. Run Duplicati: it fails.

I tried Repair, Delete, and Recreate, but none of them re-creates the backup remotely and backs up again from scratch. Duplicati is holding onto, and expecting, old data at the destination that is gone. Shouldn't Recreate at least reset things and run a new backup from scratch?

Once I was in this state, the only thing I was able to do to get it operational again was to export the job and import it as a new profile.


#5

The Repair (and Recreate) processes access the remote backup files to build the local SQLite database file. If the remote files are gone then there’s nothing from which to build (or restore).

At that point you are essentially starting over which should be doable by any of:

  • Delete local database and run job
  • Export the job, then import it as a whole new job
  • Create a new job from scratch

However, I’m not sure how the process works if only SOME of the remote files are missing (which is what it sounds like @silversword411 is experiencing).


#6

Given that there is a local database and a remote database, maybe there should be 3 (or 6) buttons on the database page to make database fixing easier?

Local Database:
Repair | Rebuild (from files) | Delete and recreate
Remote Database:
Repair | Rebuild (from Local database) | Delete and recreate


#7

There is only a local database; the backup information at your destination is all stored in individual zip (or 7z) files as part of the actual backup.

When a local Repair is run, it essentially compares the information stored in the local database with the information in the remote compressed files.
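The "registering a missing remote file" messages in the OP's log come from exactly this kind of comparison. A rough sketch of the idea (this is an illustration, not Duplicati's actual code; the dblock names are taken from the log above, the dlist name is made up):

```python
def find_missing(referenced_in_db, listed_on_remote):
    """Return files the local database references but the remote listing lacks."""
    return sorted(set(referenced_in_db) - set(listed_on_remote))

# Files the database expects (dblock names from the OP's log, dlist name invented):
referenced = {
    "duplicati-b1f2f6832508540e8848bbb64741f6c69.dblock.zip.aes",
    "duplicati-bc944a56ca034406c92766048cb80c56e.dblock.zip.aes",
    "duplicati-ba972d97fad9d41e1bd8e7e0078044c4c.dblock.zip.aes",
    "duplicati-20171116T184200Z.dlist.zip.aes",
}
# What the destination actually reports:
on_remote = {"duplicati-20171116T184200Z.dlist.zip.aes"}

for name in find_missing(referenced, on_remote):
    # Mirrors the log wording: referenced but not found in the remote list
    print(f"registering a missing remote file: {name}")
```

Each name printed corresponds to one of the "but not found in list" messages in the log.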


#8

Can you figure out in some way why those files are missing?
I would say that the best approach is to start over, but without figuring out why the files are missing, you will end up in the same situation.

The problem here is that not all information is stored in the database. The dblock files contain the actual file data, and once they are missing it is not certain that they can be recreated (e.g. if the files being backed up have changed since).

The error message from the OP states that there are 3 dblock files that were supposed to exist, but for some reason they are missing.

The only way to recover from something like this is to remove all files that reference the missing dblock files. This is what the purge-broken-files operation does, but currently this is not supported on a partially recreated database (because it might start deleting things that you do not want deleted).
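Conceptually, the selection that purge-broken-files performs looks like this (a minimal sketch with invented data structures and names, not Duplicati's implementation): every backed-up file whose block list touches a missing dblock is considered broken and gets removed from the backup.

```python
def broken_files(file_blocks, block_location, missing_dblocks):
    """Find backed-up files that reference a block stored in a missing dblock.

    file_blocks:     file path -> set of block hashes making up the file
    block_location:  block hash -> name of the dblock archive holding it
    """
    return sorted(
        path
        for path, blocks in file_blocks.items()
        if any(block_location[b] in missing_dblocks for b in blocks)
    )

# Invented example data:
file_blocks = {
    "docs/report.txt": {"h1", "h2"},
    "photos/cat.jpg": {"h3"},
}
block_location = {"h1": "dblock-A", "h2": "dblock-B", "h3": "dblock-B"}

# If dblock-B is gone, every file with a block inside it is broken:
print(broken_files(file_blocks, block_location, {"dblock-B"}))
```

Note how one missing dblock can break several files at once when blocks are shared across an archive; that is why the damage from a single lost file can be surprisingly large.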


#9

How do I repair a dblock file with the command line?


#10

You cannot really repair dblock files. If you are lucky and all the data a dblock should contain is still on your machine, you can recreate it. Otherwise, you will need to remove the broken dblock file and then use list-broken-files and purge-broken-files to recover to a working state.
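The "if you are lucky" condition can be made concrete: a lost dblock is recreatable only if every block it held can still be produced by re-reading and hashing the current source data. A sketch of that check (illustrative only; names and block contents are invented):

```python
import hashlib

def block_hash(data: bytes) -> str:
    """Identify a block by its content hash, as deduplicating backups do."""
    return hashlib.sha256(data).hexdigest()

def can_recreate(lost_block_hashes, current_source_blocks):
    """True if every block of the lost dblock can be rebuilt from current data."""
    available = {block_hash(b) for b in current_source_blocks}
    return set(lost_block_hashes) <= available

# Hashes of the blocks the lost dblock contained, at backup time:
lost = [block_hash(b) for b in (b"chunk-1", b"chunk-2")]

print(can_recreate(lost, [b"chunk-1", b"chunk-2"]))         # source unchanged
print(can_recreate(lost, [b"chunk-1", b"chunk-2-edited"]))  # source changed
```

If the source file has changed since the backup, its blocks hash differently, the old content cannot be regenerated, and purging the broken entries is the only way forward.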


#11

It seems to me there is some important information here about the ways in which Duplicati can fail, but I'm not sure I understand things correctly. So, let's assume we have a flawless backup archive and we now go ahead and randomly delete one dblock file. What happens?

  1. Whenever the backup job runs, it will fail, i.e. it will no longer back up anything, right?
  2. Duplicati will issue a message (BTW: why is this only a "message", not a warning or an error?) complaining that it can't find that file and is registering it as a "missing remote file" (as reported in the OP)
  3. Duplicati will issue one of those red error messages, advising us to repair the local database
  4. If we run repair, nothing changes (backup still fails). Why?
  5. If we run recreate and repair, nothing changes either. Why? Same reason as above?
  6. Last resort: use the purge-broken-files option. But that doesn't work either, because it is not supported on a partially recreated database.

Conclusion: the random deletion of a single file in the backup renders the entire backup unusable, not only for restore but even for any further backup. Is that correct?

Bonus questions:

  1. If we had not tried to recreate our database in step 5, would it have been possible to use purge-broken-files?
  2. If we had used purge-broken-files, the backup would run again, but what would be the consequences of the purge? Obviously some files/versions would be missing from the backup? I suppose the exact damage depends strongly on what happened to be in the deleted dblock file (if it contained chunks that were used (referenced) a lot, the damage will be bigger than if it contained a unique chunk used only once, right?)
  3. Now, what happens if the affected file(s) (i.e. those which will disappear from the backup) still exist locally? Will Duplicati back them up again so that everything is back in order?

#12

I believe a restore directly from the destination would still work; it would just be slower, as it would need to download the dindex files to learn which dblock files contained which blocks.

It would restore everything it could and, I believe, put empty blocks in files that had parts stored in the missing dblock.
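The "empty blocks" behavior can be sketched like this (an illustration with made-up hashes and a toy block size, not Duplicati's restore code): the file is reassembled block by block, and any block whose dblock is missing is replaced with zero bytes of the same size.

```python
BLOCK_SIZE = 4  # tiny for the example; real block sizes are far larger

def restore_file(block_hashes, fetch):
    """Reassemble a file from its block list.

    fetch(hash) returns the block's bytes, or None if its dblock is missing;
    missing blocks become zero-filled placeholders of the block size.
    """
    out = bytearray()
    for h in block_hashes:
        data = fetch(h)
        out += data if data is not None else b"\x00" * BLOCK_SIZE
    return bytes(out)

# "h2" lived in the lost dblock, so it is absent from the store:
store = {"h1": b"abcd", "h3": b"ijkl"}
restored = restore_file(["h1", "h2", "h3"], store.get)
print(restored)  # b'abcd\x00\x00\x00\x00ijkl'
```

The file comes back at the right length with as much real data as possible, which matches the "restore as much as possible" behavior described later in the thread.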


#13

If that is the case, then I don't understand why Duplicati can handle the missing file in one place but not in another.


#14

Dealing with a missing file as part of a restore simply means restoring empty blocks where the missing data was. But when trying to do a backup while a file is missing, decisions have to be made about how to handle the missing data: should it be rebuilt from the current data (if the file still exists)? If multiple versions of a file were in a single archive, should they be re-populated with current data or left empty? Issues like that are what make handling this automatically, while also trying to add data, difficult.

I don’t recall if it was tried or not but I’d guess that adding --no-backend-verification might allow you to continue doing backups without dealing with the missing archive file.

Of course as @kenkendk mentioned using --list-broken-files to see what would be removed by a --purge-broken-files command would resolve the outstanding issues. Unfortunately, it wouldn’t work for the OP because not only are there missing destination files but also a partially recreated local database, so there’s no longer a fully valid reference point to use for trying to reconstruct data.

I haven't really looked into it, but ideally I would think this particular situation could be avoided (or at least reduced) by disallowing repair (or delete and repair), or at least requiring verification, when destination files are missing.


#15

Yes, but you should not do that. You have no idea what data will be missing when you try to restore.

That is because you should not continue a backup on a broken foundation. If you were to do that, it is possible that you would not be able to restore the files you think you have backed up (if one chunk is shared between multiple files, and that chunk happens to be missing, all future backups will think the chunk is there and not upload it).
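That hazard comes directly from deduplication: a backup only uploads blocks whose hashes the database has not seen before, so a hash that is still recorded, but whose dblock was lost, is silently never re-uploaded. A minimal sketch of that failure mode (invented data structures, not Duplicati's code):

```python
import hashlib

def backup(blocks, known_hashes, remote):
    """Upload only blocks the database has not recorded before (deduplication)."""
    for data in blocks:
        h = hashlib.sha256(data).hexdigest()
        if h not in known_hashes:   # skip blocks the database says are stored
            known_hashes.add(h)
            remote[h] = data

known, remote = set(), {}
backup([b"shared-chunk"], known, remote)   # first backup uploads the chunk

# Simulate losing the dblock holding that chunk, without fixing the database:
remote.clear()

# A later backup of another file containing the same chunk uploads nothing,
# because the database still believes the chunk is safely stored:
backup([b"shared-chunk"], known, remote)
print(remote)  # {} -- the chunk is gone, yet the backup "succeeded"
```

This is why continuing to back up on a broken foundation is dangerous: every future version that shares the lost chunk will look backed up but be unrestorable.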

For restore, it still sucks if you are missing a chunk (or possibly a lot), but restoring will still give you “as much as possible”. The alternative would be to refuse restoring if there were errors, which seems mean :speak_no_evil:.


#16

2 posts were split to a new topic: Frequent “registering a missing remote file” errors