A diabolical error loop (backup reports 1 missing file, but list-broken-files reports 0)

I have a large archive backup (source size ~75 GB, backup size ~55 GB) on Google Drive, which has started to fail with:

---> System.AggregateException: Found 1 files that are missing from the remote storage, please run repair 
---> Duplicati.Library.Interface.UserInformationException: Found 1 files that are missing from the remote storage, please run repair

So, I ran repair:

Failed to perform cleanup for missing file: duplicati-b4243baf755c9405b9688137e75e9e651.dblock.zip.aes, message: Repair not possible, missing 752 blocks.
If you want to continue working with the database, you can use the "list-broken-files" and "purge-broken-files" commands to purge the missing data from the database and the remote storage.

So, I ran list-broken-files and purge-broken-files:

Found no broken filesets, but 0 missing remote files

Then, I started the backup job again and got:

---> System.AggregateException: Found 1 files that are missing from the remote storage, please run repair 
---> Duplicati.Library.Interface.UserInformationException: Found 1 files that are missing from the remote storage, please run repair

And so on, endlessly!

Is my backup archive lost?

I doubt it. If you want to double-check, just try doing a test restore of a file or two (you might need to use “Direct restore from backup files …”).

However, you won’t be able to continue adding backup versions to this destination until we fix the problem.

Try looking at the files on Google Drive to check:

  • whether duplicati-b4243baf755c9405b9688137e75e9e651.dblock.zip.aes really is missing
  • if it’s not missing, what its file size (it might be 0 bytes) and date are

@Pectojin, do you know of a relatively easy way to identify all parts of a fileset based on just one file (in this case a dblock)? I’m wondering if there’s a bug in the list-broken-files process that considers having a dindex or dlist file enough of a confirmation that it doesn’t verify the dblock file actually exists.

I can take a look at a SQL query for it when I get home, but I don’t think there’s a coded way to do it.

Something like this will give you all files that have a block in the provided volume (run against the job’s local SQLite database):

-- List every file path with at least one data block stored in the given volume
SELECT File.Path
FROM RemoteVolume
INNER JOIN Block ON Block.VolumeID = RemoteVolume.ID
INNER JOIN BlocksetEntry ON BlocksetEntry.BlockID = Block.ID
INNER JOIN File ON File.BlocksetID = BlocksetEntry.BlocksetID
WHERE RemoteVolume.Name = 'duplicati-b9d4ca1f994634106959339a66ee16072.dblock.zip.aes'
GROUP BY File.Path

In principle, this list should be empty if list-broken-files returns no results.
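
If that query does return rows while list-broken-files still reports nothing, it may also be worth checking what the local database has recorded about the volume itself. This is only a sketch against the job’s local SQLite database - the volume name is just an example, and the column names are what I believe the current schema uses:

-- What state and size does the database record for this volume?
SELECT Name, Type, State, Size
FROM RemoteVolume
WHERE Name = 'duplicati-b9d4ca1f994634106959339a66ee16072.dblock.zip.aes';

-- How many blocks does the database expect to find in it?
SELECT COUNT(*) AS BlockCount
FROM Block
INNER JOIN RemoteVolume ON Block.VolumeID = RemoteVolume.ID
WHERE RemoteVolume.Name = 'duplicati-b9d4ca1f994634106959339a66ee16072.dblock.zip.aes';

If that block count is non-zero but the file really is gone from Google Drive, those blocks are the data that repair says it cannot reconstruct.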


Thank you for your helpfulness, but I don’t know where or how to follow your suggestion.
I have a life to live, and I don’t want to spend my time (and yours) on some obscure error that may occur again in some variant in the future, as has happened several times in the past, each time requiring another resolution procedure that is intricate (for me).
I also admit I asked the wrong question. I don’t need to do a full or partial restore. I only want to save what is salvageable of my backup archive, without being forced to upload GB upon GB of data to Google Drive again.
If you know a reliable and simple sequence of operations to perform from the Duplicati GUI, even one that partially destroys my backup data, I will gladly follow it.
Thank you for your understanding.

I suppose it was mostly for @JonMikelV in case he wanted to troubleshoot using the info.

Completely understandable.

What I would do is go into the Duplicati UI under “Restore” and select “Direct restore from backup files …”. You’ll need to tell it how to access the backup files and provide the encryption key.
From there you can browse the files you want to restore, or just restore them all, and select where to download them to.

This option will create a new database for this restore operation, so any local issues won’t affect the restore. Any file in the backup that isn’t corrupted on the destination will be restorable.


I’m not sure I understand. I wanted to continue using my backup, if it were possible to repair it. Anyway, that now seems impossible: I keep getting “database is locked” on every operation.
Sadly, I’m creating a new backup job (just the thing I wanted to avoid), because my 55 GB of backup files now seem to be garbage that Duplicati is unable to reclaim.

The issue is that the local database and the remote file list are out of sync for some reason.

I don’t know why the database Repair isn’t handling it correctly, but did you try a database Recreate? Note that this can take quite a while - potentially days, but in the end you should be left with a local database built completely from what is in the remote files so they should be back in sync.

(You may want to make a backup of your current database before doing the Recreate just to aid in potential debugging.)
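
If you prefer to make that copy from inside SQLite itself - for example with the sqlite3 shell while Duplicati is stopped - something like this should work on SQLite 3.27 or newer; the destination path below is just a placeholder:

-- Write a compacted copy of the open database to a new file
VACUUM INTO '/some/safe/place/duplicati-db-before-recreate.sqlite';

Otherwise, simply copying the job’s .sqlite file while Duplicati is not running works just as well.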

Yes, that is exactly what I did (Recreate: delete & repair), and your advice is right: it took two days. After that, I tried to run a new backup, but I stopped it after a long time since I didn’t see any progress, and now I fear that was the cause of the new error.

So I decided to split the backup job into three parts to contain any potential issues.

Anyway, in my opinion Duplicati is very, very far from being user-friendly when errors occur, which in my experience happens often when working with cloud destinations.

I’m sorry to hear you had to start over, but splitting a large job is a valid approach that other users with big jobs have taken.

It’s certainly not as friendly as we’d like it to be, and unfortunately we still seem to be in the situation where it either works really well or has lots of issues for people.

While most people seem to be in the “works really well” camp, we haven’t yet figured out what seems to be triggering problems for the “lots of issues” people.

That’s part of the reason why we still don’t have an official stable release - the recent 2.0.3.3 beta is as close as we have come so far. Despite being beta (or even canary, prior to 2.0.3.6) it does work quite well for many people (myself included) - but as you experienced, that’s not always the case. :frowning: