Error on purge-broken-files

Hi

I have a backup that doesn’t complete because some files are missing, so I’m trying to purge them to be able to run it again, but I’m getting an error and I’m unable to solve it.

The result message is:

No broken filesets found in database, checking for missing remote files
  Listing remote folder ...
Marked 4 remote files for deletion
Found 3 broken filesets with 779 affected files, purging files
Purging 261 file(s) from fileset 22/11/2017 14:00:00
Starting purge operation


System.Exception: Unable to create a new fileset for duplicati-20171122T160000Z.dlist.zip because the resulting timestamp 22/11/2017 14:00:02 is larger than the next timestamp 22/11/2017 14:00:01
   at Duplicati.Library.Main.Operation.PurgeFilesHandler.DoRun(LocalPurgeDatabase db, IFilter filter, Action`3 filtercommand, Single pgoffset, Single pgspan)
   at Duplicati.Library.Main.Operation.PurgeBrokenFilesHandler.Run(IFilter filter)
   at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Action`1 method)
   at Duplicati.Library.Main.Controller.PurgeBrokenFiles(IFilter filter)
   at Duplicati.CommandLine.Commands.PurgeBrokenFiles(TextWriter outwriter, Action`1 setup, List`1 args, Dictionary`2 options, IFilter filter)
   at Duplicati.CommandLine.Program.RunCommandLine(TextWriter outwriter, TextWriter errwriter, Action`1 setup, String[] args)
Return code: 100

My version is: 2.0.2.14_canary_2017-12-22

Is there any other way to solve this problem?

Are you running any tools to automatically set your clock?
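If it helps, on a domain-joined Windows machine you can check where the clock is actually coming from with the built-in w32tm tool (just a pointer, not a fix):

w32tm /query /source
w32tm /query /status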

No, this computer is in a domain and receives its time from the Domain Controller, but I’ll check with the sysadmin to confirm there is no problem there.

Edit: The IT guys say the PDC is synced with ‘time.windows.com’ and that time is propagated to the whole domain.

The machine is a VM; could that be a problem?

I don’t know if it is relevant, but along with the file ‘duplicati-20171122T160000Z.dlist.zip’ there is also another file named ‘duplicati-20171122T160001Z.dlist.zip’. Could this be some file-naming conflict?

Searching for the message in the source code, I found it in ‘PurgeFilesHandler.cs’. The timestamp is set by a loop (line 114); could it be that the loop iterates twice, moving the seconds to 2 because of the other file ending in 1?

Good detective work on the code lookup, but I believe the loop is actually necessary to help avoid potential time conflicts of up to 60 seconds.

Indirectly, yes - the error is probably because of the 2nd file with the same name plus 1 second.

It looks like the way the purge process works is to try to create the new version of the archive with a filename somewhere between 1 and 60 seconds later than the original file.

In your case, Duplicati seems to be finding a file already named 1 second later than the current file, so there’s no “room” to name a file between the current “needs-to-be-replaced” file and the already existing next file.

Unfortunately, I don’t yet know much about the logic behind the file naming conventions, so the best I can do at this point is see if I can get somebody more knowledgeable about that side of things (like @kenkendk) involved.

Yes, that is the logic. To remove the missing files, the dlist files need to be rewritten. Due to bugs in various provider implementations, Duplicati does not overwrite files, so a new name is needed. To keep as much structure as possible, Duplicati takes the timestamp and adds one second to it.

In your case, it says that 3 filesets need to be rewritten.

I think it somehow picked the wrong offset, and added 1s to that, such that it now breaks the order.
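If I read the error right, the sequence here is something like this (reconstructed from your filenames and the error message, so take it as a sketch):

fileset being rewritten: duplicati-20171122T160000Z.dlist.zip  (local time 22/11/2017 14:00:00)
next fileset:            duplicati-20171122T160001Z.dlist.zip  (local time 22/11/2017 14:00:01)

attempt 1: 14:00:00 + 1s = 14:00:01  -> filename already taken by the next fileset
attempt 2: 14:00:00 + 2s = 14:00:02  -> larger than the next fileset's timestamp -> error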

You can try deleting the broken filesets instead (don’t erase the files manually). This does not require writing new versions, but it will prevent you from restoring any file from those versions.

To do this, run the list-broken-files command and note the version numbers of the broken filesets. Then run delete on each of these versions (be sure to set --no-auto-compact to avoid premature cleanups, and possibly --no-backend-verification to temporarily ignore the missing remote files).
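As a command-line sketch (the storage URL is a placeholder for your own backend URL, and the usual options such as the passphrase are omitted):

Duplicati.CommandLine.exe list-broken-files "ftp://example.com/backup"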


Thanks for the answer. Unfortunately this won’t do the trick; apparently my backup doesn’t have any filesets to be deleted.

That is the result of the list-broken-files command:

No broken filesets found in database, checking for missing remote files
  Listing remote folder ...
Marked 3 remote files for deletion
2	: 22/11/2017 14:00:00	(261 match(es))
...(long list of files)
1	: 22/11/2017 14:00:01	(258 match(es))
...(long list of files)
0	: 22/11/2017 20:00:01	(260 match(es))
...(long list of files)

Does anyone have a suggestion for how I can use this backup again?

I’m stuck here: I can’t back up and can’t repair, and I don’t want to delete everything and start again.

It was mentioned that I must delete filesets; how can I do that? The ‘delete’ command asks for a version number, so how can I pass the fileset?

The version numbers are the ones listed (0, 1, 2). You can also use the exact timestamp instead of the version number. If you look at the filenames, beware that they are in UTC, whereas the delete command expects your local timezone.
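Concretely, using the version-number form, deleting e.g. version 2 would look something like this (storage URL is a placeholder, other options omitted):

Duplicati.CommandLine.exe delete "ftp://example.com/backup" --version=2 --no-auto-compact --no-backend-verification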

Thanks for the answer. I eventually figured out how to do that with the “--version” parameter, but I’m still left with an unusable backup.

When I run a backup I receive the error:

Backend verification failed, attempting automatic cleanup
Duplicati.Library.Interface.UserInformationException: Found 1 files that are missing from the remote storage, please run repair

Then I try to run repair, and get the error:

Repair cannot acquire 802 required blocks for volume duplicati-b5187b4cfcf734ccf991dc995fcdfd291.dblock.zip, which are required by the following filesets: 
This may be fixed by deleting the filesets and running repair again
Failed to perform cleanup for missing file: duplicati-b5187b4cfcf734ccf991dc995fcdfd291.dblock.zip, message: Repair not possible, missing 802 blocks.
If you want to continue working with the database, you can use the "list-broken-files" and "purge-broken-files" commands to purge the missing data from the database and the remote storage. => Repair not possible, missing 802 blocks.
If you want to continue working with the database, you can use the "list-broken-files" and "purge-broken-files" commands to purge the missing data from the database and the remote storage.

Then I run list-broken-files and get:

No broken filesets found in database, checking for missing remote files
  Listing remote folder ...
Marked 173 remote files for deletion
No broken filesets found
Return code: 0

And, despite that message, I tried to run purge-broken-files and got:

No broken filesets found in database, checking for missing remote files
  Listing remote folder ...
Marked 323 remote files for deletion
Found no broken filesets, but 0 missing remote files

Then, on the next backup, I receive the same message I got before:

Backend verification failed, attempting automatic cleanup
Duplicati.Library.Interface.UserInformationException: Found 1 files that are missing from the remote storage, please run repair

So I’m stuck in a loop, unable to use that backup set.

Is there anything I can do besides discarding the backup?

This is just for testing, so it’s NOT a fix, but do things work better if you set --no-backend-verification=true?

--no-backend-verification (default: false)
If this flag is set, the local database is not compared to the remote filelist on startup. The intended usage for this option is to work correctly in cases where the filelisting is broken or unavailable.
Default value: “false”
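In other words, something like this (storage URL and source folder are placeholders for your own values):

Duplicati.CommandLine.exe backup "ftp://example.com/backup" "C:\Data" --no-backend-verification=true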

Thanks for the suggestion. I first used it with ‘repair’ and ‘purge-broken-files’ and got the same results as above; then I tried a backup with it and it worked!

After that, a ‘repair’ solved all the problems in the backup, and it now looks like it’s working fine. I’ll do more tests, but I think my problem is solved.

Thanks for all the help.

I’m glad to hear that, though I wasn’t actually expecting it to solve the “can’t repair” issue. :thinking:

In your additional testing, be sure to try removing the --no-backend-verification parameter and see if it still works; otherwise you’re not testing the full set of functionality.

Sure, I only used it for the first successful backup; after that, all operations ran as usual without it. After many backups and restores, I’m able to say the problem is solved after all.

It’s still confusing to me. I lost a dblock file in my backup; OK, it happens, media gets corrupted and so on. But in my view Duplicati should be able to handle this more smoothly, giving the proper warnings: if the media is lost then it’s lost, part of that fileset is inaccessible, but life must go on.

Yes, improving the user experience is something being worked on - but it tends to get a lower priority than functionality fixes, for now.

I want to jump in on this as well.
I have some issues with a very large backup that I needed to stop while it was doing the initial backup.

These are the references to the existing posts:

Now I’ve ended up with the following error when rerunning the backup job:

2018-06-11 13:25:17 +02 - [Error-Duplicati.Library.Main.Operation.RepairHandler-CleanupMissingFileError]: Failed to perform cleanup for missing file: duplicati-b58bcb8fcefe141a2ba3a92aea3497758.dblock.zip.aes, message: Repair not possible, missing 582 blocks.
If you want to continue working with the database, you can use the "list-broken-files" and "purge-broken-files" commands to purge the missing data from the database and the remote storage.

I did a list-broken-files and purge-broken-files, with and without the --no-backend-verification=true switch.
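That is, roughly (storage URL is a placeholder for my backend; passphrase and other options omitted, run through mono on this box):

mono Duplicati.CommandLine.exe list-broken-files "ftp://example.com/backup"
mono Duplicati.CommandLine.exe purge-broken-files "ftp://example.com/backup" --no-backend-verification=true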

The results are quite strange.
Log of Repair Action

Listing remote folder ...
Failed to perform cleanup for missing file: duplicati-b58bcb8fcefe141a2ba3a92aea3497758.dblock.zip.aes, message: Repair not possible, missing 582 blocks.
If you want to continue working with the database, you can use the "list-broken-files" and "purge-broken-files" commands to purge the missing data from the database and the remote storage. => Repair not possible, missing 582 blocks.
If you want to continue working with the database, you can use the "list-broken-files" and "purge-broken-files" commands to purge the missing data from the database and the remote storage.
Failed to perform cleanup for missing file: duplicati-ba740645c307f42bda0518ed10686ec33.dblock.zip.aes, message: Repair not possible, missing 592 blocks.
If you want to continue working with the database, you can use the "list-broken-files" and "purge-broken-files" commands to purge the missing data from the database and the remote storage. => Repair not possible, missing 592 blocks.
If you want to continue working with the database, you can use the "list-broken-files" and "purge-broken-files" commands to purge the missing data from the database and the remote storage.
Failed to perform cleanup for missing file: duplicati-b9b2e00ea1bfd4c4eb94f1281203bb62b.dblock.zip.aes, message: Repair not possible, missing 569 blocks.
If you want to continue working with the database, you can use the "list-broken-files" and "purge-broken-files" commands to purge the missing data from the database and the remote storage. => Repair not possible, missing 569 blocks.
If you want to continue working with the database, you can use the "list-broken-files" and "purge-broken-files" commands to purge the missing data from the database and the remote storage.
...
...

Log of List Action

  Listing remote folder ...
[/opt/Qmono/bin] # 

Log of Purge Action

 Listing remote folder ...
Found no broken filesets, but 0 missing remote files

Rerun Backup
At the moment I am doing a backup from the GUI with the --no-backend-verification=true switch, but right now all I can see is a status stuck at “Verifying backend data” (screenshot omitted), which confuses me once more.

Thanks for any hints.

The steps you’ve taken are about right for what you ran into (and yes, we know it should be easier to handle this situation). :wink:

When you see the “Verifying backend data” message, can you try looking at “lastPgEvent” under “About” → “System info” → “Server state properties” (near the bottom) and see what “Phase” is being processed?

Thanks, but I figured out that the page, or that specific part at the top of it, just didn’t reload (I don’t know exactly what happened).
I reopened the page and everything was fine. I hope the backup can now complete without interruption and will be a functional backup afterwards :slight_smile: