Unexpected difference in fileset; unable to delete broken version

I’ve been running the latest beta for a long time now, and suddenly ran into the dreaded “Unexpected difference in fileset” problem. I upgraded to the latest canary to get better error logging and to see if it would help, but I’m unable to delete any version. I ran a verify on the backup, and versions 1-6 were marked as having this issue.

This is what I get, trying to delete version 6:

System.Exception: Unexpected number of deleted filesets 2 vs 3
   at Duplicati.Library.Main.Database.LocalDeleteDatabase.<DropFilesetsFromTable>d__5.MoveNext()
   at System.Linq.Buffer`1..ctor(IEnumerable`1 source)
   at System.Linq.Enumerable.ToArray[TSource](IEnumerable`1 source)
   at Duplicati.Library.Main.Operation.DeleteHandler.DoRun(LocalDeleteDatabase db, IDbTransaction& transaction, Boolean hasVerifiedBackend, Boolean forceCompact, BackendManager sharedManager)
   at Duplicati.Library.Main.Operation.DeleteHandler.Run()
   at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Action`1 method)
   at Duplicati.Library.Main.Controller.Delete()
   at Duplicati.CommandLine.Commands.Delete(TextWriter outwriter, Action`1 setup, List`1 args, Dictionary`2 options, IFilter filter)
   at Duplicati.CommandLine.Program.ParseCommandLine(TextWriter outwriter, Action`1 setup, Boolean& verboseErrors, String[] args)
   at Duplicati.CommandLine.Program.RunCommandLine(TextWriter outwriter, TextWriter errwriter, Action`1 setup, String[] args)
Return code: 100

Any tips for this? Thanks!

Welcome to the forum @Boulder08

The good part is that the improved error logging seems to be working.
The other issue is probably an old one with few reports, so it’s not well understood…

The Canary is also better at making job logs for backups that fail, which preserves history.
The problem is that improvement isn’t in Beta yet, so history might require a log-file. Got one?

Other reporting (if you did any) can help, and so can About → Show log → General.
Maybe you recall prior issues, or whether the first failure was at the end of a backup?

What destination type is this? Some have more file quirks than others.

If nothing else, there are likely clues in the database. One suspect is a known issue:
Compacting files at the backend causes an “Unexpected difference” at
end of backup that was not there at start (but is there at the next start).

These can sometimes be spotted by looking in the database, or via
<job> → Show log → Remote, looking for dblock get operations before the final verify.

For less paging through the Remote log, you can try DB Browser for SQLite, or provide a bug report link.
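If you go the DB Browser route, a rough inspection query like this can surface dblock downloads, which normally only happen for compact or verification. It assumes the RemoteOperation table layout that current Duplicati databases use (an Operation column with lowercase verbs, a Path, and a Unix-epoch Timestamp), so treat it as a sketch:

-- list recent dblock downloads; a burst of these right before a failure is a clue
SELECT datetime(Timestamp, 'unixepoch') AS EventTime,
       Operation,
       Path
FROM RemoteOperation
WHERE Operation = 'get'
  AND Path LIKE '%dblock%'
ORDER BY Timestamp DESC
LIMIT 50;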

No log file available I’m afraid, unless one is saved by default. I’m running the Duplicati service and don’t recall doing any specific changes to parameters when setting it up.

I took a look at the first version which reported an issue, but that backup run itself had finished just fine. I’m backing up to my OneDrive, and everything has worked smoothly for a long time, at least a year or so.

So should I file a bug report for not being able to delete the couple of versions that are causing issues? I don’t mind losing them, since they are just backups of the last couple of days’ changes.

A bug report (read the link) is a sanitized database dump (more thoroughly sanitized in the new Canary), not something for GitHub Issues, but you can manually open an Issue if there isn’t one yet. Typically the ideal Issue has steps to reproduce; lacking those, and lacking developers, issues sometimes aren’t very actionable. Occasionally a “blind” fix without steps gets done.

There is a 2016 report with some discussion, with the error in the other direction (1 vs 0; yours is 2 vs 3).

A database bug report would be more immediately helpful, as there is a big issues backlog.


Instead of doing that, an issue got filed that may be a duplicate and not go anywhere. Why not create a bug report, so there’s actually the possibility of some immediate help being applied? Omitting the information is going to slow down (and maybe prevent) work on the issue too.


Ah, I didn’t know that this method of submitting one exists. I’ll run one from there; I just have to leave for work, so it might not get sent until I’m back.

Link to the DB bugreport is now available at GitHub.

Is the error message posted anywhere? If not, please post it, so I can look at it along with the bug report.
I’m also looking at the Issue, which at least says it’s version 6 but lacks the message.

Last night’s scheduled run shows this. It looks pretty much the same as what a verify operation reports.

2024-03-18 03:02:12 +02 - [Error-Duplicati.Library.Main.Operation.BackupHandler-FatalError]: Fatal error UserInformationException: Unexpected difference in fileset version 6: 3/9/2024 3:00:00 AM (database id: 454), found 410008 entries, but expected 410009
Unexpected difference in fileset version 5: 3/10/2024 3:00:00 AM (database id: 455), found 410352 entries, but expected 410353
Unexpected difference in fileset version 4: 3/11/2024 3:00:00 AM (database id: 456), found 410001 entries, but expected 410002
Unexpected difference in fileset version 3: 3/12/2024 3:00:00 AM (database id: 457), found 410013 entries, but expected 410014
Unexpected difference in fileset version 2: 3/13/2024 3:00:00 AM (database id: 458), found 410010 entries, but expected 410011
Unexpected difference in fileset version 1: 3/14/2024 3:00:11 AM (database id: 460), found 410010 entries, but expected 410011

So there are really two problems: this unexpected difference thing, and not being able to delete the affected versions to move forward.

I looked at the latter while waiting for info on the former. Got a possible lead from seeing
duplicati-20240307T010000Z.dlist.zip.aes
duplicati-20240308T010000Z.dlist.zip.aes
still sitting around in the database, marked as Deleted. Maybe that’s what turned 6 into 8.
I’m referring to the issue’s Unexpected number of deleted filesets 6 vs 8 results,
where you probably did a larger delete of versions 1-6, so on the surface it expects just 6.

Deletions, from the database’s point of view, have a grace period, so some delay is intentional. However, right after the backup at 2024-03-15 03:00:00 +02 things start failing, so maybe database cleanup didn’t happen, and repair efforts tripped over the leftover database data.

Although this is an attractive theory for explaining a difference of 2, the details are still TBD.
Developer assistance on this would be nice, but the good news is that those old entries will
not come back on a database recreate, since the destination files are gone. Hold off on that for a while.
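To check for that kind of leftover, here is a rough query against the Remotevolume table. It assumes the layout current Duplicati databases use, where dlist files have Type 'Files', and it assumes DeleteGraceTime is a Unix-epoch value (0 when unset):

-- dlist entries still recorded as Deleting or Deleted
SELECT Name,
       State,
       datetime(DeleteGraceTime, 'unixepoch') AS GraceExpires
FROM Remotevolume
WHERE Type = 'Files'
  AND State IN ('Deleting', 'Deleted')
ORDER BY Name;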

Things are quite busy now, but maybe I can make headway on the other part. Help wanted.
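Meanwhile, the “found” side of the numbers in your error can be eyeballed straight from the database. A minimal sketch, again assuming the current schema (Fileset and FilesetEntry tables); the “expected” side comes from Duplicati’s internal consistency check, which is more involved:

-- per-version entry counts; the flagged versions should stand out by one
SELECT f.ID AS FilesetID,
       datetime(f.Timestamp, 'unixepoch') AS BackupTime,
       COUNT(fe.FileID) AS FoundEntries
FROM Fileset f
LEFT JOIN FilesetEntry fe ON fe.FilesetID = f.ID
GROUP BY f.ID
ORDER BY f.Timestamp;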

Sorry about the delay. I have a single SQL query that’s been running for over 22 hours so far. Not good. The backup is about 10 times bigger than ideal for the small default blocksize, and that might hurt because the block count goes up and SQLite performance dies. I tried indexes already. I might see if adding cache helps.

I read that the default value should be increased, but it’s too late now I guess…
I can wait for the analysis of whether a database rebuild should work; not much is happening on the system right now, so I won’t lose anything critical if something goes wrong.

It does help to stop the hidden database reads that never actually get down to the drive, so they don’t show up as a busy drive. Looking at the process, it was doing a million file reads per second.

Still, it’s awfully slow even with the SQLite cache at 100 MB. The default is 2 MB, which is skimpy.
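For reference, raising the cache for hand-run queries is a one-liner in the sqlite3 shell or DB Browser’s Execute SQL tab; the roughly 2 MB default and the negative-means-KiB convention are standard SQLite behavior:

-- SQLite defaults to cache_size = -2000 (2000 KiB, about 2 MB); -102400 is 100 MB
PRAGMA cache_size = -102400;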

On the off chance that my SQL is wrong, I tested my own smaller backup. Took about 1 second.

It should work to get rid of some leftovers that might be stopping your deletes, as discussed before and sketched below:
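This is a hedged sketch only, to be run against a copy of the database, and only after the earlier SELECT confirms exactly these two leftover rows; the table and column names assume the current Duplicati schema:

-- remove the two leftover dlist rows identified earlier (work on a COPY of the DB)
DELETE FROM Remotevolume
WHERE Type = 'Files'
  AND State = 'Deleted'
  AND Name IN ('duplicati-20240307T010000Z.dlist.zip.aes',
               'duplicati-20240308T010000Z.dlist.zip.aes');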

If it works, then the question is whether the version deletes will clear up unexpected differences.

Don’t count on any analysis without the actual destination (which you have) to predict the result of a database rebuild. If the SQL ever finishes, it might offer some clues, but that’s not solid proof…

I guess I’ll see if I can find anything else interesting, while seeing if my query will ever complete.

It hasn’t completed yet; I’ll let it run overnight. I did get some luck with a manually trimmed-down query and a lot of fiddling. I basically replicated finding one missing entry compared to expected, and also confirmed that versions 1-6 have the problem. It’s not clear how one file went missing, but looking at the operation history (a query sketch follows the notes below) found that there was a backup that didn’t finish. I don’t know whether the computer was shut down or some other problem caused the backup to bail out early. That would be logged in About → Show log → Stored, and not create the usual job log (but Canary does better).

(times are GMT)
546     Backup  1710291600      March 13, 2024 1:00:00  log erased
547     Backup  1710378010      March 14, 2024 1:00:10  log missing. What failure?
548     Backup  1710464400      March 15, 2024 1:00:00  log present. Finish above. *1
549     Backup  1710550800      March 16, 2024 1:00:00  log missing. What failure?
550     Backup  1710637210      March 17, 2024 1:00:10  log missing. What failure?
551     Test    1710663865      March 17, 2024 8:24:25  log missing. What failure?
552     Test    1710664852      March 17, 2024 8:40:52  log present. Fatal result.
553     Test    1710665493      March 17, 2024 8:51:33  log present. Fatal result.
554     Delete  1710665882      March 17, 2024 8:58:02  log erased
555     Test    1710666162      March 17, 2024 9:02:42  log present. Fatal result.
556     Delete  1710666263      March 17, 2024 9:04:23  log erased
557     Backup  1710677825      March 17, 2024 12:17:05 log erased
558     Delete  1710687192      March 17, 2024 14:53:12 log erased
559     Backup  1710687542      March 17, 2024 14:59:02 log erased
560     Backup  1710723600      March 18, 2024 1:00:00

Notes:

*1 When a backup fails in the middle, as 547 seems to have done, the next backup run
attempts to clean up the mess by making a partial backup along with the new one.
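The listing above can be reproduced with a query along these lines, assuming the Operation and LogData tables of current Duplicati databases (stored job logs live in LogData); note it can’t tell an erased log from one that was never written:

-- operation history with a flag for whether a stored job log exists
SELECT o.ID,
       o.Description,
       datetime(o.Timestamp, 'unixepoch') AS Started,
       CASE WHEN COUNT(l.OperationID) = 0 THEN 'log missing' ELSE 'log present' END AS JobLog
FROM Operation o
LEFT JOIN LogData l ON l.OperationID = o.ID
WHERE o.ID BETWEEN 546 AND 560
GROUP BY o.ID
ORDER BY o.ID;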

I’m especially interested in anything you can figure out about the unfinished March 14 backup.
March 15 picked up the pieces from that, but possibly didn’t do it well. Did anything work later?

The whole backup job and its log seem to be completely missing from March 14th. In Event Viewer, I can just find the quite usual:

Volume Shadow Copy Service error: Unexpected error querying for the IVssWriterCallback interface. hr = 0x80070005, Access is denied. This is often caused by incorrect security settings in either the writer or requestor process.

Also, MicrosoftEdgeUpdate.exe crashed around the time the backup runs. Otherwise there’s nothing there.

The job on 15th went fine based on the log, but maybe it didn’t after all.

Be sure to also look through the server log at About → Show log → Stored for activity on that day,
because the Beta doesn’t make the normal job log on a failure. The new Canary does this better.

You should be able to see the cleanup from the 14th in the Complete log. It should show something like the below:

“2024-03-15 03:00:00 +02 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Backup has started”,
“2024-03-15 03:02:14 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started: ()”,
“2024-03-15 03:05:20 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Completed: (21.26 KB)”,
“2024-03-15 03:05:21 +02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-KeepIncompleteFile]: keeping protected incomplete remote file listed as Uploading: duplicati-20240314T010010Z.dlist.zip.aes”,
“2024-03-15 03:05:21 +02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-SchedulingMissingFileForDelete]: scheduling missing file for deletion, currently listed as Uploading: duplicati-b4a97febce9dc4d5c99c881d9c012b3e5.dblock.zip.aes”,
“2024-03-15 03:05:21 +02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: removing file listed as Temporary: duplicati-b25104236902440cab89785fcf47c3b17.dblock.zip.aes”,
“2024-03-15 03:05:21 +02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-SchedulingMissingFileForDelete]: scheduling missing file for deletion, currently listed as Uploading: duplicati-bf1f12b9549214c4c8ba9c06372312ab7.dblock.zip.aes”,
“2024-03-15 03:05:21 +02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: removing file listed as Temporary: duplicati-bb3c3ba4cc84946b5ae6279b019a4e137.dblock.zip.aes”,
“2024-03-15 03:05:21 +02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: removing file listed as Temporary: duplicati-b2fca5efe00b044ef888bf862f89438b3.dblock.zip.aes”,
“2024-03-15 03:05:21 +02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-SchedulingMissingFileForDelete]: scheduling missing file for deletion, currently listed as Uploading: duplicati-bb12084d1ccaf4aa08b13c48bbc70ac3c.dblock.zip.aes”,
“2024-03-15 03:05:21 +02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-SchedulingMissingFileForDelete]: scheduling missing file for deletion, currently listed as Uploading: duplicati-bfae99435bc2c4b7cba4a2ead00bc9f40.dblock.zip.aes”,
“2024-03-15 03:05:21 +02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: removing file listed as Temporary: duplicati-b94b8ebb6d3b84fa9978e882431245408.dblock.zip.aes”,
“2024-03-15 03:05:21 +02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: removing file listed as Deleting: duplicati-b3ace27d1fdcd457e8f1555b97adeb212.dblock.zip.aes”,
“2024-03-15 03:05:21 +02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: removing file listed as Deleting: duplicati-be895a2b4629e4e5bb3407ff28ecee1f0.dblock.zip.aes”,
“2024-03-15 03:05:21 +02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: removing file listed as Deleting: duplicati-b86bd932e6ed54234a2db21c4caf1cd88.dblock.zip.aes”,
“2024-03-15 03:05:21 +02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: removing file listed as Deleting: duplicati-becef7c187dfb4dad8acf971b9e19c74e.dblock.zip.aes”,
“2024-03-15 03:05:21 +02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: removing file listed as Deleting: duplicati-b842b85f0d31f43b4b7cec0f108450576.dblock.zip.aes”,
“2024-03-15 03:05:21 +02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: removing file listed as Deleting: duplicati-bf5ded76d11544ddbb0e2a72625643353.dblock.zip.aes”,
“2024-03-15 03:05:21 +02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: removing file listed as Deleting: duplicati-b79c7bd489da6465582d5cb2731b9ac76.dblock.zip.aes”,
“2024-03-15 03:05:21 +02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: removing file listed as Deleting: duplicati-b1a168aa9bb324d9c814289d9cb4a1dcf.dblock.zip.aes”

I pasted the server log from 14th here: https://pastebin.com/EG12bBFS. Can’t tell much myself based on it.

Yes, I can find that stuff in the job’s log.

Do you want me to still keep things as-is, or test repairing the db? I’d back up the db file beforehand, of course.

That’s a really fine thought. One of my pet peeves is that Duplicati’s Recreate doesn’t do it for you; however, doing so would require some sort of limit on the history, and eventually a cleanup method.
Database recreate can occasionally go poorly, and the other loss is that job logs live in the database.

“This page is no longer available. It has either expired, been removed by its creator, or removed by one of the Pastebin staff.”

It’d be nice to have this, although I’m not sure it would change the recovery plan. My thought was maybe save the old database and push the Recreate button.

I was now able to resolve this. I first had to purge broken files, as there were three of those preventing a database rebuild. After that, I was able to rebuild the database and run the backup job.

After this process, I’ll seriously consider starting from scratch and increasing the block size while I’m at it. I don’t think I’d need any of the older files that would be lost from the old backups.