What does it mean to report a duplicate path

From duplicati/LocalDatabase.cs at master · duplicati/duplicati · GitHub it is clear that the “!” is not part of the name of the directory.

Repairing the database does not clear the warning, and I would rather avoid recreating it.

So I excluded “/etc/rc0.d/” (without the trailing “!”) from the backup, and the warning went away. Then I added “/etc/rc0.d/” again and the warning came back. This is frustrating.

I tried to have a look at duplicati/LocalDatabase.cs at master · duplicati/duplicati · GitHub. I am no expert in mono, but I tried to read lines 1080 to 1100. Am I correct that the warning is thrown when two subsequent entries in the database, path and lastpath, are equal?
So, does anybody have a clue why I get the warning twice for “/etc/rc0.d/”? It would mean I have three subsequent entries with path “/etc/rc0.d/”, but I am pretty confident I do not have symlinks to “/etc/rc0.d/” in my filesystem.
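
In case it helps with debugging, I think a query along these lines (my own sketch, assuming the standard File/FilesetEntry schema of the local job database, not something taken from the Duplicati code) should list any path that occurs more than once within a single backup version, which, if I read the code correctly, is exactly what triggers the warning:

    -- Sketch: count how often each path appears in each fileset (backup version).
    -- Anything with a count above 1 should be what produces the duplicate warning.
    SELECT "A"."FilesetID", "B"."Path", COUNT(*) AS "Occurrences"
    FROM "FilesetEntry" A, "File" B
    WHERE "A"."FileID" = "B"."ID"
    GROUP BY "A"."FilesetID", "B"."Path"
    HAVING COUNT(*) > 1;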

Also, I would remove the “!” on line 1092, because it confuses users.

@JonMikelV sorry to ping you, I fear nobody is seeing this post, since OP did not flag it as Support

That’s ok - it’s been on my list to get back to, I just hadn’t gotten there yet. :blush:

Thanks for digging into the code! (For the record, that’s C# code with the .NET framework, mono is the tool that lets .NET run on Linux/MacOS.)

From what version did you update? If it’s … or newer, you could try downgrading and see if the error goes away. That could tell us if it’s an issue stored in the database or just in the code.

Thanks, and sorry again for pinging, I noticed there are multiple open threads in the forum…

Unfortunately I migrated from previous beta to, and now I am on the new beta
In any case, the first post was in January and predates

OK - that lines up with what I’m seeing in the code where the “duplicate file” is being found in the database.

As a workaround, I’m guessing we can determine which backup version contains the duplicate and delete it, which should make the error go away.

Of course that’s not a solution for whatever caused the problem in the first place…

Do you have access to an SQLite database reader? :wink: If so, running this against the job database might turn something up:

    ""H"".""Hash"" AS ""MetablocklistHash""
        ""F"".""Hash"" AS ""FirstMetaBlockHash"",
        ""C"".""BlocksetID"" AS ""MetaBlocksetID""
        ""FilesetEntry"" A, 
        ""File"" B, 
        ""Metadataset"" C, 
        ""Blockset"" D,
        ""BlocksetEntry"" E,
        ""Block"" F
        ""A"".""FileID"" = ""B"".""ID"" 
        AND ""B"".""MetadataID"" = ""C"".""ID"" 
        AND ""C"".""BlocksetID"" = ""D"".""ID"" 
        AND ""E"".""BlocksetID"" = ""C"".""BlocksetID""
        AND ""E"".""BlockID"" = ""F"".""ID""
        AND ""E"".""Index"" = 0
        AND (""B"".""BlocksetID"" = ? OR ""B"".""BlocksetID"" = ?) 
        AND ""A"".""FilesetID"" = ?
    ) G
   ""BlocklistHash"" H
   ""H"".""BlocksetID"" = ""G"".""MetaBlocksetID""
   ""G"".""Path"", ""H"".""Index""

Thank you. I used sqlitebrowser on my Duplicati backup database. I had to substitute “” with " in your script. The result, however, is

0 Rows returned

Did you substitute any of the ? in the SQL with anything?

Each of those corresponds (in order) to a parameter at the end of the SQL definition. The trick is knowing what the parameter value is when the error is happening.
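
If I’m reading the source right (this is my guess, so treat it as a sketch), the two BlocksetID placeholders are the special folder and symlink markers (-100 and -200, I believe) and the last one is the ID of the fileset (backup version) being written. A simplified version with example values filled in would look something like this - the FilesetID of 1 is just an example, the real IDs are in the Fileset table:

    -- Sketch with guessed values: -100 / -200 for the folder and symlink
    -- BlocksetID markers, and 1 as an example FilesetID.
    SELECT "B"."Path"
    FROM "FilesetEntry" A, "File" B
    WHERE "A"."FileID" = "B"."ID"
      AND ("B"."BlocksetID" = -100 OR "B"."BlocksetID" = -200)
      AND "A"."FilesetID" = 1
    ORDER BY "B"."Path";

Any path showing up twice in that output would match the consecutive-rows check in the code.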

I recall somebody updating the error handler to allow printing the parameter values, but I don’t know if it is in regular releases yet.

Log sql variables #3314 has been in canary for a while, and it should be in the recent experimental and beta. Adding the --log-file option with --log-file-log-level=Profiling would get you a query that’s filled in; however, I wonder if the code where the error occurs might be trying to use the SQL query to make a dlist file, and disliking what it got. If it made a backup anyway, it would be on the “Restore from” dropdown as number 0.

Using --log-file-log-level=Verbose is rather noisy (but less so than Profiling), and might give an idea about how paths seemingly get picked up twice, unless somehow it’s in the imagination of the SQL query.

Example output where I updated a file date. Maybe you’ll be able to see something being noticed twice…

2018-12-03 18:24:46 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-IncludingPath]: Including path as no filters matched: C:\BackThisUp\test.txt
2018-12-03 18:24:47 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-IncludingPath]: Including path as no filters matched: C:\BackThisUp\test.txt
2018-12-03 18:24:47 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.FilePreFilterProcess.FileEntry-CheckFileForChanges]: Checking file for changes C:\BackThisUp\test.txt, new: False, timestamp changed: True, size changed: False, metadatachanged: True, 12/3/2018 11:24:16 PM vs 11/30/2018 1:01:21 AM
2018-12-03 18:24:50 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.FileBlockProcessor.FileEntry-FileMetadataChanged]: File has only metadata changes C:\BackThisUp\test.txt

Since I do not know any SQL, I thought it was a ready-to-run script. Could you point me to a basic reference on SQL?

Actually, you got the SQL right; in this case it’s the C# variables that tripped you up. :-)

The best way to get what we need is to use @ts678’s suggestion and add the --log-file=[path] and --log-level=profiling (or verbose) parameters.

That should generate a file at [path] with more detailed information - including actual SQL commands with parameter values (not just placeholders).

That SQL can then be run against the database like you already did, and should provide results more like what we expect.

I’m also getting this message since yesterday with

Warnings: [
    2018-12-09 22:15:15 +01 - [Warning-Duplicati.Library.Main.Database.LocalDatabase-DuplicatePathFound]: Duplicate path detected: D:\!,
    2018-12-09 22:15:15 +01 - [Warning-Duplicati.Library.Main.Database.LocalDatabase-DuplicatePathFound]: Duplicate path detected: D:\!,
    2018-12-09 22:15:15 +01 - [Warning-Duplicati.Library.Main.Database.LocalDatabase-DuplicatePathFound]: Duplicate path detected: D:\!,
    2018-12-09 22:15:15 +01 - [Warning-Duplicati.Library.Main.Database.LocalDatabase-DuplicatePathFound]: Duplicate path detected: D:\!,
    2018-12-09 22:15:15 +01 - [Warning-Duplicati.Library.Main.Database.LocalDatabase-DuplicatePathFound]: Duplicate path detected: D:\!

It’s a backup job set to back up the whole disk from the root (obviously :wink: ), with just the hidden Windows system directories $RECYCLE.BIN and System Volume Information excluded.

No soft or hard links on this disk; it’s just a data disk with a bunch of document folders on it.

See the actual error message in the enclosed textfile:
err.zip (1.4 KB)

I have an identically set up backup job backing up to an off-site box; this job throws no errors.

Hmm… I don’t recognize the error, but I do recall having an issue trying to back up the root of a drive many versions ago. I wonder if that has crept back in.

Did you update from or newer such that a test downgrade is an option?

Oh, and you mentioned a second system doing the same thing without errors - what differences are there in that system? I’m thinking of things like:

  • different file system
  • running as different user
  • snapshot policy
  • symlink policy (though I know you said there weren’t any, but who knows what Windows might do when you’re not looking)
  • exclusion / inclusion filter differences

Lastly, these appear to be warning messages, not errors, so that should mean Duplicati is handling them and the backup completes.

Have you tried a test restore to see if things are actually getting backed up?

I have two identical jobs; one has a local destination on a LAN server running Debian 9 + Minio. The second job is copied from the first; the only change is the destination being an off-site server with an identical setup, Debian 9 + Minio.

  • The source is the same, same hard drive, same selection, same exclusions, 100% same source settings.
  • Same snapshot settings. 100% same option settings.
  • Same user since both jobs run on the same machine, same instance of Duplicati.

The destinations are also identical Linux boxes running Minio. The only differences are that the local one is a virtual machine while the off-site one is a physical machine, and copying runs through an SSH tunnel. But when logged in to Linux they are identically set up.

How the two jobs ended up different, one reporting this warning and the other one happy… I really do not know. Maybe a job stopped mid-run at some point?

So I tried deleting the database and starting a repair job. But after it had run 20-30 hours (it was still running; I checked the live log now and then) I accidentally stopped the job (rebooted the source computer for other reasons) and… I said **** (hehehe), deleted the database and all the destination files, and restarted the job from scratch. That is running now. EDIT: Job finished. No more errors :slight_smile:

Hmm… I don’t recognize the error, but I do recall having an issue trying to back up the root of a drive many versions ago. I wonder if that has crept back in.

I had an odd error restoring stuff from a root drive backup, quite some time ago. I think it was some ACL oddities. But this is different.

Did you update from or newer such that a test downgrade is an option?

I upgraded from the previous beta (what was that version number?), no canaries. I don’t dare try downgrading on this system to hunt for what went wrong, sorry.

Glad to hear a database recreate resolved the issue for you!

The previous beta was and, yes, downgrading back to it isn’t recommended.

Recreating the database is not an option for me.

Since the offending path in my case is “/etc/rc0.d/”, I excluded it from the backup. The next backup indeed shows no warning, so I could use this as a workaround.
However, using sqlitebrowser and launching the command
SELECT * FROM File WHERE Path IS "/etc/rc0.d/"
I still get some references:

ID Path BlocksetID MetadataID
“1232385” “/etc/rc0.d/” “-100” “399173”
“1253005” “/etc/rc0.d/” “-100” “410398”
“1565760” “/etc/rc0.d/” “-100” “616285”

Is this normal, @JonMikelV? I already verified in the past that adding “/etc/rc0.d/” back to the backup would make the warning emerge again.


I suspect what you’re seeing in the database are past successful backups of the folder itself, but not any contents.

It’s not something I would worry about. If you’ve got a retention plan they’ll probably get cleaned up over time; otherwise you can run a command to explicitly remove them from your backup (if you really want them gone).
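
If you want to double-check, a query along these lines (my sketch, assuming the Fileset table’s Timestamp column is stored as Unix seconds, which I believe it is) should show which backup versions still reference those rows:

    -- Sketch: list the filesets (backup versions) that still reference
    -- File rows for the folder, with their backup timestamps.
    SELECT "FS"."ID" AS "FilesetID",
           datetime("FS"."Timestamp", 'unixepoch') AS "BackupTime",
           "F"."ID" AS "FileID"
    FROM "Fileset" FS, "FilesetEntry" FE, "File" F
    WHERE "FE"."FilesetID" = "FS"."ID"
      AND "FE"."FileID" = "F"."ID"
      AND "F"."Path" = '/etc/rc0.d/'
    ORDER BY "FS"."Timestamp";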

My comment was more a contribution to debugging this issue.

So, after doing backups without any warnings for some days, I added the culprit “/etc/rc0.d/” back to the backup. To my surprise, no warning has appeared yet. I re-ran
SELECT * FROM File WHERE Path IS "/etc/rc0.d/"
and got the same result as above, plus another entry (again with BlocksetID=-100).

So, at least in my case, the problem has been solved by simply removing the offending path from the backup, running the job a few times, then adding the offending path again. No need to recreate the database. I have no clue why the warning arose in the first place.

Thanks for the input - and the process you went through to “get things working”. (I particularly like how it’s a variant of what I said I would do - What does it mean to report a duplicate path. :slight_smile: )

I wonder if this is an issue with folder metadata changing during the backup.

Oops, that’s exactly what you described! My bad - the first time I tried, I excluded it for only one run, then included it again, and that did not solve the issue.

Nothing to apologize for, I just found it interesting - “great minds” and all that. :wink: