What does it mean to report a duplicate path?


#5

After upgrading to 2.0.3.10_canary_2018-08-30 on macOS, I’m getting this error consistently on my home directory. The duplicate path is /Users/<name>/! That trailing exclamation point is not mine; that is what Duplicati is reporting.

I have tried removing my home directory from the backup, saving the config, and then re-adding it, but I get the same error on every backup run.


#6

I had the same thing today and had to delete and rebuild the database to get rid of the message. I am running 2.0.3.11_canary_2018-09-05 on Windows Server 2016.


#7

So it’s not version-dependent (both 2.0.3.10 and 2.0.3.11) or OS-dependent (both macOS and Windows), but it’s also not super common. Odd…


#8

2.0.3.12_canary_2018-10-23 just did the same thing.


#9

Experimental 2.0.4.1 just did it to my backup, on Linux.

[Warning-Duplicati.Library.Main.Database.LocalDatabase-DuplicatePathFound]: Duplicate path detected: /etc/rc0.d/!

It is not an option for me to delete the database and recreate it.

Below is what I see in the Restore tab: is it normal to see that folder named “/” inside rc0.d? Could it be the culprit?

I do not see it in the “Source data” tab of the job, so I do not know how to remove it.


#10

I realized that the Restore operation always shows such “ / ” folders, so this is not related to the duplicate path problem.

I tried to exclude the duplicate path from the job with either “Exclude directories whose names contain” or “Exclude folder”, applied to “/etc/rc0.d/!”, to no avail.

This is not a critical issue, but having a warning on every backup forces me to check each time whether there are other, more serious warnings.


#11

From duplicati/LocalDatabase.cs at master · duplicati/duplicati · GitHub it is clear that the “!” is not part of the directory name.

Repairing the database does not clear the warning, and I would rather avoid recreating it.


#12

So I excluded “/etc/rc0.d/” (without the trailing “!”) from the backup, and the warning went away. Then I added “/etc/rc0.d/” again and the warning came back. This is frustrating.

I tried to have a look at duplicati/LocalDatabase.cs at master · duplicati/duplicati · GitHub. I am no expert in mono, but I tried to read lines 1080 to 1100. Am I correct that the warning is thrown when two subsequent entries in the database, path and lastpath, are equal?
So, does anybody have a clue why I get the warning twice for “/etc/rc0.d/”? It would mean I have three subsequent entries with the path “/etc/rc0.d/”, but I am pretty confident I do not have symlinks to “/etc/rc0.d/” in my filesystem.

Also, I would remove the “!” on line 1092, because it confuses users.

@JonMikelV sorry to ping you, but I fear nobody is seeing this post, since the OP did not flag it as Support.


#13

That’s ok - it’s been on my list to get back to; I just hadn’t gotten there yet. :blush:

Thanks for digging into the code! (For the record, that’s C# code on the .NET Framework; Mono is the tool that lets .NET run on Linux/macOS.)

From what version did you update to 2.0.4.1? If it was 2.0.3.12 or newer, you could try downgrading to see if the error goes away. That could tell us whether the issue is stored in the database or just in the code.


#14

Thanks, and sorry again for pinging; I noticed there are multiple open threads in the forum…

Unfortunately I migrated from the previous beta, 2.0.3.3, to 2.0.4.1, and now I am on the new beta 2.0.4.5.
In any case, the first post was in January and predates 2.0.3.3.


#15

OK - that lines up with what I’m seeing in the code, where the “duplicate file” is being found in the database.

As a workaround, I’m guessing we can determine the backup version containing the duplicate and delete it, which should make the error go away.

Of course that’s not a solution for whatever caused the problem in the first place…

Do you have access to an SQLite database reader? :wink:

SQL for LIST_FOLDERS_AND_SYMLINKS
SELECT
    ""G"".""BlocksetID"",
    ""G"".""ID"",
    ""G"".""Path"",
    ""G"".""Length"",
    ""G"".""FullHash"",
    ""G"".""Lastmodified"",
    ""G"".""FirstMetaBlockHash"",
    ""H"".""Hash"" AS ""MetablocklistHash""
FROM
    (
    SELECT
        ""B"".""BlocksetID"",
        ""B"".""ID"",
        ""B"".""Path"",
        ""D"".""Length"",
        ""D"".""FullHash"",
        ""A"".""Lastmodified"",
        ""F"".""Hash"" AS ""FirstMetaBlockHash"",
        ""C"".""BlocksetID"" AS ""MetaBlocksetID""
    FROM
        ""FilesetEntry"" A, 
        ""File"" B, 
        ""Metadataset"" C, 
        ""Blockset"" D,
        ""BlocksetEntry"" E,
        ""Block"" F
    WHERE 
        ""A"".""FileID"" = ""B"".""ID"" 
        AND ""B"".""MetadataID"" = ""C"".""ID"" 
        AND ""C"".""BlocksetID"" = ""D"".""ID"" 
        AND ""E"".""BlocksetID"" = ""C"".""BlocksetID""
        AND ""E"".""BlockID"" = ""F"".""ID""
        AND ""E"".""Index"" = 0
        AND (""B"".""BlocksetID"" = ? OR ""B"".""BlocksetID"" = ?) 
        AND ""A"".""FilesetID"" = ?
    ) G
LEFT OUTER JOIN
   ""BlocklistHash"" H
ON
   ""H"".""BlocksetID"" = ""G"".""MetaBlocksetID""
ORDER BY
   ""G"".""Path"", ""H"".""Index""

#16

Thank you. I used sqlitebrowser on my Duplicati backup database. I had to substitute "" with " in your script. However, the result is:

0 Rows returned

The database has version 8 but the largest supported version is 7


#17

Did you substitute anything for the ? placeholders in the SQL?

Each of those corresponds (in order) to a parameter at the end of the SQL definition. The trick is knowing what the parameter values are when the error is happening.

I recall somebody updating the error handler to allow printing the parameter values, but I don’t know if it is in regular releases yet.
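
If you want to try filling them in by hand, here’s a rough starting point. If I’m reading the source right, the first two parameters are the sentinel blockset IDs for folders (-100) and symlinks (-200) - treat those values as an assumption to verify against LocalDatabase.cs - and the third is the ID of the backup version being written, which you can list from the database:

SQL sketch to list backup versions (assumes "Fileset"."Timestamp" is Unix seconds)
SELECT
    "ID",
    datetime("Timestamp", 'unixepoch') AS "BackupTime"
FROM
    "Fileset"
ORDER BY
    "Timestamp" DESC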


#18

Log sql variables #3314 has been in canary a while, and it should be in the recent experimental and beta. Adding the --log-file option with --log-file-log-level=Profiling would get you a query that’s filled in. However, I wonder if the code where the error occurs might be trying to use the SQL query to make a dlist file, and disliking what it got. If it made a backup anyway, it would be on the “Restore from” dropdown as number 0.

Using --log-file-log-level=Verbose is rather noisy (but less so than Profiling), and might give an idea of how paths seemingly get picked up twice, unless somehow it’s in the imagination of the SQL query.

Example output where I updated a file date. Maybe you’ll be able to see something being noticed twice…

2018-12-03 18:24:46 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-IncludingPath]: Including path as no filters matched: C:\BackThisUp\test.txt
2018-12-03 18:24:47 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-IncludingPath]: Including path as no filters matched: C:\BackThisUp\test.txt
2018-12-03 18:24:47 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.FilePreFilterProcess.FileEntry-CheckFileForChanges]: Checking file for changes C:\BackThisUp\test.txt, new: False, timestamp changed: True, size changed: False, metadatachanged: True, 12/3/2018 11:24:16 PM vs 11/30/2018 1:01:21 AM
2018-12-03 18:24:50 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.FileBlockProcessor.FileEntry-FileMetadataChanged]: File has only metadata changes C:\BackThisUp\test.txt

#19

Since I do not know any SQL, I thought it was a finished script to run as-is. Could you point me to a basic reference on SQL?


#20

Actually, you got the SQL right; in this case it’s the C# variables that tripped you up. :-)

The best way to get what we need is to use @ts678’s suggestion and add the --log-file=[path] and --log-file-log-level=Profiling (or Verbose) parameters.

That should generate a file at [path] with more detailed information - including actual SQL commands with parameter values (not just placeholders).

That SQL can then be run against the database like you already did, and should produce more expected results.
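
For illustration, a filled-in and much simplified version of the folder/symlink listing might look like the one below. The -100/-200 sentinel IDs and the FilesetID of 2 are assumptions for the example, not values from your log - the profiling log will show you the real ones:

SQL sketch of a filled-in query (illustrative values only)
SELECT
    "B"."Path"
FROM
    "FilesetEntry" "A",
    "File" "B"
WHERE
    "A"."FileID" = "B"."ID"
    AND ("B"."BlocksetID" = -100 OR "B"."BlocksetID" = -200)
    AND "A"."FilesetID" = 2
ORDER BY
    "B"."Path"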


#21

I’ve also been getting this message since yesterday with 2.0.4.5.

Warnings: [
    2018-12-09 22:15:15 +01 - [Warning-Duplicati.Library.Main.Database.LocalDatabase-DuplicatePathFound]: Duplicate path detected: D:\!,
    2018-12-09 22:15:15 +01 - [Warning-Duplicati.Library.Main.Database.LocalDatabase-DuplicatePathFound]: Duplicate path detected: D:\!,
    2018-12-09 22:15:15 +01 - [Warning-Duplicati.Library.Main.Database.LocalDatabase-DuplicatePathFound]: Duplicate path detected: D:\!,
    2018-12-09 22:15:15 +01 - [Warning-Duplicati.Library.Main.Database.LocalDatabase-DuplicatePathFound]: Duplicate path detected: D:\!,
    2018-12-09 22:15:15 +01 - [Warning-Duplicati.Library.Main.Database.LocalDatabase-DuplicatePathFound]: Duplicate path detected: D:\!
]

It’s a backup job set to back up the whole disk from the root (obviously :wink: ), with just the hidden Windows system directories $RECYCLE.BIN and System Volume Information excluded.

There are no soft or hard links on this disk; it’s just a data disk with a bunch of document folders on it.

See the actual error message in the enclosed text file:
err.zip (1.4 KB)

EDIT:
I have an identically set up backup job backing up to an off-site box; that job throws no errors.


#22

Hmm… I don’t recognize the error, but I do recall having an issue trying to back up the root of a drive many versions ago. I wonder if that has crept back in.

Did you update from 2.0.3.12 or newer such that a test downgrade is an option?

Oh, and you mentioned a second system doing the same thing without errors - what differences are there in that system? I’m thinking of things like:

  • different file system
  • running as different user
  • snapshot policy
  • symlink policy (though I know you said there weren’t any, but who knows what Windows might do when you’re not looking)
  • exclusion / inclusion filter differences

Lastly, these appear to be warning messages, not errors, so that should mean Duplicati is handling them and the backup completes.

Have you tried a test restore to see if things are actually getting backed up?


#23

I have two identical jobs: one has a local destination on a LAN server running Debian 9 + Minio. The second job is copied from the first; the only change is that the destination is an off-site server with an identical Debian 9 + Minio setup.

  • The source is the same, same hard drive, same selection, same exclusions, 100% same source settings.
  • Same snapshot settings. 100% same option settings.
  • Same user since both jobs run on the same machine, same instance of Duplicati.

The destinations are also identical Linux boxes running Minio. The only difference is that the local one is a virtual machine while the off-site one is a physical machine, and copying runs through an SSH tunnel. But when logged in to Linux they are identically set up.

How the two jobs ended up different, one reporting this warning and the other one happy, I really do not know. Maybe a job stopped mid-run at some point?

So I tried deleting the database and starting a repair job. But after it had run 20-30 hours (it was still running; I checked the live log now and then) I accidentally stopped the job (rebooted the source computer for other reasons) and… I said **** (hehehe), deleted the database and all the destination files, and restarted the job from scratch. That is running now. EDIT: Job finished. No more errors :slight_smile:

Hmm… I don’t recognize the error, but I do recall having an issue trying to back up the root of a drive many versions ago. I wonder if that has crept back in.

I had an odd error restoring stuff from a root drive backup, quite some time ago. I think it was some ACL oddities. But this is different.

Did you update from 2.0.3.12 or newer such that a test downgrade is an option?

I upgraded from the previous beta (what was that version number?), no canaries. I don’t dare try downgrading on this system to hunt for what went wrong, sorry.


#24

Glad to hear a database recreate resolved the issue for you!

The previous beta was 2.0.3.3 and, yes, downgrading back to it isn’t recommended.