What does it mean to report a duplicate path

I’ve also been getting this message since yesterday with 2.0.4.5

Warnings: [
    2018-12-09 22:15:15 +01 - [Warning-Duplicati.Library.Main.Database.LocalDatabase-DuplicatePathFound]: Duplicate path detected: D:\!,
    2018-12-09 22:15:15 +01 - [Warning-Duplicati.Library.Main.Database.LocalDatabase-DuplicatePathFound]: Duplicate path detected: D:\!,
    2018-12-09 22:15:15 +01 - [Warning-Duplicati.Library.Main.Database.LocalDatabase-DuplicatePathFound]: Duplicate path detected: D:\!,
    2018-12-09 22:15:15 +01 - [Warning-Duplicati.Library.Main.Database.LocalDatabase-DuplicatePathFound]: Duplicate path detected: D:\!,
    2018-12-09 22:15:15 +01 - [Warning-Duplicati.Library.Main.Database.LocalDatabase-DuplicatePathFound]: Duplicate path detected: D:\!
]

It’s a backup job set to back up the whole disk from the root (obviously :wink: ), with just the hidden Windows system directories $RECYCLE.BIN and System Volume Information excluded.

No soft or hard links on this disk; it’s just a data disk with a bunch of document folders on it.

See the actual error message in the enclosed text file:
err.zip (1.4 KB)

EDIT:
I have an identically set up backup job backing up to an off-site box; this job throws no errors.

Hmm… I don’t recognize the error, but I do recall having an issue trying to back up the root of a drive many versions ago. I wonder if that has crept back in.

Did you update from 2.0.3.12 or newer such that a test downgrade is an option?

Oh, and you mentioned a second system doing the same thing without errors - what differences are there in that system? I’m thinking of things like:

  • different file system
  • running as different user
  • snapshot policy
  • symlink policy (though I know you said there weren’t any, but who knows what Windows might do when you’re not looking)
  • exclusion / inclusion filter differences

Lastly, these appear to be warning messages, not errors, so that should mean Duplicati is handling them and the backup completes.

Have you tried a test restore to see if things are actually getting backed up?

I have two identical jobs: one has a local destination on a LAN server running Debian 9 + Minio. The second job is copied from the first; the only change is that the destination is an off-site server with an identical Debian 9 + Minio setup.

  • The source is the same, same hard drive, same selection, same exclusions, 100% same source settings.
  • Same snapshot settings. 100% same option settings.
  • Same user since both jobs run on the same machine, same instance of Duplicati.

The destinations are also identical Linux boxes running Minio. The only difference is that the local one is a virtual machine and the off-site one is a physical machine, with copying running through an SSH tunnel. But when logged in to Linux they are identically set up.

How the two jobs ended up different, one reporting this warning and the other one happy… I really do not know. A job stopped mid-run at some point, maybe?

So I tried deleting the database and starting a repair job. But after it had run 20-30 hours (it was still running; I checked the live log now and then) I accidentally stopped the job (rebooted the source computer for other reasons) and… I said **** (hehehe), deleted the database and all the destination files, and restarted the job from scratch. That is running now. EDIT: Job finished. No more errors :slight_smile:

Hmm… I don’t recognize the error, but I do recall having an issue trying to back up the root of a drive many versions ago. I wonder if that has crept back in.

I had an odd error restoring stuff from a root drive backup, quite some time ago. I think it was some ACL oddities. But this is different.

Did you update from 2.0.3.12 or newer such that a test downgrade is an option?

I upgraded from the previous beta (what was that version number?), no canaries. I don’t dare try downgrading on this system to hunt for what went wrong, sorry.

Glad to hear a database recreate resolved the issue for you!

The previous beta was 2.0.3.3 and, yes, downgrading back to it isn’t recommended.

Recreating the database is not an option for me.

Since the offending path in my case is “/etc/rc0.d/” I excluded it from the backup. The next backup indeed shows no warning, and I could use this as a workaround.
However, using sqlitebrowser and launching the command
SELECT * FROM File WHERE Path IS "/etc/rc0.d/"
I still get some references:

ID        Path          BlocksetID  MetadataID
1232385   /etc/rc0.d/   -100        399173
1253005   /etc/rc0.d/   -100        410398
1565760   /etc/rc0.d/   -100        616285
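
If it helps the debugging, a query along these lines should also show which backup versions still reference that folder entry (assuming the standard 2.0.4.x local-database layout with FilesetEntry and Fileset tables; BlocksetID = -100 seems to be how folder entries are marked):

-- Assumes FilesetEntry links a backup version (Fileset) to its File rows,
-- and that Fileset.Timestamp is Unix seconds.
SELECT fs.ID AS FilesetID,
       datetime(fs.Timestamp, 'unixepoch') AS BackupTime,
       f.ID AS FileID
FROM File f
JOIN FilesetEntry fe ON fe.FileID = f.ID
JOIN Fileset fs ON fs.ID = fe.FilesetID
WHERE f.Path = '/etc/rc0.d/'
ORDER BY fs.Timestamp;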

Is this normal, @JonMikelV? I already verified in the past that adding “/etc/rc0.d/” back to the backup makes the warning emerge again.

Thanks

I suspect what you’re seeing in the database are past successful backups of the folder itself, but not any contents.

It’s not something I would worry about. If you’ve got a retention plan they’ll probably get cleaned up over time; otherwise you can run a command to explicitly remove them from your backup (if you really want them gone).

My comment was more a contribution to debugging this issue.

So, after doing backups without any warnings for some days, I added the culprit “/etc/rc0.d/” back to the backup. To my surprise, no warning has appeared yet. I reran
SELECT * FROM File WHERE Path IS "/etc/rc0.d/"
and got the same result as above, plus another entry (again with BlocksetID=-100).

So, at least in my case, the problem was solved by simply removing the offending path from the backup, running the job a few times, then adding the offending path back. No need to recreate the database. I have no clue why the warning arose in the first place.
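
For anyone who wants to double-check before and after, a query along these lines should list any path that is recorded more than once inside a single backup version, which (as far as I understand it) is the situation the DuplicatePathFound warning complains about. It assumes the 2.0.4.x FilesetEntry/File layout:

-- Lists paths that appear more than once in the same fileset (backup version).
SELECT fe.FilesetID, f.Path, COUNT(*) AS Occurrences
FROM FilesetEntry fe
JOIN File f ON f.ID = fe.FileID
GROUP BY fe.FilesetID, f.Path
HAVING COUNT(*) > 1;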

Thanks for the input - and the process you went through to “get things working”. (I particularly like how it’s a variant of what I said I would do - What does it mean to report a duplicate path. :slight_smile: )

I wonder if this is an issue with folder metadata changing during the backup.

Oops, that’s exactly what you described! My bad, the first time I tried I excluded it for only one run, then included it again, and that did not solve the issue.

Nothing to apologize for, I just found it interesting - “great minds” and all that. :wink:

Hi All

I had this error recently, and some posts in this thread, especially the SQL command, helped resolve it.
Version 2.0.4.5_beta_2018-11-28

It started to happen after a failed backup due to a back-end disconnection (the local USB drive was pulled out by a user during the backup). The next and subsequent backups completed but gave the warning.

On running the SQL command and entering the fileset number for the failed backups, I could see the duplicate directory. In this case the last column in the query, MetaBlockListHash, had data, whereas all other folders/symlinks had a null value in this field.
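
Something along these lines should reproduce that view; I’m assuming the standard 2.0.4.x tables here (Metadataset, BlocklistHash), and 123 is just a placeholder for the fileset number of the failed backup:

-- 123 is a placeholder fileset ID; LEFT JOIN so folders whose metadata
-- blockset has no blocklist rows still show up (with a NULL hash).
SELECT f.Path, f.BlocksetID, blh.Hash AS MetaBlockListHash
FROM FilesetEntry fe
JOIN File f ON f.ID = fe.FileID
JOIN Metadataset m ON m.ID = f.MetadataID
LEFT JOIN BlocklistHash blh ON blh.BlocksetID = m.BlocksetID
WHERE fe.FilesetID = 123
ORDER BY f.Path;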

As this is a production server, I carried out my routine of deleting the versions with the warnings/errors and rerunning the backup. This deleted the temporary filesets from the failed backup and completed without warning. The following backup also completed without warning.

I still don’t know what caused this, but I suspect the backup was processing that folder when the destination drive was disconnected.