Metadata error, llistxattr, error: ERANGE (34)


#1

I am backing up to a couple of USB drives in Linux, but since most of my systems run Windows I want the backup drives to be easily usable by Windows machines, so I’m using NTFS instead of ext4. The backups are running fine, but I’ve noticed a warning at the end:

Failed to process metadata for "/mnt/nas/Pictures/", storing empty metadata => Unable to access the file "/mnt/nas/Pictures" with method llistxattr, error: ERANGE (34)

This is after I back up /mnt/nas/Pictures/ to /media/drivename/folder. After doing some reading I assumed it related to extended attributes not being supported by NTFS, but what I don’t understand is why it is looking for a file called /mnt/nas/Pictures, since there’s no file there. The backup is correctly backing up everything inside that folder. Has anyone seen this particular issue before?

It doesn’t seem to be impacting backups (I’ll be trying a restore once the current job finishes), but I’d like to avoid the warning flag on every backup if it’s not something I need to be concerned with.
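Edit: for background on the error code, llistxattr(2) is the Linux call that lists a file’s extended attribute names, and it returns ERANGE when the buffer passed to it is smaller than the attribute list. A rough sketch of the usual two-call pattern (my own illustration in Python via ctypes, not Duplicati’s actual code):

```python
import ctypes
import ctypes.util
import os

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
libc.llistxattr.restype = ctypes.c_ssize_t

def list_xattrs(path):
    """List extended attribute names using the two-call llistxattr pattern."""
    # First call with a NULL buffer asks the kernel for the required size.
    size = libc.llistxattr(path.encode(), None, 0)
    if size < 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err), path)
    if size == 0:
        return []
    buf = ctypes.create_string_buffer(size)
    # If the attribute list changed between the two calls (or the reported
    # size was wrong), this second call fails with errno ERANGE (34).
    ret = libc.llistxattr(path.encode(), buf, size)
    if ret < 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err), path)
    # Names come back as NUL-separated strings packed into one buffer.
    return [n.decode() for n in buf.raw[:ret].split(b"\x00") if n]
```

As far as I can tell, if the size reported by the first call doesn’t match what the second call finds (network filesystems can be inconsistent here), the second call fails with ERANGE even though nothing is wrong with the file itself.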


#2

Do you think what you’re running into is the same as what’s described here?


#3

Thank you for replying! Yes, I read that thread, but I wasn’t sure it was the same issue: I don’t get an error for every file in the backup, just for the top-level directory of the backup source, and the error code was slightly different (ENODATA vs ERANGE). It was actually @kenkendk’s suggestion in that thread about it relating to the filesystem that got me thinking it may be because this drive is NTFS, and/or that I may not be mounting it with whatever permission mask or options Duplicati needs.

I just don’t quite understand the error since it sounds like Duplicati is looking for a file that doesn’t exist. I’m pretty sure it is generating the same error for every backup job that has that USB drive as a destination (after work today I was going to go through and check them to be sure).


#4

I think @kenkendk pushed a commit to try to stop these erroneous ‘error messages’; however, I still get them pretty much the same as before. The backups complete successfully, though.


#5

Do you recall which pull request it was? None of these look to me like they’d be related…

It’s not really the same error, but here’s a recent pull request for an issue that seemed common to NTFS destinations…


#6

Last night (I discovered it this morning) the same error appeared on a backup from the same source to Dropbox, so it has nothing to do with the destination being NTFS. I’m really confused; it must be something about the source.


#7

The latest on this error: it happens from the same sources to both NTFS and Dropbox destinations, and it does not occur on one job where the same NAS is the source (just a different share). There are no differences in permissions or ownership between the shares that I can discern; I compared them both in a shell on the Duplicati system (mounted via CIFS) and on the NAS itself.

Size of the share doesn’t make a difference either: the jobs that generate the warning range from 7GB to 133GB, and the one that doesn’t generate the warning is 50GB. All the shares have files in the root source path, whether they generate the error or not. I’ve run ‘stat’ to compare and I can’t see any differences.
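The ‘stat’ comparison I ran boils down to something like this (a quick Python sketch; the field list is just what I thought to check):

```python
import os

# Fields worth comparing across shares; st_ino and st_dev are skipped
# because they will always differ between two different mounts.
FIELDS = ("st_mode", "st_uid", "st_gid", "st_nlink")

def diff_stat(path_a, path_b):
    """Return {field: (a_value, b_value)} for every field that differs."""
    sa, sb = os.stat(path_a), os.stat(path_b)
    return {f: (getattr(sa, f), getattr(sb, f))
            for f in FIELDS if getattr(sa, f) != getattr(sb, f)}
```

It comes back empty for every pair of shares I’ve compared.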


#8

Since your 50G job works, we know the NAS itself isn’t the issue. And you already confirmed the NTFS destination isn’t the problem, since it’s also happening with Dropbox.

Are the 7G and 133G jobs using the same share? Is there any overlap in the actual source content?

I’m wondering if there’s a specific file / folder in the source that has some metadata that is confusing Duplicati…


#9

Each job is using a different share as source (I have Documents, Pictures, Software, etc.). No overlap in contents. I did go back to the logs and noticed there was additional information aside from the warning in the result:

Failed to process metadata for "/mnt/nas/Pictures/", storing empty metadata
UnixSupport.File+FileAccesException: Unable to access the file "/mnt/nas/Pictures" with method llistxattr, error: ERANGE (34)
  at UnixSupport.File.GetExtendedAttributes (System.String path, System.Boolean isSymlink, System.Boolean followSymlink) [0x00077] in <3fbec333f978484785726c089e2a43ac>:0 
  at Duplicati.Library.Snapshots.SystemIOLinux.GetMetadata (System.String file, System.Boolean isSymlink, System.Boolean followSymlink) [0x0000d] in <77daa5b4404f4a3f88c47ead1428ebeb>:0 
  at Duplicati.Library.Snapshots.NoSnapshotLinux.GetMetadata (System.String file, System.Boolean isSymlink, System.Boolean followSymlink) [0x00007] in <77daa5b4404f4a3f88c47ead1428ebeb>:0 
  at Duplicati.Library.Main.Operation.BackupHandler.GenerateMetadata (Duplicati.Library.Snapshots.ISnapshotService snapshot, System.String path, System.IO.FileAttributes attributes) [0x00027] in <118ad25945a24a3991f7b65e7a45ea1e>:0 

I see “Symlink” in the above output a few times; does that mean Duplicati is throwing a warning because it can’t get metadata from a symlink (perhaps a Windows shortcut) somewhere in the source?

Edit: I mounted one of the error-generating shares to a drive letter ( P: ) and ran:

dir /AL /S P:\

but it came back with nothing, so it appears there aren’t any symlinks (or other reparse points) on that share at least.
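For reference, a Linux-side equivalent of that check would be something along these lines (a quick sketch; os.walk doesn’t follow symlinks by default, so every entry can be tested directly):

```python
import os

def find_symlinks(root):
    """Walk a tree and collect every path that is a symlink."""
    hits = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            full = os.path.join(dirpath, name)
            if os.path.islink(full):
                hits.append(full)
    return hits
```

which should catch anything the Windows-side dir /AL might miss on the CIFS mount.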

Another quick edit: restores appear to be working fine.


#10

I thought I’d add one more thing; when I did a test restore, there were no warnings about missing metadata from the LIST or RESTORE operations. Here’s the report from a LIST:

EncryptedFiles: False
Filesets: [
Version: 0
Time: 2/21/2018 7:03:27 PM
FileCount: 35993
FileSizes: 143615249350
]
Files: [
Path: /mnt/nas/Pictures/
Sizes: []
]
MainOperation: List
VerboseOutput: False
VerboseErrors: False
ParsedResult: Success
EndTime: 2/26/2018 9:14:23 PM
BeginTime: 2/26/2018 9:14:23 PM
Duration: 00:00:00.4138420
Messages: []
Warnings: []
Errors: []

So it only seems to affect the backup operation.


#11

Just a side note in case you didn’t already know, for a more complete restore test you might want to make sure the following parameters are enabled:

--no-local-blocks (default: false)
Duplicati will attempt to use data from source files to minimize the amount of downloaded data. Use this option to skip this optimization and only use remote data.

--patch-with-local-blocks (default: false)
Enable this option to look into other files on this machine to find existing blocks. This is a fairly slow operation but can limit the size of downloads.

Do I recall correctly reading that the two jobs that have this error have different sources but in both cases the error ONLY appears with the top level of the source?


#12

Thank you, I wasn’t aware of those settings. For:

--no-local-blocks

I can see that I would want that true, so that I’m only using “remote” blocks. But I’m confused about:

--patch-with-local-blocks

because it sounds like if that is false (default), the system will not look into other files, which is what I want. If I set that to true, the system might find blocks on another destination. Incidentally for these jobs that generate the error, the destination doesn’t matter so I think the error would happen either way, but I’ll try these out and see what happens.

Correct - the error is always /mnt/nas/Folder, never showing a filename. The mount lines in fstab are all the same. I must be missing something about the sources that are different from the share that doesn’t generate the error. I’m going to try a permissions “reset” on the NAS for one of the shares to see if that makes any difference in the backup.

Edit - this is weird: I have "--patch-with-local-blocks" set to "false" (disabled), and when I ran a restore I clearly saw "patching with local blocks" in the status bar at one point. It seems to do the opposite of the option?


#13

Hmm, that’s odd. If you feel like testing can you try 0, off, or no as described here?


You are correct about it defaulting to false but if set to true it doesn’t look for blocks in another destination. What it will do is say:

“I see, you want to restore file X but I see files A, B, and C happen to have the exact same blocks so instead of downloading all the blocks for X I can just download the unique ones I need then supplement them out of the existing local files. Note that this will use less bandwidth BUT might actually take longer to do.”
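In other words, roughly this (my own illustration of the concept, not Duplicati’s actual code; the hashing details are invented for the example):

```python
import hashlib

def restore_file(block_hashes, local_blocks, download):
    """Reassemble a file from its ordered block hashes.

    block_hashes: hashes of the blocks making up the target file, in order
    local_blocks: {hash: bytes} blocks already present in other local files
    download:     callable fetching one block from the backup destination
    """
    parts = []
    for h in block_hashes:
        if h in local_blocks:
            parts.append(local_blocks[h])   # reuse local data, no download
        else:
            parts.append(download(h))       # fetch only the missing blocks
    return b"".join(parts)
```

With --patch-with-local-blocks enabled, only the hashes missing from local_blocks cost any bandwidth; with it disabled, every block is downloaded.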


#14

I actually have it set explicitly to “false”. I’m still confused by that option but I’ll have to read up on it more carefully. Ultimately it doesn’t change the backup operation so I’m more concerned with the metadata warnings.

I tried another test backup after a permissions reset and no change.

I added the verbose option and re-ran the job in a second window with the live log set to “warning” and got this slightly different warning (basically the same but I wonder if it sheds any more light). I block-quoted it to make it easier to read since I think it’s all one line otherwise. I still don’t see an actual filename here; it’s as if it is treating the share as a file.

{"ClassName":"UnixSupport.File+FileAccesException","Message":"Unable to access the file \"/mnt/nas/Pictures\" with method llistxattr, error: ERANGE (34)","Data":null,"InnerException":null,"HelpURL":null,"StackTraceString":"  at UnixSupport.File.GetExtendedAttributes (System.String path, System.Boolean isSymlink, System.Boolean followSymlink) [0x00077] in <3fbec333f978484785726c089e2a43ac>:0 \n  at Duplicati.Library.Snapshots.SystemIOLinux.GetMetadata (System.String file, System.Boolean isSymlink, System.Boolean followSymlink) [0x0000d] in <77daa5b4404f4a3f88c47ead1428ebeb>:0 \n  at Duplicati.Library.Snapshots.NoSnapshotLinux.GetMetadata (System.String file, System.Boolean isSymlink, System.Boolean followSymlink) [0x00007] in <77daa5b4404f4a3f88c47ead1428ebeb>:0 \n  at Duplicati.Library.Main.Operation.BackupHandler.GenerateMetadata (Duplicati.Library.Snapshots.ISnapshotService snapshot, System.String path, System.IO.FileAttributes attributes) [0x00027] in <118ad25945a24a3991f7b65e7a45ea1e>:0 ","RemoteStackTraceString":null,"RemoteStackIndex":0,"ExceptionMethod":null,"HResult":-2146232800,"Source":"UnixSupport"}

I tried taking all the “loose” files that were present in the root of the share and moving them to a directory and no change.

I don’t see any unusual characters in directory names at the root of the share, and the only hidden files are a Desktop.ini and Thumbs.db, which also exist on the share that doesn’t generate a warning.

I may try recreating the share to see if that makes any difference.


#15

I think you might be right on that - perhaps @kenkendk or @Pectojin might know better about how Duplicati interprets Linux mount points.


#16

For all intents and purposes Linux should be treating mounted disks as folders.

# ll /mnt
total 1
drwxr-xr-x 4 root root 4 Feb 27 00:31 external

It’s not a symlink or a file, it’s just a directory. In fact, I think you’d be unable to back up anything with Duplicati if it thought mounts were files, since your root filesystem is itself a mounted partition, so it would also be a “file”.

That being said, it looks like it’s happening on Pictures and not on the nas mount itself. What’s the output of ll /mnt/nas/? Is it possible the Pictures folder is a symlink pointing to a path that isn’t relative to the mount point?
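If you want to check all three possibilities at once, Python’s standard library wraps up the same lstat information that ll shows (just a convenience sketch):

```python
import os

def classify(path):
    """Report whether a path is a directory, a symlink, or a mount point."""
    return {
        "is_dir": os.path.isdir(path),       # follows symlinks
        "is_symlink": os.path.islink(path),  # True only for the link itself
        "is_mount": os.path.ismount(path),   # device differs from the parent
    }
```

If the share is mounted directly at that path, you’d expect is_dir and is_mount to be True and is_symlink to be False.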


#17

Here is /mnt/nas:

root@home:/# ll /mnt/nas/
total 8
drwxr-xr-x 10 root root 4096 Feb 23 07:05 ./
drwxr-xr-x 8 root root 4096 Feb 25 19:06 ../
drwxr-xr-x 2 root root 0 Feb 26 17:50 Backup/
drwxr-xr-x 2 root root 0 Jan 29 06:37 Client_Files/
drwxr-xr-x 2 root root 0 Feb 20 22:59 Documents/
drwxr-xr-x 2 root root 0 Jan 29 06:36 Music/
drwxr-xr-x 2 root root 0 Feb 27 10:45 Pictures/
drwxr-xr-x 2 root root 0 Feb 27 11:37 Restore/
drwxr-xr-x 2 root root 0 Feb 23 07:14 Software/
drwxr-xr-x 2 root root 0 Nov 1 14:28 Videos/

And the mount lines in /etc/fstab:

//<IPADDR>/Pictures /mnt/nas/Pictures cifs credentials=/root/nas.credentials,vers=1.0 0 0
//<IPADDR>/Software /mnt/nas/Software cifs credentials=/root/nas.credentials,vers=1.0 0 0

I obfuscated the LAN IPs just so they don’t turn into links here; they’re the same. It really looks like everything is being handled the same way at the share/mount level from the server side, so I’m thinking at this point it’s either something inside some of the shares, or something about the shares themselves on the NAS. I think my next easy step is to recreate the shares one at a time and see if the error follows a new share (or starts to happen on a new copy of the share that doesn’t exhibit the error).

Just for kicks, on the NAS:

drwxrwxrwx+ 1 guest guest 11K Feb 27 10:45 Pictures
drwxrwxrwx+ 1 guest guest 6.0K Feb 23 07:14 Software

and

root@NAS:/data# lsattr
---------------- ./Pictures
---------------- ./Software

I’m not seeing any glaring differences so far. I’ve compared exported backup configurations from Duplicati, and other than a larger dblock size on just one of the error-generating jobs (100MB vs 50MB; the others are all set at 50MB), they are identical in terms of anything meaningful (by that I mean the job IDs and last run times differ, but everything else is the same).

I really appreciate the suggestions here, thank you.


#18

Me neither. Everything looks good to me.

I’m kind of puzzled at this error as I’ve never seen anything like it before.

Also, fun fact: a search for Unable to access the file with method llistxattr now returns this thread as the #1 (and #2) result on both Google and DuckDuckGo for me :slight_smile:


#19

Awesome, my config has the brain teaser. :slight_smile: I will keep at it and post back if anything new comes up, just in case this comes up in the future for anyone else.


#20

I created a new share on the NAS, “Documents2”, and copied all the files from /Documents into it, then created the mount point and mounted the share on the Duplicati server (copying the exact line from /etc/fstab and editing it). Next I copied the “Documents” Duplicati job that currently generates the error, imported it, and edited it to point at Documents2; the backup ran without errors. I’ll leave it running as a test for a couple of days to make sure nothing changes, but all I can say for now is that it looks like something is different about some of the NAS shares that causes this error.