FileAccessError while processing files

Has anybody else seen this issue? This is very concerning from a backup integrity standpoint.

Yes, it is… the most obvious cause is lack of permissions to the source data. Is the Duplicati process running under a UID/GID that has at least read access to all the files in your mount point?

For what it’s worth, it’s usually more efficient to back up data AT the source. Running Duplicati on your NAS (either native Synology package, or a docker container) may give you better results. Or maybe not if you have a really low-powered NAS. Just something to consider.

Thanks for the reply!

The permissions are definitely correct, and the files are largely processed correctly. When there are errors, it’s not all of the files, and the files that are affected are not consistent.

Running Duplicati at the source, on the NAS, won’t solve the issue – the issue would still be present, just potentially masked. It also removes the Duplicati VM itself from the rest of my lab infrastructure, making it an outlier for the rest of the automation processes that I have.

I’m wondering if this is OS specific, and if it’s perhaps a Mono version issue.

I don’t think that’s necessarily true since you wouldn’t be accessing the data through the Samba share. But I understand your other point and how you want to keep Duplicati where it is.

Have you tried looking at the Synology Log Center to view events related to file access? Curious if you see any problems there that might help shed light on this.

That’s a great suggestion to look at the Syno logs, however there’s nothing untoward there. I also don’t have any trouble accessing those files from the box that has the share mounted, as the duplicati user.

Does anybody else have any insight into this issue? It’s persisting even after the latest Canary release.

There have been several reported issues with SMB/CIFS shares, both with Duplicati and with other applications. Can you try adding cache=none to the mount options for your share in your fstab? See below for an example:
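A minimal sketch of what such an fstab entry might look like; the host name, share name, mount point, and credentials file path below are all placeholders, so adjust them for your setup:

```shell
# /etc/fstab entry for a CIFS share with client-side caching disabled.
# Host, share, mount point, and credentials path are placeholders.
//nas.example.lan/backup  /mnt/backup  cifs  credentials=/etc/samba/creds,uid=duplicati,cache=none  0  0
```

After editing fstab, unmount and remount the share (e.g. `sudo umount /mnt/backup && sudo mount /mnt/backup`) so the new option takes effect.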

It might be worth taking a look yourself. “Interrupted system call” is a specific thing that you could chase with strace (I think) to see what sort of interrupt is happening and which system call is suffering. I think there are few system calls that aren’t made through mono, so this might be a mono bug…

You could also possibly confirm that the issue is purely in FileEnumerationProcess by trying to avoid actual backup in various ways. An easy one is to do a test backup on a small file set without change. Similar testing on non-Samba area would confirm whether or not the problem only occurs on Samba.
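As a rough sketch of the strace idea above (the process lookup by `pidof mono` is an assumption; adjust it for however Duplicati runs on your machine), attaching to the running process and then searching the trace for interrupted calls might look like:

```shell
# Attach to the running Duplicati/mono process and log all syscalls.
# "pidof mono" is a guess at the process name — adjust for your install.
sudo strace -f -p "$(pidof mono)" -o /tmp/duplicati-strace.log

# After reproducing the error, look for syscalls that returned EINTR
# ("Interrupted system call") to see which call is being interrupted:
grep -n 'EINTR' /tmp/duplicati-strace.log
```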


Thanks for the link. I gave that a shot, to no avail – the issue persists.


I may have to delve into strace’ing this, as you mentioned. I have run the backup on a set with no changes, and the problem still occurred, so it’s definitely occurring in FileEnumerationProcess. I may also need to duplicate the data to a non-Samba location and try that as well.

Same/similar issue for me with the Ubuntu / & versions. Different files each time I run the backup. Source files on the local HDD are fine, only source files on the Synology NAS SMB share exhibit this issue.


N.B. same job run from Windows 10 / has no issues

Hey all! Just wanted to confirm/++ from my end that this issue is (severely, sadly) affecting backups on my end.

System Info:

  • Client HW: Standard Intel + NVIDIA affair, updated drivers, etc.
  • Client OS: Manjaro, bone-stock install, latest updates as of today (via pacman -Syyu), fresh boot
  • Client SW: Duplicati v.
  • NAS: FreeNAS 11.3u2 with SMB, permissions seem fine


Attempting to back up with the command:

duplicati-cli backup "onedrivev2://<path>" "/mnt/<path>" --passphrase="<pass>" --authid="<O365-token>"


When backing up, hundreds of the following error messages are seen:

Error reported while accessing file: /mnt/Media/<file_path>.mp4 => Interrupted system call

Which files trigger the error appears to be inconsistent from run to run.

Verification of error:

At the end of the backup, the final success message (“350 files backed up”) reports far fewer than the actual number of files in the folder (~600).

Hope this report helps! I’m seriously worried about the validity of all of my backups, and am more than willing to help out with troubleshooting this in any way needed.

Welcome to the forum @diver-down

What technical skill level should this target? Truly tracking it down may get deep.

As an easier first test, do you have enough destination space for a test backup?
If so, I’m curious for a test if Advanced option skip-metadata=true changes this.

I’m not sure if this needs all 600 of your files, but I’m also not sure that it doesn’t.
Getting an error in a simpler test case makes for smaller logs and easier debug.

Can you post more error info, e.g. from About --> Show log --> Live --> Warning
Possibly you’ll have to click on the entry in order to get the stack trace out of it…
If that doesn’t work, --log-file and --log-file-log-level=warning should catch details.
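For the CLI case in this thread, those logging options could be added to the backup command like this (the paths, passphrase, and token are placeholders carried over from the report above, and the log file location is an assumption):

```shell
# Hypothetical example: same backup command, but capturing warnings
# and stack traces to a log file for later inspection.
duplicati-cli backup "onedrivev2://<path>" "/mnt/<path>" \
  --passphrase="<pass>" \
  --authid="<O365-token>" \
  --log-file=/var/log/duplicati-backup.log \
  --log-file-log-level=warning
```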

Do you know if issue only happens when the source files are on an SMB share?

I experienced this same issue running Duplicati - in a CentOS 8 virtual machine that was trying to back up files shared to the machine via Samba. The backups would generate “Error reported while accessing file” warnings, and the live logs would show it was related to an Interrupted system call. I verified that the Duplicati user did indeed have read-write access to the files. Every time I ran the backup, the file access warning would be generated for different files, seemingly at random.

This error first occurred after I ran updates on the CentOS VM. I rolled back to a previous VM snapshot to undo the updates and the errors went away, so they were related to some package that was updated from the CentOS repos, not Duplicati itself. I tried downgrading my Samba client and mono-devel packages, but it didn’t solve the issue. Either I didn’t downgrade them far enough or the problem was caused by a different package.

In the end I worked around the issue by sharing the files to my Duplicati server via NFS instead of Samba.
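For anyone wanting to try the same workaround, an NFS mount of the share might look something like the sketch below; the host name and export path are assumptions, and the share must also be exported over NFS on the NAS side first:

```shell
# Mount the NAS export over NFS instead of CIFS (host/paths are placeholders).
sudo mount -t nfs nas.example.lan:/volume1/backup /mnt/backup

# Or, as a persistent /etc/fstab entry:
# nas.example.lan:/volume1/backup  /mnt/backup  nfs  defaults  0  0
```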

Welcome to the forum @automatyck and thanks for sharing the workaround (sorry it came to that).

However, it’d be useful to do some experiments on a system that has the issue. For example, the skip-metadata=true suggestion was (to get into technical details) a guess that Duplicati’s use of system calls to read file metadata might subject it to the usual worry that “slow” system calls may be interrupted midway.

From what I read, mono internally uses some signals for its own use. Possibly these must be avoided.
Another advanced study mentioned earlier was use of strace to see if it can name the interrupted call.

If the issue is entirely within mono interpreting code, then that’s likely beyond anything Duplicati can fix.

Not actually suggesting that you need to do this (unless you’re willing), but it’d be nice if someone does.
Interestingly, I don’t see an Issue open on this. That sometimes will get the technical experts involved…

What is always nice is a very rock-solid reproducible case that’s easy for a dev to run with a small setup. Flaky issues (which this one seems to be, from the accounts here) are a pain to track down, so a solid repro would be a good first step toward a fix.

I’m on Ubuntu 20.04.1 and have the same issue here. CIFS mount point, and I get a seemingly random (different every day) list of files or directories that hit the file access error. Browsing them in a file manager, opening them in an editor, browsing via terminal, all works. Just the backup fails; nothing else has any issue at all. The next day, the list of files and directories changes. Or some could be the same, it’s just seemingly random.

So, I went with the workaround, I changed to NFS and no errors.

I am also having this same issue.
Ubuntu 20.04 accessing Windows file shares. Exact same behavior. Every backup job I run, the files and directories that get the warning change. Sometimes it’s 18 files out of 9000, other times it’s 30 files out of 9000.
I have given the duplicati user full control permissions on the share, so it’s not a permissions issue.
I’ll mention I’ve already tried the above suggestion of skip-metadata with no change. Problem persists.

Same as the problem we are discussing in this thread?

Do you see the same type of CIFS errors logged by the kernel?

CIFS VFS: Send error in read = -4
CIFS VFS: Close unmatched open
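If you want to check for these on your own box, the kernel ring buffer (or the kernel journal on systemd systems) is the place to look, for example:

```shell
# Look for CIFS errors logged by the kernel (usually needs root).
sudo dmesg | grep -i 'CIFS VFS'

# Or, on systemd systems, search the kernel journal:
sudo journalctl -k | grep -i 'CIFS VFS'
```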

yes and yes.

I’ll say this. I tried the workaround by moving from CIFS to NFS.
However, now I’m getting errors on any file that has special Unicode characters (umlauts).
But it was 76 warnings last backup, and it was 76 warnings the next backup. So at least it’s consistent.

EDIT: I used FileBot to quickly rename all the offending files to remove the umlauts and re-ran the backup. Ran without errors.
Workaround of moving to NFS works.