Input/output error?

I may have found a fix - I granted the Everyone group on the Windows server Modify rights to the folder shared via NFS. So far so good, and since the NFS share is locked to the IP of the machine accessing it, I’m not that concerned about the workaround.

The idea was that manual access (e.g. ls) as client root over NFS might also error if the server squashes root, i.e. treats UID 0 as an unprivileged user instead of as a superuser. If you want root to stay root, there’s typically a way to say so. Linux server information is plentiful (start at the linked Wikipedia article). Windows server info is not…
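For a Linux NFS server (not the Windows case here), the export option that keeps client root as root is no_root_squash; a minimal sketch, with a made-up export path and client range:

# /etc/exports on a Linux NFS server (illustrative path and client range)
/srv/backups 192.168.1.0/24(rw,sync,no_root_squash)

exportfs -ra   # re-export after editing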

The basic question is probably how to do user mapping from a Linux user to a (possibly different) server user.

I’m glad you have a workaround to live with server behavior. That might be configurable but I can’t say how.

The annoying thing is that the two Raspberry Pi OS machines I also use Duplicati on have no such issues, just this Fedora server - though it doesn’t surprise me, as I regularly come across bizarre issues with it, so perhaps this will be the last iteration of Fedora I use and I’ll migrate to something else next time.

If they’re running the same way as root on those, then I’m not sure how they avoid server remap of root.
If they’re running as a non-root user, server mapping might differ, but they probably remap to something.

Spoke too soon:

Failed: Input/output error
Details: System.IO.IOException: Input/output error
  at Duplicati.Library.Main.BackendManager.List () [0x00049] in <9d37c106a6af4e2db9ebdc93583bad34>:0
  at Duplicati.Library.Main.Operation.FilelistProcessor.RemoteListAnalysis (Duplicati.Library.Main.BackendManager backend, Duplicati.Library.Main.Options options, Duplicati.Library.Main.Database.LocalDatabase database, Duplicati.Library.Main.IBackendWriter log, System.Collections.Generic.IEnumerable`1[T] protectedFiles) [0x0000d] in <9d37c106a6af4e2db9ebdc93583bad34>:0
  at Duplicati.Library.Main.Operation.FilelistProcessor.VerifyRemoteList (Duplicati.Library.Main.BackendManager backend, Duplicati.Library.Main.Options options, Duplicati.Library.Main.Database.LocalDatabase database, Duplicati.Library.Main.IBackendWriter log, System.Collections.Generic.IEnumerable`1[T] protectedFiles) [0x00000] in <9d37c106a6af4e2db9ebdc93583bad34>:0
  at Duplicati.Library.Main.Operation.BackupHandler.PreBackupVerify (Duplicati.Library.Main.BackendManager backend, System.String protectedfile) [0x0011d] in <9d37c106a6af4e2db9ebdc93583bad34>:0
  at Duplicati.Library.Main.Operation.BackupHandler.RunAsync (System.String[] sources, Duplicati.Library.Utility.IFilter filter, System.Threading.CancellationToken token) [0x01042] in <9d37c106a6af4e2db9ebdc93583bad34>:0
  at CoCoL.ChannelExtensions.WaitForTaskOrThrow (System.Threading.Tasks.Task task) [0x00050] in <9a758ff4db6c48d6b3d4d0e5c2adf6d1>:0
  at Duplicati.Library.Main.Operation.BackupHandler.Run (System.String[] sources, Duplicati.Library.Utility.IFilter filter, System.Threading.CancellationToken token) [0x00009] in <9d37c106a6af4e2db9ebdc93583bad34>:0
  at Duplicati.Library.Main.Controller+<>c__DisplayClass14_0.<Backup>b__0 (Duplicati.Library.Main.BackupResults result) [0x0004b] in <9d37c106a6af4e2db9ebdc93583bad34>:0
  at Duplicati.Library.Main.Controller.RunAction[T] (T result, System.String[]& paths, Duplicati.Library.Utility.IFilter& filter, System.Action`1[T] method) [0x0011c] in <9d37c106a6af4e2db9ebdc93583bad34>:0

Is there any way to use an SMB share from a Linux Duplicati install? I could not see it as an option in the job settings. Just fed up with all this NFS nonsense.

Ok, so I switched the NFS mount at boot to an SMB mount and so far so good - but I’ve been here before.

Interestingly though, while looking at the NFS mounts on the Fedora and Raspberry Pi installs, I saw that Fedora was connecting with NFSv3 (and actually throws an NFSv4 error during boot), whereas the Raspberry Pi connects with NFSv4. Definitely need to rethink dumping Fedora, but I have so much running on that box that it’s not trivial.

Just point to the share that you mounted. Note though, that SMB can sometimes be unreliable.
Just as with NFS, there are lots of SMB versions and variations and tunings that may factor in.
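If you do go the mount route, a CIFS entry in /etc/fstab might look something like this sketch - the server address, share name, mount point, SMB version, and credentials file are all placeholders you’d adjust:

# /etc/fstab - SMB/CIFS mount at boot (sketch)
//192.168.1.30/backups /srv/smb cifs credentials=/etc/cifs-creds,vers=3.0,_netdev,nofail 0 0

# /etc/cifs-creds (restrict with chmod 600)
username=backupuser
password=secret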

I tried that but could not get any combination of destination format or credential format to work - I did read in the docs that SMB is in fact not supported as a destination, which is why I looked into and set up the SMB mount at boot.

Are these both talking about SMB? The latter says it was set up. The former says nothing worked.

SMB might have a similar user mapping challenge to NFS, assuming SMB is going into Windows.
I don’t have any Linux SMB, so all I can do is point to docs. Internet search would surely find more.

Mounting the Share

Alternatively, you might be able to get Fedora to use NFS version 4, if you think that might help any.

mount.nfs4(8) - Linux man page
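If you want to try pinning the version rather than letting it negotiate, the vers mount option is the usual way; a sketch of an fstab line, with the export path and mount point as placeholders and vers=4.1 as a guess at what the server offers:

# /etc/fstab - pin the NFS version instead of negotiating (sketch)
server:/export /srv/nfs nfs vers=4.1,_netdev,nofail 0 0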

Basically, Duplicati has neither an SMB client nor an NFS client built in; it uses the OS mechanism.
If you can get the mount to the point where access as the user Duplicati runs as works, Duplicati probably will too.
SMB, though, is sometimes less reliable than desired, especially in the face of intermittent access.
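A quick way to test that, assuming the service account is named duplicati (it is often root instead; adjust the account name and mount point to your install):

# list and write-test the mount as the same account the Duplicati service uses
sudo -u duplicati ls -la /srv/nfs
sudo -u duplicati sh -c 'touch /srv/nfs/.writetest && rm /srv/nfs/.writetest'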

Good luck.

Yes, both referred to SMB, and so far an SMB mount at boot in fstab is working.

Looks like Fedora has an issue: it does try to use NFSv4 but logs NFS4: Couldn't follow remote path and then falls back to NFSv3.

That’s the kind of message an Internet search might find more help with. I don’t have any NFS here either.

Looks like I might need to delve into finding out why NFSv4 isn’t working, because it seems SMB is giving some issues as well. Not sure why these two files in particular, because there is a file dated between those two which it doesn’t mention:

LimitedWarnings: [
    2020-08-28 07:05:27 +02 - [Warning-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-PathProcessingError]: Failed to process path: /srv/nfs/backups/huginn/huginn-backup-20200828_070007.sql,
    2020-08-28 07:05:27 +02 - [Warning-Duplicati.Library.Main.Operation.Backup.MetadataGenerator.Metadata-MetadataProcessFailed]: Failed to process metadata for "/srv/nfs/backups/huginn/huginn-backup-20200827_070006.sql", storing empty metadata
]
LimitedErrors: []
Log data:
2020-08-28 07:05:27 +02 - [Warning-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-PathProcessingError]: Failed to process path: /srv/nfs/backups/huginn/huginn-backup-20200828_070007.sql
Mono.Unix.UnixIOException: Interrupted system call [EINTR].
  at Mono.Unix.UnixMarshal.ThrowExceptionForLastError () [0x00005] in <f01ab055820c478fbd2bfa649991ba29>:0
  at Mono.Unix.UnixFileSystemInfo.GetFileSystemEntry (System.String path) [0x0000f] in <f01ab055820c478fbd2bfa649991ba29>:0
  at UnixSupport.File.GetFileType (System.String path) [0x00001] in <3a2c307381104a73ba105ad3712d666f>:0
  at Duplicati.Library.Snapshots.NoSnapshotLinux.IsBlockDevice (System.String localPath) [0x00006] in <594d27435071428ca8c4e9243bb18091>:0
  at Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess.AttributeFilter (System.String path, System.IO.FileAttributes attributes, Duplicati.Library.Snapshots.ISnapshotService snapshot, Duplicati.Library.Utility.IFilter sourcefilter, Duplicati.Library.Main.Options+HardlinkStrategy hardlinkPolicy, Duplicati.Library.Main.Options+SymlinkStrategy symlinkPolicy, System.Collections.Generic.Dictionary`2[TKey,TValue] hardlinkmap, System.IO.FileAttributes fileAttributes, Duplicati.Library.Utility.IFilter enumeratefilter, System.String[] ignorenames, System.Collections.Generic.Queue`1[T] mixinqueue) [0x00000] in <9d37c106a6af4e2db9ebdc93583bad34>:0
2020-08-28 07:05:27 +02 - [Warning-Duplicati.Library.Main.Operation.Backup.MetadataGenerator.Metadata-MetadataProcessFailed]: Failed to process metadata for "/srv/nfs/backups/huginn/huginn-backup-20200827_070006.sql", storing empty metadata
Mono.Unix.UnixIOException: Interrupted system call [EINTR].
  at Mono.Unix.UnixMarshal.ThrowExceptionForLastError () [0x00005] in <f01ab055820c478fbd2bfa649991ba29>:0
  at Mono.Unix.UnixFileSystemInfo.GetFileSystemEntry (System.String path) [0x0000f] in <f01ab055820c478fbd2bfa649991ba29>:0
  at UnixSupport.File.GetUserGroupAndPermissions (System.String path) [0x00001] in <3a2c307381104a73ba105ad3712d666f>:0
  at Duplicati.Library.Common.IO.SystemIOLinux.GetMetadata (System.String file, System.Boolean isSymlink, System.Boolean followSymlink) [0x00068] in <529b5bd32bc049799610792c1b48b8d7>:0
  at Duplicati.Library.Snapshots.NoSnapshotLinux.GetMetadata (System.String localPath, System.Boolean isSymlink, System.Boolean followSymlink) [0x00000] in <594d27435071428ca8c4e9243bb18091>:0
  at Duplicati.Library.Main.Operation.Backup.MetadataGenerator.GenerateMetadata (System.String path, System.IO.FileAttributes attributes, Duplicati.Library.Main.Options options, Duplicati.Library.Snapshots.ISnapshotService snapshot) [0x0001b] in <9d37c106a6af4e2db9ebdc93583bad34>:0

Original issue was on the backup destination side using NFS, unable to list the files. We worked on this:

System.IO.IOException: Input/output error
      at Duplicati.Library.Main.BackendManager.List () [0x00049] in <9d37c106a6af4e2db9ebdc93583bad34>:0 

New issue is on the backup source side, which I don’t think I knew was also remote. This particular issue needs some chasing down, possibly by someone who can run strace to see what’s interrupted.
EDIT: skip-metadata=true was also mentioned and is an easier test. Just set the option to true to see if it helps.

FileAccessError while processing files explains the need for investigation, and ideally a reliable test case.
You’re pretty busy now trying to get things running right, but if you can help investigate, it might help solve this.
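If you do get a chance to dig in, a rough strace sketch for catching what returns EINTR - the syscall list is a guess at what the metadata lookups use, and <PID> is the Duplicati/mono process id:

# find the Duplicati process
pgrep -af Duplicati
# attach, follow threads, timestamp each call, and log metadata-related syscalls
strace -f -tt -e trace=stat,lstat,statx,openat,getxattr -o /tmp/duplicati.strace -p <PID>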

I’ve tried another test.
Restarted both machines, server (Windows) and client (Fedora).
Wiped the backups off the remote Windows server and recreated the folder plus the NFS share.
Deleted the database for the job and started it.
After a few hours I had a completed backup.
Waited for the next scheduled backup, it failed with Input/output error again.
Dismounted/remounted share on client.
Manually started backup.
Completed successfully.

Going to leave it like that and see if the next scheduled backup fails again. If it does, remount, backup and remount again before waiting.

Quick update. The next day (yesterday) the scheduled backup worked, which was kind of annoying as it doesn’t explain why it failed today.

So I tried restarting just Duplicati and a re-run failed as well - so the issue is not Duplicati directly, but something it does (or doesn’t do) is affecting the mount.

Seems that doing:

umount /srv/nfs
mount -a

solves the problem as after that the backup is fine.

Any ideas how I can further troubleshoot this? Currently the fstab mount is set as:

192.168.1.30:/MAGGIE /srv/nfs nfs bg,_netdev,nofail,tcp,async

Which is identical to the entries I use for my two Raspberry Pi installs. I’ve also tried sync, which still fails and, even worse, makes the backup extremely slow.

You might do better searching or asking on the wider Internet, except I’m not sure how you’d set context.
I’m not 100% clear on what error you’re seeing even in this latest post. I/O error? Source or destination?
You could try using strace to see what’s going on underneath. If it’s the destination, Duplicati logs can do more.
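For the underneath part, a couple of commands that might show the state of the mount around a failure (nothing Duplicati-specific, just a sketch):

# show the negotiated NFS version and mount options
nfsstat -m
# kernel NFS client messages around the failure time
journalctl -k | grep -i nfs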

EDIT:

For a destination error, logging at the Retry level is good. For source errors, Verbose. Duplicati won’t log every read.
Trimming the backup down to a small size will also help, and most log levels will make debugging easier.
About --> Show log --> Live is good for easy cases. If it takes a while, using a log file will likely be easier.
Stack traces can add context too. For a complete backup failure, try About --> Show log --> Stored too.
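For reference, the file-logging advanced options look roughly like this (add them to the job; pick Retry or Verbose per the above, and the log path is just an example):

--log-file=/var/log/duplicati-job.log
--log-file-log-level=Retry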

I/O error as per the original post.

When it happens again I will get more logging and post it here.

So that’s a destination error on a List. One thing that might help (although it may increase the risk of a stuck system) is using an NFS hard mount instead of a soft mount. Instead of returning an error to the process, it keeps retrying.

nfs

Options supported by all versions

These options are valid to use with any NFS version.

soft / hard

Determines the recovery behavior of the NFS client after an NFS request times out. If neither option is specified (or if the hard option is specified), NFS requests are retried indefinitely. If the soft option is specified, then the NFS client fails an NFS request after retrans retransmissions have been sent, causing the NFS client to return an error to the calling application.

While looking into this, I also found an example of someone using strace to see what got an EIO error.

Bug 448479 - NFS soft mount doesn’t work as expected
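For illustration, your existing fstab line with the recovery behaviour spelled out explicitly would look like this (per the man page excerpt above, hard is already the default when neither option is given):

192.168.1.30:/MAGGIE /srv/nfs nfs bg,_netdev,nofail,tcp,async,hard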

The problem with using “hard”, which I did consider, is that if the share is not available for any reason during boot, it will get held up, and I could find no way to set a timeout for it. Maybe I’m mistaken and missing something.

Also, the mount doesn’t disappear for anything else that I can detect, only Duplicati seems to complain and there is nothing in the main system logs that I can find, which makes it all the more mysterious.

So I noticed that, like the other Linux backups, this one takes some files from the same NFS share and then adds them to the Duplicati backup. The difference is that the other machines only save a few KB of data, whereas this one with the issue handles a few GB. So I rethought that and made sure all the data to back up is on the machine itself and then gets transferred to the NFS share by Duplicati.

The usual happened: the next manual backup was fine, the subsequent scheduled one failed - well, at least it’s a lot faster now.

So I have gone ahead and added the umount/mount of the NFS share to the backup start script, tested it, and will now wait to see how the scheduled runs go.
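A minimal sketch of what such a pre-backup remount script might look like, assuming it is hooked in with Duplicati’s run-script-before-required option (the script path is hypothetical; mount /srv/nfs re-reads the fstab entry, so it is equivalent to the mount -a used earlier):

#!/bin/sh
# remount the NFS share before each run; a non-zero exit makes Duplicati skip the backup
umount /srv/nfs 2>/dev/null || true
mount /srv/nfs || exit 1

# in the job's advanced options (example path):
--run-script-before-required=/usr/local/bin/remount-nfs.sh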