Millions of files on NFS

Hello,
I am using Duplicati to back up millions of small files on NFS, and the backup takes days to complete.
CPU, memory and network usage on the Duplicati Linux server are all very low, so they do not seem to be the bottleneck.
The files do not change much, but scanning them each time takes a long time. Is there any fine-tuning I can do at the Duplicati level to increase performance?
Thanks,
Yariv
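
(A quick way to check whether the filesystem walk itself is the slow part, independent of anything Duplicati does, is to time a plain metadata-only traversal of the same source; the mount point below is a placeholder:)

```bash
# Time a metadata-only walk over the NFS source, outside of Duplicati.
# If this alone takes many hours, per-file metadata latency on the NFS
# mount is the bottleneck rather than Duplicati's own processing.
time find /mnt/nfs/data -type f -printf '%T@ %s\n' | wc -l
```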

If this were a Windows machine, I’d suggest using the --usn-policy option.
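
For reference, on a Windows machine it would typically be passed as an advanced option on the backup job or on the command line, roughly like this (the target URL and source path are placeholders; check the built-in option help for the exact accepted values):

```
Duplicati.CommandLine.exe backup <target-url> "C:\Data" --usn-policy=auto
```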

Unfortunately I don’t think an equivalent exists for Linux, let alone for remote file sources like NFS. I’m not sure there is any option other than for Duplicati to walk the whole filesystem at the start of each backup.

Maybe someone else has some ideas…

Unfortunately :slight_smile: it’s Linux, so I guess not much can be done…

Hello

While Duplicati can do remote backups, it is designed more as a tool for backing up the local computer, so the workaround could be to set up Duplicati on the NFS server itself.
Nonetheless, there seem to be problems with .NET and NFS:

Thank you, I do not have access to install anything on the server side.
Maybe there are NFS mount tweaks that would be more suitable for Duplicati?

Maybe, but there is no info about NFS tweaks in the Duplicati documentation, so I'd advise you to read the GitHub issue I linked in the previous post and see what useful information you can take from it.

Look for whatever helps you walk through lots of files and efficiently check the time attributes on each.
The nfs man page has hints; in particular, make sure that you're taking advantage of NFS attribute caching.
nfsstat, nfsiostat, Optimizing NFS Performance, and other Internet resources might also be helpful to you.
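
As a rough sketch of the kind of attribute-caching tweak meant here (mount point, export and values are illustrative placeholders; check nfs(5) and test before changing a production mount):

```bash
# Remount the backup source with a longer attribute cache (actimeo) and
# without close-to-open consistency checks (nocto), so that repeated
# stat()/GETATTR lookups during the scan can be answered from the client
# cache instead of costing a server round trip per file.
sudo umount /mnt/nfs/data
sudo mount -t nfs -o actimeo=60,nocto server:/export/data /mnt/nfs/data
```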

My question is not about usage but about latency, e.g. does the backup hit the NFS server with millions of metadata queries?

Actually getting statistics might answer that (along with help from the Internet); then the challenge is avoiding it.
Duplicati would almost certainly run faster and more reliably on local files, but that's not your situation.
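
A hedged way to get those statistics, assuming the NFS client tools are installed (paths and the mount point are placeholders):

```bash
# Snapshot client-side NFS RPC counters before and after a backup run.
# A jump of roughly one GETATTR/LOOKUP/ACCESS per source file confirms
# that the scan is making a server round trip for (at least) every file.
nfsstat -c > /tmp/nfsstat-before.txt
# ... run the Duplicati backup job ...
nfsstat -c > /tmp/nfsstat-after.txt
diff /tmp/nfsstat-before.txt /tmp/nfsstat-after.txt

# nfsiostat can also show per-mount operation rates and RTTs while the
# scan is running (5-second sampling interval here):
nfsiostat 5 /mnt/nfs/data
```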