Failed: Access to the path is denied

I have just started to get this error with one of my backups:

Failed: Access to the path "/backups/Private Videos/duplicati-b05a7e3973c8e4bdc9766aa95d39c5451.dblock.zip.aes" is denied.

Details: System.UnauthorizedAccessException: Access to the path "/backups/Private Videos/duplicati-b05a7e3973c8e4bdc9766aa95d39c5451.dblock.zip.aes" is denied.

at Duplicati.Library.Main.BackendManager.Delete (System.String remotename, System.Int64 size, System.Boolean synchronous) [0x0005c] in <ae134c5a9abb455eb7f06c134d211773>:0

at Duplicati.Library.Main.Operation.FilelistProcessor.RemoteListAnalysis (Duplicati.Library.Main.BackendManager backend, Duplicati.Library.Main.Options options, Duplicati.Library.Main.Database.LocalDatabase database, Duplicati.Library.Main.IBackendWriter log, System.String protectedfile) [0x00587] in <ae134c5a9abb455eb7f06c134d211773>:0

at Duplicati.Library.Main.Operation.FilelistProcessor.VerifyRemoteList (Duplicati.Library.Main.BackendManager backend, Duplicati.Library.Main.Options options, Duplicati.Library.Main.Database.LocalDatabase database, Duplicati.Library.Main.IBackendWriter log, System.String protectedfile) [0x00000] in <ae134c5a9abb455eb7f06c134d211773>:0

at Duplicati.Library.Main.Operation.BackupHandler.PreBackupVerify (Duplicati.Library.Main.BackendManager backend, System.String protectedfile) [0x000fd] in <ae134c5a9abb455eb7f06c134d211773>:0

at Duplicati.Library.Main.Operation.BackupHandler.Run (System.String[] sources, Duplicati.Library.Utility.IFilter filter) [0x008e0] in <ae134c5a9abb455eb7f06c134d211773>:0

at Duplicati.Library.Main.Controller+<>c__DisplayClass17_0.<Backup>b__0 (Duplicati.Library.Main.BackupResults result) [0x00036] in <ae134c5a9abb455eb7f06c134d211773>:0

at Duplicati.Library.Main.Controller.RunAction[T] (T result, System.String[]& paths, Duplicati.Library.Utility.IFilter& filter, System.Action`1[T] method) [0x00072] in <ae134c5a9abb455eb7f06c134d211773>:0

Don't read too much into the location (lol), it is just vids of my kids! Anyway, whether I manually run the backup or the schedule runs it, I keep getting the error. The file exists and has the same permissions as all the other files.

Version - Duplicati 2.0.3.3_beta_2018-04-02 - in an unRAID Docker container
Storage - Netgear ReadyNAS using NTS file system

All other backup jobs seem OK.

Any help please?

Can anyone help with this one?

I'm not certain I can help, but let me give it a try. This seems to be the first case in the forum of this exact issue, and even generalizing it only found posts that weren't really helpful, so this may take some searching.

For ReadyNAS, did you mean NTFS? That itself has quite a complex access control list system.

In another topic, I think you said you use NFS. If so, the question of translation comes into play.

Can you think of anything that may have changed or happened in the lead-up to this problem?

Please supply some OS information and the Duplicati version.

Do other backups back up to /backups as well? Any others to Private Videos? What are the mount points?

If on Linux, possibly you start Duplicati with a /usr/bin command. If so, which command and as what user?

Can you (with Duplicati down, and you as same user Duplicati runs as) add a test file and delete it there?

It sure looks from the stack like a delete is failing, so trying to do a similar operation safely may be useful.
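
In case it helps, here is a minimal sketch of that test in Python (the folder path is copied from your error message, so adjust it if your mount differs); run it with Duplicati stopped, as the same user the Duplicati server runs as:

```python
#!/usr/bin/env python3
# Rough sketch of the suggested test: write a small file into the backup
# destination, then delete it, much like Duplicati uploads and later removes
# a dblock. Run with Duplicati stopped, as the same user its server runs as.
import os

folder = "/backups/Private Videos"   # taken from the error message; adjust if needed
test_path = os.path.join(folder, "duplicati-permission-test.tmp")

with open(test_path, "wb") as f:     # the write half
    f.write(b"permission test")
print("write OK:", test_path)

os.remove(test_path)                 # the delete half, which is what the stack trace shows failing
print("delete OK:", test_path)
```

If the write succeeds but the delete raises PermissionError, that would match what the stack trace shows.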

Thanks for replying.
It is a Netgear ReadyNAS, but it is using NFS. Nothing has changed, other than a new video being added to the backup location.

My Duplicati runs on my unRAID server and is the latest version: 2.0.3.3_beta_2018-04-02.
I have a total of 5 backups going to /backups that are all OK.
I have full write permission to the Private Video backup location.

How are you viewing permissions? Linux permissions may not be an accurate translation from NTFS, and I'm not sure if NTFS permissions are visible directly on the NAS. Possibly an SMB mount could be used. Or just try the test as previously requested. I'm not seeing anything fancy in Duplicati's delete request – yet it is failing.

I’m assuming the NTS file system reference in the original post meant NTFS.

I suppose I will assume that /backups is a single mount, with Duplicati adding subfolders onto a shared area.

One reason I’m asking about users and how you start Duplicati is to get clues if Duplicati is running as root, and by Duplicati I mean its server component, not the web browser (if in use), or the person that’s driving it.

While you personally have full permission, that doesn’t mean Duplicati does too. Still, this all USED TO work.

Note that on both Linux/UNIX and Windows, write permission does not automatically imply delete permission.

Is this a “sticky” failure? Can you say when in the sequence (e.g. visible in the GUI) the issue is happening?

The exact reason for even trying to delete the dblock is unclear. Do you intentionally limit version retention?
Unfortunately, the dblock name says very little (unlike dlist), but you could also check manually, e.g. on age.

Possibly also, something happened shortly before this began, and Duplicati is trying to clean up some mess, probably from a dblock dated just before the problem began. If your dblock is very old, maybe it aged away.

There is also a manually or automatically run compact operation that would do deletions (and appear in log).

If you can verify that your storage is actually working correctly, then sometimes the go-to cure is to do repair which is available from the commandline and the web UI. There are more specialized tools, if it’s not enough.

Sorry, that was a typo; the NAS is configured to use NFS file permissions. It was using SMB a few weeks ago, but I was getting a different error (which I think you have seen). I then switched to NFS and it had been running fine until this error.

Backup is a single mount.

Duplicati is running with root permissions.

No permissions have been changed. There is a user that I created that has full access to the /backups share. This connection is used to mount the share to unRAID (which presents the storage to Duplicati). Again, it used to work, and still does for the other backups.

I can write and delete a file and folder in the Private Video folder.

The failure seems to happen almost immediately.

For retention I just used the default.

I have tried a repair and it fails with the same access error.

If Duplicati is running as root, then please try the described manual test request as the root user. Your user's access isn't directly relevant, whether or not it's full access. With NFS, root is especially a red flag because it may (unlike the local case) enjoy less access than most users, unless ReadyNAS was configured away from its apparent default of squashing root access. I think the manual shows a checkbox for that. Is it checked or not?
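
One rough way to see whether root squash is in play (this sketch assumes /backups is the NFS mount on the unRAID box, and that you run it as root): create a file as root and check who actually ends up owning it on the share.

```python
#!/usr/bin/env python3
# Rough root-squash check; run as root on the machine that mounts the NFS share.
# If root is being squashed, either the file can't be created at all, or it
# ends up owned by "nobody" (often uid 65534) instead of uid 0.
import os

path = "/backups/root-squash-test.tmp"   # assumes /backups is the NFS mount
print("effective uid:", os.geteuid())    # should print 0 if this really runs as root

with open(path, "wb") as f:
    f.write(b"x")

print("file owner uid:", os.stat(path).st_uid)   # 0 = no squash; 65534 or similar = squashed
os.remove(path)
```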

Having said that, it seems unlikely (although possible) for permissions to allow writing files but not deleting. When checking permissions, I believe it’s the folder permission that matters, not the permission on the file.
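
To make the folder-versus-file point concrete: on Linux/UNIX, unlinking a file needs write and execute permission on the directory that contains it, not write permission on the file itself. A quick way to look at both, using the paths from your error as the example:

```python
#!/usr/bin/env python3
# Show the mode bits of the containing folder and of the file Duplicati
# can't delete. On Linux/UNIX, delete (unlink) is decided by the folder's
# write + execute bits, not by the file's own permissions.
import os
import stat

folder = "/backups/Private Videos"
victim = os.path.join(folder, "duplicati-b05a7e3973c8e4bdc9766aa95d39c5451.dblock.zip.aes")

for path in (folder, victim):
    st = os.stat(path)
    print(path)
    print("  mode :", stat.filemode(st.st_mode))
    print("  owner:", st.st_uid, "group:", st.st_gid)

# Note: os.access only reflects the classic mode bits as seen by this process;
# NFS server-side rules (root squash, the ReadyNAS delete setting) can still override it.
print("folder allows delete for this user?", os.access(folder, os.W_OK | os.X_OK))
```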

Duplicati can go a long time between deletes (but not without writes) which might explain previous success. There are even people who, for security reasons, want Duplicati to do no deletes, and it appears possible.

For the repair failure, was the access error on delete again? Are there files from repair date in the folder? What’s puzzling is whether it’s just deletes that are failing here, or writes too, and for what users and use?

I suppose it's possible that there's some odd Mono bug. unRAID is Slackware-based, but I'm not sure that Slackware makes Mono available (although there are other sources, and I don't know what unRAID does).

I’m not quite sure why Duplicati is trying to delete a file so early. Possibly it’s some leftover unfinished work. You can watch the server’s doings live in a different window, under About -> Show log -> Live (pick a level). Information might be a good one. Profiling can write too much. There are also log-file and log-level options.

I'd suggest setting --no-auto-compact=true to see if the error stops happening. If so, then that isolates where the issue is happening code-wise.

It's perfectly fine to run Duplicati with auto-compact disabled; it just means your destination will never be cleaned of "aged-out" versions or overly small archives (which can happen if a backup has only a few small changes to be recorded).

--no-auto-compact
If a large number of small files are detected during a backup, or wasted space is found after deleting backups, the remote data will be compacted. Use this option to disable such automatic compacting and only compact when running the compact command.
Default value: “false”

I deleted the backup job and started again (keeping the folder) and it is working fine.
However, I now have another job with the same error! I think my machine crashed on Tuesday night, but before backups started. I rebooted the server and the jobs ran fine; now this morning one has failed with:

Failed: Access to the path "/backups/NextCloud/duplicati-b3c5b33d8f08148f89223199a14ef1c2b.dblock.zip.aes" is denied.

Details: System.UnauthorizedAccessException: Access to the path "/backups/NextCloud/duplicati-b3c5b33d8f08148f89223199a14ef1c2b.dblock.zip.aes" is denied.

at Duplicati.Library.Main.BackendManager.Delete (System.String remotename, System.Int64 size, System.Boolean synchronous) [0x00081] in <ae134c5a9abb455eb7f06c134d211773>:0

at Duplicati.Library.Main.Operation.CompactHandler+<PerformDelete>d__7.MoveNext () [0x0006f] in <ae134c5a9abb455eb7f06c134d211773>:0

at System.Collections.Generic.List`1[T].AddEnumerable (System.Collections.Generic.IEnumerable`1[T] enumerable) [0x00059] in <2943701620b54f86b436d3ffad010412>:0

at System.Collections.Generic.List`1[T].InsertRange (System.Int32 index, System.Collections.Generic.IEnumerable`1[T] collection) [0x000f4] in <2943701620b54f86b436d3ffad010412>:0

at System.Collections.Generic.List`1[T].AddRange (System.Collections.Generic.IEnumerable`1[T] collection) [0x00000] in <2943701620b54f86b436d3ffad010412>:0

at Duplicati.Library.Main.Operation.CompactHandler.DoCompact (Duplicati.Library.Main.Database.LocalDeleteDatabase db, System.Boolean hasVerifiedBackend, System.Data.IDbTransaction& transaction, Duplicati.Library.Main.BackendManager sharedBackend) [0x005ec] in <ae134c5a9abb455eb7f06c134d211773>:0

at Duplicati.Library.Main.Operation.BackupHandler.CompactIfRequired (Duplicati.Library.Main.BackendManager backend, System.Int64 lastVolumeSize) [0x00127] in <ae134c5a9abb455eb7f06c134d211773>:0

at Duplicati.Library.Main.Operation.BackupHandler.Run (System.String[] sources, Duplicati.Library.Utility.IFilter filter) [0x008e0] in <ae134c5a9abb455eb7f06c134d211773>:0

at Duplicati.Library.Main.Controller+<>c__DisplayClass17_0.<Backup>b__0 (Duplicati.Library.Main.BackupResults result) [0x00036] in <ae134c5a9abb455eb7f06c134d211773>:0

at Duplicati.Library.Main.Controller.RunAction[T] (T result, System.String[]& paths, Duplicati.Library.Utility.IFilter& filter, System.Action`1[T] method) [0x00072] in <ae134c5a9abb455eb7f06c134d211773>:0

How do I add that setting? Is it under Advanced options (I am using the GUI version)?
It's a shame, as I like the idea of Duplicati, but it doesn't seem to work in my environment. Every error has required the whole job to be deleted and started again; I am not sure I have managed a full week without an error.

When a job fails and you delete it to redo, do you delete the remote files too? If the remote, did deletion work?
I keep asking about deletions because it’s always on a deletion that you find the failure, as far as we’ve heard.

Is this the original job failing repeatedly, or has the issue begun to appear in the other jobs originally working? Actually, I see the backup now failing is a different one in NextCloud, and it’s doing a compact (which matters).

While Duplicati can go a long time (maybe forever) without a compact, doing so does mean the backup grows. Maybe you're just now running through old jobs, maybe originally done over SMB before the switch to NFS, now compacting.

The no-auto-compact option is in Advanced options, Core options section, but it's not an ideal workaround. There may be a chance that jobs begun anew under NFS will survive without it better than jobs that switched.

One new (to me) piece of information is that ReadyNAS provides a checkbox to control file deletion capability.

http://www.downloads.netgear.com/files/GDC/READYNAS-100/READYNAS_OS_6_SM_EN.pdf

See “Set Up Access Rights to Files and Folders”, then the “Grant rename and delete privileges …” checkbox.
Relevant discussions are at Google search “grant rename and delete privileges” site:community.netgear.com
Testing can also determine whether or not this is a ReadyNAS issue. If it is, there’s a better place to get help.

While I was previously cautious about the test, having you stop Duplicati and (as root) try to add and delete a test file named differently from the files already there, maybe now you should just stop Duplicati then rename the exact file it can’t delete, then rename it back, all as root. You might get two good renames or none at all…
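
A sketch of that rename test in Python, using the exact file name from your latest error (run it as root, with Duplicati stopped, and adjust the path if it differs):

```python
#!/usr/bin/env python3
# Rename the exact file Duplicati cannot delete, then rename it back,
# leaving the backup set untouched. Rename needs the same directory
# permissions as delete, so this probes the same thing safely.
import os

folder = "/backups/NextCloud"
name = "duplicati-b3c5b33d8f08148f89223199a14ef1c2b.dblock.zip.aes"
src = os.path.join(folder, name)
tmp = src + ".renametest"

os.rename(src, tmp)
print("rename OK")
os.rename(tmp, src)
print("rename back OK")
```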

If you’ve been deleting remote files while deleting the backup job, do they really delete? If not, test it by hand. When your intent is a clean wipe and restart, it would seem an ideal time to safely test various ways to delete.

Thank you so much for all your help. It turned out that it was a permission issue on my NAS. It looks like the write permission was fine, but not delete. I played around with it, and re-ran the job and it completed without error.

Thanks for everyone’s help.

Would you consider this issue resolved with the NAS permissions update?