A cronjob ensures that every day a new folder is added (log-YYYY-MM-DD_HH-MM-SS) and the oldest one is deleted. In this case: log-2025-04-29_20-30-01 was deleted the day before.
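Roughly, the rotation looks like this (a simplified sketch, not my exact script; the paths and the 7-day cutoff match my setup, the rest is illustrative):

#!/bin/sh
# create today's timestamped log folder
dest="/duplicati_backup/log-$(date +%Y-%m-%d_%H-%M-%S)"
mkdir -p "$dest"
# ... the day's logs are written into "$dest" here ...
# prune any log folders older than 7 days
find /duplicati_backup -maxdepth 1 -type d -name 'log-*' -mtime +7 -exec rm -rf {} +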
Now, I’m regularly getting the following warnings from Duplicati – sometimes daily, sometimes every couple of days:
2025-05-06 21:30:03 +00 - [Warning-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-FileAccessError]: Error reported while accessing file: /duplicati_backup/log-2025-04-29_20-30-01/ DirectoryNotFoundException: Could not find a part of the path '/duplicati_backup/log-2025-04-29_20-30-01'.
2025-05-06 21:30:03 +00 - [Warning-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-PathProcessingErrorBlockDevice]: Failed to process path: /duplicati_backup/log-2025-04-29_20-30-01/ InvalidOperationException: Path doesn't exist!
2025-05-06 21:30:03 +00 - [Warning-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-FileAccessError]: Error reported while accessing file: /duplicati_backup/log-2025-04-29_20-30-01/ DirectoryNotFoundException: Could not find a part of the path '/duplicati_backup/log-2025-04-29_20-30-01'.
2025-05-06 21:30:03 +00 - [Warning-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-PathProcessingErrorBlockDevice]: Failed to process path: /duplicati_backup/log-2025-04-29_20-30-01/ InvalidOperationException: Path doesn't exist!
2025-05-06 21:30:03 +00 - [Warning-Duplicati.Library.Main.Operation.Backup.MetadataGenerator.Metadata-MetadataProcessFailed]: Failed to process metadata for "/duplicati_backup/log-2025-04-29_20-30-01/", storing empty metadata FileAccesException: Unable to access the file "/duplicati_backup/log-2025-04-29_20-30-01" with method llistxattr, error: ENOENT (2)
2025-05-06 21:30:03 +00 - [Warning-Duplicati.Library.Main.Operation.Backup.FileBlockProcessor.FileEntry-PathProcessingFailed]: Failed to process path: /duplicati_backup/log-2025-04-29_20-30-01/ FileNotFoundException: Could not find file '/duplicati_backup/log-2025-04-29_20-30-01/'.
It seems like Duplicati is trying to back up a folder that has already been removed by the cronjob. It's always these six warning messages.
The backup process itself continues without interruption, which is fine - but I would really like to get rid of these warning messages.
My question:
Is there a way to tell Duplicati to only back up currently existing directories and ignore already deleted ones?
Or is there a way to handle such dynamic folder structures with Duplicati? I don't have this issue on any other system with changing files.
The day before what? More importantly, how long before the Duplicati backup starts?
Is this a local filesystem, or something networked where file listings can linger?
Duplicati walks the filesystem. If it doesn't see the folder, it won't attempt to back it up.
A filesystem walk is not instant though, so make sure everything has settled before the backup starts.
Duplicati will notice the folder’s disappearance, and consider it to be a deletion.
Do the warnings ever name a folder from a much earlier deletion, or only the latest delete (and how long before the backup did that delete happen)?
FileEnumerationProcess is the first step in a pipeline. It must have seen the folder.
I'm not sure it should have passed the folder on to the later pipeline stages, though.
Channel Pipeline is a technical discussion of this, if you care to get into it more deeply.
Presumably there is content written into the folder sometime after it is added.
That must take at least some time. Is the old folder deleted before or after that?
If after, could the old folder be deleted first instead, while the new folder is being prepared?
Is Duplicati started by a script a while after the delete, or does it run on its own schedule?
You could run ls -ldc --full-time duplicati_backup to get the delete time.
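For example (the output below is made up just to show the format; -c makes ls report the ctime of the duplicati_backup directory itself, which updates whenever an entry inside it is created or removed):

$ ls -ldc --full-time /duplicati_backup
drwxrwxr-x 9 user user 4096 2025-05-06 21:30:02.000000000 +0000 /duplicati_backup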
Depending on the folder's contents and your Duplicati retention settings, the whole folder scheme might not be needed. Duplicati itself can keep and restore multiple versions of your folder, though maybe your workflow prefers the subfolder plan. If file data is the same between folders, Duplicati will at least deduplicate it, but it still has to read the files again to check their contents.
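For example, something like the advanced option below on the backup job (option name per the Duplicati manual; the value is just an example) would have Duplicati keep a week's worth of versions of the live log directory, so the subfolder rotation might not be needed:

--keep-time=7D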
It's probably a stretch, but one thing to consider is that some process had the directories (or, more likely, something in them) open at the time they were deleted, and that process continued to run from then until Duplicati ran the backup. The filesystem can't actually remove the file/folder while something has it open, but it WILL block new accesses to the file/folder after the command to delete it.
If this were the case, then an "ls" of "duplicati_backup" would also still show them there at the same time that Duplicati is complaining.
I don’t think it works like that, although I hadn’t looked into the containing directory before.
Below, I have a file in the directory that's open at the time the file gets deleted (by deleting the containing directory). Programs still have the file open, but it has no name in the filesystem.
$ mkdir -p /tmp/duplicati_backup/log-2025-04-29_20-30-01
$ cd /tmp/duplicati_backup/log-2025-04-29_20-30-01
$ cat > file
^Z
[1]+ Stopped cat > file
$ tail -f file
^Z
[2]+ Stopped tail -f file
$ jobs -l
[1]- 553971 Stopped cat > file
[2]+ 553974 Stopped tail -f file
$ lsof -p 553974 | grep /tmp
tail 553974 me cwd DIR 8,3 4096 1259339 /tmp/duplicati_backup/log-2025-04-29_20-30-01
tail 553974 me 3r REG 8,3 0 1179711 /tmp/duplicati_backup/log-2025-04-29_20-30-01/file
$ cd ..
$ pwd
/tmp/duplicati_backup
$ rm -rf log-2025-04-29_20-30-01
$ ls -na
total 8
drwxrwxr-x 2 1000 1000 4096 May 7 18:23 .
drwxrwxrwt 26 0 0 4096 May 7 18:21 ..
$ lsof -np 553974 | grep /tmp
tail 553974 me cwd DIR 8,3 0 1259339 /tmp/duplicati_backup/log-2025-04-29_20-30-01 (deleted)
tail 553974 me 3r REG 8,3 0 1179711 /tmp/duplicati_backup/log-2025-04-29_20-30-01/file (deleted)
$
Ok, I’ll admit that I was going off memory of something I saw on one of my systems recently. Unfortunately I don’t remember the exact details and after trying a few things I cannot recreate it. I also suspect that it was a SMB share with activities going on on both the server side and client side around some files/directories.
Hi, thank you very much for your fast help and extra information.
Your pointers helped me find the issue: Duplicati started its backup job at exactly the same time as the cleanup cronjob ran (which deleted all files/directories older than 7 days).
I'm happy to say that it wasn't a Duplicati issue in the end but an admin issue I hadn't noticed before, because the system time is in a different time zone than my Duplicati configuration (a 2-hour difference).
Thank you for your responses again! Since I'm new here, I'm happy to say that I've been very happy with Duplicati in the past and was able to find help in the forum as a lurker a few times already. But I'm a bit sorry for letting you work on this when the issue was this easy to fix.
I now let the jobs run with some time in between, and I am pretty confident that the errors won't appear again. If they do repeat and this wasn't the issue after all, I will get back to this thread.
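For completeness, the schedules are now spaced out roughly like this (times are examples, not my real ones):

# system crontab: the cleanup/rotation script runs first
30 20 * * * /path/to/log-rotation-script.sh
# Duplicati's own scheduler then starts the backup well after that, once the rotation has finished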