Environment: macOS 10.13.6
Target: Enterprise object storage mounted with LucidLink (FUSE)
Duplicati: 2.0.4.4_canary_2018-11-14 (I tested first with 2.0.3.3_beta_2018-04-02)
Mono version: 5.16.0.179
I am new to Duplicati and trying to make everything work. When I complete a backup, I keep getting the following error message:
[Error-Duplicati.Library.Main.Operation.FilelistProcessor-BackendQuotaExceeded]: Backend quota has been exceeded: Using XXX.XX GB of 0 bytes (0 bytes available)
The backup seems to complete successfully, and I am able to restore some files while testing. According to LucidLink, vast amounts of storage are available at the target, and local storage is fine as well. So I have no clue where to go from here.
That sounds like two different measurement mechanisms. Can whatever you used to measure local storage also measure the available space at the LucidLink mount? This mentions some ways, including an easy one using Quick Look. From the command line, perhaps you can run a df command, which likely runs statfs underneath (see build #1488).
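To make the comparison concrete, here is a small Python sketch (my own, not anything Duplicati ships) that asks statvfs, a close relative of the statfs call df relies on, for the space on a given path. Point it at the LucidLink mount point to see what the OS itself reports:

```python
import os

def free_space(path):
    """Report total and available bytes for the filesystem containing
    path, via statvfs -- roughly the same data df consults."""
    st = os.statvfs(path)
    total = st.f_frsize * st.f_blocks   # filesystem size in bytes
    avail = st.f_frsize * st.f_bavail   # bytes available to unprivileged users
    return total, avail

# "/" is a placeholder; substitute your LucidLink mount point.
total, avail = free_space("/")
print(f"total={total} bytes, available={avail} bytes")
```

If this prints sensible non-zero numbers for the mount while Duplicati still reports 0 bytes, the discrepancy is in how Duplicati (via mono) gathers the figures, not in the filesystem itself.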
I am still experiencing this issue. I opened a case with LucidLink, who looked into the problem, went through log files and the source code, and ended up sending me back to Duplicati, claiming the issue isn't related to the storage itself.
I am currently running the latest version: 2.0.4.15_canary_2019-02-06
The issue seems to be caused by how Duplicati determines the QuotaInfo for the backend path when the backend is a local filesystem path. It goes to the root of the path and then gets the quota for that. Duplicati's default backend (Duplicati/Library/Backend/File/FileBackend.cs) implements a very rudimentary quota system in its write logic. I was recommended to log an issue here, explaining that sending a Duplicati backup to a FUSE mount point (which is what LucidLink uses to mount) does not work.
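A rough Python sketch of the behaviour described above (conceptual only; Duplicati's actual implementation is C# in FileBackend.cs): walk up from the backend path to its mount point, then query the space there. A FUSE mount that reports zeros at this step would produce exactly the 0-byte quota in the error:

```python
import os

def filesystem_root(path):
    """Walk up from path until we hit a mount point -- a sketch of the
    'go to the root of the path' step, not Duplicati's actual code."""
    path = os.path.realpath(path)
    while not os.path.ismount(path):
        path = os.path.dirname(path)
    return path

def quota_for(path):
    """Query total/available space at the filesystem root of path.
    A FUSE filesystem answering zeros here yields a 0-byte 'quota'."""
    root = filesystem_root(path)
    st = os.statvfs(root)
    return st.f_frsize * st.f_blocks, st.f_frsize * st.f_bavail

# On a healthy filesystem both numbers are positive:
print(quota_for(os.path.expanduser("~")))
```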
Could you please give some of the information requested earlier? You can run df with no arguments, sanitize anything if necessary, then post it. Here is an OSX manual page for df. Duplicati gets sizes differently, through mono, and some testing shows a possible problem of mono filtering its drive info very heavily.
(This is Linux, but it will likely work on OSX. If you run GetDrives, please post that output too.)
$ df
Filesystem 1K-blocks Used Available Use% Mounted on
udev 486300 0 486300 0% /dev
tmpfs 101596 4876 96720 5% /run
/dev/sda1 19478204 15325704 3140020 83% /
tmpfs 507960 36360 471600 8% /dev/shm
tmpfs 5120 4 5116 1% /run/lock
tmpfs 507960 0 507960 0% /sys/fs/cgroup
cgmfs 100 0 100 0% /run/cgmanager/fs
tmpfs 101596 16 101580 1% /run/user/1000
$ mcs GetDrives.cs
$ mono GetDrives.exe
Drive /
Drive type: Fixed
Volume label: /
File system: ext
Available space to current user: 3215380480 bytes
Total available space: 4252160000 bytes
Total size of drive: 19945680896 bytes
Drive /run/user/1000/gvfs
Drive type: Ram
Volume label: /run/user/1000/gvfs
File system: fuse
Available space to current user: 0 bytes
Total available space: 0 bytes
Total size of drive: 0 bytes
$
From a Duplicati code point of view, the processing looks to me to be more sophisticated than LucidLink said, although what is now an IsClientPosix test used to be misleadingly named IsClientLinux, yet true for OSX. Possibly the name threw them off, or maybe they just didn't read the code right; or maybe I'm the one not reading it right.
Quota() calls GetDrive(), which calls mono's GetDrives(). It looks through the results to see if it can find the drive containing your path. The first problem is that GetDrives() on mono misses lots of drives. That was my first idea, although I didn't see why / would give all-zero values (and df shows that it doesn't). I wasn't actually expecting mono to find LucidLink and then fail to get space, but per your test, that's what it does.
The short drive list is possibly hinted at by Xamarin Bug 11923 - System.IO.DriveInfo.GetDrives() returns a single null drive, and the current version of add_drive_string (six years later) might be here and still seems to let very little through. I'd have to look at the output of the mount command or cat /proc/mounts to see how LucidLink managed to escape the heavy filtering. Oddly, I couldn't find any complaints filed in the mono issues about that, and one could debate what counts as a "drive", but to me many of the layered UNIX/Linux/OSX mounts should qualify.
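For comparison, this Linux-only Python sketch dumps the raw /proc/mounts entries that add_drive_string starts from; diffing this list against GetDrives() output would show which mounts the filtering dropped (sketch code, not part of mono or Duplicati):

```python
def list_mounts(mounts_file="/proc/mounts"):
    """Parse /proc/mounts into (device, mount_point, fstype) tuples --
    the raw data that mono's drive filtering whittles down."""
    entries = []
    with open(mounts_file) as f:
        for line in f:
            device, mount_point, fstype = line.split()[:3]
            entries.append((device, mount_point, fstype))
    return entries

for device, mount_point, fstype in list_mounts():
    print(f"{fstype:12s} {mount_point} ({device})")
```

On OSX there is no /proc/mounts, so the mount command output serves the same purpose there.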
Sorry for the confusion, but I edited some of the output of GetDrives, as I didn't think it was relevant. The other drives were actually included, together with some external drives and Time Machine.
Thanks for clarifying. I'm now not sure why I'm missing so many, and there are other complaints around. Glad yours is working. My mono info is below, although this looks like an old issue. Regardless, you now have some sample code, straight from Microsoft, that perhaps LucidLink could look into to see what's failing.
Possibly it's mono. I didn't see an issue filed about LucidLink, though. The underlying system call is likely identical between df and mono. I guess they could trace the calls to see whether the result is returned properly to mono.
Because the local file backend is considered quota-enabled, I suspect specifying a dummy --quota-size wouldn't override the 0 values; however, it's an easy thing to try. If that fails, the problem is below Duplicati.
You can probably get a view of what Duplicati is seeing by looking for quota-related lines in your per-job log.
Thanks for the assistance, ts678. Instead of messing about trying to find the source of the issue, I concluded it wasn't worth the effort. I ended up with an alternative solution using S3-compatible storage instead, which has worked flawlessly with Duplicati so far.
The conclusion is: Don’t do FUSE mount points with Duplicati.