Incorrect Free Space Warnings

Everything has been working fine in my Duplicati instance until around 19 days ago when I started getting these warnings:

2024-12-18 03:03:34 +00 - [Warning-Duplicati.Library.Main.Operation.FilelistProcessor-BackendQuotaNear]: Backend quota is close to being exceeded: Using 930.028 GB of 45.000 GB (10.540 GB available)

The thing is, the location I am backing up to is 2 TB in size and there are 777 GB of space remaining. The log message does not make sense.

My Duplicati instance is currently on v2.1.0.2_beta_2024-11-29, installed in a Docker container. Running df -h inside the container, I can see these mount points with various amounts of free space:

Filesystem      Size  Used Avail Use% Mounted on
/dev/loop2       45G   34G   11G  76% /
tmpfs            64M     0   64M   0% /dev
shm              64M     0   64M   0% /dev/shm
shfs             14T  9.7T  4.0T  71% /source
/dev/sdf2       1.9T  1.1T  777G  59% /TowerBackup
/dev/loop2       45G   34G   11G  76% /etc/hosts
tmpfs           7.8G     0  7.8G   0% /proc/acpi
tmpfs           7.8G     0  7.8G   0% /sys/firmware
tmpfs           7.8G     0  7.8G   0% /sys/devices/virtual/powercap

Evidently, Duplicati is checking for free space on /etc/hosts (/dev/loop2) rather than my configured location of /TowerBackup (/dev/sdf2).
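For illustration (this is not Duplicati's code), a per-path free-space query on Linux answers for whichever filesystem contains the path it is given, so asking about / instead of the configured destination produces exactly this kind of mismatch:

```python
import os

def free_space(path):
    """Return (total_bytes, available_bytes) for the filesystem containing path."""
    st = os.statvfs(path)
    return st.f_frsize * st.f_blocks, st.f_frsize * st.f_bavail

# Querying "/" describes the 45 GB container root (/dev/loop2), while querying
# the destination describes the 1.9 TB disk -- same call, different answer,
# depending entirely on which path gets passed in.
for p in ("/", "/TowerBackup"):
    if os.path.exists(p):  # /TowerBackup only exists inside this container
        total, avail = free_space(p)
        print(f"{p}: {total / 2**30:.1f} GiB total, {avail / 2**30:.1f} GiB available")
```

The 45.000 GB and 10.540 GB in the warning line up with the / row of the df output above, which is what points the finger at the path being queried rather than the mount being missing.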

This is not a problem with Duplicati being able to “see” my /TowerBackup mount because the actual backup works fine and I am able to see restore points for every day. I have a few questions:

  • Why has this only just started happening when I have not changed my configuration? Duplicati has been working fine for well over a year before this.
  • Why is Duplicati able to correctly use the /TowerBackup for the backup but not for the free space validation?
  • What can I do to fix this?

Thanks!

Hi @pepperywasp, I assume that you are using the “file” backend as the destination.

There was a change to that backend, specifically to the quota estimation, that might be the source of the discrepancy triggering the warning. The part responsible for the backup itself is unchanged, which explains why the actual backup is working.

  • What can I do to fix this?

You already did, by bringing it up. Thank you.

I have raised an issue.

Flaw in big code change. File backend reports incorrect quota on Linux & MacOS #5733

The old release would still report quota correctly, but you got the new one.

The code has various parts, and only the quota part broke.

You can carefully run a Canary test release, or you can wait for the fix in a future Beta or Stable.

Thank you I have subscribed to the issue.

Thank you, I’ll keep an eye out for this making it to a Beta release and will post back on whether this fixes my issue.

Unfortunately, I still get the warning using this option set to true.

Where did you put the option? If in job Advanced options, you also need to check it, like below:

If it doesn’t do what it says, maybe it’s a bug, but you can also see if the quota-size option helps.

I set the option here:

With limited time to deal with this I have decided to automatically bin any e-mails with this (and only this) warning and wait for a beta release that fixes the root cause.

Thanks for your help.

The problem with that plan is the root cause doesn’t get fixed until someone figures it out.
Reading the size from the wrong partition got fixed, but quota-disable still doesn’t work.
Or at least it doesn’t stop that warning on the latest Canary release, so it deserved a look.
My guess about the code change that broke the quota-disable workaround was posted:

Duplicati Warning - BackendQuotaNear

Test used Linux with a 1 MB tmpfs filesystem, and a file made using dd and /dev/urandom.
When a Warning began, I tried quota-disable=true and quota-size=1TB. What worked was
quota-warning-threshold=0

  --quota-warning-threshold (Integer): Threshold for warning about low quota
    Set a threshold for when to warn about the backend quota being nearly exceeded. It is given as a percentage, and a warning is generated if the amount of available quota is less than this percentage of
    the total backup size. If the backend does not report the quota information, this value will be ignored.
    * default value: 10

where the math suggests the warning would need an available quota of less than zero to go off.
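To check that reading, here is the documented condition reduced to a one-liner (my interpretation of the option docs above, not Duplicati's actual source):

```python
def quota_warning(backup_size_gb, free_quota_gb, threshold_percent=10):
    """Warn when available quota is below threshold% of the total backup size
    (my reading of the --quota-warning-threshold docs, not Duplicati's code)."""
    return free_quota_gb < backup_size_gb * threshold_percent / 100

# With the default threshold of 10, the original post's numbers trigger it:
print(quota_warning(930.028, 10.540))      # → True (10.54 GB < 93.0 GB)
# With threshold 0, free quota would have to go negative, so it never fires:
print(quota_warning(930.028, 10.540, 0))   # → False
```

That matches the observed behavior: quota-warning-threshold=0 silences the warning even while the reported free-space numbers are still wrong.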

I believe the root cause of me getting the error in the first place is (as you say) the issue of using the wrong partition for the size and it appears to be fixed in #5734.

Of course, the quota-disable issue should be fixed as well. But I don’t think that is the root cause for me.

It might be the root cause for

but if you’d rather bin the e-mails, that’s fine. If you try the new workaround attempt, feedback would be nice.

This appears to be fixed in the latest beta.

Hi, sorry to resurrect this old thread, but I thought I’d add that I just had this issue on version 2.3.0.0_stable_2026-04-14, with it falsely warning that I was out of space in the backend quota.

I’m running on Unraid in a Docker container and was able to add the option to disable the backend quota, and the error/warning has now gone.

Thanks for the tip, and hopefully it will get fixed permanently somewhere down the line.

Please show the warning (as was done in original post), especially since

is a bit unusual, and I don’t know if the devs have that. More info will help.
The original post has other examples. The job log’s Complete log has them as well:

      "TotalQuotaSpace": 999618043904,
      "FreeQuotaSpace": 24240660480,
      "AssignedQuotaSpace": -1,
      "ReportedQuotaError": false,
      "ReportedQuotaWarning": false,

from a Windows backup. Looks about like I expect, but check your system.
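If you want to pull those fields out of an exported job log programmatically, something like this works on the JSON fragment above (the field names are taken from the log shown; the surrounding structure is trimmed down for illustration):

```python
import json

# A trimmed stand-in for the quota section of a Complete log (values from the post).
log_fragment = json.loads("""{
  "TotalQuotaSpace": 999618043904,
  "FreeQuotaSpace": 24240660480,
  "AssignedQuotaSpace": -1,
  "ReportedQuotaError": false,
  "ReportedQuotaWarning": false
}""")

free_gib = log_fragment["FreeQuotaSpace"] / 2**30
total_gib = log_fragment["TotalQuotaSpace"] / 2**30
print(f"Free: {free_gib:.2f} GiB of {total_gib:.2f} GiB")  # → Free: 22.58 GiB of 930.97 GiB
```

Comparing those two GiB figures against what df reports for the destination is a quick way to spot whether the quota numbers came from the wrong filesystem.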

You can also get a “fresh” reading without a backup by using BackendTester.
You can add --reruns=1 if you like, to shorten it. Sample quota test at end:

[11:09:47 707] Checking quota...
[11:09:47 709] Free Space:  23.46 GiB
[11:09:47 709] Total Space: 930.97 GiB
[11:09:47 710] Checking DNS names used by this backend...
[11:09:47 710] Unittest complete!

You would have to do this from docker exec or similar inside the container.
While there you can also try your favorite available tools, e.g. df for values.

EDIT 1:

I’m also noticing that the specifics of the destination weren’t given. That’s needed because the destination is what produces the numbers I’m asking about, which may be wrong.

hey @ts678 ,

so this is the warning message:

"2026-04-16 04:00:46 -04 - [Warning-Duplicati.Library.Main.Operation.FilelistProcessor-BackendQuotaNear]: Backend quota is close to being exceeded: Using 3.043 TiB of 30.000 GiB (13.225 GiB available)"

I have attached the full log for this backup too.

backup log.zip (1.7 KB)

What is the Destination type, how big do you think it is, and what shows that?

Because the 3 TB number looks plausible (and is internally kept by Duplicati),
I’d guess 30 GB coming back from somewhere is wrong? Is anything 30 GB?
Long ago, a mono bug looked in the wrong partition. Are there several there?
If it’s not a local folder but some sort of remote, what type of remote is it?

Interestingly, I (i.e. the OP) started seeing these errors again around the same time that @SatiricalCrab1182 resurrected this thread.

I get exactly the same error (with slightly different numbers) as my original post:

2026-04-23 04:02:36 +01 - [Warning-Duplicati.Library.Main.Operation.FilelistProcessor-BackendQuotaNear]: Backend quota is close to being exceeded: Using 869.336 GiB of 55.000 GiB (16.063 GiB available)

And looking at my version 2.3.0.0_stable_2026-04-14 container’s mount points I see:

Filesystem      Size  Used Avail Use% Mounted on
/dev/loop2       55G   38G   17G  71% /
tmpfs            64M     0   64M   0% /dev
shm              64M     0   64M   0% /dev/shm
shfs             14T   14T  651G  96% /source
/dev/sdf2       1.9T  1.1T  837G  56% /TowerBackup
/dev/loop2       55G   38G   17G  71% /etc/hosts
tmpfs           7.8G     0  7.8G   0% /proc/acpi
tmpfs           7.8G     0  7.8G   0% /sys/firmware
tmpfs           7.8G     0  7.8G   0% /sys/devices/virtual/powercap

So again, it looks like Duplicati is checking for free space on /etc/hosts (/dev/loop2) rather than my configured location of /TowerBackup (/dev/sdf2).

Is it possible that there has been a regression?

This needs dev input, but it looks like 2.3.0.0_stable_2026-04-14 rewrote it.

The old FileBackend.cs called an elaborate internal GetDrive() which is now
replaced by a call to the new GetFreeSpaceForPath(). Suspicious code is at

which seems to explain why your quota check is for /.
/dev/loop2 use looks odd to me, but it may not matter.

Thanks for the details. That might have led to the bug.
The old mono bug (now gone) could not have reverted.

Pursuing the BackendTester idea posted above, on Linux Duplicati 2.3.0.0.
This is in a VirtualBox VM with a shared folder that’s on the Windows host.

$ duplicati-backend-tester --reruns=1 /media/sf_VirtualBox_shared_folder/tester
...
[09:43:07 633] Checking quota...
[09:43:07 636] Free Space:  6.185 GiB
[09:43:07 637] Total Space: 28.865 GiB
$ df -h
Filesystem                Size  Used Avail Use% Mounted on
tmpfs                     297M  1.3M  296M   1% /run
/dev/sda3                  29G   22G  5.5G  80% /
tmpfs                     1.5G     0  1.5G   0% /dev/shm
tmpfs                     5.0M  4.0K  5.0M   1% /run/lock
/dev/sda2                 512M  6.1M  506M   2% /boot/efi
VirtualBox_shared_folder  931G  915G   16G  99% /media/sf_VirtualBox_shared_folder
tmpfs                     297M  140K  297M   1% /run/user/1000
$ 

It looks like the bug behaved as predicted, giving space for / rather than the path I gave.

EDIT 1:

2.2.0.3_stable_2026-01-06 on the same test seems to read the correct path:

[10:13:12 952] Checking quota...
[10:13:12 956] Free Space:  13.415 GiB
[10:13:12 956] Total Space: 930.967 GiB

Hey, sorry for the delay in my response.

So, the backup destination location is a local unassigned disk on the unraid server; it’s 14TB with about 7TB free. It’s mapped to the path /backups in Duplicati, from /mnt/disks/.

The 3TB could be the size of the source and destination of the media backup:

I have 2 other backup schedules running, but the sizes are very different.