I’m getting this warning on every backup of one of my jobs. I’m currently on version Duplicati - 184.108.40.206_canary_2019-10-24.
I have Duplicati installed on a Raspberry Pi 4, on a 64GB SD card.
The warning is the following:
[Warning-Duplicati.Library.Main.Operation.FilelistProcessor-BackendQuotaNear]: Backend quota is close to being exceeded: Using 591,45 GB of 56,97 GB (48,11 GB available)
The backup is performed to a 2TB external HDD (1.81TB usable), which currently has 1,24TB available. The SD card, on the other hand, is 57GB in size with 49GB free, so it seems to me Duplicati is checking the SD card size instead of the external HDD.
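The numbers in the warning support this reading: the backend usage is being compared against the SD card's capacity, not the HDD's. A small sketch of one plausible form of the check (this is a hypothetical reconstruction for illustration, not Duplicati's actual code; the figures are taken from the warning above):

```python
def quota_warning(used_gb, quota_gb, threshold_pct=10):
    """Warn when the remaining quota falls below threshold_pct of the quota.

    Hypothetical reconstruction of a near-quota check, for illustration only.
    """
    free = quota_gb - used_gb
    return free < quota_gb * threshold_pct / 100

# Against the SD card's capacity (56.97 GB), 591.45 GB of backend
# usage obviously trips the warning:
print(quota_warning(591.45, 56.97))   # -> True
# Against the HDD's usable capacity (~1853 GB), it would not:
print(quota_warning(591.45, 1853.0))  # -> False
```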
Is there any setting related to this? Is there a way to bypass this warning?
Wondering if this warning is due to the compression-extension-file setting, which is set to the default location at /usr/lib/duplicati/default_compressed_extensions.txt, so on the SD card.
I have some other jobs running on the same Duplicati installation. The biggest one is around 141GB, so well above the size of the SD card, and none of them shows this warning, but none of them has this setting defined either.
There are two options related to quotas:
--quota-size, which looks like it isn’t hooked up to these warnings, unfortunately
--quota-warning-threshold, which you can use to set the percentage threshold where a warning is given (if you set it to 0, it should disable the warnings unless the SD card fills up completely)
The quota detection attempts to get the free space at the destination path, but could be missing some logic on Linux.
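For comparison, asking the OS for free space at the destination path itself resolves the correct filesystem on Linux, without scanning a drive list. A minimal Python sketch of that approach (the path passed in is whatever the job's destination is, e.g. the external HDD's mount point):

```python
import shutil

def free_space_at(path):
    """Return (total, free) bytes for the filesystem containing `path`."""
    usage = shutil.disk_usage(path)  # wraps statvfs() on Linux
    return usage.total, usage.free

# Given a path on the external HDD, this reports the HDD's capacity,
# not the SD card's, because the kernel resolves the mount itself.
total, free = free_space_at("/")
print(f"total={total} free={free}")
```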
Thanks, I added the --quota-warning-threshold option to this job and it no longer shows the warning. But I'm wondering if I'm just masking an issue.
This seems to affect only one job. I have 6 jobs configured, 4 of them exceed the size of the SD card, and the warning shows on just one of them.
The one triggering the warning has the following customized options set on the interface:
and now the tempdir is configured to another 2TB external HDD that has 1.12TB free.
Of the other 3 jobs, only one has a customized setting (the check-filetime-only option), and it does not trigger the warning.
So this leaves just 2 different options that can trigger the warning:
I suspect it's more likely a problem with how the FileBackend determines the size and available space at the destination path of the backup in question. In C#, it scans through the DriveInfo entries looking for one whose mount point is a prefix of the destination path, and it seems to fall back on the root ('/') if it doesn't find a better match; that's what I guess is happening here. (Maybe instead of the root fallback it should just say the quota couldn't be determined in that case. Or something smarter could be done, though accessing this stuff through Mono / .NET isn't always straightforward.)
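To illustrate that guess with a sketch (in Python, over a synthetic mount table; the mount points and labels are made up): picking the longest matching mount-point prefix finds the external drive, while '/' matches every path, so a scan that never sees the external drive's mount falls back to the root filesystem, which matches the symptom here.

```python
def best_mount_for(path, mounts):
    """Pick the mount point with the longest prefix match for `path`.

    `mounts` maps mount point -> label; this mimics scanning drive
    entries for one whose mount point is a prefix of the path.
    """
    candidates = [m for m in mounts
                  if path == m or path.startswith(m.rstrip("/") + "/")]
    # Longest mount point wins; "/" matches everything, so it is
    # effectively the fallback when nothing more specific is known.
    return max(candidates, key=len)

mounts = {"/": "sd-card", "/mnt/external": "2tb-hdd"}  # hypothetical table
print(best_mount_for("/mnt/external/backups", mounts))  # -> "/mnt/external"
print(best_mount_for("/home/pi", mounts))               # -> "/"
```

If the drive list never contains "/mnt/external" at all, as the Mono GetDrives behavior discussed below suggests, every path resolves to "/" and the quota comes from the SD card.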
Thanks for stopping by. If you’re willing to pursue this some, I suspect part of the problem is that mono has a very restricted idea of what a drive is. If you copy and compile the GetDrives example, it doesn’t seem to find what Duplicati is after, given its current algorithm (and I forget if I thought of a better one).
“Backend quota has been exceeded after apparently successful backup” was my limited look into this…
I’m getting this same warning for a similar issue. I have Duplicati installed using Docker on an Ubuntu server, where the Duplicati “root” directory is on a small SSD. The actual backups target a much larger ZFS pool, so I get warnings like this on the one backup job I currently have configured that is larger than the SSD:
Warnings: [2019-11-22 01:20:37 -07 - [Warning-Duplicati.Library.Main.Operation.FilelistProcessor-BackendQuotaNear]: Backend quota is close to being exceeded: Using 253.14 GB of 58.54 GB (22.49 GB available)]