Duplicati Exit Code 3

Hey, everyone.

Just wondering if anyone can help with something… I’ve run a Duplicati backup of my local machine (Debian Bookworm, 240GB drive/75GB free) to my NAS (also Bookworm) but I keep getting Exit Code 3. I can’t figure out what this means. The only documented exit codes I can find are:

duplicati-cli help returncodes

Duplicati reports the following return/exit codes:
  0 - Success
  1 - Successful operation, but no files were changed
  2 - Successful operation, but with warnings
  50 - Backup uploaded some files, but did not finish
  100 - An error occurred
  200 - Invalid commandline arguments found

If I export the job via “Configuration → Export → As Command-line” and run that command from a terminal, it says “Backup completed successfully!” (Remote size: 246.54 GB)

From the UI, the same job reports an error:
2023-08-18 00:29:18 +01 - [Error-Duplicati.Library.Main.Operation.FilelistProcessor-BackendQuotaExceeded]: Backend quota has been exceeded: Using 246.54 GB of 0 bytes (0 bytes available)

The destination NFS mount is on a 6TB drive which has 1.2TB free. Setting --quota-threshold=1TB does nothing. Setting --quota-threshold-warning=0 does not squash this error either.

Is there any way to get rid of it?

Thanks.

Hello

Exit code 3 should indeed be documented; it means the operation completed, but with errors. The documentation should be in the next release.
You can try the --full-results switch to see more detail.

The “Backup completed successfully!” wording from the command line is wrong, but you still get exit status 3 to show that there is a problem. The Web UI is more expressive here.

On the underlying problem: there are known issues with the used size reported for some backends, and there is a pending PR to disable that check, so thanks for reporting this. It may need further changes if mono also has problems reporting the available size on NFS; I’m not sure yet, since I still have to take a look at that PR.

You can test quotas several ways. Probably the first requirement is whether Linux itself can report them, e.g. with df.
Test not only the destination folder, but also the higher-level folders on the way to root. Do any of them show a 0?
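
For example, something along these lines (a rough sketch; substitute your actual destination path):

df -h /path/to/nfs/destination    # the destination folder itself
df -h /path/to/nfs                # each parent folder on the way up
df -h /                           # the root filesystem
stat -f /path/to/nfs/destination  # block counts straight from the kernel, for comparison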

More background and test options:

Backend quota has been exceeded after apparently successful backup
Backend quota exceeded (I know it’s been covered, but…)

Thanks for the info. I’ve not got anything reporting 0 bytes free… The closest is one reporting 0% free, but it still has 21GB available, and that’s a different drive on the NFS server anyway, one that isn’t being touched.

Duplicati machine:
The destination directory configured in Duplicati is /storage/backup.
The /storage directory is in the local computer’s root volume and has 75GB of 240GB free.
The /storage/backup directory is an NFS mapping to SERVER:/storage/backup.

NFS server:
The /storage/backup directory is a bind mount to /mnt/hd0/backup.
The /mnt/hd0 directory is a device mount which has 1.2TB of 7.3TB free.
The /mnt directory is in the root volume and has 193GB of 219GB free.

However, the Duplicati machine runs AutoFS to mount the drives. AutoFS mounts the directories on demand and avoids problems when the NAS is unavailable, so /storage/backup doesn’t actually exist on the local machine until the moment it is accessed, and it should then stay mounted until it is disconnected from the NFS server. So if Duplicati checks the quota after the backup, it should get valid information rather than just 0 bytes; but if it checks before running the backup, maybe that is what’s responsible for the 0 bytes being reported?

Although a long look by a developer would be more certain, it looks like it checks both before and after, but only complains once. You could also tell me whether the backup runs, or stops with the quota error before it runs.

Can you test with it already mounted to see if that helps it know the sizes?

Alternatively, use Duplicati.CommandLine.BackendTester --reruns=1 <path-to-empty-NFS-folder>
which will do a short write/read/etc. test (which should trigger the mount) and then show space info at the end.

Depending on OS setup, typing the .exe name might run it directly. If not, put mono before the path to the .exe.

I’m getting conflicting signals here. In the code, above where that error message is produced, there is a remote file listing and an examination of the files.

So it looks like this is something to do with AutoFS now (sorry I didn’t spot this earlier as this entry isn’t shown in the standard df -h output):

root@Ultron:/> df -h /storage/backup
Filesystem             Size  Used Avail Use% Mounted on
EDITH:/storage/backup  7.3T  5.8T  1.2T  84% /storage/backup

root@Ultron:/storage/backup> df -h /storage
Filesystem      Size  Used Avail Use% Mounted on
/etc/auto.nfs      0     0     0    - /storage

So whenever a subdirectory of /storage is requested, AutoFS runs the executable script /etc/auto.nfs and passes it the name of the requested subdirectory (in this case backup). auto.nfs then matches that against a list to determine the target, in this case EDITH:/storage/backup, wakes the server EDITH, and mounts the target directory.
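
Roughly, the script looks something like this (a simplified sketch rather than my exact script; the MAC address and the sleep are placeholders):

#!/bin/bash
# /etc/auto.nfs - executable autofs map (sketch)
# autofs calls this with the requested key, e.g. "backup", as $1,
# and expects a map entry (options + location) on stdout.
key="$1"

case "$key" in
  backup)
    # wake the NAS first (MAC is a placeholder; assumes wakeonlan is installed)
    wakeonlan 00:11:22:33:44:55 >/dev/null 2>&1
    # crude pause to let the NAS come up; a ping loop would be nicer
    sleep 20
    # print the map entry: mount options, then the NFS location
    echo "-fstype=nfs,rw EDITH:/storage/backup"
    ;;
  *)
    exit 1
    ;;
esac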

Edit: I’ve run a backup with the drives already mounted, but it still flags it as 0 bytes. I’ve also added a quick ls /storage/backup >/dev/null 2>&1 to a Duplicati startup script, to make sure everything’s mounted before Duplicati attempts to save anything, but it’s still reporting 0 bytes. It has to be down to the /storage directory being reported as /etc/auto.nfs 0 0 0 - /storage by df.

Assuming you have mono-devel or mono-complete installed (you should), try the first test linked above.

GetDrives.exe is reporting /storage as a RAM drive:

Drive /storage
  Drive type: Ram
  Volume label: /storage
  File system: autofs
  Available space to current user:              0 bytes
  Total available space:                        0 bytes
  Total size of drive:                          0 bytes

Duplicati.CommandLine.BackendTester.exe errors out too:

root@Ultron:/usr/lib/duplicati> mono Duplicati.CommandLine.BackendTester.exe --reruns=1 /storage/backup/duplicati_test
Starting run no 0
Generating file 0 (40.05 MB)
Generating file 1 (22.18 MB)
Generating file 2 (9.20 MB)
Generating file 3 (3.26 MB)
Generating file 4 (36.77 MB)
Generating file 5 (33.29 MB)
Generating file 6 (6.04 MB)
Generating file 7 (7.91 MB)
Generating file 8 (33.44 MB)
Generating file 9 (5.74 MB)
Uploading wrong files ...
Generating file 10 (1.01 KB)
Uploading file 0, 1.01 KB ...  done!
Uploading file 0, 1.01 KB ...  done!
Uploading file 9, 1.01 KB ...  done!
Uploading files ...
Uploading file 0, 40.05 MB ...  done!
Uploading file 1, 22.18 MB ...  done!
Uploading file 2, 9.20 MB ...  done!
Uploading file 3, 3.26 MB ...  done!
Uploading file 4, 36.77 MB ...  done!
Uploading file 5, 33.29 MB ...  done!
Uploading file 6, 6.04 MB ...  done!
Uploading file 7, 7.91 MB ...  done!
Uploading file 8, 33.44 MB ...  done!
Uploading file 9, 5.74 MB ...  done!
Renaming file 1 from vUgrPLsMGHSyviOYqqa1PilLBoHOSINhykMfMtLAsaiVFOpFzo3GprcnJDY to 2SpP5WjtA
Verifying file list ...
Downloading files
Downloading file 0 ... done
Checking hash ... done
Downloading file 1 ... done
Checking hash ... done
Downloading file 2 ... done
Checking hash ... done
Downloading file 3 ... done
Checking hash ... done
Downloading file 4 ... done
Checking hash ... done
Downloading file 5 ... done
Checking hash ... done
Downloading file 6 ... done
Checking hash ... done
Downloading file 7 ... done
Checking hash ... done
Downloading file 8 ... done
Checking hash ... done
Downloading file 9 ... done
Checking hash ... done
Deleting files...
Checking retrieval of non-existent file...
*** Retrieval of non-existent file failed: System.IO.FileNotFoundException: Could not find file "/storage/backup/duplicati_test/NonExistentFile-447ff5b9-db40-4cc2-90c5-7e3ab176fc11"
File name: '/storage/backup/duplicati_test/NonExistentFile-447ff5b9-db40-4cc2-90c5-7e3ab176fc11'
  at System.IO.FileStream..ctor (System.String path, System.IO.FileMode mode, System.IO.FileAccess access, System.IO.FileShare share, System.Int32 bufferSize, System.Boolean anonymous, System.IO.FileOptions options) [0x001ef] in <12b418a7818c4ca0893feeaaf67f1e7f>:0
  at System.IO.FileStream..ctor (System.String path, System.IO.FileMode mode, System.IO.FileAccess access, System.IO.FileShare share, System.Int32 bufferSize, System.IO.FileOptions options) [0x00000] in <12b418a7818c4ca0893feeaaf67f1e7f>:0
  at (wrapper remoting-invoke-with-check) System.IO.FileStream..ctor(string,System.IO.FileMode,System.IO.FileAccess,System.IO.FileShare,int,System.IO.FileOptions)
  at System.IO.FileSystem.CopyFile (System.String sourceFullPath, System.String destFullPath, System.Boolean overwrite) [0x0002b] in <12b418a7818c4ca0893feeaaf67f1e7f>:0
  at System.IO.File.Copy (System.String sourceFileName, System.String destFileName, System.Boolean overwrite) [0x0006e] in <12b418a7818c4ca0893feeaaf67f1e7f>:0
  at Duplicati.Library.Common.IO.SystemIOLinux.FileCopy (System.String source, System.String target, System.Boolean overwrite) [0x00000] in <18a31807bb9d49ff8938883d8e8a36b7>:0
  at Duplicati.Library.Backend.File.Get (System.String remotename, System.String filename) [0x0000c] in <a11c194a0ed94b3ea73efea18edcd2ee>:0
  at Duplicati.CommandLine.BackendTester.Program.Run (System.Collections.Generic.List`1[T] args, System.Collections.Generic.Dictionary`2[TKey,TValue] options, System.Boolean first) [0x009db] in <d554180de7694d3086dc18cde64b2388>:0
*** Retrieval of non-existent file should have failed with FileMissingException
Checking quota...
Free Space:  0 bytes
Total Space: 0 bytes
Checking DNS names used by this backend...
No DNS names reported
Unittest complete!

The attempt to retrieve a non-existent file is supposed to error (and be handled), I think.

If “errors out” means it returned 0 bytes for “Free Space” and “Total Space”, that’s from
GetDrives, which possibly jumped up a level (the mono bug) and found a lot of 0 values;
however, df did too. Did GetDrives have anything to say about the mounted /storage/backup?
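
For comparison, you could ask the kernel directly what it thinks of the mounted path, e.g. (a quick sketch, run while the share is mounted):

findmnt --target /storage/backup   # which filesystem the kernel resolves the path to
stat -f /storage/backup            # block size and free/total block counts
stat -f /storage                   # the autofs parent that df shows as 0

If those show sensible numbers while GetDrives shows zeros, the size information is there at the OS level and it is the mono side that is losing it.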

In some future year, mono and its bug may vanish. The newer .NET versions work better.
Meanwhile there’s hope from the code changes in a pull request that’s sitting in a queue:

Add option to disable quota and update quota size option #4993

@boredazfcuk If you are willing to risk running an experimental build, I can trigger a binary build for that PR. IMO the changes are not risky (they only affect the quota calculation), but that is your call to make. It would help to have someone test the changes. Note that it would simply ignore the quota, so if the destination ever fills up you won’t get any advance warning.

The alternative is to use a more “traditional” NFS mount (maybe in the execute-script-before/after to get mounting on demand) and hope that mono is able to figure out the disk size of that.
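
Something along these lines might work as the before-script (a rough sketch; the MAC address is a placeholder, and the hostname/paths are taken from your df output):

#!/bin/bash
# pre-backup script: wake the NAS and mount the backup share
# (MAC address is a placeholder; assumes the wakeonlan utility is installed)
wakeonlan 00:11:22:33:44:55 >/dev/null 2>&1

# wait until the NAS answers pings before trying to mount
until ping -c1 -W2 EDITH >/dev/null 2>&1; do sleep 5; done

# mount only if it is not already mounted
mountpoint -q /storage/backup || mount -t nfs EDITH:/storage/backup /storage/backup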

The alternative is to use a more “traditional” NFS mount (maybe in the execute-script-before/after to get mounting on demand) and hope that mono is able to figure out the disk size of that.

This is the route I’ve taken. I’ve created a startup script which wakes the NAS and then mounts the /storage/backup directory using the mount command. The job is still misreporting the available size, however. During the job’s execution, df outputs this for each of the two directories:

root@Ultron:/usr/lib/duplicati> df -h /storage
Filesystem      Size  Used Avail Use% Mounted on
/dev/nvme0n1p2  233G  147G   74G  67% /
root@Ultron:/usr/lib/duplicati> df -h /storage/backup/
Filesystem             Size  Used Avail Use% Mounted on
EDITH:/storage/backup  7.3T  5.8T  1.2T  84% /storage/backup

However, the job’s log file shows:

      "TotalQuotaSpace": 249365385216,
      "FreeQuotaSpace": 78912253952,

I’m not getting a quota warning any more, but it’s definitely reporting the local storage’s capacity in its quota information: 249365385216 bytes and 78912253952 bytes line up with the local root volume’s size and available space (233G / 74G in the df output above), presumably picked up via /storage.

I’ve also mounted the drive manually before running the backup from the terminal prompt, using an exported mono command line. It still shows the same quota information, so it isn’t a case of the quota being checked before or after the scripts are executed. I believe @ts678 is likely correct:

that’s from GetDrives which possibly jumped up a level (the mono bug)

Can you see if GetDrives.exe shows /storage/backup if you use this method? I suspect that our method of selecting the correct drive does not work when the backup folder is the root of the drive, even if mono has the drive.

It seems GetDrives.exe doesn’t recognise the NFS mount(s) at all:

root@Ultron:/usr/lib/duplicati> mono GetDrives.exe
Drive /
  Drive type: Fixed
  Volume label: /
  File system: ext
  Available space to current user:    76655374336 bytes
  Total available space:              89397039104 bytes
  Total size of drive:               249365385216 bytes
Drive /mnt/hd1-Co-1TB
  Drive type: Fixed
  Volume label: /mnt/hd1-Co-1TB
  File system: ext
  Available space to current user:   364374114304 bytes
  Total available space:             413401018368 bytes
  Total size of drive:               963662659584 bytes
Drive /boot/efi
  Drive type: Fixed
  Volume label: /boot/efi
  File system: msdos
  Available space to current user:      474714112 bytes
  Total available space:                474714112 bytes
  Total size of drive:                  535805952 bytes

It does recognise my Docker overlay mounts though, as the rest of the output has 34 entries similar to this:

Drive /var/lib/docker/overlay2/e2ae81b71a4e6ceea414f7c9e1a9234827cad668ba67159c40bf6e4d442bf948/merged
  Drive type: Unknown
  Volume label: /var/lib/docker/overlay2/e2ae81b71a4e6ceea414f7c9e1a9234827cad668ba67159c40bf6e4d442bf948/merged
  File system:
  Available space to current user:    76655374336 bytes
  Total available space:              89397039104 bytes
  Total size of drive:               249365385216 bytes