Another backup failed due to error “Cannot find file”

I saw another thread with this error. In my case I get

One or more errors occurred. (Could not find file “/tmp/dup-8e1225c3-d15d-421b-901e-c2198bb09df2” (Could not find file “/tmp/dup-8e1225c3-d15d-421b-901e-c2198bb09df2”) (One or more errors occurred. (The remote server returned an error: (403) Forbidden.)))

The other thread was a webdav backend, mine is Google Drive.

I see this after the failure

$ ls -lh /tmp/dup*
-rw-r--r-- 1 km km 50M Mar 11 13:18 /tmp/dup-3ba9067e-75a9-48c4-832f-e8900d8d863a
-rw-r--r-- 1 km km 50M Mar 11 13:18 /tmp/dup-63119447-2262-4878-9355-7f77198190f3
-rw-r--r-- 1 km km 50M Mar 11 13:18 /tmp/dup-9d4f423e-5f2a-423b-ad32-c1651f845919
-rw-r--r-- 1 km km 50M Mar 11 13:18 /tmp/dup-a5f27122-7a2a-4a61-b0af-fa70cb9ecbef
-rw-r--r-- 1 km km 50M Mar 11 13:18 /tmp/dup-ae0d2f27-4704-4fe2-8438-bb52cc96f795
-rw-r--r-- 1 km km 50M Mar 11 13:18 /tmp/dup-dc7e9309-9a09-4752-81aa-3a312c80e846
-rw-r--r-- 1 km km 50M Mar 11 13:18 /tmp/dup-f7c99bb4-845b-4586-ae03-0dcc02d3071e

So it’s using the default 50M.
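That default is Duplicati’s dblock-size remote volume size option. For the record, totaling the leftovers (assuming the dup-* pattern above catches them all):

$ du -ch /tmp/dup-* | tail -1

which for seven 50M files comes to roughly 350M sitting in /tmp.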

I’m running Duplicati - 2.0.5.1_beta_2020-01-18 on Ubuntu 16.04.

It’s run fine daily for months, but failed the last two nights. I also have a separate backup of a much smaller tree, which continues to run without error. This big one spends most of its time stat-ing a huge number of files that don’t change, so although it’s a huge tree, not that much gets uploaded to the backend each night.

Can you confirm that you have enough space in the tmp folder?

How long is a backup? Why would these /tmp files NOT be from overnight? Are any source files huge?
Do you have really old files in your /tmp? Sometimes systems will delete old /tmp files periodically.
Unfortunately, Duplicati temporary file names don’t say much. Any other context you can provide?
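If you want to check whether your system cleans /tmp on a schedule, on a systemd distribution like your Ubuntu 16.04 something along these lines shows the policy (exact paths can vary by distro):

$ systemctl list-timers systemd-tmpfiles-clean.timer
$ cat /usr/lib/tmpfiles.d/tmp.conf /etc/tmpfiles.d/*.conf 2>/dev/null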

Was this the second failure? Was the other failure similar, with “Could not find file” and also a “403”?

FWIW Google Drive recently threw a temporary 403 at me, which exhausted my deliberately small --number-of-retries and caused problems, so I wonder if your 403 is a cause, a result, or unrelated.
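If transient 403s turn out to matter here, raising the retry settings is one mitigation. A sketch using Duplicati’s standard options (the values are arbitrary examples; duplicati-cli is the Linux command-line wrapper, and GUI users would add these as advanced options on the job):

$ duplicati-cli backup <target-url> <source-path> --number-of-retries=5 --retry-delay=10s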

The third day succeeded; more about that at the end. First, about the failures:

/tmp has plenty of space. It’s on the root filesystem with 54GB free.
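(That’s from nothing fancier than the standard check:

$ df -h /tmp
)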

The “could not find file” error was from the second day; it came from the popup in the console. Since there is no log when the backup aborts, I wouldn’t know where to look for the first day’s error now.

I am backing up a very large tree. On a typical day it takes 5 hours and reports something like

Examined 1140271 (4.65 TB)
Opened 8 (31.62 GB)
Added 7 (3.02 GB)
Modified 1 (28.60 GB)
Deleted 6

Now about the third day, which succeeded. I did get a warning:

2020-03-12 04:50:23 -04 - [Warning-Duplicati.Library.Main.Operation.Backup.UploadSyntheticFilelist-MissingTemporaryFilelist]: Expected there to be a temporary fileset for synthetic filelist (121, duplicati-i410c5a59a2cd4d19b39ce10de9f637e7.dindex.zip.aes), but none was found?

The backup took an unusually long time, a little over 8 hours, with:

Examined 1140283 (4.62 TB)
Opened 58 (76.87 GB)
Added 54 (25.18 GB)
Modified 4 (51.69 GB)
Deleted 38

I don’t know if the extra 3 hours had to do with the warning, or was simply because it had two extra days of work to back up.

Yes, this is a support problem: the log isn’t there when you need it. If it keeps happening, heavy logging to plenty of drive space can be set up, but few people do that. Email on error is somewhat more feasible for constant use, and picks up a little, but less than the ideal amount; most messages are just one-line summaries.
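For reference, persistent logging is set with the standard log-file options; the path and level here are only examples, and Verbose would eat disk quickly on a tree this size:

$ duplicati-cli backup <target-url> <source-path> --log-file=/var/log/duplicati/bigtree.log --log-file-log-level=Retry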

This is, I believe, a Duplicati bug involving a bad lookup in the attempt to upload a synthetic filelist after a backup is interrupted by something (a fatal error may do it). The synthetic filelist shows the previous backup plus whatever new data made it in. There is more discussion elsewhere about whether this mechanism is still needed, now that manually stopping a backup is becoming more possible.

Your experience points out that clean stops aren’t always possible, so how should things go then?

The good news is you’re also confirming that the lack of a synthetic filelist for an interrupted day is not a big problem (relatively speaking), though you probably have a one-day gap instead of a synthetic backup for that day.

There would probably be some amount of inspection and repair, but more work to back up is likely too. Comparing that job log to a usual one might show differences beyond what you’ve posted. “BytesUploaded” under “BackendStatistics” in the “Complete log” might give an idea of whether you’re bandwidth-limited.

I’m not sure if this is going to get solved easily. Even with logs, there’s little to no logging of tmp file use, and tmp files are used for many things. I think another report had a stack trace that narrowed the scope a bit, to SpillCollector, which collects partially filled files of blocks to finish off the backup, and which might suffer from extreme skew where one “long pole” file runs long enough that the files finished early get deleted. Your backup does not seem so hugely long that a tmp file cleanup would be the cause, but it’s not totally ruled out.
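If a periodic /tmp cleaner stays on the suspect list, one way to rule it out is pointing Duplicati at its own temp directory via the standard tempdir option (the path below is just an example):

$ mkdir -p /var/tmp/duplicati
$ duplicati-cli backup <target-url> <source-path> --tempdir=/var/tmp/duplicati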

For the “403” question, you could check whether About --> Show log --> Stored recorded it, and if so, on which backups. Your experience does match mine, which is that the 403 is transient. I’m just not sure if it’s related here.

You know, I had this problem. I did all sorts of things and could not solve it. This was backing up to a USB drive connected to a router. So I took the drive off the router, plugged it into a PC, and did a Check Disk/Repair on it. Put it back. Problem solved! The error is so far removed from the solution.