Could not find the file.... /tmp

I’ve read up on this error and tried several things.

Here’s my “version” of this issue:

One or more errors occurred. (Could not find file "/home/albus/hdd1/tmp/dup-f5d31838-1031-4d68-86b7-b0054102d2a1" (Could not find file "/home/albus/hdd1/tmp/dup-f5d31838-1031-4d68-86b7-b0054102d2a1") (One or more errors occurred. (Cannot access a disposed object. Object name: 'MobileAuthenticatedStream'.)))

I back up regularly to B2. The backup has worked fine several times before. I’ve been uploading large files (some videos are 10GB or so) and didn’t have an issue. I’m using a Raspberry Pi 4 with a 500GB external drive mounted to it. I usually restart it once a week, apply patches, etc. The issues started after the last restart.

Here’s what I’ve tried so far, based on what I found in this forum and on GitHub:

  1. Moved the /tmp/ folder from its default location (the root of the RPi, which is too small…) to the external hard drive, where it has room (see the sketch after this list). Still ran into the same issue.
  2. Checked the health of the external drive (ext4, encrypted) and didn’t find any bad sectors.
  3. Tested the connection to B2. It is successful.
  4. Attempted to create a new backup as well. About 20GB out of roughly 130GB is uploaded when this issue takes place (the name of the missing file is different each time).
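
For reference on step 1, a minimal sketch of pointing Duplicati’s temp files at the external drive; the mount point is the one from the error message above, and the ownership line is only an assumption about which account runs Duplicati:

```
# Mount point taken from the error message; the chown target is an assumption
# about which account runs Duplicati (adjust to your setup).
sudo mkdir -p /home/albus/hdd1/tmp
sudo chown pi:pi /home/albus/hdd1/tmp

# Either add the advanced option  --tempdir=/home/albus/hdd1/tmp  to the job,
# or point the whole service there via the environment before Duplicati starts:
export TMPDIR=/home/albus/hdd1/tmp
```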

I don’t see any issues with the drive, including permissions (Duplicati is able to write many “chunks” into the drive’s /tmp folder - the chunks in the backup are 350MB).

I suspect maybe I have a file that is just too big to handle somehow? But how can that be? I have about 300GB of free space on the external drive.
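
For reference, a quick sanity check of the free space and write access on that drive; the path below is the one from the error message:

```
# Free space on the external drive (path from the error message)
df -h /home/albus/hdd1

# Can Duplicati's user create and remove a file in the temp folder?
touch /home/albus/hdd1/tmp/write-test && \
  rm /home/albus/hdd1/tmp/write-test && echo "write OK"
```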

Clueless at this point… help?
Thanks!

Hello and welcome!

Have you customized the “remote volume size” setting in your backup configuration, or is it left at the default of 50MiB? Sometimes people change this value to something very large, and it can present problems when Duplicati is creating the temporary volumes on local disk. It should be left at 50MiB in most cases.

Hi @drwtsn32 ,
Yes, I did. It’s at 350MB now; it used to be 100MB. Both values produced the same error. Still, it’s worth a shot, so I’ll lower it back to 50MB.

Sorry, I see now you mentioned 350MB chunks in your backup. That probably isn’t the issue then, since you have 300GB free.

Can you confirm that the missing tmp file truly is not present on the disk?

I am running another backup at the moment, but when I checked this morning (after a failed attempt) the directory was empty, no files.

Maybe the disk got unmounted for some reason? A USB error, etc. Check the output of the dmesg command.
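
Roughly something like this (assuming the external drive shows up as sda; adjust the pattern to your device):

```
# Look for recent USB disconnects, I/O errors, or ext4 complaints
dmesg --ctime | grep -iE 'usb|sda|ext4|i/o error' | tail -n 50

# Confirm the drive is still mounted where Duplicati expects it
mount | grep hdd1
```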

I don’t think that’s likely, since I tried several times. I can also inspect the disk after the backup fails and cd into the folder it’s mounted to and see my files…

Ok, yeah sounds like it’s not dismounting.

It is quite odd, because usually when Duplicati fails it leaves the temp files behind. It’s almost as if some other process might be deleting the files while Duplicati is running.
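
One thing worth ruling out: on a systemd-based OS (Raspberry Pi OS, for example) a timer periodically cleans old files out of /tmp, and a long-running job can lose its temp files that way. A rough way to check (the grep pattern is just an example, and --cat-config needs a reasonably recent systemd):

```
# Is the periodic /tmp cleanup timer active?
systemctl list-timers systemd-tmpfiles-clean.timer

# Show the cleanup rules (age limits etc.) that apply to /tmp
systemd-tmpfiles --cat-config | grep -B1 -A1 '/tmp'
```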

I started the last backup job after @drwtsn32 suggested reducing the chunk size back to 50MB, and it’s still going. I’m about 50% done, which is further than I got before with the errors mentioned.

It doesn’t seem to make sense that the chunk size is the reason for the failure, since I have enough space on the disk (and initially the issue happened with a 100MB chunk size). I’m thinking maybe the ISP is throttling me somehow, which wouldn’t shock me, but then the error should be network related, not what I see (unless the error is about writing to the remote location? Hmmm… need to look at the logs).

I should also mention that I completely powered off my RPi (which runs the backup jobs) and dismounted the drives, then powered on and mounted again. I don’t think this was the issue either, because the error happened about an hour into the job, not right away. As I mentioned, I scanned the disk for bad sectors and didn’t find any.
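
(For reference, a bad-sector scan along these lines; the device names below are placeholders for an encrypted ext4 drive and need adjusting:)

```
# Read-only surface scan (device name is a placeholder; adjust to your disk)
sudo badblocks -sv /dev/sda1

# Dry-run filesystem check on the unlocked ext4 volume
# (the mapper name is hypothetical for a LUKS-encrypted drive)
sudo fsck.ext4 -fn /dev/mapper/hdd1_crypt
```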

Another thing I’m wondering about: there are some 0-byte files in the /tmp folder that haven’t been deleted or modified since the backup job started. I wonder if that’s normal.
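
(A quick way to list them, in case it helps; the temp path is the one from the error message:)

```
# List Duplicati temp files that are 0 bytes, with their timestamps
find /home/albus/hdd1/tmp -maxdepth 1 -name 'dup-*' -size 0 -ls
```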

I wouldn’t think that’s the issue based on the error. It is saying the tmp file is missing/not found, which you confirmed was actually the case.

What about filesystem errors? What filesystem are you using, anyway?

You shouldn’t be left with any tmp files once the job is completed.

FYI, Backup to WebDAV / Nextcloud never completes is asking for logs of a similar case, and links to another topic that was trying to go even further on troubleshooting, but the flow of information seems to have stalled.

OK, it seems the backup completed, but now it’s stuck at “waiting for upload to finish”. It’s been this way since 2:30PM, and it’s now 9:30PM. Something’s wrong. I can’t imagine cancelling and starting all over again; it took over two days of uploads…

Please check About → Show log → Live → and set the dropdown to Verbose. Do you see any events there?

That’s where I went to check when I wrote this post.
Nothing out of the ordinary I think. It just says the latest chunk was uploaded:

Was that 2:24 PM timestamp hours ago?

Yes. At this point, yesterday… stuck at the same point.
So, if I restart, will it lose the entire backup and I’ll have to start again? It’s based on rsync, isn’t it? Shouldn’t it just continue where it left off?

Also, unrelated and out of curiosity: what software do you use to make this forum?

After waiting more than 24 hours for the upload to finish, I decided to restart the service. I couldn’t shut it down gracefully because it was just stuck waiting for the upload to finish, so I had to kill Duplicati and start it again. Now I have a “Detected non-empty blocksets with no associated blocks!” fatal error…

…is there a way to fix this? I really don’t want to start a 3-day backup all over again. Please help, this has been going on for a week and I can’t back up my stuff!

I’m just a volunteer on the forum and wasn’t involved in the setup, but it is GitHub - discourse/discourse: A platform for community discussion. Free, open, simple.

This is unfamiliar territory to me. Not sure if any of the blocks uploaded during an interrupted job can be re-used or not.

If you do end up starting over from scratch, you might try selecting a smaller set of data to back up, just to see if it can complete successfully. If so, then re-add more data to the backup selection and run another backup.

If you had any backups complete, you can look at their Complete log entries (in the job’s logs) for the RetryAttempts count.
Although your issue seems sudden, perhaps there was a sub-critical problem that has been getting worse.
number-of-retries can be raised to counter intermittent issues. About → Show log → Live → Retry is another way to see retries in real time. You can click on some of the error lines to see their details.

An easier plan for long-term monitoring is log-file=<path> and log-file-log-level=retry or higher, but the log reveals private information at higher levels such as verbose, so maybe don’t start that high right now.
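
As a sketch, those advanced options might look like this in the job configuration (the values and log path are just examples):

```
--number-of-retries=10
--retry-delay=30s
--log-file=/home/albus/hdd1/duplicati.log
--log-file-log-level=Retry
```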

This is merely a minimal sanity test that access is possible and a directory list request does not fail.
Duplicati.CommandLine.BackendTester.exe is a somewhat better test, and it doesn’t take all that long.
You would give it a target URL pointing to an empty folder, based on the URL from Export As Command-line.
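
A rough example of how that might look on the Pi; the install path, bucket name, folder, and credentials are placeholders, so reuse the real pieces from Export As Command-line and point at an empty folder:

```
# Run the backend tester against an empty folder in the B2 bucket
# (install path may differ depending on how Duplicati was installed)
mono /usr/lib/duplicati/Duplicati.CommandLine.BackendTester.exe \
  "b2://my-bucket/backend-test?auth-username=KEY_ID&auth-password=APP_KEY"
```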

The previously mentioned topic, Backup to WebDAV / Nextcloud never completes, was network issues exhausting retries, which somehow seems to lead to a missing Temp file and then failure of the backup.

No, but the similarly-named program duplicity is.

Ideally it would, however there are some sanity self-checks before a backup, and this one failed, so it won’t continue.
Currently, the recommended way (when it works…) to interrupt a backup is “Stop after current file”.
The next Beta should handle “Stop now”. I’m not sure it handles process kills, but you had little choice.

There is no sure-fire way to clear up “Detected non-empty blocksets with no associated blocks!”,
however because it’s likely a problem with a recently backed-up file, you could try deleting the last backup version.
Backup versions are numbered, with 0 being the latest. Numbers and dates are also shown on the Restore selector.

If you decide to try this, go to the Commandline screen, change the Command to delete, and change the
Commandline arguments box to --version=0. Run that and see if it cleanly deletes that version.
The Verify files button can repeat the self-check that failed. Maybe the test will pass after the deletion.
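
For orientation only, the shell equivalent looks roughly like this; the URL and database path below are placeholders, and the GUI Commandline screen fills in the real ones for you:

```
# Delete only the most recent backup version (version 0)
mono /usr/lib/duplicati/Duplicati.CommandLine.exe delete \
  "b2://my-bucket/backups?auth-username=KEY_ID&auth-password=APP_KEY" \
  --version=0 --dbpath=/path/to/backup-database.sqlite
```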

The other thing that sometimes works is the Repair button on the Database screen, and if you’re
willing, you can use Create bug report before that, in case posting it will help identify the issue.
Manual repairs to the database are sometimes possible, but that’s involved, so let’s try the simpler options first.

Excellent idea. When the usual backup stops working, try different things to see what can still work.