Keep getting System.AggregateException and System.IO.FileNotFoundException. Need help

“No filelists found on the remote destination” made me think of the no-files-seen issue, but you proved that files were seen by `list`. Looking up the message text (thanks for providing it) shows it actually comes from code that tries to recreate the database from backend files but found no dlist (list of files) to start from. Because the dlist contains the backup details, it can’t be finalized for upload until the very end of the backup.

The three usual get operations (the verification downloads after a backup) sample from remote files known to the database, but no database may mean no samples. The question becomes: where did the database go? Unfortunately I don’t use Docker, but I gather it often becomes a question of where to keep persistent data. Keeping it outside the container solves the problem of losing it all when you replace the container with a new version, but it needs some special steps, sketched below.
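I can’t confirm the exact image details, but a minimal sketch of that idea, assuming the image keeps its databases under /data (check your image’s documentation) and using a placeholder host path, would be:

```sh
# Sketch only: bind-mount a host folder over the container folder that holds
# Duplicati's databases, so they survive replacing the container.
# /volume1/docker/duplicati is a placeholder host path; /data is an assumption
# about where this particular image keeps its databases.
docker run -d \
  --name duplicati \
  -p 8200:8200 \
  -v /volume1/docker/duplicati:/data \
  duplicati/duplicati
```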

This actually made me understand the process better, but I still have no idea what caused the problem. Thanks for thinking out loud.

I have mounted the container’s /config folder to a folder on my NAS. I think I remember seeing .sqlite files in that folder, but it is currently completely empty. Is that normal? I still have an old linuxserver/duplicati container which I haven’t used since switching to duplicati/duplicati, and when I look in the config folder for the old linuxserver/duplicati, I do see a bunch of config files along with .sqlite files.

Because you’re on a small system (what memory size?) that can run a small backup but not a larger one, possibly some sort of resource issue arises. I don’t know how that leads to the original message, whose source is not known, but it appears to be the result of earlier problems (which a log might capture).
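If you can shell into the NAS, one way to watch for that (the container name `duplicati` here is a guess; substitute yours):

```sh
# Live CPU/memory usage for the container while a backup runs
docker stats duplicati

# Or a one-shot sample, e.g. for logging from a scheduled task
docker stats --no-stream duplicati
```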

My Synology NAS has 8GB of RAM with an Intel Pentium N3710. Usually only about 35% of the RAM is in use, and the Duplicati container is not limited in resource usage.

Duplicati makes heavy use of temporary file space (judging from your error, yours is at /tmp; is that inside the container?) for accumulating information and staging files for upload to the destination. Can you watch free space?
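A rough way to check, again assuming the container is named `duplicati`:

```sh
# Free space on /tmp as the container sees it
docker exec duplicati df -h /tmp

# Re-run every few seconds during a backup to catch it filling up
# (if `watch` is available on your NAS)
watch -n 5 'docker exec duplicati df -h /tmp'
```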

I have tried both mounting and not mounting the /tmp folder to the host filesystem, because I had the same idea about free space. Either way, everything that’s been going on still happens. I have about 9TB free on the NAS, so it shouldn’t be running out of space when /tmp is mounted, and since it behaves the same when it’s not mounted, I guess it’s not a space issue in that case either. (I’m not sure where the files end up when /tmp is not mounted.)

Can you check the job Database tab to see its path, then figure out where that really is? Watch free space. You can also watch the database itself. Especially from a clean start, I’d expect it to start small then grow.
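Something like this would do, once you know the path from the Database tab (the paths and filename below are placeholders):

```sh
# The database file lives wherever the job's Database tab says it does
docker exec duplicati ls -lh /data/Duplicati/ABCDEFGHIJ.sqlite

# Free space on the filesystem holding that database
docker exec duplicati df -h /data/Duplicati
```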

I do see a problem here. The databases for all of my backups say they are in /data/Duplicati, which is not a path I have mounted to the host, and so it must be getting cleared every time the container restarts; I have done that a couple of times while trying to solve problems. I will try mounting /data to the host filesystem now and see if that makes a difference. You’ll hear about it in my next reply.

Can it be that linuxserver/duplicati uses /config while duplicati/duplicati uses /data? I had mounted /config, not /data. If this is really the cause of all these issues (though I’m not sure how it would cause a 504), I’m going to be very frustrated.
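If so, the fix would just be changing the mount target, something like this (the host path is mine; the container paths are my assumption about the two images):

```sh
# Old image: linuxserver/duplicati keeps its databases under /config
docker run -d -v /volume1/docker/duplicati:/config linuxserver/duplicati

# New image: databases under /data, so the mount I should have used is
docker run -d -v /volume1/docker/duplicati:/data duplicati/duplicati
```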

Now that I have mounted /data to the host filesystem, my backup tasks are gone completely. That kind of sucks, but I guess I can set them up again with the exact same settings. This hasn’t happened on earlier container restarts, so maybe the /data folder never actually got cleared like I thought it had.

Edit: I also suddenly don’t have an encryption option anymore?? It just says “No Encryption” instead of the 256-bit AES encryption I used before. What can cause this??

Edit Edit: The encryption option is back after a container restart. I am so confused.

A log file was suggested earlier to try to see what got the 504 from NextCloud. That suggestion used log level `retry`, which is good because it doesn’t show private information like paths; there are higher levels if necessary.
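For reference, the two advanced options on the job would look something like this (the log path is just an example inside the container):

```sh
--log-file=/data/duplicati-job.log
--log-file-log-level=Retry
```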

I’m sorry, I must have read past this. I’ll add the log options to the backup task and see what that produces.

You can possibly gather some information this way on any stuck situation like the “No filelists found” error (is there a file with dlist in its name in the backup destination area?), or I suppose you could just start over by deleting the database (the Delete button) and manually deleting the corresponding NextCloud files.
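If you happen to have shell access to the NextCloud server, a quick way to check (the data path, user, and folder names are placeholders for your setup):

```sh
# Search the backup destination folder for dlist files
find /path/to/nextcloud/data/USER/files/backup-folder -name '*dlist*'
```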

I cannot find a file with ‘dlist’ in its name in the 3TB backup, but I do see one in the 300GB backup.

Edit edit edit (3 hours later): I let the 300GB backup recreate the database, and it suddenly says there are 15k files missing locally. That can’t be right, so I’m even more confused than before. I guess I’ll redo that backup, and if the 3TB backup says the same, I’ll redo that one too. I have no clue what’s going on at this point.