Can't repair database on Synology

I have a problem repairing my Duplicati database on my Synology NAS:

A single backup job with ~80GB of photo data - the backup is stored on OneDrive.
I had two successful runs/versions, but then the backup process crashed because there was no more space available in my temp folder. This was my mistake, as I had specified “TMP” instead of “TEMP” in the environment variable. But after this was fixed, the database was broken and could not be repaired.
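For reference, the fix was essentially just renaming that variable so the temp folder points at the big data volume. In shell terms it boils down to something like this - the path is only an example, and how the variable actually reaches Duplicati depends on how the service is started on the Synology:

```sh
# Temp location for Duplicati - the variable has to be named TEMP, not TMP.
# /volume1/duplicati/tmp is just an example directory on the large data volume.
export TEMP=/volume1/duplicati/tmp
```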
So I decided to delete it and run a recreation. This runs for several hours, and when I try to check the progress, the web server seems to be dead (the process is running but not serving the site - the page loads indefinitely).
If I kill and restart the Duplicati process, I see an error message saying that the operation was not successful (but I can't find any details - the local and remote logs are both empty).
Message: “Object reference not set to an instance of an object”
And this time there is definitely enough space left in the temp dir and in the home dir (where Duplicati places the SQLite DB).

I also tried exporting my backup config and reinstalling Duplicati (I also deleted the folder under .config).
But after importing the backup config I have the same problem again.
I have the same setup running on another NAS (same model) with even more images (~180GB) without any problems - but there I didn't make the mistake with the temp variable naming.
I also tracked the memory consumption over the first few hours of the repair: always a bit under 80%.
I also saw that the progress bar moves slowly until a point near the end, where it stops for a longer time before the server crashes.

Some details:
NAS: Synology DS218j
RAM: 512 MB
Duplicati version: 2.0.5.1_beta_2020-01-18

Screenshot from one run where I was able to capture the last screen in the browser after such a crash:

Log file - created with the command-line log option at level Info (unfortunately not from the same run as the screenshot):

Edit: the backup job that starts in the log at 06/10, 2am is just the normal weekly run of that backup job.

Can you please help me get my backup running again? Ideally without losing the two versions that are already online and without re-uploading 80GB (that took me quite a while with my internet connection here).

If you need further information, please let me know.

Thanks in advance!

Luke

I wonder if 512MB is enough RAM in your NAS for it to do all the normal NAS things and also run Duplicati (and Mono).

I run Duplicati in a Docker container on my NAS, so it's not a direct comparison… but the container alone consumes over 512MB of RAM.

Hi @drwtsn32,

Thanks for your answer!
But the first two runs completed without problems, and the other device also runs fine with the same limited 512MB of RAM.
For the first hours of the repair process I also monitored the RAM usage: it was always under 80%. (The NAS had no other tasks running during the repair.)

However, should I look for another backup solution? I don't want to re-upload everything and end up in the same situation again. I really like Duplicati because everything is encrypted transparently for the user.

One idea from my side: does it make sense to copy the whole config to my desktop, change the source path to the share on the NAS and start the repair there? Afterwards I would copy the recreated database back to the NAS. Or are the databases somehow machine-dependent? (And should I use a Linux system on my desktop?)

It could be that 512MB RAM isn’t an issue for normal operation and just causes problems for database recreations.

Your idea MIGHT work. I don't know offhand if Duplicati validates the local source files when doing a repair. If it doesn't even look at them, then maybe it would work. You'd definitely want to run it on Linux though, to most closely match the NAS.
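If you try it, the command-line form of a recreate would look roughly like the sketch below - the storage URL, passphrase and paths are just placeholders for your actual values, not something I've tested against your setup:

```sh
# Recreate the local job database on a Linux desktop, writing it to a local path.
# The "repair" command rebuilds the database from the remote files when the DB is missing.
duplicati-cli repair "onedrivev2://Photos?authid=..." \
  --passphrase="your-backup-passphrase" \
  --dbpath=/home/user/duplicati-rebuild/photos.sqlite
```

Afterwards you'd copy that .sqlite file back to the NAS and point the job's local database path at it.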

Hi,
I had issues with memory on Synology that were due to the location of the DB. I fixed it by moving the temp folder and the DB off the basic volume, which is typically very small on a Synology. The setup I usually see is a root volume of a couple of GB (/dev/md0, mounted on “/”), which is used for the OS, and then the actual storage (e.g. /dev/md2, mounted on “/volume1”).
What unblocked the situation was moving the database to “/volume1” and informing Duplicati about it with --dbpath, and also specifying a --tempdir, because SQLite can sometimes create big intermediate files.

May be worth trying this.
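For illustration, on the command line that looks something like the sketch below (paths and the storage URL are just examples; in the web UI the equivalent is changing the job's local database path and adding --tempdir as an advanced option):

```sh
# Keep both the job database and Duplicati's temporary files on the big volume.
duplicati-cli repair "onedrivev2://Photos?authid=..." \
  --dbpath=/volume1/duplicati/photos.sqlite \
  --tempdir=/volume1/duplicati/tmp
```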