Synology: all backup jobs gone after DSM update

Hi,

As described in the title, my NAS did an automatic update last night. When I started Duplicati again, I found that all my backup jobs were gone and I was staring at a blank screen! Also, instead of the 2018-01-23 Canary version, I was suddenly back on the 2017-08-01 Beta.

FYI: I start Duplicati from the command line (“nohup mono /volume1/@appstore/Duplicati…”) as root to work around some of the Synology issues.

So how did this happen - and how do I get my backup jobs back? The individual job databases are safely stored in /volume1/DuplicatiDB, so no problem there, but what happened to my Duplicati configuration?

Where does Duplicati store its configuration / jobs / etc. on Synology?

Thanx for any help getting this resolved.

Cheers,

Torsten

/root/.config/Duplicati is where you should look… but DSM sometimes deletes this folder as part of an upgrade, so it’s best to keep a copy elsewhere or specify another location.

You will probably have to reconfigure your backups…
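If you want to follow the “keep a copy elsewhere” advice, a minimal sketch could look like the following. Both paths are examples (the destination is a made-up folder name); the actual copy is commented out so you can review it first:

```shell
# Paths are examples; adjust to your setup.
SRC=/root/.config/Duplicati          # default Duplicati config location
DEST=/volume1/DuplicatiConfigBackup  # somewhere DSM updates shouldn't touch

# Stop Duplicati first, then copy the whole folder, e.g.:
# cp -a "$SRC" "$DEST"
echo "would copy $SRC to $DEST"
```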

I don’t know Synology (couldn’t afford the one I wanted, so I ended up on unRAID), but assuming @Wim_Jansen is correct that you’ll have to recreate your backups, you should still be able to point them at the existing /volume1/DuplicatiDB databases so you can at least avoid having to rebuild those.

Since this is apparently a known issue with DSM updates, you might want to try using this parameter to keep your jobs in a location that updates are less likely to overwrite.

Duplicati.Server.exe help

--server-datafolder: Duplicati needs to store a small database with all settings. Use this option to choose where the settings are stored. This option can also be set with the environment variable DUPLICATI_HOME.
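Putting that together with the nohup launch mentioned earlier, a sketch might look like this. The data folder path is just an example, and the mono invocation is commented out since the install path depends on your package:

```shell
# Example settings location on a data volume (adjust to taste).
DATA_DIR=/volume1/DuplicatiConfig

# Option 1: pass the folder explicitly when starting the server:
# nohup mono /volume1/@appstore/Duplicati/Duplicati.Server.exe \
#     --server-datafolder="$DATA_DIR" &

# Option 2: set the equivalent environment variable before starting:
export DUPLICATI_HOME="$DATA_DIR"
echo "settings will be stored in: $DUPLICATI_HOME"
```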

Thanx @JonMikelV and @Wim_Jansen - looks like Synology actually trashed that folder during the update. I have now set up my server data folder on volume1 so that this doesn’t happen again.

BTW: does actually including this data folder in one of my backups cause any kind of problem (some interdependency effect)?

Cheers,

Torsten

You’re welcome. Maybe we need a #howto for Synology “best practices”…

The individual job database is in use during the backup so you’ll likely get a file access error on that, but everything else should work just fine.

The exact same issue was reported in #3004 and #3005, so it seems the latest update consistently wipes the Duplicati configs.

Kenneth pointed out that you can configure Duplicati with an environment variable DUPLICATI_HOME=/vol/with/space to make Duplicati store everything outside of /root/.config.

Using that environment variable might be a must for Synology users if Synology is really this unreliable with their updates.


I had opened a support case with Synology to ask “wtf?” nicely, but I didn’t get much out of it. They’re happy to look at your logs and make sure your Synology is working, but…

I can confirm our operating system does not take into account extra third party packages installed onto the root folder of your NAS.

Unfortunately, I am not able to provide a complete breakdown on the update steps and why Duplicati was affected as we are unable to assist with third party software.

I can only recommend re-configuring this package manually following the update and / or contacting the Duplicati / Mono developers to check if a workaround exists.

My apologies for the inconvenience caused.

Maybe it would be possible to make a workaround specifically for Synology, putting the files outside /root.

Or maybe we could add a GUI option to move it (and just leave some kind of file in /root/.config/duplicati that we check on launch to figure out where our working dir is). Then, after an upgrade, a user might have to tell Duplicati where the files are located again, but the files themselves would be intact.


I’m not sure if this is related, but in addition to the “total loss on upgrade” symptoms, I find I can’t even run Duplicati from the Synology DSM GUI any more: it recognises that I have a backup configured, but when I click on its configuration it lists crazy stuff like “No encryption”. I have to start it from the command line while logged in to the NAS as root for things to work properly. Sigh.

I’m not sure it’s related, but let’s see if we can find out. We can always split it out into a new topic if it turns out to be.

Could you provide a sanitized version of the configuration?
E.g. by exporting your config to commandline format and manually removing sensitive info and filters if you have any.
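For anyone else doing the same, one way to sanitize an exported command line before posting is a quick sed pass over the obvious secret-carrying options. The exported line below is a made-up example, and the option list is not exhaustive; extend it with whatever sensitive options your export actually contains:

```shell
# Made-up export for illustration only.
EXPORTED='mono Duplicati.CommandLine.exe backup ssh://nas/backup /volume1/data --passphrase=secret123 --auth-password=hunter2'

# Redact the values of known secret options; add more option names as needed.
SANITIZED=$(printf '%s\n' "$EXPORTED" \
  | sed -E 's/(--passphrase|--auth-password)=[^ ]+/\1=REDACTED/g')
echo "$SANITIZED"
```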

Here it is @Pectojin, thanks:

I’ve elided the metadata in its entirety, except for the LastError, which I thought might shed some light on a previous repair failure. On reflection, though, that is probably related to a crash of the ssh client through which I was running the previous recreate attempt. If you think the metadata might be helpful, just say so and I’ll inline it.

I thought perhaps this might be a permissions issue, but DSM seems to be running Duplicati as root (and not working), same as me from the command line (which works).

Hmm, the config file looks fine. Pretty standard settings. I don’t see any reason it shouldn’t work as intended.

Are you using the command line with the same database, or without the parameter? Also, do you see any specific entries on the log page?

I don’t think this specifically applies to the upgrade, but it could be some kind of database corruption.

It should be the same database: before the latest install I ran a case-insensitive find for anything named duplicati on the NAS and deleted it all, and both the DSM-run instance and the command-line-run instance list the same single configured backup. My manually executed command line only adds the option that allows remote use of the web interface. I’ll keep digging into this since the behaviour has no obvious cause, but for the moment the fact that it works manually is good enough. Thanks for taking a look!
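The case-insensitive search mentioned above looks roughly like this (demonstrated on a scratch directory rather than the filesystem root, so it can be run safely anywhere):

```shell
# On the NAS the real search would be something like:
#   find / -iname '*duplicati*' 2>/dev/null
# Demo on a throwaway directory instead:
SCRATCH=$(mktemp -d)
mkdir -p "$SCRATCH/.config/Duplicati"
FOUND=$(find "$SCRATCH" -iname '*duplicati*')
echo "$FOUND"
```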

(… and no, nothing interesting in the Stored logs; would the Live feed be more informative if I were playing around to test it some more?)

You can usually get some pretty good logs out of the profiling level if you’re watching while it happens. It often helps because the logs include the raw error, which is easier to search for. :)
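A sketch of capturing those logs to a file while reproducing the problem. The option names come from Duplicati’s help output, so verify them against your version; the log path is just an example, and the launch line is commented out since it depends on your install path:

```shell
LOGFILE=/volume1/duplicati-profiling.log  # example path

# Start the server with file logging at profiling level, e.g.:
# nohup mono /volume1/@appstore/Duplicati/Duplicati.Server.exe \
#     --log-file="$LOGFILE" --log-level=Profiling &

echo "profiling log would be written to $LOGFILE"
```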