Sounds like there may be an issue with your local database. I might suggest you try renaming the sqlite file (do not delete, just in case) and doing a repair. You can see where the job-specific sqlite file is located and what it is named by going to the main Duplicati UI, clicking the job, and then clicking the Database link. After you rename the file, go to this same screen and click Repair.
A word of caution - the version you are running has a bug that MAY cause the database recreation phase to download all dblock files and take a very long time. If your backup is large, the issue can be even more pronounced, especially if your backup data is remote. This bug has been fixed in the latest Canary versions but hasn’t yet made it to the Beta channel.
You could try the database rename and repair with the version you are currently running. You may not hit the circumstances that trigger the long database recreation bug. If the recreation takes too long (say, over 8 hours), you could cancel it, install the latest Canary version, and try the recreation again.
Normally I would not advise using Canary in a production environment, but the latest Canary is very stable and is getting close to being promoted to experimental or beta. If you do opt to try the latest Canary, I would switch back to the beta channel once the next beta is released.
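The "rename, don't delete" step above can be scripted if you want a safety net. This is a minimal sketch only: the function name is mine, and the database path is a placeholder; use the actual path shown on the job's Database screen.

```python
# Hypothetical helper: set aside (rather than delete) a Duplicati job
# database before clicking Repair, so it can be restored if needed.
from pathlib import Path

def set_aside_database(db_path: str) -> Path:
    """Rename the job's sqlite file; Repair can then recreate it
    from the backend, and the .old copy remains as a fallback."""
    db = Path(db_path)
    aside = db.with_name(db.name + ".old")  # e.g. ABCDEFGHIJ.sqlite.old
    db.rename(aside)
    return aside
```

Stop Duplicati before moving the file, and keep the `.old` copy until a later backup run succeeds.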
I added other files locally, reran the backup, and the error disappeared; everything finished without any errors or warnings.
Should I be worried?
Maybe there are additional tests or repairs that are triggered when local data changes, but not when the local data matches the remote data? Also, as in the thread "Warnings like: "Duplicate path detected"", the issue was fixed either after I added data locally, or because a folder name ended with a space character and removing it fixed the issue. Maybe @kenkendk can answer this?
Testing restore occasionally should be standard procedure, and even more so after odd issues happen.
Do you ever run restore tests, preferably with `--no-local-blocks` or a direct restore to a different computer?
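For a scripted restore test along those lines, something like the sketch below could assemble the command. This is an assumption-laden example: the `duplicati-cli` launcher name, the `restore` subcommand, and the `--restore-path` option are taken from Duplicati's CLI documentation, and the storage URL is a placeholder; check them against your installed version.

```python
# Hypothetical sketch: build a restore-test command that ignores local
# blocks, so restored data really comes from the backend.
def build_restore_test(storage_url: str, restore_dir: str) -> list[str]:
    """Assemble a duplicati-cli restore of everything ("*") into a
    scratch directory, forcing downloads instead of reusing local data."""
    return [
        "duplicati-cli", "restore", storage_url, "*",
        f"--restore-path={restore_dir}",
        "--no-local-blocks=true",  # do not shortcut via existing local files
    ]
```

Restoring to a scratch directory (or a different computer entirely) keeps the test from touching your live files.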
Nothing similar. I had a Windows "blue screen" (I don't remember any details) about a month ago, but nothing between the last successful upload and the unsuccessful one. I made a successful upload (no errors, no warnings) and then (I don't remember if I changed anything in the local data) ran the upload a second time, 10 minutes after the first successful one, and it gave me the error I mentioned in the first post. After that I changed the local data and the upload went through without any errors.
The Verify files operation is documented in Verifying backend files as being just the tiny sampling test that happens after each backup, so pushing the button adds little. You can increase the sampling, but in any case this is mostly a test that the remote files have the expected content, done by downloading them.
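As a sketch of how that sampled download verification could be run on demand with a larger sample count: the `duplicati-cli` name and the `test <storage-URL> <samples>` subcommand shape are taken from Duplicati's CLI documentation (the related backup-time option is `--backup-test-samples`), but verify both against your version before relying on them.

```python
# Hypothetical sketch: build a command that downloads and verifies more
# than the default single sample set of remote files.
def build_verify_command(storage_url: str, samples: int) -> list[str]:
    """Assemble a duplicati-cli 'test' run over `samples` sample sets."""
    return ["duplicati-cli", "test", storage_url, str(samples)]
```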
There’s no substitute for testing a Restore, and how heavily and how often depends on how important your backup is to you. Maybe it’s not, maybe it’s a secondary, maybe it’s primary and super-important?
It’s up to you and depends on time available and maybe download costs or limits, but I’d recommend it.
Or if the backup doesn’t matter much, you can probably just keep going and see if any errors come up.
As a special test beyond the usual, a somewhat technical approach would be to install DB Browser for SQLite, shut down Duplicati, open a copy of your database, and run the SQL statement `PRAGMA integrity_check`.
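If you'd rather not install DB Browser, the same check can be run with a few lines of Python against a copy of the database (with Duplicati shut down); the function name here is mine, but `PRAGMA integrity_check` itself is standard SQLite.

```python
# Minimal sketch: run SQLite's built-in corruption check on a copy of
# the Duplicati job database.
import sqlite3

def integrity_check(db_path: str) -> str:
    """Return SQLite's integrity_check result: "ok" means no
    corruption was found; anything else lists the problems."""
    con = sqlite3.connect(db_path)
    try:
        (result,) = con.execute("PRAGMA integrity_check").fetchone()
        return result
    finally:
        con.close()
```

Run it on a copy, not the live file, so an open connection can't interfere with Duplicati.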
I have battled this issue off and on over the years. Some observations:
Sometimes it does work to delete and repair the database. I have done this many times.
Sometimes my systems get into a state where the error happens during, or immediately after, the repair that follows the delete.
The timing of when backups start failing with this makes me think it could be related to patching of other packages.
I’m running on Ubuntu, installed from the .deb but upgraded from the GUI. Uninstalling with `dpkg --remove duplicati` and then installing the latest .deb file has, on multiple occasions, gotten me back to a stable state (one that lasted at least several months).
Yesterday, when doing the above, I noticed the instructions on the website about installing mono from the Mono Project. I had been using the version available from the standard Ubuntu repos, which was v2 or v4, versus v6 from the Mono Project. I added the Mono Project repos, upgraded mono to their latest version, and re-installed Duplicati. I’m crossing my fingers that the recurrence of the issue was related to the old mono version, but only time will tell.
One more update - TL;DR - check your RAM for errors
Not long after posting the above message, I got the error again, on the server where I had seen it most frequently in the past. I tried uninstalling the older, GUI-upgraded package and re-installing the latest (since that had worked in the past). The apt-get install command kept aborting, and several troubleshooting steps later I checked the memory (with the Linux memtester tool) and found many errors. After ordering a new stick, things seem back to normal.
Incidentally, I had also started getting “Message has been altered, do not trust content” messages on another server when it was backing up (via SCP) to this computer. Those seem to be gone now too.