I am plagued with “attempt to write read-only database” errors. If I try to delete/recreate a database, it runs for hours, days, or weeks (depending on the backup size) doing SQL commands and downloading Duplicati files, then fails at the end with “attempt to write read-only database”. If I try to use the database, or repair the database, it reports that “repair failed”, and suggests I delete and recreate the database again. Help!
There is a ton of material on the Internet about this error message with SQLite, all pointing to problems with rights, permissions, or non-writable temp directories. I assume that you have covered all of that already. As a wild guess: much too often, error messages are misleading, and sometimes software reports an inability to write under a generic, oversimplified interpretation such as a read-only destination. Maybe the real problem is a lack of resources? Did you monitor available disk space, handles, and RAM?
Recreating the DB can be a very slow process depending on your data and storage location. Most of the “work” to recreate the DB happens in your “temp” folder; if the drive containing that folder is limited in available space or has quotas, strange things can happen. You may also want to flush the contents of your temp folder in case something in there is not being overwritten as it should.
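If you want to see what Duplicati is leaving in there before flushing, here is a quick hedged check. The `dup-` prefix is an assumption about how Duplicati names its scratch files, not something confirmed in this thread, so verify on your own system before deleting anything:

```shell
# Count candidate Duplicati scratch files in the temp folder.
# Assumption: Duplicati prefixes its temp files with "dup-"; confirm
# on your own system before flushing anything.
tmp="${TMPDIR:-/tmp}"
count=$(ls "$tmp"/dup-* 2>/dev/null | wc -l | tr -d ' ')
echo "found $count dup-* file(s) in $tmp"
```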
Moving beyond that, I’d like to know: did you manually delete the previous database? Are you using the GUI? Is Duplicati running as a service? What OS are you running? What’s your source/destination setup?
Thanks for your response. I appreciate that rebuilding takes FOREVER. My issue is that, at the end of the rebuild, it fails with “attempt to write read-only database”, after a week or more of rebuild. The rebuild process also drags my system down to a crawl, but I dare not reboot, because I’d have to start the rebuild all over again. Is there a way to “checkpoint” a rebuild, so that it can be restarted after a reboot?
Regarding “read-only”, I went through the Duplicati database folder and ran chmod 777 on everything, including the folder itself. This is not a shared machine, so I’m not terribly concerned about the security exposure of 777. I’m rerunning the failing rebuild(s) to see if that helps.
I checked the $TMPDIR directory, but didn’t see anything I could identify as Duplicati-related. Thanks for the suggestion.
To answer your questions: I used the GUI to delete the database. I’ve tried separate “delete” and “rebuild”, as well as “delete/rebuild”. I’ve previously confirmed that the delete works. Duplicati is running under mono-sgen64 as a process, not a service. I’m on macOS Big Sur 11.7.1. I cannot upgrade beyond Big Sur, because later macOS versions are not supported on my mid-2014 MacBook Pro. My sources are both internal and external drives, backed up separately with Duplicati. The destination is Google Drive.
Thanks again… Steve
Not to my knowledge.
I’m not certain about this, but maybe the “read-only” is coming from the DB itself, i.e. read-only access to a table in the DB vs. file permissions on the DB as a whole. When you clicked to delete the database, did you happen to check the containing folder to see that the DB was indeed deleted? You shouldn’t have to, but I’d check to verify that it is being deleted.
I’d also disable any other jobs from running during the rebuild process; sadly, you’ll need to open each job to disable its schedule (Step 4), and be sure to then go to Step 5 and click Save. If there is a more global way to disable scheduled jobs, I haven’t seen it.
At this point, stop any jobs, reboot, start Duplicati, disable jobs, delete the DB, verify it’s gone, start a DB rebuild, hope.
You could always copy the backup file set from Google Drive and move it to a more local folder/drive and try a local rebuild, which if nothing else will at least be quicker to fail.
Thanks again for the suggestions. Yes, I have confirmed (previously) that the DB is deleted. I’m not sure that disabling the other Duplicati backups will help, as it seems that Duplicati only runs a single job at a time, but I’ve taken your suggestion anyway. I’ve been re-running a DB rebuild for over a week now, so I’ll wait for it to complete (or fail) before rebooting.
Funny you should mention local rebuild: I’ve downloaded multiple Duplicati backup sets from a Google Drive that I was losing access to. I rebuilt the databases locally, then uploaded the Duplicati backups to a different Google Drive, and am trying to rebuild the databases from the new remote Google Drive now. Rebuilding DBs was necessary after backup set downloads and uploads because of other issues that corrupted the DBs.
I wonder: has anyone reported a problem rebuilding a database while a Time Machine (macOS) backup was running? I frequently see competition for local file access between Time Machine and other apps running on my Mac. Just to be safe, I think I’ll pause Time Machine while I’m rebuilding.
Thank you… Steve
You can try to generate a log if you want to see what happens.
If you want to go this way, I’d first and strongly advise staying away from ‘profiling’ or ‘explicit only’, as they will generate humongous files (as in multiple gigabytes). Second, use a terminal window. Use the UI only to export the job as a command line, transfer it into a text editor, replace ‘backup’ with ‘repair’, remove the files and directories to back up, and add
Then delete the DB (in the UI, not manually), and transfer the command from the editor to the terminal and launch it.
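To make the edit concrete, here is a hedged sketch of the transformation. The executable name, backend URL, and paths below are placeholders, not values from this thread, and whatever logging options you choose would still be appended at the end:

```shell
# Sketch of turning an exported "backup" command line into a "repair" one.
# Every URL, folder, and dbpath here is a placeholder.
exported='mono Duplicati.CommandLine.exe backup "gdrive://bucket" /Users/me/Docs --dbpath=/tmp/job.sqlite'
# 1) swap the verb, 2) drop the source folder(s); logging options get appended.
repair=$(printf '%s' "$exported" | sed -e 's/ backup / repair /' -e 's| /Users/me/Docs||')
echo "$repair"
```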
It may even provide you with a backtrace (directly in the terminal window, not in the log file) in case of bad stuff happening, which could be missing from the web UI.
My best guess is that Duplicati is colliding with Time Machine (which seems to make files “read only” when accessing them). So, since I haven’t seen this problem since I paused Time Machine while recreating a database (ran for over a month without error), I guess I can close this? Tx to all who responded… Steve
Update: I turned off Time Machine for a few weeks and never had a “read only database” error. As soon as I enabled Time Machine, I got another error. Seems to be the culprit! Any chance that Duplicati can capture this error and retry later?
Your problem seems to be a classic case of a database conflicting with a system feature; on Windows it’s the antivirus, and the solution there is always the same: exclude the directory from the antivirus scan. I have never known a database that handles this itself. In your case, an exclusion seems the best option.
You would probably have to use a bash script coupled with `--run-script-before` to gate your Duplicati backups: have it check `tmutil currentphase` and look for `.inProgress` folders on the TM drive, and if neither indicates a running backup, let the job proceed. That does not in any way ensure that TM doesn’t start a backup 0.5 seconds after Duplicati starts. To stop that from happening, you’ll probably have to do a `tmutil stopbackup` followed by a `tmutil disable` while Duplicati runs, then a `tmutil enable` (via `--run-script-after`) once finished.
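A hedged sketch of such a before-script, under two assumptions worth verifying on your own machine: that `tmutil currentphase` prints “BackupNotRunning” when Time Machine is idle, and that a non-zero exit from a `--run-script-before` script makes Duplicati skip the job:

```shell
#!/bin/sh
# Hedged sketch of a --run-script-before wrapper for Duplicati on macOS.
# Assumptions (verify locally): `tmutil currentphase` prints
# "BackupNotRunning" when Time Machine is idle, and a non-zero exit
# code from this script makes Duplicati skip the scheduled job.

tm_is_idle() {
    # Decide from a phase string whether Time Machine is idle.
    [ "$1" = "BackupNotRunning" ]
}

if command -v tmutil >/dev/null 2>&1; then
    phase="$(tmutil currentphase)"
    if tm_is_idle "$phase"; then
        tmutil stopbackup   # cancel anything that started a moment ago
        tmutil disable      # keep TM off while Duplicati runs (may need sudo)
        exit 0
    else
        echo "Time Machine busy (phase: $phase); skipping Duplicati run" >&2
        exit 1              # tell Duplicati not to run this job
    fi
fi
```

The matching `--run-script-after` would just run `tmutil enable`. Note that `tmutil disable`/`enable` typically require root privileges, so the process running Duplicati needs those rights for this to work.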
Thank you. I’ll try the run before/after scripts.