Database locked even when UI says "Finished!"

I landed here because I was experiencing the same issue. But there seems to be a UI bug. The database remains locked even when the UI says “Finished!” in the progress bar in the header. But the backup really isn’t finished until the progress bar goes white again, and says “Next Scheduled Task”.

That was confusing till I figured it out.

I watched my Live log in “Profiling” mode during the end of a backup run and it did appear that some of the entries below happened AFTER the status bar had changed to “Finished”.

It might just be due to a delay in getting the live log contents into the web browser, but if not, I could see how this would be confusing. Note that in my case the log entries that continued after the “Finished” message only lasted 10 seconds or so.

Oct 8, 2017 7:05 AM: Email sent successfully using server: smtp://myServer:12345
Oct 8, 2017 7:05 AM: Whole SMTP communication: Connected to smtp://myServer:12345/?starttls=when-available
Oct 8, 2017 7:05 AM: Running Backup took 00:28:15.570
Oct 8, 2017 7:05 AM: ExecuteNonQuery: DELETE FROM "RemoteOperation" WHERE "Timestamp" < ? took 00:00:00.655
Oct 8, 2017 7:05 AM: Starting - ExecuteNonQuery: DELETE FROM "RemoteOperation" WHERE "Timestamp" < ?
Oct 8, 2017 7:05 AM: ExecuteNonQuery: DELETE FROM "LogData" WHERE "Timestamp" < ? took 00:00:00.891
Oct 8, 2017 7:05 AM: Starting - ExecuteNonQuery: DELETE FROM "LogData" WHERE "Timestamp" < ?
Oct 8, 2017 7:05 AM: ExecuteNonQuery: UPDATE "RemoteVolume" SET "VerificationCount" = MAX(1, CASE WHEN "VerificationCount" <= 0 THEN (SELECT MAX("VerificationCount") FROM "RemoteVolume") ELSE "VerificationCount" + 1 END) WHERE "Name" = ?
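While those post-backup queries are running, the local SQLite database stays write-locked. If you want to check from the outside whether the lock has been released yet, a generic probe like this works against any SQLite file (nothing Duplicati-specific; point it at whatever your job's local database path is):

```python
import sqlite3

def sqlite_db_is_locked(path: str) -> bool:
    """Return True if another connection currently holds a write lock on
    the SQLite database at `path`. Illustrative sketch only; the location
    of Duplicati's local database varies by install."""
    try:
        conn = sqlite3.connect(path, timeout=0.1)
        try:
            # BEGIN IMMEDIATE tries to acquire a write lock; it fails fast
            # with "database is locked" if another writer holds it.
            conn.execute("BEGIN IMMEDIATE")
            conn.rollback()
            return False
        finally:
            conn.close()
    except sqlite3.OperationalError:
        return True
```

Polling this in a loop would tell you exactly when “Finished” actually means finished.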

I’ve noticed this as well: until the progress bar disappears, the backup is still active. It makes sense that some routine maintenance tasks need to run after the backup itself finishes. As @JonMikelV said, it doesn’t take too much longer once it says finished, but it is a bit confusing.

Maybe just changing the message slightly would help: “Backup Finished, Performing Post Backup Maintenance” or something?

@JonMikelV - I’ll take a look at my logs next time I have a chance. I’m currently rebuilding my local backup, as I moved Duplicati from being installed as a Docker container to being installed directly on the machine.

But I remember it seeming to take more than 10 seconds for the finished message to clear.

I think @sanderson is on the right track - if post backup maintenance is happening, a message indicating that would be fantastic.


Did you (choose not to) try just moving the appropriate sqlite file to the machine?

I did not try that. But also, after reading Choosing Sizes in Duplicati I decided I wanted to change my block and dblock settings (to 500KB/1GB) for my local backup since most of my files are media and the files are larger.

Good idea, though it sounds like you’re rebuilding your local hash index from the remote dblocks, not regenerating the remote dblock archive files themselves.

If that’s the case, then the new dblock size will only apply to archives created after the change and to re-compressed dblocks. But if you have version history cleanup enabled, you should EVENTUALLY get everything in the new size. I’m not sure if there’s a command to force a download / rebuild of all dblocks.

I’m less sure what will happen with a block size change. (Isn’t 100KB the default?) Specifically, those blocks are used for deduplication, which, in theory, means no blocks of the new size will match any of the old size (unless the file is smaller than both block sizes).

The result is that EVERY file will be fully backed up again even if nothing changed. Note that this can be partially mitigated by detecting file changes by size or date only, rather than by block hash lookups, in which case only changed files would be fully backed up after the new block size is used.
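To illustrate why (a hedged sketch of fixed-size chunking in general, not Duplicati’s actual hashing code): the same unchanged bytes produce entirely different block hashes once the block size changes, so nothing deduplicates against the old blocks.

```python
import hashlib
import random

def block_hashes(data: bytes, block_size: int) -> set:
    """Hash fixed-size blocks the way a dedup engine conceptually does.
    Illustrative only; real implementations differ in detail."""
    return {
        hashlib.sha256(data[i:i + block_size]).hexdigest()
        for i in range(0, len(data), block_size)
    }

# ~1MB of deterministic pseudo-random "file content" that never changed
data = random.Random(0).randbytes(1_024_000)

old = block_hashes(data, 100 * 1024)  # hashed with the old 100KB block size
new = block_hashes(data, 500 * 1024)  # hashed with the new 500KB block size

# Disjoint hash sets: every block of the unchanged file is "new" again.
print(len(old & new))  # → 0
```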

Please let me know if you DON’T notice any of this happening so I can know to revisit my understanding of blocks and dblocks. :slight_smile:

@JonMikelV - I am such a newb with Duplicati & Linux that I just nuked everything and started from scratch. The documentation stated that you can’t change the block size after the initial creation, so that is why I nuked everything and started over.

Also, according to the documentation, the default block size is 50KB, and the default dblock size is 50MB.

I forked this to its own thread since I think your issue (“Finished” doesn’t really mean finished…) is distinct from the original one.

Thanks for correcting me. I knew about the 50MB dblock size but my memory failed me on the 50KB block size. (Nope, really is 100KB.)

@kenkendk, how hard would it be to add an early-in-the-run check that looks at existing block size versus passed in parameter and errors out with an appropriate message when they don’t match?

Where did you find this information? This is incorrect, the default block size is 100KB (102400 bytes), so if some documentation indicates it’s 50 KB, this should be corrected.

It already does this, unless I don’t understand you correctly. I get this message instantly after modifying the block size and starting a new backup:
[screenshot of the error message]

Confirmed - I tried adding --blocksize=110 to an existing backup (which apparently did indeed default to 100KB) and upon trying to run the backup got a message similar to yours.
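For anyone curious, the kind of early-run guard being described could look something like this (a hypothetical sketch, not Duplicati’s actual code; the function name and message are made up):

```python
def check_blocksize(stored_bytes: int, requested_bytes: int) -> None:
    """Hypothetical sketch of a pre-run sanity check: refuse to run when
    the requested block size differs from the one recorded when the
    backup was first created."""
    if stored_bytes != requested_bytes:
        raise ValueError(
            f"Block size was {stored_bytes} bytes when this backup was "
            f"created and cannot be changed to {requested_bytes} bytes; "
            "block size is fixed after the initial backup."
        )
```

The point is simply that the comparison runs before any data is touched, so a mismatch fails fast with a clear message.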

Sorry about that. I need to cut down on replying from mobile devices where I don’t have as much testing access as I should have. :blush:

So unless dtpsolutions can remember where he saw the incorrect 50KB documentation, I guess the last few posts can be ignored and we can get back to discussing “Finished!” vs. “database is no longer locked”. :slight_smile:

@JonMikelV @kees-z - I screwed up. You guys are correct, it’s 100KB. My memory on the documentation was faulty. Ultimately, when I decided to nuke everything and rebuild, I changed my block size to 500KB and my dblock size to 1GB.

I edited my post above to my true numbers, and my screwup is forever captured in the quoted reply so hopefully future readers understand what is going on.

For the OP, I think the time spent in the “Finished” part is actually the database VACUUM operation, where it rewrites the local database in a bid to reclaim space.
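For reference, that operation is plain SQLite `VACUUM`, which rewrites the entire database file to reclaim free pages; on a large local database that rewrite can take a while. A generic sketch (the database path here is hypothetical, not Duplicati’s real file name):

```python
import sqlite3

# VACUUM copies all live pages into a new file and drops the free ones,
# so the whole database is rewritten. On a multi-GB local database this
# can take noticeable time, which would explain a pause after "Finished".
conn = sqlite3.connect("backup-local.sqlite")  # hypothetical path
conn.execute("VACUUM")
conn.close()
```

Note that `VACUUM` also needs enough free disk space to hold the temporary copy while it runs.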

This step was removed from Duplicati in 2.0.2.2 (thus not fixed in the beta): Release v2.0.2.2-2.0.2.2_canary_2017-08-30 · duplicati/duplicati · GitHub

If/when you upgrade to 2.0.2.2 or beyond, please let us know if this issue goes away for you.