Recreating database logic/understanding/issue/slow

My guess is that the incredibly slow database recreation is due to transaction settings in SQLite.

By default, SQLite opens a transaction for every single INSERT, which means that for a million inserts (typical 100 GB of data / default 100-KB blocks = 1 million blocks) you get a million temporary journal files committed to your (SSD) hard disk, followed by a million small writes to the DB file itself (also committed, not cached in memory, since SQLite waits for the commit to reach disk before it closes the transaction).

This gives exactly the behavior that others have described: a lot of disk activity (wearing out the SSD) with hardly any visible progress, and a DB recreate that takes many weeks (even for a locally stored backup set, as it was in my case).

An almost one-line remedy is to wrap the processing of one dblock file (some 500 INSERTs) in BEGIN / END so it becomes a single transaction. My guess is that this would speed the process up by roughly 500 times (1 day instead of the 3 weeks reported by one of the users in this thread).
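As a rough illustration (a standalone sqlite3 sketch with a made-up table, not Duplicati's actual schema or code), the difference between committing every INSERT and wrapping one dblock's worth of INSERTs in a single transaction looks like this:

import os, sqlite3, time

def insert_blocks(db_path, rows, batched):
    # isolation_level=None puts the connection in autocommit mode: without an
    # explicit BEGIN, every INSERT is its own committed transaction
    con = sqlite3.connect(db_path, isolation_level=None)
    con.execute("CREATE TABLE Block (Hash TEXT, Size INTEGER)")  # made-up table
    start = time.time()
    if batched:
        con.execute("BEGIN")    # one transaction for the whole "dblock"
    for i in range(rows):
        con.execute("INSERT INTO Block VALUES (?, ?)", ("hash%d" % i, 102400))
    if batched:
        con.execute("COMMIT")   # one journal/fsync cycle instead of 500
    con.close()
    return time.time() - start

for batched in (False, True):
    path = "demo-%s.sqlite" % batched
    if os.path.exists(path):
        os.remove(path)
    print("batched" if batched else "per-INSERT",
          round(insert_blocks(path, 500, batched), 2), "seconds")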

What is more, for a DB rebuild (including building the temporary DB when restoring without the original DB), the durability machinery in SQLite can effectively be turned off (there are PRAGMA settings for this), since in the case of an interruption the temporary DB will be discarded anyway. This should speed up the process by as much as 1000 times, in line with what some users have reported needing.
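And when the database is throwaway until the rebuild finishes, those durability guarantees can be relaxed with PRAGMAs. A minimal sketch of what I mean (again a standalone example, not Duplicati's actual code; the file name is made up):

import sqlite3

con = sqlite3.connect("rebuild-temp.sqlite")   # hypothetical scratch database
con.execute("PRAGMA journal_mode = OFF")       # no rollback journal file at all
con.execute("PRAGMA synchronous = OFF")        # don't wait for fsync on commit
con.execute("PRAGMA temp_store = MEMORY")      # keep temporary structures in RAM
# ... the normal recreate INSERTs would run here ...
con.close()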


That would be awesome if true. We should test it!

My concern is that the code already looks like it uses one transaction per dblock (although watching its -journal file with Process Monitor, below, shows what looks like a close per block, not per dblock – anyone know IRPs?). From a profiling-level log plus --profile-all-database-queries of a recreate of my Linux ISO + length-1 + length-0 test backup:

2019-04-25 11:13:30 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitUpdateFilesetFromRemote]: Starting - CommitUpdateFilesetFromRemote
2019-04-25 11:13:31 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitUpdateFilesetFromRemote]: CommitUpdateFilesetFromRemote took 0:00:00:00.723
2019-04-25 11:13:51 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRecreateDb]: Starting - CommitRecreatedDb
2019-04-25 11:13:53 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRecreateDb]: CommitRecreatedDb took 0:00:00:01.993
2019-04-25 11:13:56 -04 - [Verbose-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-ProcessingAllBlocklistVolumes]: Processing all of the 35 volumes for blocklists: duplicati-b167500d2e2a941ca8c075859c0bc22f9.dblock.zip, duplicati-b16d5524a88c24fde99642f42b9faf1ad.dblock.zip, duplicati-b195763b0f55645f088585158de6d2107.dblock.zip, duplicati-b1a7ef013b20d46648d129c3c62a4494d.dblock.zip, duplicati-b1dc943b66de348009a0a2e3f5cc480bb.dblock.zip, duplicati-b1de83c2777f54ba787af25eae7179aff.dblock.zip, duplicati-b1ff26aa09502408cb3e326fe035d7f06.dblock.zip, duplicati-b2c49bc3e6b4440d5a97033ed72ada825.dblock.zip, duplicati-b2d06fe0458aa48c29ab01456fee8c085.dblock.zip, duplicati-b358f80645bba4878b0573d581f98c77e.dblock.zip, duplicati-b3cd136cb6f734c5ab4026e6ac90228a8.dblock.zip, duplicati-b3d3a14d375fe4b1580d333ae31254ea0.dblock.zip, duplicati-b3e6f89ee06a1435aa413d83ab03acdf8.dblock.zip, duplicati-b40aac901e79848898f8aa3360c2a53d9.dblock.zip, duplicati-b41721e38341e4d8fa119628cd156f072.dblock.zip, duplicati-b50333b60289b4748b6aefadde5a5b314.dblock.zip, duplicati-b5310f989cf934c228078fb3f1ac2cdda.dblock.zip, duplicati-b58caaa19663c4367b94d139d2878461d.dblock.zip, duplicati-b5b7a8303c98849928a7aeadeb511a735.dblock.zip, duplicati-b5dd98dddb9284efdba791913ac58483b.dblock.zip, duplicati-b5e0ed3bf021941618d44c34318713cc0.dblock.zip, duplicati-b614685f276a1459bb7a4ba8b5a9ecfbb.dblock.zip, duplicati-b623dd3d548594d91baeca344204f4423.dblock.zip, duplicati-b63c02ee435ab41629a68a82cc01d283c.dblock.zip, duplicati-b7dd0772ccf124ef4a136aaac7e33392f.dblock.zip, duplicati-b8aab30fb66ee4bf59d25ac36e6e71819.dblock.zip, duplicati-ba78c1e302368477697a585a76069a31a.dblock.zip, duplicati-bad20c40f66ba4e299bd8d0ad1d91aa5d.dblock.zip, duplicati-bb6c7b6bf121341c6ab5bfcbbe5f84ba7.dblock.zip, duplicati-bc1c3bfd0c7e0473e92551e68008d9cc8.dblock.zip, duplicati-bd2a2f42f628c439d99251f1423a1d696.dblock.zip, duplicati-bdca2732bc30a42a987d1aee472666426.dblock.zip, duplicati-be2badc43a0a24ff3a2749c0581016416.dblock.zip, duplicati-bf0f4c68b2e0940d9b6df908dd64c8e4a.dblock.zip, duplicati-bfdaf5015c45946b0bddcb08972312ba2.dblock.zip
2019-04-25 11:13:56 -04 - [Verbose-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-ProcessingAllBlocklistVolumes]: Processing all of the 35 volumes for blocklists
2019-04-25 11:14:51 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: Starting - CommitRestoredBlocklist
2019-04-25 11:14:51 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: CommitRestoredBlocklist took 0:00:00:00.475
2019-04-25 11:15:30 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: Starting - CommitRestoredBlocklist
2019-04-25 11:15:31 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: CommitRestoredBlocklist took 0:00:00:00.919
2019-04-25 11:15:32 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: Starting - CommitRestoredBlocklist
2019-04-25 11:15:33 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: CommitRestoredBlocklist took 0:00:00:00.634
2019-04-25 11:16:11 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: Starting - CommitRestoredBlocklist
2019-04-25 11:16:11 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: CommitRestoredBlocklist took 0:00:00:00.467
2019-04-25 11:16:30 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: Starting - CommitRestoredBlocklist
2019-04-25 11:16:30 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: CommitRestoredBlocklist took 0:00:00:00.370
2019-04-25 11:16:50 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: Starting - CommitRestoredBlocklist
2019-04-25 11:16:50 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: CommitRestoredBlocklist took 0:00:00:00.302
2019-04-25 11:17:07 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: Starting - CommitRestoredBlocklist
2019-04-25 11:17:08 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: CommitRestoredBlocklist took 0:00:00:00.942
2019-04-25 11:17:22 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: Starting - CommitRestoredBlocklist
2019-04-25 11:17:22 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: CommitRestoredBlocklist took 0:00:00:00.843
2019-04-25 11:17:40 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: Starting - CommitRestoredBlocklist
2019-04-25 11:17:40 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: CommitRestoredBlocklist took 0:00:00:00.322
2019-04-25 11:17:59 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: Starting - CommitRestoredBlocklist
2019-04-25 11:17:59 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: CommitRestoredBlocklist took 0:00:00:00.334
2019-04-25 11:18:15 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: Starting - CommitRestoredBlocklist
2019-04-25 11:18:16 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: CommitRestoredBlocklist took 0:00:00:00.708
2019-04-25 11:18:16 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: Starting - CommitRestoredBlocklist
2019-04-25 11:18:16 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: CommitRestoredBlocklist took 0:00:00:00.297
2019-04-25 11:18:29 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: Starting - CommitRestoredBlocklist
2019-04-25 11:18:30 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: CommitRestoredBlocklist took 0:00:00:00.353
2019-04-25 11:18:35 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: Starting - CommitRestoredBlocklist
2019-04-25 11:18:35 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: CommitRestoredBlocklist took 0:00:00:00.397
2019-04-25 11:18:36 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: Starting - CommitRestoredBlocklist
2019-04-25 11:18:36 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: CommitRestoredBlocklist took 0:00:00:00.249
2019-04-25 11:18:41 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: Starting - CommitRestoredBlocklist
2019-04-25 11:18:42 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: CommitRestoredBlocklist took 0:00:00:00.576
2019-04-25 11:18:43 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: Starting - CommitRestoredBlocklist
2019-04-25 11:18:44 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: CommitRestoredBlocklist took 0:00:00:00.429
2019-04-25 11:18:46 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: Starting - CommitRestoredBlocklist
2019-04-25 11:18:47 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: CommitRestoredBlocklist took 0:00:00:00.162
2019-04-25 11:18:48 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: Starting - CommitRestoredBlocklist
2019-04-25 11:18:49 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: CommitRestoredBlocklist took 0:00:00:00.351
2019-04-25 11:18:50 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: Starting - CommitRestoredBlocklist
2019-04-25 11:18:51 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: CommitRestoredBlocklist took 0:00:00:00.327
2019-04-25 11:18:54 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: Starting - CommitRestoredBlocklist
2019-04-25 11:18:54 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: CommitRestoredBlocklist took 0:00:00:00.227
2019-04-25 11:18:57 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: Starting - CommitRestoredBlocklist
2019-04-25 11:18:57 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: CommitRestoredBlocklist took 0:00:00:00.435
2019-04-25 11:19:00 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: Starting - CommitRestoredBlocklist
2019-04-25 11:19:00 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: CommitRestoredBlocklist took 0:00:00:00.272
2019-04-25 11:19:02 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: Starting - CommitRestoredBlocklist
2019-04-25 11:19:02 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: CommitRestoredBlocklist took 0:00:00:00.411
2019-04-25 11:19:06 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: Starting - CommitRestoredBlocklist
2019-04-25 11:19:06 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: CommitRestoredBlocklist took 0:00:00:00.195
2019-04-25 11:19:07 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: Starting - CommitRestoredBlocklist
2019-04-25 11:19:08 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: CommitRestoredBlocklist took 0:00:00:00.249
2019-04-25 11:19:10 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: Starting - CommitRestoredBlocklist
2019-04-25 11:19:11 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: CommitRestoredBlocklist took 0:00:00:00.369
2019-04-25 11:19:14 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: Starting - CommitRestoredBlocklist
2019-04-25 11:19:14 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: CommitRestoredBlocklist took 0:00:00:00.256
2019-04-25 11:19:22 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: Starting - CommitRestoredBlocklist
2019-04-25 11:19:22 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: CommitRestoredBlocklist took 0:00:00:00.483
2019-04-25 11:19:25 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: Starting - CommitRestoredBlocklist
2019-04-25 11:19:25 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: CommitRestoredBlocklist took 0:00:00:00.160
2019-04-25 11:19:28 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: Starting - CommitRestoredBlocklist
2019-04-25 11:19:29 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: CommitRestoredBlocklist took 0:00:00:00.424
2019-04-25 11:19:30 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: Starting - CommitRestoredBlocklist
2019-04-25 11:19:31 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: CommitRestoredBlocklist took 0:00:00:00.336
2019-04-25 11:19:33 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: Starting - CommitRestoredBlocklist
2019-04-25 11:19:33 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: CommitRestoredBlocklist took 0:00:00:00.367
2019-04-25 11:19:35 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: Starting - CommitRestoredBlocklist
2019-04-25 11:19:36 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: CommitRestoredBlocklist took 0:00:00:00.265
2019-04-25 11:19:38 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: Starting - CommitRestoredBlocklist
2019-04-25 11:19:38 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitRestoredBlocklist]: CommitRestoredBlocklist took 0:00:00:00.267

The above backup was made for my tests in Empty source file can make Recreate download all dblock files fruitlessly with huge delay #3747, where I attempt to identify the current and historical reasons behind the all-dblock processing. Ideally, a Recreate should be over by the line which (for my little backup) said CommitRecreatedDb took 0:00:00:01.993; the 35 dblocks should never have been processed at all. In that GitHub issue I did make a case for efficient processing where dblocks genuinely require processing, but I made a stronger case that this should be an exceptional situation rather than the near-usual one.

If anybody would like to look at the actual code (being able to read the rather complex SQL involved may help):

Here to here looks like dlist processing (all in one transaction)
Here to here looks like dindex processing (all in one transaction)
Here to here looks like dblock processing (transaction per dblock, but Process Monitor looks suspicious)

That seems to be the problem.
I would provide all the logs and information I can, if properly instructed, but no one who could do something about it seems to give a fuck. Instead we get canary after canary, most of them containing "improved code quality". Thanks Obama



There is only so much time unpaid volunteers have each week so they work on what they can achieve that week :slight_smile:

And it's not like there is nothing to do, with 814 issues currently open on GitHub: duplicati/duplicati - Store securely encrypted backups in the cloud!

I am now about two weeks into my recreation vacation, I am at "OverallProgress":0.999941468, and now it seems to have crashed.
There is barely any drive or CPU activity left, it grabbed the last dblock more than half an hour ago, and my log shows some errors; picture inbound.

And here is the whole error as a quote:

It cannot simply try this again, but I need that database.
Does anyone have any ideas left?
It apparently fails to load the "AzureAssembly".

First of all, I don't use Microsoft Azure, but if it needs these binaries, shouldn't that be something to check beforehand?

Apart from the timestamp, the error seems to be the same every time and it is seemingly still retrying, so maybe it is not too late to fix, if someone can tell me where to get a working "Microsoft.Windows.Azure.Storage.dll" and how to swap it in without causing a crash.


Edit 4:
I have now swapped the Microsoft.Data.Services.Client.dll for the one demanded by the log, matching version, language and token, but for more than 40 minutes it has not retried. Can I nudge it somehow to carry on?
Remember, whoever suggests to just retry after two weeks has a good chance of burning in hell forever


Sure. The rest involve command line tools such as Duplicati.CommandLine.RecoveryTool.exe or scripts, but since we've gotten this far with the GUI restore, we might as well see if we can work around the current spot. This workaround may seem ridiculous, and in a way it is (but you've hit more trouble spots than usual).

The previous idea was to try to read or copy the database out of the temporary file area, but the concern was whether that could cause an error and waste a bunch of time/work/wear. While I'd in a way be happier if Duplicati had fully given up, seeing it get quieter means maybe the database is inactive and can be taken.

Typically it lives in your user's Windows temporary files (unless you set it up otherwise), so looking at your C:\Users\<user>\AppData\Local\Temp might be useful. Copying out all the files starting with dup- would collect any candidates, but before that it would be useful to install Sysinternals Process Explorer and use its Find facility to see which files Duplicati has open in your Temp. One might be your database. I'm not going to try to predict whether it's open at this point. If no files are open, that's even better for file safety.


Installing DB Browser for SQLite may let you try out the copied candidates based, for example, on the date being sometime between when the restore began and now, and the size not being around the 50 MB (default) dblock size. Possibly this plan will get complicated by the default weak encryption of the DB on Windows. Worth a try. Basically, try opening suspects (read-only is good) and see if one opens and has about 18 or more tables.
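If you want to pre-screen the copied candidates before pointing DB Browser at them, a rough sketch like the following would list which files look like plain SQLite databases and how many tables each has. The dup-* naming and the Temp path just restate the assumptions above (<user> is a placeholder), and files under the default weak encryption won't pass the header check:

import glob, sqlite3

for path in glob.glob(r"C:\Users\<user>\AppData\Local\Temp\dup-*"):
    with open(path, "rb") as f:
        if f.read(16) != b"SQLite format 3\x00":
            continue                       # not a plain (unencrypted) SQLite file
    uri = "file:%s?mode=ro" % path.replace("\\", "/")
    con = sqlite3.connect(uri, uri=True)   # open strictly read-only
    (tables,) = con.execute(
        "SELECT COUNT(*) FROM sqlite_master WHERE type='table'").fetchone()
    con.close()
    print(path, tables, "tables")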

If you can find the database it was working on, my guess is it's just in the wasted (due to a bug) final dblock-reading phase and not really finding what it's looking for. There might be an in-progress flag to edit away; then we can try manually attaching the database to a somewhat meaningless new job just to do a restore.

If that's too ridiculous, another approach is a do-over download and decrypt. You can probably download faster than Duplicati with a downloader that does concurrent downloads; I think Cyberduck can do that. Decrypt with AES Crypt via a File Explorer multi-select, see if RecoveryTool will index the result, and then try a restore. RecoveryTool appears to use a very different indexing method, so it can't hit the same recreate-database bug.

If RecoveryTool can't handle it, there are still two standalone scripts you can try on the decrypted files. The basic goal here is to get your files back while avoiding techniques that have so far not worked well.


Update: One of the last lines before those "Azure Assembly" errors seemed identical to the last log line of other successful recreations. So, after it had not retried on those "damaged" DLLs for more than 90 minutes, and the database in UserData had not been opened for 6 hours or so, I copied it, successfully, to another computer.
There I tried a restore of a 60 GB TrueCrypt container out of that backup, which worked fine. These containers are known to go bad if even one bit flips, and it could be mounted successfully, so I guess it's fine. I am currently restoring more stuff, but how can I make sure that this is a database I can continue using afterwards?

I don't have to download them from anywhere; that's the frustrating part about the slowness. The files are local and there is no apparent hardware bottleneck, just the code. I guess you could throw all the hardware in the world at it and it would barely accelerate.

I didn't do that because it was still going, just slowly, and I didn't want to risk my progress. The database is now 50% smaller than it was before; I don't know how it got so compact, but it seems to work just fine.

Right now it seems as if it had successfully recreated the database and the GUI may have just crapped out before telling me so. I can't help thinking about that poor exe working for 14 days straight. I will recreate the database of a smaller test backup to see what a "completely" successful recreate should look like.

It all comes down again to the fact that I would just try again if it took 2 hours and not two weeks.

This "Azure dll" bug can actually be found in the forum; the dude just never got an answer. Sadly I have lost the post; if I find it I will link it.


There's a per-file hash taken at backup time that's compared at restore time, so it should have been verified fine.

I lost track of things. So if this was a full Recreate rather than a "direct restore", it "should" show multiple versions (I'm glad it got all the way out of Temp before things went bad). A direct restore is single-version, which should in theory be faster, but it is also single-use even for that restore and won't do more backups.

Continuing to use the database will see it get a bunch of self-checks, primarily before and after a backup. Some of these are internal DB consistency checks, while a few do some checks of the backup files. There are just a few on-demand DB test tools. The REPAIR command isn't something to run routinely, as it attempts to synchronize the database and the remote. This backfires spectacularly if the local database was restored from something like an old DB backup, as it then finds unknown remote files and deletes them. Your database "should" be super-fresh, but I'm not 100% sure how sane it is. Protect your backed-up files (which might be hard without a copy or a folder permission change) and your hard-earned database (copy it). The LIST-BROKEN-FILES command is supposed to be safer, as it's the informational counterpart of a sister command.

There may be some risk of hidden problems in older versions of either the database or the remote being discovered when compact runs and gathers partially-filled files of various ages into new dblocks. The most conservative approach would probably be to start again but keep the old backup for history, though that has drawbacks. You can also do test restores of as many old versions as you like, and you could run the test command, which samples the remote files (up to and including a large sample of all of them) to test their integrity against the DB's view; in your case that "should" match really well after the recent Recreate. A small version of this check also runs on every backup, and it can be told to pick a larger test sample by setting --backup-test-samples as desired.

Over time, databases can become less efficient in space usage. See forum discussions about VACUUM.

I found two reports of the Azure problem specifically, and one as a side note. None have been figured out.

Test with –full-remote-verification Throws an Error and Hangs
Backups not running/error out after update to 2.0.4.5_beta_2018-11-28:
Metadata was reported as not changed, but still requires being added?

Yes, the last, somewhat successful run was a full head-on recreate.
I tried to just restore my files, but because I wanted to restore the majority of files out of my latest backup, it started going through all dblocks nonetheless, so I put the task onto a somewhat stronger machine and did a proper recreate, because at this point that seemed more sensible than doing the same thing for one-time use only.

I mangled my backup files somewhat by restoring a database out of the backup itself and running the repair command on it. That seemed to delete the latest two iterations, which were faulty due to the bad connection and/or the already wonky machine anyway. Trying to verify this database still did not check out, so I started recreating a new one.

Yeah, over my dead body maybe


That sounds interesting, I will look into it and return if questions come up.
Just from skimming the docs I don't really get the CLI though.
Could you give me a syntax example of what the "most thorough" test would look like?

Using the Command line tools from within the Graphical User Interface may be the easiest and safest path, but I can't really give syntax for it because it's a web page. Using Duplicati from the Command Line is the external CLI, and the easiest way to get going there from an existing GUI backup is with Export as Command-line. Either way, you may need to adjust the exported backup command into your new command. Here's one I adjusted for a test:

The usage is Duplicati.CommandLine.exe test <storage-URL> <samples> [<options>]; after trimming a bunch of options:

C:\>"C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe" test "file://C:\Duplicati Backups\local test 1\\" all --backup-name="local test 1" --dbpath="C:\ProgramData\Duplicati\87697679687271856580.sqlite" --encryption-module= --compression-module=zip --dblock-size=50mb --no-encryption=true  --disable-module=console-password-input --full-remote-verification
  Listing remote folder ...
  Downloading file (1.43 KB) ...
  Downloading file (21.38 KB) ...
  Downloading file (21.39 KB) ...
  Downloading file (21.39 KB) ...
  Downloading file (20.72 KB) ...
  Downloading file (1.69 KB) ...
  Downloading file (21.87 KB) ...
  Downloading file (11.64 KB) ...
  Downloading file (608 bytes) ...
  Downloading file (31.86 KB) ...
  Downloading file (1.23 KB) ...
  Downloading file (610 bytes) ...
  Downloading file (48.08 KB) ...
  Downloading file (31.80 KB) ...
  Downloading file (609 bytes) ...
  Downloading file (40.73 KB) ...
  Downloading file (700 bytes) ...
  Downloading file (5.13 KB) ...
  Downloading file (15.14 MB) ...
  Downloading file (16.59 MB) ...
  Downloading file (15.19 MB) ...
  Downloading file (700 bytes) ...
  Downloading file (15.08 MB) ...
  Downloading file (700 bytes) ...
  Downloading file (16.38 MB) ...
Examined 25 files and found no errors

C:\>

I threw in not only all but also --full-remote-verification (which sometimes goes off for people, and it's not clear whether it's actually a problem or just noise – figuring that out might get into way too much file dissection). Note that this is primarily a check of the remote volumes (dlist, dindex, dblock) that the backup makes, so the focus isn't on whether everything makes sense as a system that could be turned into a backup and restore.

If you ever go for a restore test after having rebuilt your system, using --no-local-blocks will turn off the speed enhancement of building the restored files out of any currently existing local blocks that might be suitable. This ensures that everything comes from the backup files, so it is a better test of lost-drive disaster recovery.

Now that I have most of my data back and a database that seems to be sufficient for restoring stuff, I tried reinstating it to continue backing up.
But I get the following message, regardless of whether I try to do a repair first:

Does that mean that because Duplicati didn't finalize the recreated backup I honestly have to do it all again?

In the copy-the-database-early experiment that at least we didn’t have to do from Temp, the second step was to clear that flag then test and hope for the best, on the theory that it wasn’t really making progress.

The somewhat more normal way to clear repair-in-progress looks like it's list-broken-files (hopefully you don't have any) then purge-broken-files, from your choice of GUI Commandline or true command line based on an Export as Command-line. In either case, adjust the syntax from the backup command to the desired command. Please make a copy of your hard-earned database before you try getting it bent back into good operation.

Getting more daring, we could also open the database and remove the status, but let’s not go there first.

There’s a third option that’s even more leading-edge, basically doing field trial of the developing issue fix.

I did do, with that 99.9941468% complete database, a list-broken-files with full-result.
It found 219 matches, 218 entries of which belong to temporary files of my AntiVir – whatever – and the last one being the old database from back when the world was still a better place. But that ship has sailed anyway.

I suppose I should now do purge-broken-files?
That would mess around in my backup though, wouldn't it?

I will wait for a positive/negative and further advice before I do that.


Edit:
I did discover the --dry-run option and tried it; this was the outcome:

What can I learn from this?

list-broken-files shows life is not as wonderful as we might have liked in the recreated database; however, it seems that the file list is well understood (not sure what "whatever" refers to, though). purge-broken-files is behaving worse than I expected. I can't find any reports where it winds up at that "No transaction is active" error.

Both operations run fine for me on 2.0.4.5 using GUI Commandline. Was yours GUI or Command Prompt? Command Prompt is totally separate, and it was never totally clear to me whether the GUI finished its operations. Possibly it's still in the database (Process Explorer can show that). The CLI might be bumping into its usage; however, I'm not an expert in these SQLite errors. Stopping the Duplicati server will ensure no collision occurs.

Holding off the actual purge-broken-files until I look over the code (or someone else comes by and explains it) seems like a good move. Meanwhile, you can still start thinking about clearing that flag via a database edit. DB Browser for SQLite can first be tried with Open Database Read Only on the database (back it up first). Probably it won't open, because Duplicati on Windows obfuscates the databases to resist malware scans. To have Duplicati unobfuscate the databases, stop the tray icon or server, then start it using --unencrypted-database. Open the Configuration table and see if you can see repair-in-progress there. I "think" that's where it is.
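If it does open, the same read-only peek can be scripted; here's a minimal sketch (run it against a copy; the table name Configuration comes from the above, the file name is made up, and the exact column layout is an assumption, so it just prints whole rows):

import sqlite3

# open a *copy* of the job database strictly read-only
con = sqlite3.connect("file:backup-copy.sqlite?mode=ro", uri=True)
for row in con.execute("SELECT * FROM Configuration"):
    if any("repair" in str(value).lower() for value in row):
        print(row)   # expect something like a repair-in-progress marker here
con.close()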

I’m sorry, it was my attempt at slang to show utter disrespect for these poor temporary files :wink:
English is not my first language


GUI Commandline. The result is the same (purge-broken-files with --dry-run and --full-results) from the machine that died in the first place, where Windows is freshly deployed and the Duplicati installation is a native 2.0.4.5.

I will look into it.

Edit:

No, the machine has since been restarted, and it had not touched the database for 24 hours when I stopped it.
I have opened a copy of the fetched database and it does not seem to be obfuscated.
I looked through my options (I had never seen an SQL database from the inside before), and under "Search Data", table -> Configuration, I find the entry repair-in-progress: true.

Is that what I would have to change to false, or are there other things I would have to check beforehand?

Maybe this is a hint, but I have redone my DB again, this time with the database and temp files on a RAM disk.
It improved the experience from approximately 10-12 days to 6-7 days. It did get stuck again, though, on the last 0.000xxx percent.

Also:
I have run purge-broken-files on the actual backup, and deleted the repair-in-progress stamp from the database, but when I try to continue using it I get the following error:

These are reported sometimes, both on the forum and as the usual way my own main backup breaks. This self-check is well understood in terms of what it tests, but how the inconsistency happens isn't completely known. Some cases might have been improved; e.g. "Unexpected difference in fileset" errors [2.0.3.9] might have been helped by fixes for Failed put operations do not result in error #3673 that went into canary.

Unexpected difference in fileset version 4: found 6180 entries, but expected 6181 gives the workaround that sometimes works, where the offending version is deleted, but sometimes the problem shows up in a different version. Also, some people might not like to delete versions, because they don't want to lose any. Copying off your backup files and database would make sure you don't go down the path of many deletes.

I don't think it's been tested extensively, but another option (at the price of another Recreate) is, instead of doing a full Recreate (actually a Repair without a database), to limit its versions to exclude the troubled one. --version looks like it has enough syntax that you could do a range of 0-38,40-999999 to just skip 39.

I have to say I’m disappointed that this issue can survive a Recreate (if I understood your steps correctly), however beyond that it’s probably a separate topic, and not related to the original slowness of a Recreate.

2.0.4.18-2.0.4.18_canary_2019-05-12 fixes at least my issue #3747, where an empty file makes Recreate download all of the dblocks. Test it, if you like, on a separate install. Canary is bleeding-edge and can break.

  • Ignoring empty remote files on restore, which speeds up recovery, thanks @pectojin

Just found this thread after attempting to perform a restore from Google Drive of a backup of about 2 million files (~400 GB) I created the other day on a remote Linux machine.

I’m also doing a test restore from said machine to itself, and while it did take a few minutes to get started, it’s been restoring and writing files for several hours now and is now at 221GB.

However, on my desktop PC, a test restore from this Drive has been stuck at "Recreating database" for hours now. From the logs, I can see it's doing dblock downloads, then processing and lots of waiting, then more dblock downloads. At this rate, it won't finish for days.

I’m running Duplicati 2.0.4.23_beta_2019-07-14.

Aug 5, 2019 2:34 PM: RemoteOperationGet took 0:00:00:06.435
Aug 5, 2019 2:34 PM: Backend event: Get - Completed: duplicati-b11ed9ccca6db4c968b5acc86c753bfd6.dblock.zip.aes (249.90 MB)
Aug 5, 2019 2:34 PM: Downloaded and decrypted 249.90 MB in 00:00:06.4350832, 38.83 MB/s

...

* Aug 5, 2019 2:17 PM: RemoteOperationGet took 0:00:00:04.828
* Aug 5, 2019 2:17 PM: Backend event: Get - Completed: duplicati-b823e5f3c041940a787339677a97076e1.dblock.zip.aes (249.90 MB)
* Aug 5, 2019 2:17 PM: Downloaded and decrypted 249.90 MB in 00:00:04.8280622, 51.76 MB/s

etc. with roughly 15-20 minutes of SQL and whatever else it’s doing (the raw SQL execution times based on the log output don’t seem to add up to this much time) in between.

At this point, not a single file has been restored; as far as I understand, it's not downloading these dblocks for the purpose of restoring data, but to recreate the DB. Why is it not downloading dindex or dlist files? Why is it downloading all these dblock files instead?

Here’s the last part of the raw log from Profiling debug level (whatever Duplicati is showing currently under About -> Log data): https://gist.github.com/751e1d3d197487a44f33b1c3c84d44b7.

I'd been pretty impressed with Duplicati's handling of large data until I tried a restore that didn't originate from the machine that performed the backup (along with a separate issue of a pretty slow restore directory listing).

Related:

In my opinion, this is the Achilles' heel of Duplicati - if it takes days to restore in the event of data loss, it's not a viable backup solution.

So is the best temporary solution to also back up the Duplicati databases themselves and then put them in place first, before restoring from another machine?