Timespan to rebuild a database?

Hi there

We have to restore a large disk (nearly 3 TB) from an S3 storage backup onto new server hardware. The database rebuild process is currently running, but it seems to be taking a long time.

The incremental backup has 14 versions.

We imported a saved configuration file to rebuild the disk. Now the question comes up: how much time will the database rebuild / recreate take?

Does anyone have some experience and could make a statement about this situation?

Many thanks in advance!
Rudi


Hello

From what I know, the time to build a database from scratch mainly depends on the number of files times the number of versions when all is well and the backend is healthy (because in this case only the list and index files are accessed, and these are small files). IIRC, last time I timed a database rebuild from a backup of 25,000 files x 60 versions, the database build took about 4 minutes (with a backend accessed at 60 Mbit/s).

When you have missing or corrupted files on the backend, all bets are off, since in this case some data (dblock) files have to be downloaded to correct the problems.
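As a back-of-the-envelope sketch only: if rebuild time really does scale with files x versions on a healthy backend, you can extrapolate from that one measurement. This is a rough assumption, not a guarantee:

    # Naive linear extrapolation from a single data point:
    # 25,000 files x 60 versions rebuilt in ~4 minutes on a healthy backend.
    # Assumes time scales with files * versions; real times vary widely.

    REF_FILES, REF_VERSIONS, REF_MINUTES = 25_000, 60, 4

    def estimate_rebuild_minutes(files: int, versions: int) -> float:
        return REF_MINUTES * (files * versions) / (REF_FILES * REF_VERSIONS)

    # Example: a hypothetical 100,000-file backup with the 14 versions
    # mentioned in the original post
    print(f"~{estimate_rebuild_minutes(100_000, 14):.0f} minutes")  # ~4 minutes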


Welcome to the forum @it-ruedi59

We'll see if anyone who sees this can speak to a similar backup. Meanwhile, some general notes.

Slowness is especially expected in the 90% to 100% range of the progress bar. If before that, it's probably normal.
You can look at About → Show log → Live → Verbose if you like. It should get dlist then dindex files.
When it starts downloading dblock files, it's looking for some blocks that it hasn't yet found. That's slow.

Another thing that can slow things down is too many blocks. The default blocksize is good for roughly 100 GB.
The issue Please change default blocksize to at least 1MB #4629 has more. It can't be changed on an existing backup.
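To illustrate the scale, here is a rough sketch of the block-count arithmetic (assuming a ~3 TiB source, the default 100 KiB blocksize, and simple ceiling division):

    # Rough block-count estimate: every block becomes rows the recreate
    # must insert and look up, so more blocks means a slower rebuild.
    # Assumes default 100 KiB (102400-byte) blocksize vs. a 1 MiB one.

    def block_count(source_bytes: int, blocksize: int) -> int:
        return -(-source_bytes // blocksize)  # ceiling division

    SOURCE = 3 * 1024**4  # ~3 TiB, roughly this backup's size

    for bs in (100 * 1024, 1024 * 1024):
        print(f"{bs // 1024:>4} KiB blocks: {block_count(SOURCE, bs):,}")
    #  100 KiB blocks: 32,212,255  (~32 million)
    # 1024 KiB blocks: 3,145,728   (~10x fewer)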


Thank you very much for your replies.

The progress bar is in the same position (at 70%) now as when I wrote this topic - but it looks active. It's a little bit poor that there is no look inside the process, no view of what Duplicati is doing at the moment.

What I know about the block size (the setting was made by an IT predecessor): it is small - a little bit more than the standard block size.

The target has roundabout 800,000 files. Small ones and (very) large ones - no databases.

The Duplicati database has a size of 7 GB at the moment.

I'm still waiting for more notes on how to speed up the process.


Did you try looking?

and you can pick whatever level you like. Even Information shows downloads. Verbose adds context about what's going on, how far you are, and what's left to do. Profiling is huge, however hard to read.

Probably roughly half are dblock files, because each dblock has its dindex. The dlist files are likely only 14:

Possibly there's high change each time? Logs would show it, if you still have the old database, which has logs.
Your file count at the default 50 MB volume size could mean as much as 20 TB (got any space tools?), adding to block count and reducing speed. If the old server exists you can get the size on the home screen of Duplicati, if you care.
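The 20 TB is just an upper-bound guess; a minimal sketch of that arithmetic, assuming the 800,000 are remote files and about half of them are dblock volumes at the default 50 MB remote volume size:

    # Upper-bound backend size from a remote file listing. Assumes about
    # half the remote files are dblock volumes (each dblock has a dindex)
    # at the default 50 MB remote volume size; dlist files are negligible.

    remote_files = 800_000
    dblock_files = remote_files // 2
    dblock_size = 50 * 1000**2  # 50 MB default remote volume size

    print(f"~{dblock_files * dblock_size / 1000**4:.0f} TB upper bound")  # ~20 TB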

The type of file matters. A dblock file should be no larger than the Options screen (step 5) Remote volume size.
The dindex for a dblock is typically small, as it indexes the blocks of a size-limited dblock file. Small dblock files would be removed by compact, although there are options that control this. Below are some:

  • --small-file-max-count=<int>
    The maximum allowed number of small files.
  • --small-file-size=<int>
    Files smaller than this size are considered to be small and will be compacted with other small files as soon as there are <small-file-max-count> of them. --small-file-size=20 means 20% of <dblock-size>.
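A quick worked example of that percentage rule, assuming the default 50 MB remote volume size:

    # --small-file-size is a percentage of the dblock (remote volume) size.
    # With default 50 MB volumes, --small-file-size=20 means volumes under
    # 10 MB count as "small" and become candidates for compacting.

    dblock_size_mb = 50
    small_file_size = 20  # percent of dblock-size

    print(f"'small' threshold: {dblock_size_mb * small_file_size / 100:.0f} MB")  # 10 MB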

The size of a dlist file depends on how many files you have backed up. There's not really a set size limit.

There is not supposed to be one, though you can certainly back up the database somewhere if desired. Sometimes there's a request, but it would mean a potentially huge upload after each backup. Example:

ā€œstill waitingā€? This was never asked. If youā€™re right in the middle, thereā€™s not much you can do now, however you can at least look as explained to see if you go down the dblock downloading sad path.

You can certainly prepare to speed up future database rebuilds, or avoid the need entirely.
The blocksize, unfortunately, can only be changed on a new backup. Increasing the remote volume size possibly will help a little, but is best done either slowly or set initially. The following has some thoughts:

Choosing sizes in Duplicati

The progress bar staying still is not good, although in the 90% to 100% range it can be so slow that it looks still.

What does "looks like active" mean, given the position is the same? What other signs of activity exist? Certainly the standard OS performance tools could show something, but Duplicati logs can as well.

If the old server will no longer be used (don't back up from two systems at once), moving the previous database would have avoided the need to recreate it. That wouldn't help in a disaster, e.g. if the server were destroyed.


Thank you for all the hints and tips! : )

It's a little bit clearer to me now. The sad part is that there is no way to speed up the rebuild of the database. Unfortunately I have to use these backups, which are perhaps not in good shape (meaning a little bit misconfigured for the file structure of the server), and a faulty server.

The progress bar appears frozen (I'm sure that doesn't depend on the weather outside). Here are the last messages from the process:

And here are messages from the start time:

Does it make sense to cancel the process and look for another way to recover the files? It has been running for about 16 hours now.


You should search back in the Live messages for lines beginning with 'ProcessingBlocklistVolumes'.


There are numerous long-term ways to speed it up or avoid the need. Some were specifically stated.
Short-term (as in right in the middle of a run), options are limited, however your OS has some knobs. Knowing what to tweak usually involves looking at performance tools to see how resources are used. Duplicati actually also has various exotic options, but you can't change options in the middle of a run.

Is that a suspect for the current issue? Is it ruled out? Care to describe? The system can of course affect speed. Throwing hardware at a rebuild can also be a long-term improvement, e.g. SSD is faster than mechanical.

I don't know your current time. This is reverse-chronological, so how long has the top one been at the top?
Using ExplicitOnly was not explicitly suggested, but might be doing enough to detect new activity…

I'd have to do some research to see if I can recognize your top message, but if the start was around 7, and messages stopped at 14, and the restore has run 16 hours, then your current time is 7 + 16 = 23 right now, sitting still for 23 - 14 = 9 hours? Do OS tools show any CPU or disk activity in Duplicati now?

You possibly have an odd hang situation rather than performance issues, but there's not much data yet.
What OS is this, and are you familiar with performance tools to check for either an overload or no use at all?

I've been asking about the old system. No response yet. What's left of it? What other way do you have?

Are you familiar enough with this log to have a good handle on when one can switch levels to get some summarized information, so as to see the forest rather than the trees? I don't want to risk losing the trees, though.

Verbose might have given a good compromise-level view. I wonder if any other levels were tried before.

I couldn't go back to the bottom (or beginning) of the live messages; they are no later than Jan 30, 14:24.
The progress bar still isn't moving.

The newest messages are:
30. Jan. 2023 14:58: Starting - ExecuteNonQuery: INSERT INTO "Block" ("Hash", "Size", "VolumeID") SELECT "FullHash" AS "Hash", "Length" AS "Size", -1 AS "VolumeID" FROM (SELECT "A"."FullHash", "A"."Length", CASE WHEN "B"."Hash" IS NULL THEN '' ELSE "B"."Hash" END AS "Hash", CASE WHEN "B"."Size" is NULL THEN -1 ELSE "B"."Size" END AS "Size" FROM (SELECT DISTINCT "FullHash", "Length" FROM (SELECT "BlockHash" AS "FullHash", "BlockSize" AS "Length" FROM ( SELECT "E"."BlocksetID", "F"."Index" + ("E"."BlocklistIndex" * 3200) AS "FullIndex", "F"."BlockHash", MIN(102400, "E"."Length" - (("F"."Index" + ("E"."BlocklistIndex" * 3200)) * 102400)) AS "BlockSize", "E"."Hash", "E"."BlocklistSize", "E"."BlocklistHash" FROM ( SELECT * FROM ( SELECT "A"."BlocksetID", "A"."Index" AS "BlocklistIndex", MIN(3200 * 32, ((("B"."Length" + 102400 - 1) / 102400) - ("A"."Index" * (3200))) * 32) AS "BlocklistSize", "A"."Hash" AS "BlocklistHash", "B"."Length" FROM "BlocklistHash" A, "Blockset" B WHERE "B"."ID" = "A"."BlocksetID" ) C, "Block" D WHERE "C"."BlocklistHash" = "D"."Hash" AND "C"."BlocklistSize" = "D"."Size" ) E, "TempBlocklist-9E0F54EBF6A87E4A80A3F753C8671861" F WHERE "F"."BlocklistHash" = "E"."Hash" ORDER BY "E"."BlocksetID", "FullIndex" ) UNION SELECT "BlockHash", "BlockSize" FROM "TempSmalllist-AC57620EB252B1418A198C3C0C9C6413" )) A LEFT OUTER JOIN "Block" B ON "B"."Hash" = "A"."FullHash" AND "B"."Size" = "A"."Length" ) WHERE "FullHash" != "Hash" AND "Length" != "Size"

30. Jan. 2023 14:24: ExecuteScalarInt64: SELECT "VolumeID" FROM "Block" WHERE "Hash" = "fcFeIvUa2VreAexH05j5fPB9uJ/ALoUbOrtvPJzWAL0=" AND "Size" = 102400 took 0:00:00:00.000
30. Jan. 2023 14:24: Starting - ExecuteScalarInt64: SELECT "VolumeID" FROM "Block" WHERE "Hash" = "fcFeIvUa2VreAexH05j5fPB9uJ/ALoUbOrtvPJzWAL0=" AND "Size" = 102400
30. Jan. 2023 14:24: ExecuteScalarInt64: SELECT "VolumeID" FROM "Block" WHERE "Hash" = "Zpu0wykBOaO4xs9GaB8y09VdpITla5nIqYtParvGITI=" AND "Size" = 102400 took 0:00:00:00.021

Yes, Verbose will be enough for that, and less overwhelming than ExplicitOnly (ExplicitOnly is only barely less verbose than Profiling).


As said, switch back to 'Verbose', as ExplicitOnly displays too much information to be really useful in the small UI window.


appears to maybe contradict

and what is the current time?

If messages are still moving (are they, or not?), it looks like it might still be doing a normal DB recreate.
Using the Verbose level (if anything comes up) would give you a very clear indication of where the current phase is.


Current time is 17:43 or 5:43 p.m.

Last message was from 14:58 - "ExplicitOnly"

I've set the live log level to "Verbose". Let's see what Duplicati writes down here. For the moment the page is blank. I'll have a look at the log page later this evening.


At the moment I see that my reply from 5 p.m. wasn't accepted by the forum robot.
Should I paste it here again?


Might as well try. I don't know what happened before, but watch closely for messages on post retry.


This happens… : )


Now I try to rebuild our "missing" dialogue…

You wrote:
There are numerous long-term ways to speed it up or avoid the need. Some were specifically stated.
Short-term (as in right in the middle of a run), options are limited, however your OS has some knobs. Knowing what to tweak usually involves looking at performance tools to see how resources are used. Duplicati actually also has various exotic options, but you can't change options in the middle of a run.

I wrote:
I've learned that the backups aren't prepared well for such a "mountain" of data and a faulty server.

You ask:
Is that a suspect for the current issue? Is it ruled out? Care to describe? The system can of course affect speed. Throwing hardware at a rebuild can also be a long-term improvement, e.g. SSD is faster than mechanical.

I wrote:
The restore is running on Windows 10 (as a VM) on a VMware host. Unfortunately, the host is a single-CPU one with 64 GB of RAM and mechanical HDDs inside. The VM itself is configured with 2 vCPUs and 8 GB of RAM.

The task manager shows a CPU load of 5-14%, <40 MB of RAM, and a disk speed ranging from 2 to 40 MB/s. It looks like a living application. : )

You wrote:
I don't know your current time. This is reverse-chronological, so how long has the top one been at the top?
Using ExplicitOnly was not explicitly suggested, but might be doing enough to detect new activity…

I'd have to do some research to see if I can recognize your top message, but if the start was around 7, and messages stopped at 14, and the restore has run 16 hours, then your current time is 7 + 16 = 23 right now, sitting still for 23 - 14 = 9 hours? Do OS tools show any CPU or disk activity in Duplicati now?
You possibly have an odd hang situation rather than performance issues, but there's not much data yet.
What OS is this, and are you familiar with performance tools to check for either an overload or no use at all?

As described above, it's a simple Windows 10 Pro and there is no overload - Duplicati is still working. Please have a look at the numbers above. Duplicati is the only application running on the VM - and maybe you are right, it is a hanging situation. Although the log writes some different block information and messages, which looks like an application working through a fog of blocks.

I wrote:
look for another way to recover the files?

You wrote:
I've been asking about the old system. No response yet. What's left of it? What other way do you have?

I wrote:
The old system is/was a Proxmox VM with Debian Linux 10, joined as a Samba domain member with a 4 TB data partition. When we had some restores to do, the server crashed if the load was more than 150 GB in a single restore.

The only way we have is to buy new hardware, keep the old system alive in a separate network, and pull over all the folders and files, which are verified and copied onto the new server and a NAS.

You wrote:
You should search back in the Live messages
Are you familiar enough with this log to have a good handle on when one can switch levels to get some summarized information, so as to see the forest rather than the trees? I don't want to risk losing the trees, though.
Verbose might have given a good compromise-level view. I wonder if any other levels were tried before.

I wrote:
I tried "Verbose", "ExplicitOnly" and "Profiling". The last two options display some items. The other ones are blank and stay blank.


It doesn't look like a pretty situation, but thanks for filling in some blanks. There's still a conflict over

and the lack of recent live log output.

Is the size still increasing? Also look for the same filename with an extra suffix after the .sqlite on the right.
A -journal suffix is probably most likely, but something else might be possible. Are any other files changing?
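If you want a hands-off way to watch that, here is a minimal sketch that polls the file sizes and timestamps every minute (the database path below is a made-up example; use the one shown on your Database screen):

    # Poll the local database and its -journal sibling for any change in
    # size or mtime; any movement between polls suggests the recreate is
    # still alive. DB_PATH is a hypothetical example path.

    import os
    import time

    DB_PATH = r"C:\Users\someuser\AppData\Local\Duplicati\XXXXXXXXXX.sqlite"

    def snapshot(path):
        try:
            st = os.stat(path)
            return st.st_size, st.st_mtime
        except FileNotFoundError:
            return None

    while True:  # Ctrl+C to stop
        for p in (DB_PATH, DB_PATH + "-journal"):
            print(time.strftime("%H:%M:%S"), os.path.basename(p), snapshot(p))
        time.sleep(60)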

A sensitive but complex test is to use Sysinternals Process Monitor to look at Duplicati's file accesses, primarily to test for aliveness. There's only a small chance that low-level data will show where it's at…

If it's still alive, you could consider copying its old database from ~user/.config/Duplicati to take a look.
That might avoid the database recreate, which may or may not be stuck (it could be tested more later).

Depending on available equipment, you could carefully do tests in parallel with the current experiment.
Just make sure you donā€™t try to backup to or otherwise alter the S3 file content while testing elsewhere.
Having multiple things all pointed at one destination is dangerous, because those things could conflict.

If you want to test a database, you can carefully set up the destination and use the database screen to either put the old file at the newly assigned database path, or change the path to a copy of the old database.
The Restore menu showing you all of the expected versions would then be a good indicator to test further.
If the test works (there should be little slowness, because it has its database), then do the same elsewhere.

Going from Linux to Windows will add some complications, but you're supposed to be able to restore if a restore folder is explicitly given, because one can't expect the Linux paths to line up with Windows use.

Copying files might also be an option, which would save any kind of restore after getting the database.


For sure not. But this is the normal craziness of this kind of work, isn't it?

The database's size is also frozen at 7 GB, same as the progress bar. There is no other file with a suffix after the .sqlite extension, except a -journal, which exists with a size of 0 KB.

Sysinternals shows a lot of activity on a file named C:\Users\asaadmin\AppData\Local\Temp\etilqs_aRQPkC36tEkw4PB, but it looks like the process is only working with that file.

While I'm writing these lines, something happens:
  • The DB size increased a little bit
  • Processor usage increased
  • Sysinternals crashed

And these messages appear:


"etilqs" is "sqlite" spelled backwards, so that might be an SQLite temporary file. Is it changing? I think I've seen heavy read activity on these when the total block count gets large, although I don't know if I've seen the log stop.

The .7 above is why you're around that far on the progress bar. The dblock is larger than the 50 MB default.
Pass 3 at 90% is the search of everything left. Pass 1 is milder, but this sure has a long count to finish…
Long is relative though; 1000 out of 400,000 is a quarter of a percent. We don't have a solid time history, however these probably won't go any faster. You can wait for volume 6 of 1044 or make a different plan…
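For a very rough feel of the wait, a sketch under the big assumption that the pace so far is representative (the per-volume time is a pure guess; measure yours from consecutive log timestamps):

    # Crude ETA for the remaining dblock processing, using the "volume 6
    # of 1044" from the log above. minutes_per_volume is an assumption;
    # replace it with a value measured from your own log timestamps.

    done, total = 6, 1044
    minutes_per_volume = 5  # guessed; measure from consecutive log entries

    remaining_hours = (total - done) * minutes_per_volume / 60
    print(f"~{remaining_hours:.0f} hours left at that pace")  # ~86 hours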

Were any of the suggested alternatives based on using the old system feasible, or is the backup all there is? There are several other ways to do Duplicati restores, but I'm not sure which handle large backups well.

Finding out if it's spending lots of time in SQLite code might be possible with Sysinternals Process Explorer, maybe run as Administrator if Duplicati is (I don't know). At Duplicati, right click → Properties → Threads, perhaps sorting by CPU to see if the highest user looks like it's spending lots of time in some SQLite code. This won't lead to an immediate speed-up opportunity, but would at least confirm where its time is spent.
