Duplicati Database Recreate running for a week

Not if you set no-auto-compact to true (which is not the default :slight_smile: )

Alright! Then my case isn’t directly related

As double negatives are sometimes tricky: your answer implies that your backup jobs did include --no-auto-compact=true, right?

Ah sorry, I see it’s an opt-out mechanism. I did not “opt out” - so compact could well be part of it then.
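(For anyone reading along: compacting runs by default after a backup, and you opt out per job with the advanced option. As a rough sketch, a CLI backup with compacting disabled could look like this - the backend URL and source folder are placeholders, not taken from this thread:

duplicati-cli backup "file:///mnt/backup" /home/user/data --no-auto-compact=true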

The thread seems vague about the exact conditions under which dblock files get downloaded, but I have a backup with 17 such files, and my after-every-backup database recreate was only downloading dindex files.

There seem to be a couple of questions here. One is why it’s using pass 3 of dblock downloading at all. After downloading all dblocks, is it still missing blocks? Is a dblock file missing? I know list-broken-files needs (undocumented) help sometimes. A damaged dblock needs to be deleted to be noticed…
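For reference, checking for broken files from the command line looks roughly like this (a sketch only; the storage URL, local database path, and passphrase below are placeholders for your own job settings):

duplicati-cli list-broken-files "file:///mnt/backup" --dbpath=/home/user/.config/Duplicati/ABCDEFGHIJ.sqlite --passphrase=mysecret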

On a related note, the backup jobs are now taking 2 hours to run through rather than the previous 12, though that might be related to the new host.

Hi!

I also have the problem that Recreate Database is seemingly never ending (at about 90% of the progress bar).

Is there a way I can install the new version/branch “duplicati-experiment-rebuildfaster” that solves this?

My OS is Ubuntu 20.04. I run Duplicati 2.0.7.1_beta_2023-05-25.

Thank you.

@Henr

You have it here:

You have to get a GitHub account to download it. FYI, these binaries expire after 3 days.

Great!
I downloaded it and installed it without errors.
But when I log in to Duplicati (in Firefox) it still says:

You are currently running Duplicati - 2.0.7.1_beta_2023-05-25

But the new version I just downloaded and installed is: duplicati_2.0.6.3-1_all.deb

The URL (in Firefox): http://localhost:8200/ngax/index.html#/about

Do I have 2 versions now?
In that case, how do I uninstall 2.0.7.1_beta_2023-05-25?

@Henr

If you run dpkg -l | grep duplicati, what does it say?

(venv) h@hr: … $ dpkg -l | grep duplicati
ii duplicati 2.0.6.3-1 all Backup client for encrypted online backups

@Henr

I think that you have used Duplicati’s automatic update in the past, and replacing the original Duplicati version (which succeeded) does not stop the updated version from running. In short, the patched Duplicati starts, sees that a newer update is installed, and runs that instead. I’d say that to get rid of it you need to remove or rename the updates directory that should be in the data subdirectory, then restart Duplicati.
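Something like this, assuming the data directory is the default ~/.config/Duplicati (that path is a guess; adjust it to wherever your updates directory actually lives):

mv ~/.config/Duplicati/updates ~/.config/Duplicati/updates-renamed

and then restart Duplicati.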

I log out of Duplicati (here in Firefox).
Then I rename /home/henrik/.config/Duplicati/updates
to /home/henrik/.config/Duplicati/updates-renamed
Then I try to log in and get this error:
Resource not found: http://localhost:8200/
Maybe I don’t know what you mean by restarting Duplicati.

This is the content of that directory:

(venv) h@hr:~/.config/Duplicati/updates-renamed$ ls -l
total 40
drwxr-xr-x 16 henrik henrik 12288 May 21 2021 2.0.6.1
drwxr-xr-x 17 henrik henrik 16384 Sep 4 15:33 2.0.7.1
-rw-r--r-- 1 henrik henrik 7 Sep 4 15:33 current
-rw-rw-r-- 1 henrik henrik 283 Sep 12 2020 installation.txt
-rw-rw-r-- 1 henrik henrik 417 Sep 12 2020 README.txt

@Henr

Restarting Duplicati depends on whether you run it as a service (in that case systemctl restart duplicati should do the trick) or as a task; if the latter, usually a right-click on the Duplicati tray icon and ‘Quit’ is the way to exit Duplicati, and there is an icon to restart it somewhere, created by the setup.
You can use ‘ps’ to check whether Duplicati is running.
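For example, something like this (the bracketed first letter is just a common trick to keep grep from matching its own command line):

ps aux | grep -i "[d]uplicati"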

Thank you. So in my case Duplicati was running as a service. I did:

  1. Logout (in Firefox browser)
  2. systemctl restart duplicati (in Terminal)
  3. Login (in Firefox)
  4. Now I get this info:

You are currently running Duplicati - 2.0.6.3_canary_2023-09-06
Update [2.0.7.1_beta_2023-05-25] is available

I started a new Recreate Database yesterday, but again it seems to be stuck around 90% on the progress bar. I will let it run 1-2 days more.

This is the basic info about my backup:

Last successful backup: 31/08/2023 (took 02:17:37)
Source: 364.23 GB
Backup: 460.54 GB / 18 Versions

@Henr

Try to observe the live log (About / Show log / Live, select Retry or even Verbose level) to see how fast it advances.

OK. Can you deduce anything from this?

  • 7 Sep 2023 15:19: RemoteOperationGet took 0:00:00:08.397

  • 7 Sep 2023 15:19: Backend event: Get - Completed: duplicati-b350e312c75bc48ceaec35b6949226e45.dblock.zip.aes (49.99 MB)

  • 7 Sep 2023 15:19: Downloaded and decrypted 49.99 MB in 00:00:08.3965330, 5.95 MB/s

  • 7 Sep 2023 15:19: Starting - ExecuteNonQuery: INSERT INTO "Block" ("Hash", "Size", "VolumeID") SELECT "FullHash" AS "Hash", "Length" AS "Size", -1 AS "VolumeID" FROM (SELECT "A"."FullHash", "A"."Length", CASE WHEN "B"."Hash" IS NULL THEN '' ELSE "B"."Hash" END AS "Hash", CASE WHEN "B"."Size" is NULL THEN -1 ELSE "B"."Size" END AS "Size" FROM (SELECT DISTINCT "FullHash", "Length" FROM (SELECT "BlockHash" AS "FullHash", "BlockSize" AS "Length" FROM ( SELECT "E"."BlocksetID", "F"."Index" + ("E"."BlocklistIndex" * 3200) AS "FullIndex", "F"."BlockHash", MIN(102400, "E"."Length" - (("F"."Index" + ("E"."BlocklistIndex" * 3200)) * 102400)) AS "BlockSize", "E"."Hash", "E"."BlocklistSize", "E"."BlocklistHash" FROM ( SELECT * FROM ( SELECT "A"."BlocksetID", "A"."Index" AS "BlocklistIndex", MIN(3200 * 32, ((("B"."Length" + 102400 - 1) / 102400) - ("A"."Index" * (3200))) * 32) AS "BlocklistSize", "A"."Hash" AS "BlocklistHash", "B"."Length" FROM "BlocklistHash" A, "Blockset" B WHERE "B"."ID" = "A"."BlocksetID" ) C, "Block" D WHERE "C"."BlocklistHash" = "D"."Hash" AND "C"."BlocklistSize" = "D"."Size" ) E, "TempBlocklist_894DD94D1526D44C964F9B3396D9C215" F WHERE "F"."BlocklistHash" = "E"."Hash" ORDER BY "E"."BlocksetID", "FullIndex" ) UNION SELECT "BlockHash", "BlockSize" FROM "TempSmalllist_0C8056CFDC9F1141B353613EFE0A9038" )) A LEFT OUTER JOIN "Block" B ON "B"."Hash" = "A"."FullHash" AND "B"."Size" = "A"."Length" ) WHERE "FullHash" != "Hash" AND "Length" != "Size"

  • 7 Sep 2023 15:19: Unexpected changes caused by block duplicati-b350dd1f34bcf4d3d860c8c9141b9eaea.dblock.zip.aes

  • 7 Sep 2023 15:19: ExecuteScalarInt64: SELECT "VolumeID" FROM "Block" WHERE "Hash" = "CklLcazfOVuqfGStMIvxY7B/s6cnX5Ty0f5NCorpB44=" AND "Size" = 102400 took 0:00:00:00.000

  • 7 Sep 2023 15:19: Starting - ExecuteScalarInt64: SELECT "VolumeID" FROM "Block" WHERE "Hash" = "CklLcazfOVuqfGStMIvxY7B/s6cnX5Ty0f5NCorpB44=" AND "Size" = 102400

  • 7 Sep 2023 15:19: ExecuteScalarInt64: SELECT "VolumeID" FROM "Block" WHERE "Hash" = "hgvUwJtv8+U/00uZPn1/0DgXjn+pZJ0kBPHU9pQWGEI=" AND "Size" = 102400 took 0:00:00:00.000

  • 7 Sep 2023 15:19: Starting - ExecuteScalarInt64: SELECT "VolumeID" FROM "Block" WHERE "Hash" = "hgvUwJtv8+U/00uZPn1/0DgXjn+pZJ0kBPHU9pQWGEI=" AND "Size" = 102400

  • 7 Sep 2023 15:19: ExecuteScalarInt64: SELECT "VolumeID" FROM "Block" WHERE "Hash" = "9eVbZ3/AKHwAH2aCGEPsQstOoiWv3njUibH+4iN51Z8=" AND "Size" = 102400 took 0:00:00:00.000

  • 7 Sep 2023 15:19: Starting - ExecuteScalarInt64: SELECT "VolumeID" FROM "Block" WHERE "Hash" = "9eVbZ3/AKHwAH2aCGEPsQstOoiWv3njUibH+4iN51Z8=" AND "Size" = 102400

Mine took 2 days rather than the 35 days it would have taken

@Henr

Well, either your internet connection is very slow or your computer (CPU) is, since taking 8 s to download and decrypt 50 MB is sluggish.

However, that’s possibly not the worst part of it. If a random peek at the log shows the message ‘Unexpected changes caused by block’, it may hint at a very damaged backend. The change in this version relies on the fact that in most cases only a small fraction of the blocks are badly damaged; if almost all of them are damaged, the modified version will not make things faster. It depends on whether, while watching the progress, you see this message very rarely or almost every time. I can’t tell from what you posted.
Also, normally at the verbose level you should see messages such as
Pass 2 of 3, processing blocklist volume xxx of yyy

90% and beyond is the last of three passes of searching for missing data; each pass covers 10% of the progress bar, so they run at 70–80%, 80–90%, and 90–100%.

Was Recreate an attempt to fix something? If so, describing prior history might allow more insights.