B2 cannot restore database after power off

I recently started backing up my data to B2.
I have 2 backups (each with its own bucket) and a slow upload speed:

  1. A 60 GB backup that was completed once
  2. A 170 GB backup

After 2 days of running backup 2, with ~80 GB uploaded, there was a power failure at the house.
When I turned the PC back on I was totally shocked to see that this backup job does not appear in the backup list!?

At least I’m trying to recover all the uploaded data.
In the Backblaze interface I see that the bucket has 4244 files.
So I set up a new backup with the same parameters; I had to create a new application key for the bucket. “Test connection” reports no problem.
I did not run this backup but went to Database → Recreate; after ~20 seconds I got an error:

“No filelists found on the remote destination”

In the “Remote” log there are 4241 lines that look like this:

{"Name":"duplicati-b0028cf6b0000417686e2389d5b13d148.dblock.zip.aes","LastAccess":"2020-10-08T17:27:46Z","LastModification":"2020-10-08T17:27:46Z","Size":52330269,"IsFolder":false},

There is no line that looks different.
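A quick way to confirm this (a hypothetical helper script, not anything built into Duplicati) is to count the file-name suffixes in that listing; a restorable backup should show at least one dlist:

```python
import json
import re

def count_kinds(listing_json: str) -> dict:
    """Count dblock/dindex/dlist entries in a Duplicati remote file listing."""
    counts = {"dblock": 0, "dindex": 0, "dlist": 0, "other": 0}
    for entry in json.loads(listing_json):
        m = re.search(r"\.(dblock|dindex|dlist)\.", entry["Name"])
        counts[m.group(1) if m else "other"] += 1
    return counts

# One line pasted from the "Remote" log above, wrapped in a JSON array:
sample = ('[{"Name":"duplicati-b0028cf6b0000417686e2389d5b13d148.dblock.zip.aes",'
          '"LastAccess":"2020-10-08T17:27:46Z",'
          '"LastModification":"2020-10-08T17:27:46Z",'
          '"Size":52330269,"IsFolder":false}]')

print(count_kinds(sample))  # {'dblock': 1, 'dindex': 0, 'dlist': 0, 'other': 0}
```

A bucket with only dblock and dindex names (and zero dlist) is exactly the situation that makes Recreate report “No filelists found”.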

From About → Show log:

· Oct 11, 2020 10:20 PM: Failed while executing "Repair" with id: 5

Duplicati.Library.Interface.UserInformationException: No filelists found on the remote destination
  at Duplicati.Library.Main.Operation.RecreateDatabaseHandler.DoRun(LocalDatabase dbparent, Boolean updating, IFilter filter, NumberedFilterFilelistDelegate filelistfilter, BlockVolumePostProcessor blockprocessor)
  at Duplicati.Library.Main.Operation.RecreateDatabaseHandler.Run(String path, IFilter filter, NumberedFilterFilelistDelegate filelistfilter, BlockVolumePostProcessor blockprocessor)
  at Duplicati.Library.Main.Operation.RepairHandler.RunRepairLocal(IFilter filter)
  at Duplicati.Library.Main.Operation.RepairHandler.Run(IFilter filter)
  at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Action`1 method)
  at Duplicati.Library.Main.Controller.Repair(IFilter filter)
  at Duplicati.Server.Runner.Run(IRunnerData data, Boolean fromQueue)

What could be the problem?
Why did the backup job disappear from Duplicati?
Is there a way to save these 80 GB / 2 days of data?

I’m using Duplicati - 2.0.5.111_canary_2020-09-26

Did you ever see it? If not, it didn’t disappear, it was never added. I think it’s added after backup is done.
Stopping the backup in a controlled manner, such as using “Stop after current file” will log it as a partial.
These are also restartable just by doing another backup. Power-fail on initial backup is a messier issue.

Nothing easy, but if you want to do an experiment to reconnect the remote files to the backup, look here.
I haven’t used it in exactly your case, but I did just try it out on a backup where I deleted the remote dlist, which is similar. The dlist file records all the files backed up, and it too is recorded only at end of backup.

Although unexpected backup interruptions do happen, the initial backup is the most sensitive because a filelist (a remote file with dlist and a date in its name – look for one in a finished backup) hasn’t been uploaded yet.
Having the initial backup be a small one (critical files or just random) first prevents the no-filelist problem.

EDIT 1:
If you want to try the experiment, I can give a tentative list of directions beyond what the linked topic gave.
It’s somewhat involved, but can probably be finished in less than an hour (if it works at all) after starting it.

EDIT 2:
Default parallel uploading these days adds some uncertainty (compared to my test) to what got uploaded. This may cause the Recreate to download more, but some Internet connections download faster. Yours?

EDIT 3:
Not so sure the extra downloads can happen (which is good because B2 charges). I got an error instead.
Steps list is something like this, however it certainly hasn’t been tested with your exact situation. Care to?

  • If backup is encrypted, install AES Crypt to decrypt and encrypt .zip files as needed.

  • Create a backup to a local folder of some small file. Don’t bother encrypting. Run it.

  • Open the dlist .zip in its folder, copy filelist.json somewhere, drag it into notepad.

  • Edit the file so contents are the two characters []. Save it then drag to update .zip.

  • If the remote backup is encrypted (.zip.aes), use AES Crypt to also encrypt this file.

  • Copy the file to the remote. Though it names no files, it avoids the missing-filelist error.

  • Edit the remote backup job to have no-auto-compact checked, to avoid unwanted actions.

  • Press the Recreate button on the Database screen. Watch progress bar advance to right.

  • If a dindex file got to the destination ahead of its associated dblock, the error looks like
    2020-10-11 19:07:44 -04 - [Error-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-MissingFileDetected]: Remote file referenced as duplicati-b42c95467115040cb9d6f5fecd3e41e33.dblock.zip by duplicati-ibf2836edd79e4a3899c61513ff871d4d.dindex.zip, but not found in list, registering a missing remote file
    The bad reference can be fixed by deleting the dindex file. Repeat until Recreate is fine.

  • Run the remote backup, and the hope is that it will reattach to already uploaded data.
    The database Recreate should have recorded what was remote already, so it should know.
    The standard deduplication feature reuses known blocks in new backups to avoid upload.
    At some point during backup, uploaded blocks will run out, and new blocks will upload.

  • If backup finishes, try a test restore with no-local-blocks set to ensure downloading.

  • You can remove the no-auto-compact option if all is well. Usually you want compacting.
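For reference, the zip-editing step above can also be scripted. This is only a sketch of the same idea (the helper name and filenames are mine, not part of Duplicati, and it covers the unencrypted case – a .zip.aes backup still needs the AES Crypt decrypt/encrypt steps around it):

```python
import json
import zipfile

def empty_filelist(src: str, dst: str) -> None:
    """Copy a dlist zip, replacing filelist.json with an empty JSON array []."""
    with zipfile.ZipFile(src) as zin, \
         zipfile.ZipFile(dst, "w", zipfile.ZIP_DEFLATED) as zout:
        for item in zin.infolist():
            if item.filename != "filelist.json":
                # Keep every other member (e.g. the manifest) exactly as-is.
                zout.writestr(item, zin.read(item.filename))
        # A filelist of the two characters [] names no backed-up files.
        zout.writestr("filelist.json", json.dumps([]))

# Example (made-up timestamp): take the real dlist name from the small
# local test backup, then upload the edited copy under the original name.
# empty_filelist("duplicati-20201012T074733Z.dlist.zip",
#                "edited/duplicati-20201012T074733Z.dlist.zip")
```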


Thanks for the fully detailed information.
I did it all. It ran for some time and ended without any error, but the backup still says “Last successful backup: Never“.

In the ExplicitOnly live log the last lines are:

•  Oct 12, 2020 11:45 AM: The operation Repair has completed 
•  Oct 12, 2020 11:45 AM: Running Repair took 0:00:54:54.827 
•  Oct 12, 2020 11:45 AM: ExecuteNonQuery: PRAGMA optimize took 0:00:00:00.000 
•  Oct 12, 2020 11:45 AM: Starting - ExecuteNonQuery: PRAGMA optimize 
•  Oct 12, 2020 11:45 AM: ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT * FROM (SELECT "N"."BlocksetID", (("N"."BlockCount" + 3200 - 1) / 3200) AS "BlocklistHashCountExpected", CASE WHEN "G"."BlocklistHashCount" IS NULL THEN 0 ELSE "G"."BlocklistHashCount" END AS "BlocklistHashCountActual" FROM (SELECT "BlocksetID", COUNT(*) AS "BlockCount" FROM "BlocksetEntry" GROUP BY "BlocksetID") "N" LEFT OUTER JOIN (SELECT "BlocksetID", COUNT(*) AS "BlocklistHashCount" FROM "BlocklistHash" GROUP BY "BlocksetID") "G" ON "N"."BlocksetID" = "G"."BlocksetID" WHERE "N"."BlockCount" > 1) WHERE "BlocklistHashCountExpected" != "BlocklistHashCountActual") took 0:00:00:00.000 
•  Oct 12, 2020 11:45 AM: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT * FROM (SELECT "N"."BlocksetID", (("N"."BlockCount" + 3200 - 1) / 3200) AS "BlocklistHashCountExpected", CASE WHEN "G"."BlocklistHashCount" IS NULL THEN 0 ELSE "G"."BlocklistHashCount" END AS "BlocklistHashCountActual" FROM (SELECT "BlocksetID", COUNT(*) AS "BlockCount" FROM "BlocksetEntry" GROUP BY "BlocksetID") "N" LEFT OUTER JOIN (SELECT "BlocksetID", COUNT(*) AS "BlocklistHashCount" FROM "BlocklistHash" GROUP BY "BlocksetID") "G" ON "N"."BlocksetID" = "G"."BlocksetID" WHERE "N"."BlockCount" > 1) WHERE "BlocklistHashCountExpected" != "BlocklistHashCountActual") 
•  Oct 12, 2020 11:45 AM: ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT * FROM (SELECT "BlocksetID", "Index", COUNT(*) AS "EC" FROM "BlocklistHash" GROUP BY "BlocksetID", "Index") WHERE "EC" > 1) took 0:00:00:00.000 
•  Oct 12, 2020 11:45 AM: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT * FROM (SELECT "BlocksetID", "Index", COUNT(*) AS "EC" FROM "BlocklistHash" GROUP BY "BlocksetID", "Index") WHERE "EC" > 1) 
•  Oct 12, 2020 11:45 AM: ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT "PrefixID", "Path", "BlocksetID", "MetadataID", COUNT(*) as "Duplicates" FROM "FileLookup" GROUP BY "PrefixID", "Path", "BlocksetID", "MetadataID") WHERE "Duplicates" > 1 took 0:00:00:00.000 
•  Oct 12, 2020 11:45 AM: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT "PrefixID", "Path", "BlocksetID", "MetadataID", COUNT(*) as "Duplicates" FROM "FileLookup" GROUP BY "PrefixID", "Path", "BlocksetID", "MetadataID") WHERE "Duplicates" > 1 
•  Oct 12, 2020 11:45 AM: ExecuteScalarInt64: SELECT COUNT(*) FROM ( SELECT DISTINCT c1 FROM (SELECT COUNT(*) AS "C1" FROM (SELECT DISTINCT "BlocksetID" FROM "Metadataset") UNION SELECT COUNT(*) AS "C1" FROM "Metadataset" )) took 0:00:00:00.001 
•  Oct 12, 2020 11:45 AM: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM ( SELECT DISTINCT c1 FROM (SELECT COUNT(*) AS "C1" FROM (SELECT DISTINCT "BlocksetID" FROM "Metadataset") UNION SELECT COUNT(*) AS "C1" FROM "Metadataset" )) 
•  Oct 12, 2020 11:45 AM: ExecuteReader: SELECT "Key", "Value" FROM "Configuration" took 0:00:00:00.000 
•  Oct 12, 2020 11:45 AM: Starting - ExecuteReader: SELECT "Key", "Value" FROM "Configuration" 

So does it mean that I need to delete this backup and start over?
This time I will first make a backup of only one file…

So I have no view of what happened. I guess I could have said which live log level to watch.
Did you get any logs in the backup’s Show log list or the server’s list at About → Show log?

That’s puzzling because the final two steps were the test restore and putting settings back.
Maybe you mean you ran until the backup ended, or did you stop after Recreate or Repair?
Did you watch the progress bar for that? Was this running at about 11:45 AM? Log did say:

Oct 12, 2020 11:45 AM: The operation Repair has completed

and in far too short a time (1 hour) to have uploaded the rest of the 170 GB - 80 GB = 90 GB.
However I can’t tell if that was the Recreate button (which might call itself Repair) or backup.
One hour might be a reasonable time for Repair to run. Again – was the progress bar watched?

If you never ran a backup, that would explain why there has never been a successful backup.
It seems like about half the backup ran in two days, so I expect another two days for the rest.

There may be some cases where running backup does a repair as part of its startup though.
If that ran, there would ordinarily be log messages not just about Repair ending without error.

If the Recreate ran for an hour, then the next step was to push the Run now link of the backup.
Did you ever get to anything that looked like a backup display on the home screen and status?
What happens if you Run now the backup again? See anything resembling a backup running?

If you want to keep a live log up, About → Show log → Live → Verbose might show it either
studying your source files then backing them up, or it might show some other activity instead.
About → Show log → Live → Information would be a less noisy log if you want to start there.

So I don’t know what you did and what happened. If you want to just start fresh, that’s also fine.
B2 at least won’t charge for the uploads, but you will repeat the first two days of the uploading.
The experiment was intended to try to get that data back in the database to allow continuation.


Sorry for being unclear.
I only ran the database → “recreate (delete and repair)”

I did not constantly watch it. It took ~1 hour and the progress bar filled over time.
I thought that after the process was done it would be possible to see/restore the files that are already in B2, but if I try “Restore files” the list is empty.
The .sqlite file for this backup is 238 MB, which is reasonable, as the .sqlite of the same folder backed up to a local NAS is 467 MB.
So this sounds good, but why don’t I have the option to restore these files?

Under the backup’s Show log → General → Complete log:

{
  "MainOperation": "Repair",
  "RecreateDatabaseResults": {
    "MainOperation": "Repair",
    "ParsedResult": "Success",
    "Version": "2.0.5.111 (2.0.5.111_canary_2020-09-26)",
    "EndTime": "2020-10-12T13:00:17.9003258Z",
    "BeginTime": "2020-10-12T12:09:15.7778633Z",
    "Duration": "00:51:02.1224625",
    "MessagesActualLength": 0,
    "WarningsActualLength": 0,
    "ErrorsActualLength": 0,
    "Messages": null,
    "Warnings": null,
    "Errors": null,
    "BackendStatistics": {
      "RemoteCalls": 2136,
      "BytesUploaded": 0,
      "BytesDownloaded": 78959293,
      "FilesUploaded": 0,
      "FilesDownloaded": 2120,
      "FilesDeleted": 0,
      "FoldersCreated": 0,
      "RetryAttempts": 15,
      "UnknownFileSize": 0,
      "UnknownFileCount": 0,
      "KnownFileCount": 0,
      "KnownFileSize": 0,
      "LastBackupDate": "0001-01-01T00:00:00",
      "BackupListCount": 0,
      "TotalQuotaSpace": 0,
      "FreeQuotaSpace": 0,
      "AssignedQuotaSpace": 0,
      "ReportedQuotaError": false,
      "ReportedQuotaWarning": false,
      "MainOperation": "Repair",
      "ParsedResult": "Success",
      "Version": "2.0.5.111 (2.0.5.111_canary_2020-09-26)",
      "EndTime": "0001-01-01T00:00:00",
      "BeginTime": "2020-10-12T12:09:15.7358767Z",
      "Duration": "00:00:00",
      "MessagesActualLength": 0,
      "WarningsActualLength": 0,
      "ErrorsActualLength": 0,
      "Messages": null,
      "Warnings": null,
      "Errors": null
    }
  },
  "ParsedResult": "Success",
  "Version": "2.0.5.111 (2.0.5.111_canary_2020-09-26)",
  "EndTime": "2020-10-12T13:00:18.0652732Z",
  "BeginTime": "2020-10-12T12:09:15.7358767Z",
  "Duration": "00:51:02.3293965",
  "MessagesActualLength": 4277,
  "WarningsActualLength": 0,
  "ErrorsActualLength": 0,
  "Messages": [
    "2020-10-12 15:09:15 +03 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Repair has started",
    "2020-10-12 15:09:19 +03 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started: ()",
    "2020-10-12 15:09:26 +03 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Completed: (4.14 KB)",
    "2020-10-12 15:16:42 +03 - [Information-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-RebuildStarted]: Rebuild database started, downloading 1 filelists",
    "2020-10-12 15:16:42 +03 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-20201012T074733Z.dlist.zip.aes (802 bytes)",
    "2020-10-12 15:16:44 +03 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Completed: duplicati-20201012T074733Z.dlist.zip.aes (802 bytes)",
    "2020-10-12 15:16:44 +03 - [Information-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-FilelistsRestored]: Filelists restored, downloading 2119 index files",
    "2020-10-12 15:16:44 +03 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-i000dbdf05d694334addaeebb1ea20cb1.dindex.zip.aes (64.00 KB)",
    "2020-10-12 15:16:45 +03 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Completed: duplicati-i000dbdf05d694334addaeebb1ea20cb1.dindex.zip.aes (64.00 KB)",
    "2020-10-12 15:16:45 +03 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-i003185aecfff4e28a64684561ea393af.dindex.zip.aes (38.54 KB)",
    "2020-10-12 15:16:45 +03 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Completed: duplicati-i003185aecfff4e28a64684561ea393af.dindex.zip.aes (38.54 KB)",
    "2020-10-12 15:16:45 +03 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-i003f44a62bb147adac067c89456bb925.dindex.zip.aes (17.97 KB)",
    "2020-10-12 15:16:46 +03 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Completed: duplicati-i003f44a62bb147adac067c89456bb925.dindex.zip.aes (17.97 KB)",
    "2020-10-12 15:16:46 +03 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-i004104d07bd14aaf96bbe8102dafcd5c.dindex.zip.aes (39.33 KB)",
    "2020-10-12 15:16:47 +03 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Completed: duplicati-i004104d07bd14aaf96bbe8102dafcd5c.dindex.zip.aes (39.33 KB)",
    "2020-10-12 15:16:47 +03 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-i00a3fb122f3941cb9bb196708e6b74a4.dindex.zip.aes (37.53 KB)",
    "2020-10-12 15:16:47 +03 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Completed: duplicati-i00a3fb122f3941cb9bb196708e6b74a4.dindex.zip.aes (37.53 KB)",
    "2020-10-12 15:16:47 +03 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-i0106bd4150ad4d17a085910ad94d5949.dindex.zip.aes (35.11 KB)",
    "2020-10-12 15:16:49 +03 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Completed: duplicati-i0106bd4150ad4d17a085910ad94d5949.dindex.zip.aes (35.11 KB)",
    "2020-10-12 15:16:49 +03 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-i010c0fe27ed346c5adc16ff1452dcbdf.dindex.zip.aes (62.84 KB)"
  ],
  "Warnings": [],
  "Errors": [],
  "BackendStatistics": {
    "RemoteCalls": 2136,
    "BytesUploaded": 0,
    "BytesDownloaded": 78959293,
    "FilesUploaded": 0,
    "FilesDownloaded": 2120,
    "FilesDeleted": 0,
    "FoldersCreated": 0,
    "RetryAttempts": 15,
    "UnknownFileSize": 0,
    "UnknownFileCount": 0,
    "KnownFileCount": 0,
    "KnownFileSize": 0,
    "LastBackupDate": "0001-01-01T00:00:00",
    "BackupListCount": 0,
    "TotalQuotaSpace": 0,
    "FreeQuotaSpace": 0,
    "AssignedQuotaSpace": 0,
    "ReportedQuotaError": false,
    "ReportedQuotaWarning": false,
    "MainOperation": "Repair",
    "ParsedResult": "Success",
    "Version": "2.0.5.111 (2.0.5.111_canary_2020-09-26)",
    "EndTime": "0001-01-01T00:00:00",
    "BeginTime": "2020-10-12T12:09:15.7358767Z",
    "Duration": "00:00:00",
    "MessagesActualLength": 0,
    "WarningsActualLength": 0,
    "ErrorsActualLength": 0,
    "Messages": null,
    "Warnings": null,
    "Errors": null
  }
}

And under “Remote” there are many lines like this:

Oct 12, 2020 4:00 PM: get duplicati-ifffd87cc93434870980c31c6a32a699a.dindex.zip.aes
Oct 12, 2020 4:00 PM: get duplicati-ifff5a828a61045e981b5a602366b0e16.dindex.zip.aes

(The time changed because I ran another test of the “Recreate (delete and repair)”.)

I did not try a “Run now” as it will take 4 to 6 days, and I want to be sure I’m doing the right process before starting it.
If you think it may help, I can rerun the recreate process with a specific log level.

I’m starting to think that doing “run now” might be the correct next step…

That’s because the dlist file you uploaded had its file list replaced with an empty list [] by you.
This results in a database that has lots of file data, but not attached to any actual source files.
Running the backup should regenerate prior list, reattach anything uploaded, then keep going.

Of the 80 GB in 4241 files uploaded, you downloaded about half the files and 1/10 of 1% of data.
The bulk of the data is the dblock files. Each has its smaller dindex (so divide by 2) describing it.
You found the get requests for the dindex files. Before that would be the get for your empty dlist.
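Using the figures from the posted log, that ratio can be sanity-checked (the ~80 GB is the poster’s own estimate, so this is only rough arithmetic):

```python
# Figures taken from the Repair log above and the bucket listing.
bytes_downloaded = 78_959_293    # BytesDownloaded in BackendStatistics
files_downloaded = 2_120         # FilesDownloaded: 2119 dindex files + 1 dlist
files_on_remote = 4_241          # lines in the bucket listing
uploaded_bytes = 80 * 1024**3    # rough estimate of the ~80 GB uploaded so far

print(f"files fetched: {files_downloaded / files_on_remote:.0%}")  # about half
print(f"data fetched:  {bytes_downloaded / uploaded_bytes:.2%}")   # ~0.1% of data
```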

I thought it did about half in 2 days, so how do you guess 4 to 6? You’re not starting all over again.

Fine so far, except not finished.

Yes. I think you’re confirming the below was a successful Recreate, right? If so, run the backup:

If you want to stop early to see if you can see what you expected to see (but did not) for restore,
using the Stop button (circle with X in it, at right of status bar) then Stop after current file
“should” (after some delay to finish things) stop and let you look at the backup marked partial.
I’m not certain how well it has been tested in this exact scenario. Worst that can happen is that
you’ll do the backup from scratch, which would have been needed if this experiment didn’t work.

EDIT:

So you seemingly stopped after the “until Recreate is fine.”, and there are three steps left to run.


YES!

I started the Run now, and it’s going through the files very fast.
The file count is dropping quickly while the upload speed is only 4 bytes/s,
so it will probably continue like that until it has gone through the 80 GB that are already on the server.

@ts678 Thanks for all of this!!