Backup job failed after days but won't restart

I get the error message “Detected non-empty blocksets with no associated blocks” with the options to ‘Show’ or ‘Dismiss’.

If I try to restart the job I just get the same message again.

I’ve generated a Bug Report.

Thanks.

After reading some more posts here I tried repairing the database - which seemed to complete without errors - but it made no difference.

Then I tried ‘delete and repair’ which failed saying there was no file list on the remote site. Clicking ‘Show’ produced a huge list of files!

Any ideas, please, or do I just have to start again?

I’ve tried ‘repair’ and ‘delete & repair’ without success. It looks like there is a file missing on the remote store, which, I presume, is the filelist the application is looking for.

Is there any way to scan the remote store and re-create the file list?

Thanks.

The filelist would be in a file with dlist and a backup date in its name. The dlist uploads at the end of a backup; however, these backups seem to have had a run of incompletions. I think I see four backup starts, with the first interrupted somehow; it’s listed as partial. One that’s not listed as partial ran next, and then I’m not sure about the final two. The missing file was 1412492696 bytes long and vanished from the source before the second backup. There’s no entry for it in the File table; otherwise I could have you look in your database for a file like that…
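As a rough sketch (the sed pattern is just an illustration, not a Duplicati tool), the backup’s UTC start time can be read straight out of a dlist file name:

```sh
# Extract the UTC timestamp embedded in a dlist file name.
# The file name is one from this thread; the pattern is only a sketch.
echo "duplicati-20201116T150914Z.dlist.zip.aes" | sed -E 's/duplicati-([0-9]{8}T[0-9]{6})Z\..*/\1/'
# prints: 20201116T150914  -> 2020-11-16 15:09:14 UTC
```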

Can you say anything about the prior history of this backup? Any stop you have to do is best done with the GUI stop button and then Stop after current file, not Stop now (there’s a bug there, fixed in Canary but not yet in Beta), and also not by killing the process or restarting the system underneath it. Those sometimes make messes.

Though things may be different now, the database bug report also looked incomplete, as files were listed as Uploading or Deleting, which are normally temporary states. They should settle at Verified and Deleted.

What sort of destination is this, and is it (and your network) reliable? The database delete might have lost the historical logs (and the DB bug report doesn’t even have any redacted ones; again, how did these runs end?).

The two dlist (filelist) files that I suspect were in formation are listed as Temporary, but you can check the remote:

duplicati-20201116T150914Z.dlist.zip.aes
duplicati-20201117T182431Z.dlist.zip.aes
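If it helps, here’s a rough sketch of checking the destination from the command line with Duplicati’s backend tool. The storage URL is a placeholder (use the Target URL from your job’s Export → As Command-line), and the grep assumes a Unix-like shell:

```sh
# List every file at the destination as Duplicati sees it, then keep
# only the dlist entries. No passphrase is needed here: this works on
# the raw remote files. The URL below is a placeholder.
Duplicati.CommandLine.BackendTool.exe list "ftp://user:pass@example.com/backup" | grep dlist
```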

These would be the only places in the destination that would know the file list; if they’re not there, the list isn’t there. Ordinarily an intact database can regenerate the dlist files, but I’m not sure you still have a good database.
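For reference, a sketch of using the repair command for that, assuming the database is still intact. The storage URL, passphrase, and database path are all placeholders for your job’s actual values:

```sh
# Repair compares the local database against the destination and
# re-creates missing dlist/dindex files where it can. Placeholders:
# adjust the URL, passphrase, and --dbpath for your own job.
Duplicati.CommandLine.exe repair "ftp://user:pass@example.com/backup" --passphrase="your-passphrase" --dbpath="C:\Duplicati\data\XXXXXXXXXX.sqlite"
```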

Do you have any other backups? If so, look in the job log, under Complete log, for RetryAttempts. Some retries are normal, but too many can also cause a backup to exceed its number of retries and fail before the normal end. Viewing the Duplicati server logs at Home → About → Show log usually catches such errors.

Before just redoing the whole backup, I’d like a better idea of the problem history on the prior tries. Starting up somewhat gradually might also be a good idea, if you can pick a smaller section of files for the first backup.

Seeing that complete and leave a dlist file will give you a log, so you can check whether you got retries, and will also make sure you have at least one dlist uploaded. Then add more files, and if you need to stop, do a graceful stop as described above.

Ordinarily no stop is needed, but especially on a big initial backup, people sometimes need to do other work.

Thanks for that. I’ve done several smaller backups (before and since) successfully and, to the best of my knowledge, my network is pretty stable. This job did stop once (nothing to do with me), but when I attempted to run it again it appeared to pick up from where it had left off (which I thought was good at the time). I had no idea there had been other interruptions.

It is the biggest backup I’ve tried so far, so it could be a glitch, or it could be revealing some other issue I haven’t come across before.

I’ll delete everything associated with this backup and start it again. If it stops, I’ll collect the log at that point rather than restarting it, see if there’s more information there, and report back here.

Thanks for the feedback.

If you have any logs left (e.g. from a different backup), please look at the RetryAttempts value as requested.
Anything over the Internet is a bit unstable, and if you’re on public storage, those services also occasionally fail.

Were any clues given? Perhaps you deleted the job log (and there may have been no log at all), but check the server log.

The backup history in UTC looks like the below. Maybe the stop was on the first one, the restart ran for days and then reported the error message you cite at the end of the backup, and the next two reported it at the start?

Backup	Monday, November 16, 2020 3:09:14 PM
Backup	Tuesday, November 17, 2020 6:24:31 PM
Backup	Friday, November 20, 2020 11:43:41 AM
Backup	Friday, November 20, 2020 11:44:47 AM

Can you confirm or amend that? I think the same consistency check runs before and after a backup. Seeing consistency failures hang around and break future starts is normal unless the issue is fixed in between. Seeing a backup pass its before-start check and then fail later in the same run is worrisome, if that’s what happened.

Look for both a job log and the server log; however, neither is as good a record as setting up a debug log.
The advanced options log-file=<path> with log-file-log-level=retry might be a good plan (there are higher levels).
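As a sketch, on a command-line run that would look like the below; in the GUI, the same two options go in the job’s Advanced options screen. The source path, storage URL, passphrase, and log path are placeholders:

```sh
# Run the backup with a persistent debug log at Retry level, which
# records each upload attempt so transient network failures show up.
# Placeholders: adjust URL, passphrase, source, and log path for your job.
Duplicati.CommandLine.exe backup "ftp://user:pass@example.com/backup" "D:\Data" --passphrase="your-passphrase" --log-file="D:\duplicati-debug.log" --log-file-log-level=Retry
```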

My new attempt succeeded! It would have been nice to have repaired or resumed the first job and saved some time, but in light of the breaks in the first job, it’s not surprising it couldn’t be retrieved.

And thanks to your feedback, ts678, I know what to look for in the future.

Much appreciated.