Large job stopped halfway. Stuck in loops, either "database is locked" or "No filelists found on the remote destination" EDIT: auto-cleanup bug?

Running 2.0.5.1_beta_2020-01-18 on Windows 10

A large backup job’s first run was stopped halfway by a reboot, and now I’m stuck in endless loops. The whole job takes 10-15 days to upload from scratch and I was 7 days in when the PC rebooted, so I’d like to continue where I was.

.

  • Setting concurrency-max-threads to 1 does not help
  • Setting concurrency-block-hashers to 1 does not help
  • Setting concurrency-compressors to 1 does not help
  • Changing the source to a single file that is 100% sure not to be locked does not help
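
(For anyone wanting to try the same settings outside the GUI, a rough sketch like the one below could drive a command-line run with those options forced to 1. The install path, destination URL, source folder and passphrase are placeholders, not my real job.)

```python
# Rough sketch only: run a backup from the command line with the concurrency
# options forced to 1. Everything except the option names is a placeholder.
import subprocess

DUPLICATI_CLI = r"C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe"  # assumed default install path

cmd = [
    DUPLICATI_CLI,
    "backup",
    "ftp://example.com/backupfolder",   # placeholder destination URL
    r"C:\TestSource",                   # placeholder source folder
    "--passphrase=placeholder",
    "--concurrency-max-threads=1",
    "--concurrency-block-hashers=1",
    "--concurrency-compressors=1",
]
subprocess.run(cmd, check=False)        # console output shows whether the run gets any further
```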

.

LOOP ONE:

  • Delete database
  • Repair database

GET: No filelists found on the remote destination

  • Start backup

GET: The database was attempted repaired, but the repair did not complete. This database may be incomplete and the backup process cannot continue. You may delete the local database and attempt to repair it again.

  • Delete database
  • Repair database

GET: No filelists found on the remote destination

.

LOOP TWO:

  • Delete database
  • Start backup

GET: The process cannot access the file because it is being used by another process.

  • Delete database

GET: The process cannot access the file ‘C:\Users\MagnusT\AppData\Local\Duplicati\HXCBHDNVMN.sqlite’ because it is being used by another process.

  • Restart Duplicati
  • Delete database
  • Start backup

GET: The process cannot access the file because it is being used by another process.

.

LOGFILE EXAMPLE

2021-03-01 21:57:50 +01 - [Warning-Duplicati.Library.Main.Operation.FilelistProcessor-ExtraUnknownFile]: Extra unknown file: duplicati-iff96aa746a1b4d18ae27608b48a40172.dindex.zip.aes
2021-03-01 21:57:50 +01 - [Warning-Duplicati.Library.Main.Operation.FilelistProcessor-ExtraUnknownFile]: Extra unknown file: duplicati-iffaa8f8c8ff0420e9a262f84677f3e2e.dindex.zip.aes
2021-03-01 21:57:50 +01 - [Warning-Duplicati.Library.Main.Operation.FilelistProcessor-ExtraUnknownFile]: Extra unknown file: duplicati-ifff6dc5a8a014fa689a4d553411f3e86.dindex.zip.aes
2021-03-01 21:57:50 +01 - [Error-Duplicati.Library.Main.Operation.FilelistProcessor-ExtraRemoteFiles]: Found 3296 remote files that are not recorded in local storage, please run repair
2021-03-01 21:57:50 +01 - [Warning-Duplicati.Library.Main.Operation.BackupHandler-BackendVerifyFailedAttemptingCleanup]: Backend verification failed, attempting automatic cleanup
Duplicati.Library.Interface.UserInformationException: Found 3296 remote files that are not recorded in local storage, please run repair
   at Duplicati.Library.Main.Operation.FilelistProcessor.VerifyRemoteList(BackendManager backend, Options options, LocalDatabase database, IBackendWriter log, String protectedfile)
   at Duplicati.Library.Main.Operation.BackupHandler.PreBackupVerify(BackendManager backend, String protectedfile)
2021-03-01 21:58:20 +01 - [Warning-Duplicati.Library.Main.Operation.RepairHandler-FailedToReadLocalDatabase]: Failed to read local db C:\Users\MagnusT\AppData\Local\Duplicati\HXCBHDNVMN.sqlite, error: database is locked
database is locked
code = Busy (5), message = System.Data.SQLite.SQLiteException (0x800007AF): database is locked
database is locked
   at System.Data.SQLite.SQLite3.Step(SQLiteStatement stmt)
   at System.Data.SQLite.SQLiteDataReader.NextResult()
   at System.Data.SQLite.SQLiteDataReader..ctor(SQLiteCommand cmd, CommandBehavior behave)
   at System.Data.SQLite.SQLiteCommand.ExecuteReader(CommandBehavior behavior)
   at Duplicati.Library.Main.Database.ExtensionMethods.ExecuteScalarInt64(IDbCommand self, Boolean writeLog, String cmd, Int64 defaultvalue, Object[] values)
   at Duplicati.Library.Main.Database.LocalDatabase..ctor(IDbConnection connection, String operation)
   at Duplicati.Library.Main.Operation.RepairHandler.Run(IFilter filter)
2021-03-01 21:58:20 +01 - [Error-Duplicati.Library.Main.Operation.BackupHandler-FatalError]: Fatal error
System.IO.IOException: The process cannot access the file because it is being used by another process.
   at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
   at System.IO.File.InternalMove(String sourceFileName, String destFileName, Boolean checkHost)
   at Duplicati.Library.Main.Operation.RepairHandler.Run(IFilter filter)
   at Duplicati.Library.Main.Operation.BackupHandler.PreBackupVerify(BackendManager backend, String protectedfile)
   at Duplicati.Library.Main.Operation.BackupHandler.<RunAsync>d__20.MoveNext()

OK, found why the database locks itself out when running the job:

auto-cleanup

When auto-cleanup is set to on, I always get “The process cannot access the file because it is being used by another process.” on the database itself.

When auto-cleanup is turned off, I have no problems with any locked files.

.

= = = = = = = = = = = = = = = = = = = = = = = =

.

But I’m still locked in loops.

LOOP ONE:

  • Delete database
  • Repair database

GET: No filelists found on the remote destination

  • Start backup

GET: The database was attempted repaired, but the repair did not complete. This database may be incomplete and the backup process cannot continue. You may delete the local database and attempt to repair it again.

  • Delete database
  • Repair database

GET: No filelists found on the remote destination

.

LOOP TWO:

  • Delete database
  • Start backup

GET: Found 3296 remote files that are not recorded in local storage, please run repair

  • Repair database

GET: No filelists found on the remote destination

  • Delete database
  • Start backup

GET: Found 3296 remote files that are not recorded in local storage, please run repair

First runs are hard because they’re long, so do-it-yourself checkpoints help: start with a small source, let it finish, then add more.

On top of that, this was not a clean stop. On 2.0.5.1, a clean stop would use “Stop after current file”. Such is life.

A clean stop would take the time to finish the uploads in progress and upload the dlist file, avoiding the “No filesets” problem.

What’s the destination? Some might be able to leave partly-written files. Others might “simply” omit files…

There’s an experimental technique to reclaim some of your uploaded source file blocks from dblock files, but it’d be nice to know your file integrity situation and whether you’re lucky enough to have a dindex per dblock after the abrupt stop. If you saved the DB after the original hard stop, it might have clues about the status.

It’s possible, but difficult, to get an idea of backup integrity without a DB, but it needs access to the destination or downloads. Ordinarily this is the kind of thing that the DB plus Repair take care of, except Repair needs a backup, and there isn’t one yet… The easiest and safest path is to start fresh, get a small backup done, then continue to add data.
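
If the destination can be mounted or listed as a folder, a rough sketch like this gives a first clue. It assumes Duplicati’s usual duplicati-*.dlist / dindex / dblock file names (possibly with .aes), and only checks whether any dlist made it up at all and whether the dindex count roughly matches the dblock count.

```python
# Rough first look at a destination listing. Assumes a local or mounted folder
# and Duplicati's usual dlist / dindex / dblock name suffixes (possibly .aes).
import os
from collections import Counter

dest = r"X:\path\to\destination"        # placeholder: wherever the backup files can be listed

counts = Counter()
for name in os.listdir(dest):
    for kind in ("dlist", "dindex", "dblock"):
        if f".{kind}." in name:
            counts[kind] += 1

print(counts)
# counts["dlist"] == 0                  -> matches "No filelists found on the remote destination"
# counts["dindex"] < counts["dblock"]   -> some dblocks have no index, so Recreate must download dblocks
```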

It’s up to you though.

Thanks for your reply! Yeah, it seems the first backup run needs to finish or else it gets very complicated :slight_smile: So I restarted it from scratch. I have two identical jobs, one doing a local backup and this one backing up the same sources to a remote location. I can only transfer at 20-40 Mbit/s so it takes a while. Anyway, now I know to start the job with a few files so it finishes, and then add a bunch of TB to the job :sweat_smile:


But what about the very odd behaviour that auto-cleanup, when turned on, prevents some jobs from running at all? Multiple processes try to use the db, the whole job cancels itself with an error because the db is locked, and the db actually stays locked until you restart Duplicati! I suspect it only happens if there is a lot to clean up, so auto-cleanup holds the file locked for too long? Or the other way around: auto-cleanup can’t read the database because it’s locked by the backup job? Either way, it’s a really unclear, hard-to-decode error message and very undesirable behaviour.
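
(Not Duplicati’s code, just to illustrate what I mean by the locking: two connections to the same SQLite file, one holding a long exclusive transaction, and the other one fails with exactly this message.)

```python
# Minimal illustration (not Duplicati's code): one connection holds a long
# exclusive transaction on an SQLite file, and a second connection with a short
# busy-timeout then fails with "database is locked".
import os, sqlite3, tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.sqlite")

setup = sqlite3.connect(path)
setup.execute("CREATE TABLE t (x)")
setup.commit()
setup.close()

holder = sqlite3.connect(path, isolation_level=None)   # autocommit, so we control the transaction
holder.execute("BEGIN EXCLUSIVE")                       # e.g. a long cleanup holding the database

reader = sqlite3.connect(path, timeout=0.5)             # e.g. the backup trying to use the same DB
try:
    reader.execute("SELECT count(*) FROM t").fetchone()
except sqlite3.OperationalError as e:
    print(e)                                            # -> database is locked
finally:
    holder.rollback()
```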

Other than that, all great! Thanks for everyone’s efforts!!!

Is this reproducible from scratch? If so, please file an Issue with steps, so a developer might look at it.

Pretty sure it is; it was very clear that having auto-cleanup turned on was the culprit.

To reproduce it I would need to set up a backup set in need of a long cleanup. Any suggestions for the easiest way to create such a scenario?

If you mean a long Repair after a DB Delete, you could try deleting a dindex file from the destination, making Duplicati download all the dblock files. You can verify the plan in About → Show log → Live → Verbose.
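
For a file-based test destination (never a production backup), something like this rough sketch would set that up by moving one dindex file aside:

```python
# Rough sketch, test destinations only: move one dindex file aside so that the
# next database Recreate has to fall back to downloading dblock files.
import glob, os, shutil

dest = r"X:\path\to\test-destination"   # placeholder: a throwaway test backup's folder

dindex_files = glob.glob(os.path.join(dest, "*.dindex.*"))
if dindex_files:
    victim = dindex_files[0]
    shutil.move(victim, victim + ".hidden")   # keep the file so the test can be undone
    print("Moved aside:", victim)
else:
    print("No dindex files found in", dest)
```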

No, a cleanup that “autocleanup” does. I need to create a situation where that cleanup takes a while.

auto-cleanup: “If a backup is interrupted there will likely be partial files present on the backend. Using this flag, Duplicati will automatically remove such files when encountered.”

I don’t know how to create a lot of “partial files present on the backend”.
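
(The closest thing I can think of, purely guesswork and assuming a throwaway local test destination, is to plant files the local database doesn’t know about, like the “Extra unknown file” warnings in my log above.)

```python
# Guesswork, not verified, test destinations only: copy an existing dblock file
# under new made-up names so they show up as files the local database doesn't
# know about, like the "Extra unknown file" warnings in the log above.
import glob, os, shutil, uuid

dest = r"X:\path\to\test-destination"   # placeholder: a throwaway test backup's folder

template = glob.glob(os.path.join(dest, "*.dblock.*"))[0]    # any existing dblock file
suffix = template[template.index(".dblock."):]               # e.g. ".dblock.zip.aes"

for _ in range(50):
    fake = os.path.join(dest, "duplicati-b" + uuid.uuid4().hex + suffix)
    shutil.copy(template, fake)
```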

It looks like a Repair to me. See the code. Repair can do the sort of file removal you mentioned, AFAIK.

It’s hard to say what you got in the original reboot. Some destinations are more able to write partial files, but that’s kind of a generic phrasing that likely refers to various kinds of mess from a hard-stopped backup.

If you look at the Repair link, you’ll see that after a DB delete it turns into a Recreate, so what I described above gives you a slow one. I can’t match it up with your original situation because that DB was never saved or posted, but you know what happened.