Backup stalls and never ends

Hi,

Backup details:
System: Synology DS418 (Realtek RTD1296)
Storage: Wasabi (S3)
Duplicati version: the problem began with 2.0.5.1, but I’ve also tried 2.0.6.3 and v2.0.6.103-2.0.6.103_canary_2022-06-12

There is another DS418 on the same network with the same problem. Two DS420s on the same network, using the same storage, have no problems.

The backup begins uploading files and freezes within 30 minutes. I’ve waited up to 5 days with no success.

2022-06-15 13:33:04 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-b3d059f1edad5475c9ca72f994106593b.dblock.zip.aes (49.99 MB)
2022-06-15 13:33:04 +02 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (1, "duplicati-ba52bad7c4c8a4321880748bdd26ecacf.dblock.zip.aes", "Blocks", "Temporary", -1, 0, 0); SELECT last_insert_rowid();
2022-06-15 13:33:04 +02 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (1, "duplicati-ba52bad7c4c8a4321880748bdd26ecacf.dblock.zip.aes", "Blocks", "Temporary", -1, 0, 0); SELECT last_insert_rowid(); took 0:00:00:00.000
2022-06-15 13:33:08 +02 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.Common.DatabaseCommon-CommitTransactionAsync]: Starting - CommitAddBlockToOutputFlush
2022-06-15 13:33:09 +02 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.Common.DatabaseCommon-CommitTransactionAsync]: CommitAddBlockToOutputFlush took 0:00:00:00.371

Temp dir (cleaned prior to the backup):

RemoteVolume table (local database and remote storage folder were deleted prior to the backup):

What I’ve tried so far:

  • Updated from the initial 2.0.5.1 to 2.0.6.3 and then to the latest canary
  • Set http-operation-timeout to 1 minute
  • Changed blocksize to 10 MB

It seems like the connections hang.

Any idea?

Duplicati has a possible issue where a loop never exits, so the backup never ends… It sounds like you’re running into this.

Anyone who doesn’t deeply debug the application code might think it’s a timeout issue. I originally guessed that as well, lol. But a timeout on the code loops is a poor fix: it does force an exit (at least from the main WhenAll that the forever-wait happens inside of), but it doesn’t address the true nature of the situation. When an error happens, Duplicati should handle it cleanly. Currently, certain errors leave it stuck, looping in a broken state. In theory something else might eventually break and force an exit, but for all practical purposes it runs forever.
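To illustrate the failure mode in general terms (this is a minimal Python sketch, not Duplicati’s actual C# code; all names here are hypothetical): a worker swallows an error without acknowledging the work item, so the coordinating await can never complete, and only an external timeout forces an exit.

```python
import asyncio

async def upload_worker(queue: asyncio.Queue) -> None:
    """Consume upload jobs from the queue, one at a time."""
    while True:
        item = await queue.get()
        try:
            # Simulate a remote PUT that fails (e.g. the connection hangs up).
            raise ConnectionError(f"remote hung up while sending {item}")
        except ConnectionError:
            # Bug: the error is swallowed and queue.task_done() is never
            # called, so the coordinator's join() below can never finish.
            pass

async def run_backup() -> str:
    queue: asyncio.Queue = asyncio.Queue()
    queue.put_nowait("dblock-1")
    worker = asyncio.create_task(upload_worker(queue))
    try:
        # Without the timeout this await never returns: the backup "stalls"
        # with no error, exactly like the symptom described above.
        await asyncio.wait_for(queue.join(), timeout=0.5)
        outcome = "finished"
    except asyncio.TimeoutError:
        outcome = "stalled: pending work was never acknowledged"
    worker.cancel()
    return outcome

result = asyncio.run(run_backup())
print(result)
```

The timeout here plays the role of the loop-timeout "fix" discussed above: it makes the process exit, but the underlying bug (the swallowed error) is still there.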

The workaround will be different for every underlying issue. It could be excluding certain files from the backup. It might be using a different service, such as Google Drive instead of Dropbox, or SSH instead of WebDAV. It could be a permission issue or a remote-side issue. Etc.

No idea in your case, but making major changes to the setup should help narrow it down. It’s an issue people will keep running into, though. They have for years already, completely unknowingly.