Warning at end of job after Upload Retry

Occasionally one of my machines will give a warning at the end of the backup. I checked the logs and here is the relevant part:

  "Messages": [
<snip>
    "2019-11-16 19:00:10 -08 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Retrying: duplicati-20191117T030000Z.dlist.zip.aes (241.59 KB)",
    "2019-11-16 19:00:20 -08 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Rename: duplicati-20191117T030000Z.dlist.zip.aes (241.59 KB)",
    "2019-11-16 19:00:20 -08 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Rename: duplicati-20191117T030001Z.dlist.zip.aes (241.59 KB)",
    "2019-11-16 19:00:20 -08 - [Information-Duplicati.Library.Main.Operation.Backup.BackendUploader-RenameRemoteTargetFile]: Renaming \"duplicati-20191117T030000Z.dlist.zip.aes\" to \"duplicati-20191117T030001Z.dlist.zip.aes\"",
    "2019-11-16 19:00:20 -08 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-20191117T030001Z.dlist.zip.aes (241.59 KB)",
    "2019-11-16 19:00:21 -08 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-20191117T030001Z.dlist.zip.aes (241.59 KB)",
<snip>
  ],
  "Warnings": [
    "2019-11-16 19:00:22 -08 - [Warning-Duplicati.Library.Main.BackendManager-DeleteRemoteFileFailed]: Delete operation failed for duplicati-20191117T030000Z.dlist.zip.aes with FileNotFound, listing contents"
  ],

It looks like when an upload fails, Duplicati renames the remote file by incrementing the timestamp in the filename by 1 second and tries the upload again. Then, at the end of the job, it tries to delete the failed upload, but that generates an error because the file was renamed earlier.

I’m not sure I understand this behavior. Why does it rename a failed upload on the remote side? I could see it renaming on the local side and then trying the upload again using the newer filename. Then the cleanup at the end of the job (where it deletes partial/failed uploaded files) would succeed without error.

Any thoughts from someone more familiar with Duplicati’s intended logic here?

Note: I’m using B2 for the back end. I’m running 2.0.4.34, but I was seeing this issue with previous versions as well. I’m not sure exactly in which version this started; I’m guilty of not looking closely at this intermittent issue on this machine until today.

I think that’s what it does. Look at your job’s → Show log → Remote, or at the database’s RemoteOperation table.

Remote filename naming conventions

To prevent a verified bug in Apache WebDAV (and potentially other services), files are never overwritten. If an upload fails, a new name is generated, and the upload is retried. For the dblock and dindex files, a new random name is generated, but for the dlist filename this is not possible as it conveys information. The timestamp is increased by a second for each attempt to create a unique name, which only affects the accuracy of the timestamps when listing the remote backup sets.
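
As a rough illustration of that rule (my own sketch, not Duplicati’s actual code; the class and method names are made up), the dlist retry name can be derived by bumping the UTC timestamp embedded in the filename by one second per attempt:

    // Hypothetical sketch of the dlist retry-naming rule described above;
    // names and parsing details are assumptions, not the actual implementation.
    using System;
    using System.Globalization;

    static class DlistRenameSketch
    {
        // e.g. "duplicati-20191117T030000Z.dlist.zip.aes" -> "duplicati-20191117T030001Z.dlist.zip.aes"
        public static string NextAttemptName(string filename)
        {
            const string prefix = "duplicati-";
            const string suffix = ".dlist.zip.aes";
            var stamp = filename.Substring(prefix.Length, filename.Length - prefix.Length - suffix.Length);
            var t = DateTime.ParseExact(stamp, "yyyyMMdd'T'HHmmss'Z'", CultureInfo.InvariantCulture,
                DateTimeStyles.AssumeUniversal | DateTimeStyles.AdjustToUniversal);
            return prefix + t.AddSeconds(1).ToString("yyyyMMdd'T'HHmmss'Z'") + suffix;
        }
    }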

The change “Added auto-rename logic to work around faulty Apache WebDAV backends” is probably the earliest code with this behavior.

“The manifest file duplicati-full-manifestmanifest.aes indicates that there should be xx volumes, but the file list indicates xx” #645 might be the clearest related issue, though something similar is mentioned in other ones:

Creating too many files in single directory #881

Warning: Found a partial file // Files missing #841

I’m not sure if it has always worked this way in terms of the warning. Testing 2.0.4.5 would be worthwhile, because if this is a recent addition due to code rearranging, users are going to be annoyed by the noise. My own backup gets these warnings, and it’s annoying (but less annoying than a totally broken backup would be).

The code has moved around some, though I don’t know whether the design has changed. Here is the current code:

OK, I think you’re right. I looked at the source code a bit last night before I made the post, but now I’ve dug deeper. The log entry labeled ‘RenameRemoteTargetFile’ doesn’t actually rename anything on the remote side; it merely updates the local database references. It also creates a dummy reference to the old (failed) filename so that the old name gets deleted later.
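
My own condensed sketch of what that appears to do, inferred from the SQL visible in the profiling log further down (the table and column names come from that log; the class, method, and helper names here are hypothetical, not the actual source):

    // Hypothetical sketch of the local-database effect of RenameRemoteTargetFile;
    // condensed from the SQL in the profiling log below, not the actual Duplicati source.
    using System.Data;

    static class RenameSketch
    {
        public static void RenameFailedDlist(IDbConnection db, string oldName, string newName, long operationId)
        {
            // 1. Point the existing Remotevolume row at the new name; nothing is renamed remotely.
            Exec(db, "UPDATE \"Remotevolume\" SET \"Name\" = @p0 WHERE \"Name\" = @p1", newName, oldName);

            // 2. Register the old (possibly never-created) name in state "Deleting" so the
            //    end-of-job cleanup will try to remove it from the backend.
            Exec(db, "INSERT INTO \"Remotevolume\" (\"OperationID\", \"Name\", \"Type\", \"State\", \"Size\", \"VerificationCount\", \"DeleteGraceTime\") " +
                     "VALUES (@p0, @p1, 'Files', 'Deleting', -1, 0, 0)", operationId, oldName);
        }

        static void Exec(IDbConnection db, string sql, params object[] args)
        {
            using (var cmd = db.CreateCommand())
            {
                cmd.CommandText = sql;
                for (var i = 0; i < args.Length; i++)
                {
                    var p = cmd.CreateParameter();
                    p.ParameterName = "@p" + i;
                    p.Value = args[i];
                    cmd.Parameters.Add(p);
                }
                cmd.ExecuteNonQuery();
            }
        }
    }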

So in my case, when I see the warning message on the job, I’m guessing the upload failed without leaving even a partial file on the back end. So when the deletion occurs, it “fails”, which causes the warning flag to be set.

Checking the remote log for the job, it looks like Duplicati tries to delete that failed file twice. I’m not sure why it does this twice; maybe the second attempt is what triggers the warning flag.

I’ll try to troubleshoot more but it’s difficult as this is a very intermittent problem.

Behavior probably varies. Cloud object storage frequently has additional integrity checks on transfers.

Uploading describes the SHA1 check that B2 uses. FTP, on the other hand, usually just relies on the TCP/IP end-of-file.
EDIT: which possibly means FTP is more likely to leave a partial file, which would then avoid this particular issue.
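
For illustration, here is a minimal sketch of that client-side SHA1 step for a native B2 b2_upload_file call, written against plain HttpClient types. This is not Duplicati’s B2 backend code, and the method name is made up; B2’s docs describe the X-Bz-Content-Sha1 header it verifies against.

    // Minimal sketch of the SHA1 integrity step on a native B2 upload request;
    // not Duplicati's actual backend code.
    using System;
    using System.Net.Http;
    using System.Security.Cryptography;

    static class B2UploadSketch
    {
        public static HttpRequestMessage BuildUpload(string uploadUrl, string uploadAuthToken, string remoteName, byte[] body)
        {
            string sha1;
            using (var h = SHA1.Create())
                sha1 = BitConverter.ToString(h.ComputeHash(body)).Replace("-", "").ToLowerInvariant();

            var req = new HttpRequestMessage(HttpMethod.Post, uploadUrl) { Content = new ByteArrayContent(body) };
            req.Headers.TryAddWithoutValidation("Authorization", uploadAuthToken);
            req.Headers.TryAddWithoutValidation("X-Bz-File-Name", remoteName); // B2 expects this percent-encoded
            req.Headers.TryAddWithoutValidation("X-Bz-Content-Sha1", sha1);    // B2 rejects a body that doesn't match
            req.Content.Headers.TryAddWithoutValidation("Content-Type", "b2/x-auto");
            return req;
        }
    }

If the body arrives truncated, the hash won’t match and the upload fails outright, which may be why a failed B2 upload tends to leave no partial file behind for the later delete to find.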

You could possibly reproduce it on demand by intentionally causing an upload error. For some storage types, disconnecting a network cable, turning off WiFi, or taking the interface down with ifconfig on Linux might do it, but this can have unintended side effects; Backblaze B2, for example, can require DNS lookups to spread file uploads around.

Better control of upload errors can be had with a local server of some sort (e.g. FTP, SFTP, whatever), if the server can be easily turned on and off. Turning it off in the middle of a transfer causes the error; then, 10 seconds later (or whatever --retry-delay is set to), the retry happens, though you can turn the server back on earlier.

Thanks for the idea. This machine is an AWS EC2 instance so I can’t disconnect the NIC, but I may be able to simulate the problem using a network security group rule enabled at just the right time.

Below is a chunk of profiling log (with --profile-all-database-queries=true) from this warning yesterday.
Does B2 ever throw you an annoying System.Exception: 500 - internal_error: incident id?
It sounds sort of worrisome, but their support didn’t have much to say, and a retry seems to solve it.

2019-11-17 21:05:46 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.Backup.UploadRealFilelist-UploadNewFileset]: Starting - Uploading a new fileset
2019-11-17 21:05:47 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.Common.DatabaseCommon-CommitTransactionAsync]: Starting - CommitUpdateRemoteVolume
2019-11-17 21:05:59 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.Common.DatabaseCommon-CommitTransactionAsync]: CommitUpdateRemoteVolume took 0:00:00:12.054
2019-11-17 21:05:59 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.Backup.UploadRealFilelist-UploadNewFileset]: Uploading a new fileset took 0:00:00:12.432
2019-11-17 21:05:59 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.Common.DatabaseCommon-CommitTransactionAsync]: Starting - CommitUpdateRemoteVolume
2019-11-17 21:05:59 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.Common.DatabaseCommon-CommitTransactionAsync]: CommitUpdateRemoteVolume took 0:00:00:00.000
2019-11-17 21:05:59 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-20191118T020418Z.dlist.zip.aes (43.33 KB)
2019-11-17 21:06:29 -05 - [Retry-Duplicati.Library.Main.Operation.Backup.BackendUploader-RetryPut]: Operation Put with file duplicati-20191118T020418Z.dlist.zip.aes attempt 1 of 10 failed with message: 500 - internal_error: incident id 1d72a594a6df-DHeLY3nnQXCaCOz/txS/Fg==
System.Exception: 500 - internal_error: incident id 1d72a594a6df-DHeLY3nnQXCaCOz/txS/Fg==
   at Duplicati.Library.Backend.Backblaze.B2AuthHelper.AttemptParseAndThrowException(Exception ex)
   at Duplicati.Library.JSONWebHelper.<GetResponseAsync>d__37.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Duplicati.Library.JSONWebHelper.<ReadJSONResponseAsync>d__27`1.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Duplicati.Library.JSONWebHelper.<GetJSONDataAsync>d__22`1.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Duplicati.Library.Backend.Backblaze.B2.<PutAsync>d__28.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.<DoPut>d__21.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.<>c__DisplayClass17_0.<<UploadFileAsync>b__0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.<DoWithRetry>d__18.MoveNext()
2019-11-17 21:06:30 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Retrying: duplicati-20191118T020418Z.dlist.zip.aes (43.33 KB)
2019-11-17 21:06:37 -05 - [Profiling-Duplicati.Library.Main.Operation.Backup.BackendUploader-UploadSpeed]: Uploaded 27.65 MB in 00:00:51.1953085, 553.10 KB/s
2019-11-17 21:06:37 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-b8f6e59b1b6824da7b288adcd0c813d09.dblock.zip.aes (27.65 MB)
2019-11-17 21:06:37 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.Common.DatabaseCommon-CommitTransactionAsync]: Starting - CommitAfterUpload
2019-11-17 21:06:37 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.Common.DatabaseCommon-CommitTransactionAsync]: CommitAfterUpload took 0:00:00:00.189
2019-11-17 21:06:37 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (16, "duplicati-id6c457eb8e114d6bbb0b4f2fd6c94f26.dindex.zip.aes", "Index", "Temporary", -1, 0, 0); SELECT last_insert_rowid();
2019-11-17 21:06:37 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (16, "duplicati-id6c457eb8e114d6bbb0b4f2fd6c94f26.dindex.zip.aes", "Index", "Temporary", -1, 0, 0); SELECT last_insert_rowid(); took 0:00:00:00.000
2019-11-17 21:06:37 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-id6c457eb8e114d6bbb0b4f2fd6c94f26.dindex.zip.aes (38.62 KB)
2019-11-17 21:06:39 -05 - [Profiling-Duplicati.Library.Main.Operation.Backup.BackendUploader-UploadSpeed]: Uploaded 38.62 KB in 00:00:01.4411052, 26.80 KB/s
2019-11-17 21:06:39 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-id6c457eb8e114d6bbb0b4f2fd6c94f26.dindex.zip.aes (38.62 KB)
2019-11-17 21:06:39 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.Common.DatabaseCommon-CommitTransactionAsync]: Starting - CommitAfterUpload
2019-11-17 21:06:39 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.Common.DatabaseCommon-CommitTransactionAsync]: CommitAfterUpload took 0:00:00:00.372
2019-11-17 21:06:40 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Rename: duplicati-20191118T020418Z.dlist.zip.aes (43.33 KB)
2019-11-17 21:06:40 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Rename: duplicati-20191118T020419Z.dlist.zip.aes (43.33 KB)
2019-11-17 21:06:40 -05 - [Information-Duplicati.Library.Main.Operation.Backup.BackendUploader-RenameRemoteTargetFile]: Renaming "duplicati-20191118T020418Z.dlist.zip.aes" to "duplicati-20191118T020419Z.dlist.zip.aes"
2019-11-17 21:06:40 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: UPDATE "Remotevolume" SET "Name" = "duplicati-20191118T020419Z.dlist.zip.aes" WHERE "Name" = "duplicati-20191118T020418Z.dlist.zip.aes"
2019-11-17 21:06:40 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: UPDATE "Remotevolume" SET "Name" = "duplicati-20191118T020419Z.dlist.zip.aes" WHERE "Name" = "duplicati-20191118T020418Z.dlist.zip.aes" took 0:00:00:00.000
2019-11-17 21:06:40 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalar]: Starting - ExecuteScalar: SELECT "Type" FROM "Remotevolume" WHERE "Name" = "duplicati-20191118T020419Z.dlist.zip.aes"
2019-11-17 21:06:40 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalar]: ExecuteScalar: SELECT "Type" FROM "Remotevolume" WHERE "Name" = "duplicati-20191118T020419Z.dlist.zip.aes" took 0:00:00:00.000
2019-11-17 21:06:40 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (16, "duplicati-20191118T020418Z.dlist.zip.aes", "Files", "Deleting", -1, 0, 0); SELECT last_insert_rowid();
2019-11-17 21:06:40 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (16, "duplicati-20191118T020418Z.dlist.zip.aes", "Files", "Deleting", -1, 0, 0); SELECT last_insert_rowid(); took 0:00:00:00.000
2019-11-17 21:06:40 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-20191118T020419Z.dlist.zip.aes (43.33 KB)
2019-11-17 21:06:43 -05 - [Profiling-Duplicati.Library.Main.Operation.Backup.BackendUploader-UploadSpeed]: Uploaded 43.33 KB in 00:00:03.8674156, 11.20 KB/s
2019-11-17 21:06:43 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-20191118T020419Z.dlist.zip.aes (43.33 KB)
2019-11-17 21:06:43 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.Common.DatabaseCommon-CommitTransactionAsync]: Starting - CommitAfterUpload
2019-11-17 21:06:44 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.Common.DatabaseCommon-CommitTransactionAsync]: CommitAfterUpload took 0:00:00:00.327
2019-11-17 21:06:44 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.Common.DatabaseCommon-CommitTransactionAsync]: Starting - CommitAfterUpload
2019-11-17 21:06:44 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.Common.DatabaseCommon-CommitTransactionAsync]: CommitAfterUpload took 0:00:00:00.000
2019-11-17 21:06:44 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteReader]: Starting - ExecuteReader: SELECT "ID", "IsFullBackup", "Timestamp" FROM "Fileset" ORDER BY "Timestamp" DESC
2019-11-17 21:06:44 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteReader]: ExecuteReader: SELECT "ID", "IsFullBackup", "Timestamp" FROM "Fileset" ORDER BY "Timestamp" DESC took 0:00:00:00.000
2019-11-17 21:06:44 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteReader]: Starting - ExecuteReader: SELECT "ID", "IsFullBackup", "Timestamp" FROM "Fileset" ORDER BY "Timestamp" DESC
2019-11-17 21:06:44 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteReader]: ExecuteReader: SELECT "ID", "IsFullBackup", "Timestamp" FROM "Fileset" ORDER BY "Timestamp" DESC took 0:00:00:00.000
2019-11-17 21:06:44 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteReader]: Starting - ExecuteReader: SELECT "ID", "IsFullBackup", "Timestamp" FROM "Fileset" ORDER BY "Timestamp" DESC
2019-11-17 21:06:44 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteReader]: ExecuteReader: SELECT "ID", "IsFullBackup", "Timestamp" FROM "Fileset" ORDER BY "Timestamp" DESC took 0:00:00:00.000
2019-11-17 21:06:44 -05 - [Information-Duplicati.Library.Main.Operation.DeleteHandler:RetentionPolicy-StartCheck]: Start checking if backups can be removed
2019-11-17 21:06:44 -05 - [Information-Duplicati.Library.Main.Operation.DeleteHandler:RetentionPolicy-FramesAndIntervals]: Time frames and intervals pairs: 1.00:00:00 / Keep all, 7.00:00:00 / 1.00:00:00, 28.00:00:00 / 7.00:00:00, 365.00:00:00 / 31.00:00:00
2019-11-17 21:06:44 -05 - [Information-Duplicati.Library.Main.Operation.DeleteHandler:RetentionPolicy-BackupList]: Backups to consider: 11/17/2019 3:52:46 PM, 11/13/2019 6:46:12 AM, 11/7/2019 9:21:06 AM, 10/29/2019 8:41:09 PM, 10/20/2019 2:48:41 PM
2019-11-17 21:06:44 -05 - [Profiling-Duplicati.Library.Main.Operation.DeleteHandler:RetentionPolicy-NextTimeAndFrame]: Next time frame and interval pair: 1.00:00:00 / Keep all
2019-11-17 21:06:44 -05 - [Profiling-Duplicati.Library.Main.Operation.DeleteHandler:RetentionPolicy-BackupsInFrame]: Backups in this time frame: 11/17/2019 3:52:46 PM
2019-11-17 21:06:44 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteReader]: Starting - ExecuteReader: SELECT "IsFullBackup" FROM "Fileset" WHERE "Timestamp" = 1574023966
2019-11-17 21:06:44 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteReader]: ExecuteReader: SELECT "IsFullBackup" FROM "Fileset" WHERE "Timestamp" = 1574023966 took 0:00:00:00.000
2019-11-17 21:06:44 -05 - [Profiling-Duplicati.Library.Main.Operation.DeleteHandler:RetentionPolicy-KeepBackups]: Keeping  backup: 11/17/2019 3:52:46 PM
2019-11-17 21:06:44 -05 - [Profiling-Duplicati.Library.Main.Operation.DeleteHandler:RetentionPolicy-NextTimeAndFrame]: Next time frame and interval pair: 7.00:00:00 / 1.00:00:00
2019-11-17 21:06:44 -05 - [Profiling-Duplicati.Library.Main.Operation.DeleteHandler:RetentionPolicy-BackupsInFrame]: Backups in this time frame: 11/13/2019 6:46:12 AM
2019-11-17 21:06:44 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteReader]: Starting - ExecuteReader: SELECT "IsFullBackup" FROM "Fileset" WHERE "Timestamp" = 1573645572
2019-11-17 21:06:44 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteReader]: ExecuteReader: SELECT "IsFullBackup" FROM "Fileset" WHERE "Timestamp" = 1573645572 took 0:00:00:00.000
2019-11-17 21:06:44 -05 - [Profiling-Duplicati.Library.Main.Operation.DeleteHandler:RetentionPolicy-KeepBackups]: Keeping  backup: 11/13/2019 6:46:12 AM
2019-11-17 21:06:44 -05 - [Profiling-Duplicati.Library.Main.Operation.DeleteHandler:RetentionPolicy-NextTimeAndFrame]: Next time frame and interval pair: 28.00:00:00 / 7.00:00:00
2019-11-17 21:06:44 -05 - [Profiling-Duplicati.Library.Main.Operation.DeleteHandler:RetentionPolicy-BackupsInFrame]: Backups in this time frame: 10/29/2019 8:41:09 PM, 11/7/2019 9:21:06 AM
2019-11-17 21:06:44 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteReader]: Starting - ExecuteReader: SELECT "IsFullBackup" FROM "Fileset" WHERE "Timestamp" = 1572396069
2019-11-17 21:06:44 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteReader]: ExecuteReader: SELECT "IsFullBackup" FROM "Fileset" WHERE "Timestamp" = 1572396069 took 0:00:00:00.000
2019-11-17 21:06:44 -05 - [Profiling-Duplicati.Library.Main.Operation.DeleteHandler:RetentionPolicy-KeepBackups]: Keeping  backup: 10/29/2019 8:41:09 PM
2019-11-17 21:06:44 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteReader]: Starting - ExecuteReader: SELECT "IsFullBackup" FROM "Fileset" WHERE "Timestamp" = 1573136466
2019-11-17 21:06:44 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteReader]: ExecuteReader: SELECT "IsFullBackup" FROM "Fileset" WHERE "Timestamp" = 1573136466 took 0:00:00:00.000
2019-11-17 21:06:44 -05 - [Profiling-Duplicati.Library.Main.Operation.DeleteHandler:RetentionPolicy-KeepBackups]: Keeping  backup: 11/7/2019 9:21:06 AM
2019-11-17 21:06:44 -05 - [Profiling-Duplicati.Library.Main.Operation.DeleteHandler:RetentionPolicy-NextTimeAndFrame]: Next time frame and interval pair: 365.00:00:00 / 31.00:00:00
2019-11-17 21:06:44 -05 - [Profiling-Duplicati.Library.Main.Operation.DeleteHandler:RetentionPolicy-BackupsInFrame]: Backups in this time frame: 10/20/2019 2:48:41 PM
2019-11-17 21:06:44 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteReader]: Starting - ExecuteReader: SELECT "IsFullBackup" FROM "Fileset" WHERE "Timestamp" = 1571597321
2019-11-17 21:06:44 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteReader]: ExecuteReader: SELECT "IsFullBackup" FROM "Fileset" WHERE "Timestamp" = 1571597321 took 0:00:00:00.000
2019-11-17 21:06:44 -05 - [Profiling-Duplicati.Library.Main.Operation.DeleteHandler:RetentionPolicy-KeepBackups]: Keeping  backup: 10/20/2019 2:48:41 PM
2019-11-17 21:06:44 -05 - [Information-Duplicati.Library.Main.Operation.DeleteHandler:RetentionPolicy-BackupsToDelete]: Backups outside of all time frames and thus getting deleted: 
2019-11-17 21:06:44 -05 - [Information-Duplicati.Library.Main.Operation.DeleteHandler:RetentionPolicy-AllBackupsToDelete]: All backups to delete: 
2019-11-17 21:06:44 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: DELETE FROM "FilesetEntry" WHERE "FilesetID" NOT IN (SELECT DISTINCT "ID" FROM "Fileset")
2019-11-17 21:06:44 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: DELETE FROM "FilesetEntry" WHERE "FilesetID" NOT IN (SELECT DISTINCT "ID" FROM "Fileset") took 0:00:00:00.000
2019-11-17 21:06:44 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: DELETE FROM "ChangeJournalData" WHERE "FilesetID" NOT IN (SELECT DISTINCT "ID" FROM "Fileset")
2019-11-17 21:06:44 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: DELETE FROM "ChangeJournalData" WHERE "FilesetID" NOT IN (SELECT DISTINCT "ID" FROM "Fileset") took 0:00:00:00.000
2019-11-17 21:06:44 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: DELETE FROM "FileLookup" WHERE "ID" NOT IN (SELECT DISTINCT "FileID" FROM "FilesetEntry") 
2019-11-17 21:06:44 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: DELETE FROM "FileLookup" WHERE "ID" NOT IN (SELECT DISTINCT "FileID" FROM "FilesetEntry")  took 0:00:00:00.000
2019-11-17 21:06:44 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: DELETE FROM "Metadataset" WHERE "ID" NOT IN (SELECT DISTINCT "MetadataID" FROM "FileLookup") 
2019-11-17 21:06:44 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: DELETE FROM "Metadataset" WHERE "ID" NOT IN (SELECT DISTINCT "MetadataID" FROM "FileLookup")  took 0:00:00:00.000
2019-11-17 21:06:44 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: DELETE FROM "Blockset" WHERE "ID" NOT IN (SELECT DISTINCT "BlocksetID" FROM "FileLookup" UNION SELECT DISTINCT "BlocksetID" FROM "Metadataset") 
2019-11-17 21:06:44 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: DELETE FROM "Blockset" WHERE "ID" NOT IN (SELECT DISTINCT "BlocksetID" FROM "FileLookup" UNION SELECT DISTINCT "BlocksetID" FROM "Metadataset")  took 0:00:00:00.000
2019-11-17 21:06:44 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: DELETE FROM "BlocksetEntry" WHERE "BlocksetID" NOT IN (SELECT DISTINCT "ID" FROM "Blockset") 
2019-11-17 21:06:44 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: DELETE FROM "BlocksetEntry" WHERE "BlocksetID" NOT IN (SELECT DISTINCT "ID" FROM "Blockset")  took 0:00:00:00.002
2019-11-17 21:06:44 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: DELETE FROM "BlocklistHash" WHERE "BlocksetID" NOT IN (SELECT DISTINCT "ID" FROM "Blockset") 
2019-11-17 21:06:44 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: DELETE FROM "BlocklistHash" WHERE "BlocksetID" NOT IN (SELECT DISTINCT "ID" FROM "Blockset")  took 0:00:00:00.000
2019-11-17 21:06:44 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: INSERT INTO "DeletedBlock" ("Hash", "Size", "VolumeID") SELECT "Hash", "Size", "VolumeID" FROM "Block" WHERE "ID" NOT IN (SELECT DISTINCT "BlockID" AS "BlockID" FROM "BlocksetEntry" UNION SELECT DISTINCT "ID" FROM "Block", "BlocklistHash" WHERE "Block"."Hash" = "BlocklistHash"."Hash") 
2019-11-17 21:06:44 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: INSERT INTO "DeletedBlock" ("Hash", "Size", "VolumeID") SELECT "Hash", "Size", "VolumeID" FROM "Block" WHERE "ID" NOT IN (SELECT DISTINCT "BlockID" AS "BlockID" FROM "BlocksetEntry" UNION SELECT DISTINCT "ID" FROM "Block", "BlocklistHash" WHERE "Block"."Hash" = "BlocklistHash"."Hash")  took 0:00:00:00.008
2019-11-17 21:06:44 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: DELETE FROM "Block" WHERE "ID" NOT IN (SELECT DISTINCT "BlockID" FROM "BlocksetEntry" UNION SELECT DISTINCT "ID" FROM "Block", "BlocklistHash" WHERE "Block"."Hash" = "BlocklistHash"."Hash") 
2019-11-17 21:06:44 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: DELETE FROM "Block" WHERE "ID" NOT IN (SELECT DISTINCT "BlockID" FROM "BlocksetEntry" UNION SELECT DISTINCT "ID" FROM "Block", "BlocklistHash" WHERE "Block"."Hash" = "BlocklistHash"."Hash")  took 0:00:00:00.008
2019-11-17 21:06:44 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: UPDATE "RemoteVolume" SET "State" = "Deleting" WHERE "Type" = "Files" AND "State" IN ("Uploaded", "Verified") AND "ID" NOT IN (SELECT "VolumeID" FROM "Fileset") 
2019-11-17 21:06:44 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: UPDATE "RemoteVolume" SET "State" = "Deleting" WHERE "Type" = "Files" AND "State" IN ("Uploaded", "Verified") AND "ID" NOT IN (SELECT "VolumeID" FROM "Fileset")  took 0:00:00:00.000
2019-11-17 21:06:44 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteReader]: Starting - ExecuteReader: SELECT "Name", "Size" FROM "RemoteVolume" WHERE "Type" = "Files" AND "State" = "Deleting" 
2019-11-17 21:06:44 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteReader]: ExecuteReader: SELECT "Name", "Size" FROM "RemoteVolume" WHERE "Type" = "Files" AND "State" = "Deleting"  took 0:00:00:00.000
2019-11-17 21:06:44 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.BackendManager-RemoteOperationDelete]: Starting - RemoteOperationDelete
2019-11-17 21:06:44 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Delete - Started: duplicati-20191118T020418Z.dlist.zip.aes ()
2019-11-17 21:06:46 -05 - [Warning-Duplicati.Library.Main.BackendManager-DeleteRemoteFileFailed]: Delete operation failed for duplicati-20191118T020418Z.dlist.zip.aes with FileNotFound, listing contents
Duplicati.Library.Interface.FileMissingException: The requested file does not exist
   at Duplicati.Library.Backend.Backblaze.B2.Delete(String remotename)
   at Duplicati.Library.Main.BackendManager.DoDelete(FileEntryItem item)
2019-11-17 21:06:46 -05 - [Information-Duplicati.Library.Main.BackendManager-DeleteRemoteFileSuccess]: Listing indicates file duplicati-20191118T020418Z.dlist.zip.aes is deleted correctly
2019-11-17 21:06:46 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.BackendManager-RemoteOperationDelete]: RemoteOperationDelete took 0:00:00:02.276
2019-11-17 21:06:46 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.BackendManager-RemoteOperationNothing]: Starting - RemoteOperationNothing
2019-11-17 21:06:46 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.BackendManager-RemoteOperationNothing]: RemoteOperationNothing took 0:00:00:00.000
2019-11-17 21:06:46 -05 - [Information-Duplicati.Library.Main.Operation.DeleteHandler-DeleteResults]: Deleted 1 remote fileset(s)
2019-11-17 21:06:46 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.BackupHandler-CommitFinalizingBackup]: Starting - CommitFinalizingBackup
2019-11-17 21:06:47 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.BackupHandler-CommitFinalizingBackup]: CommitFinalizingBackup took 0:00:00:00.253

I will have to check. Last night I set up a VM that continually backed up to B2 once every minute and was able to produce the problem 3 times in about 8 hours. (This was without intentionally trying to break the network connection.)

Would be interesting if the problem is the same on my end…

Fortunately I was running Duplicati in debug mode from within Visual Studio, so it generated a nice Duplicati.debug.log for me. I searched it and found the three errors from B2 that correlate with the three warnings I had last night:

2019-11-17 23:50:02 -08 - [Retry-Duplicati.Library.Main.Operation.Backup.BackendUploader-RetryPut]: Operation Put with file duplicati-b5b6880529e34453cbbc88cb49c2d23cb.dblock.zip.aes attempt 1 of 5 failed with message: 503 - service_unavailable: c001_v0001105_t0033 is too busy
2019-11-18 00:45:02 -08 - [Retry-Duplicati.Library.Main.Operation.Backup.BackendUploader-RetryPut]: Operation Put with file duplicati-20191118T084500Z.dlist.zip.aes attempt 1 of 5 failed with message: 503 - service_unavailable: c001_v0001105_t0042 is too busy
2019-11-18 04:20:02 -08 - [Retry-Duplicati.Library.Main.Operation.Backup.BackendUploader-RetryPut]: Operation Put with file duplicati-20191118T122000Z.dlist.zip.aes attempt 1 of 5 failed with message: 503 - service_unavailable: c001_v0001130_t0051 is too busy

So this confirms to me that it’s definitely a B2-side issue: in my case, the load on their end is too high.

I think it makes more sense for Duplicati to NOT mark the job with a “warning” if it attempts to delete a file that it later confirms through a file listing has already been removed. I’ll look into it…

Thanks again for the pointer!

This looks trivial to fix. I’m going to try my idea and run hundreds of backups again tonight and see if it solves the issue.

A 503 is a well-documented, special-but-normal case where a load balancer sent the client elsewhere.

Some other services load-balance using equipment in their data center, so the client is uninvolved.

What To Do When You Get a B2 503 (or 500) Server Error

Calling the API

B2 Integration Checklist

Duplicati’s retry mechanism for 5xx errors on upload looks like it’s here. The above link also has a way to test it:

To test your code, add a X-Bz-Test-Mode: fail_some_uploads header to the b2_upload_file API call. This will cause intermittent artificial failures, allowing you to test the resiliency of your code.
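
A hypothetical sketch of applying that header to an upload request for testing (the helper name is mine, and how it would be wired into Duplicati’s B2 backend is an assumption; the header name and value come from the quoted B2 docs):

    // Sketch only: ask B2 to fail some uploads artificially so the retry/rename/delete
    // path can be exercised on demand.
    using System.Net.Http;

    static class B2TestModeSketch
    {
        public static void EnableArtificialUploadFailures(HttpRequestMessage uploadRequest)
        {
            uploadRequest.Headers.TryAddWithoutValidation("X-Bz-Test-Mode", "fail_some_uploads");
        }
    }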

Duplicati’s retry seems to work. The retry happens silently, right up until the delete code gives a noisy warning.
No special logs are needed to diagnose this; regular logs are fine once one knows that the issue exists. However, the lack of questions in the forum makes me suspect that perhaps the bug was added in a Canary release.

Excellent, I’m going to try it. I had no failures during the past 10 hours of continuous backups, so being able to artificially cause failures like this will be very helpful.

It could be that something in the B2 backend was recently changed to expose this issue, but I’m confident there is a logical place to fix this in BackendManager.cs around line 1100 (which hasn’t changed for years).

Instead of flagging a warning on deletion failure BEFORE verifying with a file listing, wait until the listing test is complete and it confirms the file is still present. If the file is NOT present, then issue no warning at all.
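
Here is a rough sketch of that idea against a simplified backend interface (the names are mine, not the actual BackendManager.cs code):

    // Rough sketch of the proposed change: only warn about a failed delete if the
    // listing shows the file is actually still there. Simplified interface; not the
    // real BackendManager.cs code.
    using System;
    using System.Collections.Generic;
    using System.Linq;

    interface ISimpleBackend
    {
        void Delete(string remotename);
        IEnumerable<string> List(); // remote filenames
    }

    static class DeleteSketch
    {
        public static void DeleteQuietlyIfAlreadyGone(ISimpleBackend backend, string remotename,
            Action<string> warn, Action<string> info)
        {
            try
            {
                backend.Delete(remotename);
            }
            catch (Exception ex)
            {
                // Don't warn yet: consult the listing first. If the file is already gone
                // (e.g. the failed upload never produced a remote file), treat the delete
                // as successful and log it as information only.
                if (backend.List().Contains(remotename))
                {
                    warn($"Delete operation failed for {remotename} and the file is still present: {ex.Message}");
                    throw;
                }
                info($"Listing indicates file {remotename} is deleted correctly");
            }
        }
    }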

Will submit a PR if it works.

My tweak to BackendManager.cs works, but I noticed we only get to this point in the code when a dlist file upload fails, never a dblock or dindex, which is quite curious. I’d like to figure out why that is and will do more investigation later…