Is there any specific reason to rename the generated backup file and then try to remove it using its original/old name (which no longer exists under that name)? I've been running the backup job daily for 9 days and this hasn't happened before.
I don’t know if I should check something or not… I’m confused.
...
"2019-11-26 06:37:53 +03 - [Information-Duplicati.Library.Main.Operation.Backup.BackendUploader-RenameRemoteTargetFile]: Renaming \"duplicati-20191126T033000Z.dlist.zip.aes\" to \"duplicati-20191126T033001Z.dlist.zip.aes\"",
...
"Warnings": [
"2019-11-26 06:38:06 +03 - [Warning-Duplicati.Library.Main.BackendManager-DeleteRemoteFileFailed]: Delete operation failed for duplicati-20191126T033000Z.dlist.zip.aes with FileNotFound, listing contents"
],
This can happen when the upload fails for some reason and is retried. For dlist files the filename is altered slightly (the timestamp in the filename is incremented by 1 second) and the upload is tried again. At the end of the job Duplicati tries to delete any partial/failed upload from the backend.
I suspect for some types of upload failures, there is no partial file on the backend so the deletion attempt fails and produces a Warning result for the job.
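The rename in the log above follows that pattern exactly. As a rough illustration (this is not Duplicati's actual implementation, just a sketch of the filename change it logs), the retry produces a new name by bumping the embedded UTC timestamp by one second:

```python
import re
from datetime import datetime, timedelta

def bump_dlist_timestamp(name: str) -> str:
    """Increment the UTC timestamp embedded in a dlist-style filename
    by one second, e.g.
    duplicati-20191126T033000Z.dlist.zip.aes
      -> duplicati-20191126T033001Z.dlist.zip.aes
    Illustrative only; mirrors the rename seen in the job log.
    """
    m = re.match(r"^(duplicati-)(\d{8}T\d{6})(Z\..*)$", name)
    if not m:
        raise ValueError(f"not a dlist-style filename: {name}")
    ts = datetime.strptime(m.group(2), "%Y%m%dT%H%M%S") + timedelta(seconds=1)
    return m.group(1) + ts.strftime("%Y%m%dT%H%M%S") + m.group(3)
```

Feeding in the filename from the log yields the renamed one, which is why the later delete of the *old* name can come back FileNotFound if nothing was actually stored under it.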
If so, I'm eagerly waiting for the next release. (It would be nice to have the rename reason or the upload errors logged; I couldn't find any mention of an upload failure in the logs.)
You may need to enable verbose logging to see it, but there will be indications that the upload failed and Duplicati retried.
I, too, saw this with B2. Network issues are not to blame in these cases. Instead it seems to be a "normal" part of the B2 service: it returns an error code to the application with the intent that it try again, essentially redirecting the write to a different location in B2's infrastructure.
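A client is expected to handle that 503 by simply retrying, ideally with some backoff. Something like this, roughly (a hedged sketch, not Duplicati's code; `put` and `TransientUploadError` are hypothetical stand-ins for the backend call and its retryable failure):

```python
import random
import time

class TransientUploadError(Exception):
    """Hypothetical marker for a retryable upload failure,
    e.g. B2's 503 service_unavailable ('... is too busy')."""

def upload_with_retry(put, filename, data, attempts=10, base_delay=1.0):
    """Retry a backend PUT on transient 503-style errors.

    `put` is a hypothetical callable put(filename, data) that raises
    TransientUploadError when the service asks the client to try again.
    """
    for attempt in range(1, attempts + 1):
        try:
            put(filename, data)
            return
        except TransientUploadError:
            if attempt == attempts:
                raise  # out of retries, surface the failure
            # exponential backoff with jitter before the next attempt
            delay = min(base_delay * 2 ** (attempt - 1), 60)
            time.sleep(delay * random.uniform(0.5, 1.0))
```

With `attempts=10` this matches the "attempt 1 of 10" wording in the retry log lines below; only when all attempts fail does the job actually error out, otherwise the retries just show up as noise.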
Many thanks again for the detailed information. Do you suggest moving the backend somewhere more stable, like AWS S3, or is it OK to keep using B2 and just ignore those kinds of warnings? (We back up production servers.)
Personally I have no problems with B2's reliability and don't hesitate to recommend them over AWS S3. This "error" condition is a normal part of how they designed their API, so it's nothing to be concerned about.
2019-11-26 12:30:38 -05 - [Retry-Duplicati.Library.Main.Operation.Backup.BackendUploader-RetryPut]: Operation Put with file duplicati-20191126T173000Z.dlist.zip.aes attempt 1 of 10 failed with message: 503 - service_unavailable: c001_v0001113_t0003 is too busy
2019-11-26 18:30:34 -05 - [Retry-Duplicati.Library.Main.Operation.Backup.BackendUploader-RetryPut]: Operation Put with file duplicati-20191126T233000Z.dlist.zip.aes attempt 1 of 10 failed with message: 503 - service_unavailable: c001_v0001120_t0046 is too busy
Those were my two so far today from an hourly backup. Duplicati goes elsewhere, and the main impact is noise.
Are you running a Beta release? I’ve been wondering whether this is an old bug or a Canary addition.