Unclear file renaming reason and warning message on backup job

Is there any specific reason to rename the generated backup file and then try to remove it using its original/old name (which no longer exists under that name)? I’ve been running this backup job for 9 days and it hasn’t happened before. (The backup job runs daily.)

I don’t know if I should check something or not… I’m confused.

...

"2019-11-26 06:37:53 +03 - [Information-Duplicati.Library.Main.Operation.Backup.BackendUploader-RenameRemoteTargetFile]: Renaming \"duplicati-20191126T033000Z.dlist.zip.aes\" to \"duplicati-20191126T033001Z.dlist.zip.aes\"",

...

"Warnings": [
    "2019-11-26 06:38:06 +03 - [Warning-Duplicati.Library.Main.BackendManager-DeleteRemoteFileFailed]: Delete operation failed for duplicati-20191126T033000Z.dlist.zip.aes with FileNotFound, listing contents"
],

@hdogan - welcome to the forum!

This can happen when the upload fails for some reason and is retried. For dlist files the filename is altered slightly (the timestamp in the filename is incremented by 1 second) and the upload is tried again. At the end of the job, Duplicati tries to delete any partial/failed upload from the backend.
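To illustrate the naming scheme (this is just a Python sketch, not Duplicati’s actual code, and `bump_dlist_name` is a made-up helper), the retry simply bumps the timestamp embedded in the dlist filename by one second so the new upload never collides with the failed one:

```python
from datetime import datetime, timedelta

def bump_dlist_name(filename: str) -> str:
    """Increment the UTC timestamp embedded in a dlist filename by one second."""
    prefix, rest = filename.split("-", 1)   # "duplicati", "20191126T033000Z.dlist.zip.aes"
    stamp, suffix = rest.split(".", 1)      # "20191126T033000Z", "dlist.zip.aes"
    ts = datetime.strptime(stamp, "%Y%m%dT%H%M%SZ") + timedelta(seconds=1)
    return f"{prefix}-{ts.strftime('%Y%m%dT%H%M%SZ')}.{suffix}"

print(bump_dlist_name("duplicati-20191126T033000Z.dlist.zip.aes"))
# duplicati-20191126T033001Z.dlist.zip.aes  (matches the rename in your log)
```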

I suspect that for some types of upload failures there is no partial file on the backend, so the deletion attempt fails and produces a Warning in the job result.

I submitted a code change 6 days ago to resolve this annoyance: Only warn if backend delete failure is confirmed by drwtsn32x · Pull Request #3993 · duplicati/duplicati · GitHub
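The gist of that change, sketched very roughly below in Python (the `backend` and `log` objects are hypothetical stand-ins, not the actual C# from the pull request), is to check the remote listing when a delete fails and only warn if the file really is still there:

```python
def delete_leftover(backend, filename, log):
    """Delete a leftover remote file without raising spurious warnings (sketch only)."""
    try:
        backend.delete(filename)
    except FileNotFoundError:
        # The upload may have failed before anything reached the backend,
        # so confirm by listing before treating this as a real problem.
        remote_names = {f.name for f in backend.list()}
        if filename in remote_names:
            log.warning(f"Delete operation failed for {filename}, file still present")
        else:
            log.info(f"{filename} was never uploaded; nothing to delete")
```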

It should be included in the next Canary version, and soon hopefully the next Beta.

Out of curiosity: are you using B2 for the backend?

Thanks!

If so, I’m eagerly waiting for the next release. (It would be nice to have the rename reason or the upload errors logged - I couldn’t find any indication of an upload failure in the logs.)

Yes, we’re using B2 for the backend.

You may need to enable verbose logging to see it, but there will be indications that the upload failed and Duplicati retried.

I, too, saw this with B2. Network issues are not to blame in these cases. Instead it seems to be a ‘normal’ part of the B2 service: it gives the application an error code with the intent that it try again, which effectively redirects the write to a different location in B2’s infrastructure.
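In other words, a 503 “service_unavailable” response is B2’s way of saying “get a fresh upload URL and try again.” A rough sketch of that retry pattern, using hypothetical helper callables rather than any real client code:

```python
import time

def upload_with_retry(get_upload_url, put_bytes, data, name, attempts=5):
    """Retry a B2-style upload: a 503 means 'ask for a new upload URL and try again'.

    get_upload_url and put_bytes are hypothetical callables standing in for the
    b2_get_upload_url / b2_upload_file requests a real client would make.
    """
    for attempt in range(1, attempts + 1):
        url, token = get_upload_url()      # each call may hand back a different storage pod
        status = put_bytes(url, token, name, data)
        if status == 200:
            return
        if status == 503 and attempt < attempts:
            time.sleep(2 ** attempt)       # short backoff, then retry at the new location
            continue
        raise RuntimeError(f"upload of {name} failed with HTTP {status}")
```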

For more background, here’s a thread I started about this issue 9 days ago. @ts678 is the one who dug up the info about this being normal on B2:

Many thanks again for the detailed information. Do you suggest moving the backend somewhere else more stable, like AWS S3, or is it OK to keep using B2 and just ignore those kinds of warnings? (We back up production servers.)

You’re welcome!

Personally I have no problems with B2’s reliability and don’t hesitate to recommend them over AWS S3. This “error” condition is a normal part of how they designed their API, so it’s nothing to be concerned about.

Although it’d be better if the default log had this detail, one can set up a --log-file at --log-file-log-level=Retry:

2019-11-26 12:30:38 -05 - [Retry-Duplicati.Library.Main.Operation.Backup.BackendUploader-RetryPut]: Operation Put with file duplicati-20191126T173000Z.dlist.zip.aes attempt 1 of 10 failed with message: 503 - service_unavailable: c001_v0001113_t0003 is too busy

2019-11-26 18:30:34 -05 - [Retry-Duplicati.Library.Main.Operation.Backup.BackendUploader-RetryPut]: Operation Put with file duplicati-20191126T233000Z.dlist.zip.aes attempt 1 of 10 failed with message: 503 - service_unavailable: c001_v0001120_t0046 is too busy

were my two so far today from an hourly backup. Duplicati goes elsewhere, and the main impact is noise.
Are you running a Beta release? I’ve been wondering whether this is an old bug or a Canary addition.

I’ve set the file log level to Information. Is it possible to set it with multiple levels, like Information and Retry?

We’re using the Canary version on Ubuntu 18.04 LTS.

Only implicitly, because it’s a level. Retry also implies Information and others such as Warning and Error.
--log-file-log-level vs. --log-level (and --log-file-log-filter), and, how many log files are there, and, what are the different logs, and, what is the meaning of the levels is the summary of a good discussion about this.
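To illustrate what “because it’s a level” means, here is a rough Python sketch (the exact list and ordering of level names is partly assumed here; see the linked topic for Duplicati’s actual levels):

```python
# Hypothetical level ordering for illustration (least to most severe);
# see the linked topic for Duplicati's actual list of levels.
LEVELS = ["Profiling", "Verbose", "Retry", "Information", "Warning", "Error"]

def is_logged(message_level: str, configured_level: str) -> bool:
    """A message is written if it is at least as severe as the configured level."""
    return LEVELS.index(message_level) >= LEVELS.index(configured_level)

print(is_logged("Information", "Retry"))   # True:  Retry implies Information (and Warning, Error)
print(is_logged("Retry", "Information"))   # False: Information alone hides Retry messages
```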