($50 Bounty) Attempting deletion of files before retention span is over

I will start this off with the fact that I need help with this as soon as possible. Whether this is a config issue on my end, a bug that needs to be fixed, a “feature” that needs a disable flag, or something else entirely, I do not know. I will pay a $50 bounty to whoever can help me, and donate an additional $50 to the Duplicati project (or, if you prefer, I can give the full $100 as a donation instead).

The issue I am having is the error: “Compliance for object does not allow deletion”.

Duplicati runs as a service on Windows 7 and 10 machines, and had been running without issue (aside from the normal warnings when backing up the entire C: drive) for just under 3 months.

I use an S3-compatible storage service that supports data immutability, which I have set to 1 month. Duplicati is set to retain backups for 2 months. To me, this should mean that no files get deleted until they are over two months old, and therefore well past the immutability period. However, starting about a week ago, a client notified me that the above error was occurring: Duplicati was trying to delete files less than one month old (I have verified that the config is set to two months).
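For reference, here is roughly what the bucket-side lock looks like when queried with the AWS CLI (bucket name and endpoint are placeholders; my actual provider has its own tooling, so treat this only as an illustration of the 1-month compliance setting):

```
REM Show the default object-lock retention on the backup bucket (placeholder names)
aws s3api get-object-lock-configuration ^
  --bucket my-backup-bucket ^
  --endpoint-url https://s3.example-provider.com
```

which returns something along the lines of:

```
{
  "ObjectLockConfiguration": {
    "ObjectLockEnabled": "Enabled",
    "Rule": {
      "DefaultRetention": { "Mode": "COMPLIANCE", "Days": 30 }
    }
  }
}
```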

If there is no way to fix this, I will unfortunately need to find a different backup system, as data immutability for a period is required for better ransomware protection.

Any help at all is appreciated, and if I need to post this as a GitHub issue instead, please let me know.

You would be absolutely correct, except you’re probably not thinking of Duplicati’s compaction process. Duplicati will sometimes repackage remote volumes to save space and reduce file count, and deleting the old volumes after repackaging is what triggers those early deletions.

You can disable this behavior with the --no-auto-compact option.
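If it helps, here is roughly what that would look like if the job were run from the command line (bucket, source path, and passphrase are placeholders; in the web UI you would add the same option under the job’s advanced options instead):

```
REM Sketch of a backup run with automatic compaction turned off (placeholder values)
Duplicati.CommandLine.exe backup "s3://my-backup-bucket/clients/pc1" "C:\" ^
  --keep-time=2M ^
  --no-auto-compact=true ^
  --passphrase=<your-passphrase>
```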

Thank you for the quick response.

I have updated my configs, but will not know if this fixes the issue until the files it is currently trying to delete have been deleted. Or do you recommend cutting my losses and starting a new backup?

When I see a backup go through without errors (could be up to a month…), I will mark your answer and award the bounty.

The setting will go into effect right away, unless you mean you have an active backup job currently running. In that case, Duplicati MAY still do a compaction at the end of that job. The next time a backup starts, though, it will know not to attempt a compaction.

Hope this helps!

I am still getting the following in the log: “removing remote file listed as Deleting: duplicati-b2*****************************7c.dblock.zip.aes” which then obviously fails repeatedly, and the backup stops with an error.

Is this not related to compaction then?

Oh, I see. Maybe because the deletions failed (after the prior compaction attempt), Duplicati will keep trying to delete the files it has marked as pending deletion. I’m not very familiar with that part of the code, so I’m not sure if that’s what’s going on.

Is there no way to temporarily allow the deletions? Starting over seems like overkill, but I suppose it would also solve the issue.

Unfortunately disabling immutability even for a few minutes would be a breach of contract with my clients, so I guess I will just have to wait it out and hope it works.

In any case, I will assume that this will eventually solve the issue and will mark your reply as the solution.

Please send me a DM (or reply, as I cannot seem to find a message feature) with how you want to handle the bounty.

Yes, I’m confident it will resolve your issue. But until those files are deletable, it sounds like Duplicati will refuse to do any more backups.

I’m just a volunteer on this forum and not a core member of the Duplicati team, but I think the information on this page is still accurate if you want to donate to the project. I have used the PayPal donation method myself in the past. I’m sure they will appreciate your support!

I think that’s right. I was using that behavior to my advantage in “403 error during compact forgot a dindex file deletion, getting Missing file error next run” #4129 to force a cleanup after the compact didn’t handle errors correctly.

This is a problem because if you never compact, the wasted space created by deleted files never goes away. Letting it grow without bound might be hard on the storage budget, and it clutters up Duplicati as well. Having some sort of maintenance window might have been possible, but not with a restriction like the one above.

If you’re wondering about the compact concept and how things are stored, see How the backup process works, which explains how features such as uploading only changes and deduplicating similar file versions work. The problem is that these features don’t fit your model well, because of the heavy processing that happens between the source and the backup.

So yes, you can cover this largely by never running compact, but is the cost of indefinite buildup tolerable?
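If the buildup ever does become intolerable, one possible compromise (just a sketch with placeholder paths, and it only works once everything a compact would delete has aged past the one-month lock) is to keep --no-auto-compact on the scheduled jobs and occasionally run a manual compact yourself:

```
REM One-off manual compact; --threshold is the wasted-space percentage that makes a volume eligible
Duplicati.CommandLine.exe compact "s3://my-backup-bucket/clients/pc1" ^
  --dbpath="C:\path\to\jobdb.sqlite" ^
  --passphrase=<your-passphrase> ^
  --threshold=25
```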

There’s another time when a delete “might” be tried. If a file upload returns an error, Duplicati retries with a new file name, and it may try to delete the file that returned the error. There’s some logic that checks on the file first, but I’m not sure whether it’s delete-first or look-first. Just saying this is potentially another issue (though not a proven one yet).