Database recreate taking an infeasibly long time chasing S3 object locks

Technical Details:

Current version: 2.3.0.1_stable_2026-04-24
Upgrading from: 2.0.7.0_experimental_2023-05-25
Platform: Ubuntu 24.04 x86_64, Docker Compose

The Ask:

Can I disable Duplicati’s S3 behavior where it tries to acquire an object lock on a bucket that doesn’t have locking enabled?

Summary:

I’m a long-time user of Duplicati who is currently upgrading to a new hosting platform. This includes an update from 2.0.x to 2.3.x. I’m liking the improvements so far, but I’m trying to recreate the databases on the new system and encountering a strange issue.

My backups are currently in S3, having originally been created with the “S3-Compatible” destination. The backups vary in size, but for the sake of this example, we can assume a 410 MB set of documents with a 250 MB max block size.

I’d read in advance about the new object locking functionality with S3, so I ensured that the s3:GetObjectRetention permission was applied to the user in advance, but it’s worth noting that the S3 bucket in question does not have locking enabled. When I recreate the database, the status hangs on “Reading Lock Info” and takes about 45 minutes to complete. On log review I see a series of GetObjectLockFailed warnings saying that the process fails to resolve AWS credentials. However, I can say with certainty that the credentials are correct because new backups function as expected and the database eventually recreates with the correct file list. This problem becomes untenable quickly, however, at larger backup sizes. I attempted a rebuild with a 67 GB backup, and it failed to complete overnight. I have a 3.3 TB backup that likely would take months to complete.
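For a back-of-the-envelope sense of scale: if the recreate stalls for a fixed time on every remote volume, the added delay grows linearly with backup size. A rough Python sketch (the 120-second stall and 250 MB volume size are illustrative figures taken from this thread, not measured constants):

```python
def recreate_delay_hours(backup_bytes: int,
                         volume_bytes: int,
                         stall_seconds: float = 120.0) -> float:
    """Rough lower bound on the extra recreate time if every remote
    dblock volume stalls for `stall_seconds` while lock info is fetched."""
    volumes = max(1, -(-backup_bytes // volume_bytes))  # ceiling division
    return volumes * stall_seconds / 3600.0

# A 3.3 TB backup in 250 MB volumes stalls for about 440 hours
# (over 18 days), and dindex files would roughly double that.
print(round(recreate_delay_hours(int(3.3e12), 250_000_000)))  # 440
```

At that rate the 67 GB backup works out to roughly 270 volumes, i.e. around nine hours of stalling before any other work, which is consistent with it not finishing overnight.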

This seems vaguely similar to this issue reported in March, although the error is different. I suppose the big question is whether it’s possible to disable this behavior (or whether I’m missing something obvious).

I’m the reporter of that github issue. I was finally able to get my databases recreated by setting the following advanced settings on my backup:

  • Turn off refresh lock information during repair. I thought this was going to be the silver bullet, but sadly it didn’t actually seem to help: Duplicati still tried to fetch lock information.
  • Set the retry delay to 0.
  • Set HTTP time to wait between retries to 0, too.

Those last two are scary to me as someone who knows how networks work, but after setting these values I was able to recreate my database in finite time (several hours, I think?).

Yes, it is possible. You can set the advanced option --repair-refresh-lock-info=false.
This recreates the database without asking for locking information.

How did you try that? The current repair call does not include the backup settings (but maybe it should?); it does, however, include the “global” advanced settings.


Aha! I set this on the backup settings, not globally. That explains why it didn’t have any effect.
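A minimal Python sketch of the behavior described above (the real implementation is C#, and the names here are hypothetical): the repair operation merges only the application-wide advanced settings, so a per-backup --repair-refresh-lock-info never reaches it.

```python
# Hypothetical illustration of option precedence during repair.
global_settings = {"retry-delay": "10s"}
backup_settings = {"repair-refresh-lock-info": "false"}  # set per backup

# Repair assembles its options from the global settings only,
# so the per-backup override is silently ignored.
repair_options = dict(global_settings)

assert "repair-refresh-lock-info" not in repair_options
```

Setting the option globally (or passing it directly to the repair operation) sidesteps this.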

Hm, this is interesting. I wonder if some variation of this is causing my issue, because the repair operation seems to imply there are no AWS credentials set whatsoever. Disabling the behavior is definitely a good fix, but I’ll admit that I’d rather have Duplicati just grok that there’s no locking and truck through without hanging for 2 minutes per object.

Yes, I agree.

There are multiple reports of locking causing large delays, and I think most people are not using locking anyway.

The issue is that if you are using locking, you certainly want the lock info restored, as you will otherwise get errors when attempting to delete locked files.

However, we can look at the backup job the database is attached to, check whether --lock-duration is set, and use that to toggle --repair-refresh-lock-info when the option is not explicitly set.

That way it will work for most use cases without having to ask “Do you want to read lock info”.
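A minimal Python sketch of that proposed default (function and dictionary key names are hypothetical; Duplicati itself is C#). The option names --repair-refresh-lock-info and --lock-duration are the ones discussed above:

```python
def should_refresh_lock_info(options: dict) -> bool:
    """Decide whether a repair/recreate should fetch S3 object-lock info."""
    # An explicit --repair-refresh-lock-info always wins.
    if "repair-refresh-lock-info" in options:
        return options["repair-refresh-lock-info"].lower() == "true"
    # Otherwise, only refresh when the backup job appears to use
    # locking, signalled by --lock-duration being set on the job.
    return "lock-duration" in options

print(should_refresh_lock_info({}))                       # False
print(should_refresh_lock_info({"lock-duration": "2W"}))  # True
```

An explicit setting always wins, so users who deliberately enabled (or disabled) the refresh keep their behavior, while everyone else skips the lock queries automatically.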

Alternative suggestions appreciated :slight_smile: