V2.1.0.3_beta Issue: b2_list_file_versions spike. Downgrade to v2.1.0.2_beta?

I’m affected by the spike in Backblaze B2 “b2_list_file_versions” Type C transactions detailed in this GitHub issue:

It looks like this is still being worked out, and it seems to me the stable v2.1.0.4 release (Release: 2.1.0.4 (Stable) 2025-01-31) is likely affected too, since the changelog only mentions a minor WebDAV timeout tweak relative to 2.1.0.3_beta.

What would it take to downgrade from v2.1.0.3_beta to v2.1.0.2_beta until a fix is released?

In that issue there was also a suggestion to try disabling the test downloads:

I just tried setting --backup-test-percentage=0 and it’s running now.

Maybe that would be easier for you than a downgrade. A downgrade should also work by just installing the old version since the database schema has not changed since 2.1.0.
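
For reference, that option just gets added to the backup command (a sketch only; the target URL and source path are placeholders for your own job):

  Duplicati.CommandLine backup <target-url> <source-folder> --backup-test-percentage=0

In the GUI, the same option can be added on the job’s advanced options page.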

I did see the suggestion about --backup-test-percentage=0, but there was no confirmation yet from the issue author that it worked for them.

However, there was a confirmation that downgrading worked, so I thought I would explore that option first, since I didn’t want to risk another multi-hour run.

Good to hear downgrading is simple between these versions; I’ll try running the v2.1.0.2_beta setup file. Thanks!

As mentioned, there are no steps required to switch between the two, but the issue was introduced with 2.1.0.1, so it likely will not make a difference.

You would need to downgrade to 2.0.8.1, which is described here:

But as @Jojo-1000 mentions, you should be able to fix it by setting the test percentage to zero, so you only test 3 files on each backup (with --backup-test-samples at its default of 1, one sample set is tested: a dlist, a dindex, and a dblock file).

I’m still on 2.1.0.3. I have now set backup-test-percentage to zero and verified that backup-test-samples was at its default of 1. The run now takes 35 minutes, versus 4+ hours when the percentage was 0.1%.

There are still some errors during verification with 0%, but down to 7 errors instead of 55 with 0.1%. Attaching logs from both.

logs.zip (4.9 KB)

This job used to take much less time, however: around 2–3 minutes on 2.0.8.1, and then 8 minutes on 2.1.0.2 with whatever the default value of backup-test-percentage was then.

I’ll try downgrading to 2.1.0.2 and then to 2.0.8.1.

Update after downgrading to 2.1.0.2: I kept the test percentage at 0. The run took around 6 minutes, with no errors.

That seems to support the original idea that something changed in 2.1.0.3.
Given all the download trouble, I wonder how well compact or restore would run.

EDIT 2:

The GitHub issue linked from the original post found this, matching my tests here:

UPDATE: confirmed that the error does not happen in v2.1.0.2_beta.

I have to say I had trouble figuring out where to find the 2.1.0.3 source on GitHub.
Generally I go for the tag, but should I be looking at a branch for releases?

“Fixed not creating releases on master when building from a branch” #5936 was me wondering why the source release didn’t seem to match the binary.

EDIT 3:

This is new in 2.1.0.3. I wonder if big backups need more than 30 seconds?

C:\Duplicati\duplicati-2.1.0.3_beta_2025-01-22-win-x64-gui>Duplicati.CommandLine help read-write-timeout
  --read-write-timeout (Timespan): Set the read/write timeout for the connection
    The read/write timeout is the maximum amount of time to wait for any activity during a transfer. If no activity is detected for this period, the connection is considered broken and the transfer is
    aborted. Set to 0s to disable
    * default value: 30s

The GitHub issue also said that the problem was associated with big backups.

Just backup a large dataset to Backblaze B2
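
(For concreteness, a minimal sketch of that repro; bucket, prefix, and credentials are placeholders, and I’m assuming Duplicati’s b2:// backend URL with its --b2-accountid / --b2-applicationkey options:)

  Duplicati.CommandLine backup "b2://<bucket>/<prefix>" "C:\BigDataset" --b2-accountid=<keyId> --b2-applicationkey=<applicationKey>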

Yes, due to the incorrect tagging of the 2.1.0.1, 2.1.0.2, and 2.1.0.3 versions, for this one you should go by branch.

Then it is pretty much confirmed that the problem is the timeout.
I checked the changelog again, and sure enough, the timeout code was introduced in 2.1.0.3.

@tejus if you follow the GitHub issue for this problem, the suggestion is to set --read-write-timeout=5m to expand it a bit, or --read-write-timeout=0 to disable it completely.

Setting it to 0 should make it behave the same as 2.1.0.2, but I would like to get confirmation that this workaround fixes the problem.
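
For anyone trying it, roughly (a sketch; the target URL and source path are placeholders from your own job):

  Duplicati.CommandLine backup <target-url> <source-folder> --read-write-timeout=5m
  Duplicati.CommandLine backup <target-url> <source-folder> --read-write-timeout=0

The first extends the inactivity window to five minutes; the second disables the timeout entirely, which should match the 2.1.0.2 behavior.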