B2 now has an S3 compatible API

Check out this news

I guess that means one less backend to maintain a development path for, which means more time for feature development. Yippie

(provided that the semantics are sane and no oddness that leads to extra charges I suppose…)

Great! However, it looks like the S3 endpoint is only available for new buckets.


Yeah, that is a bit of a wrinkle. I can imagine that if someone already had a large backup, moving over would be a real hassle, and so the need for the Backblaze backend would continue (perhaps indefinitely!)

I think this is great, but mostly for applications that lack native B2 support. Duplicati’s native B2 support seems to be really good - I have no issues with it!

It’s possibly less hassle (though for what gain?) to use New Feature — Bucket to Bucket Copies.

This all occurs without having to download or re-upload any data.

In addition to the third-party tools they mention, it seems to be in their own command line tool:

Add support for b2 copy file #586

I haven’t tried any of this two-day-old feature yet. If anyone wants to, feel free to give it a test.
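
For comparison, this is roughly what a server-side copy looks like through an S3-compatible API using boto3. It's only a sketch: the endpoint URL, credentials, and bucket/key names are placeholders, and whether Backblaze's new S3 endpoint actually supports CopyObject is an assumption on my part, not something I've tested.

```python
# Rough sketch of a server-side copy via an S3-compatible API with boto3.
# The endpoint, keys, and bucket/object names below are placeholders, and
# CopyObject support on Backblaze's S3 endpoint is assumed, not verified.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.<region>.backblazeb2.com",  # placeholder endpoint
    aws_access_key_id="<keyID>",
    aws_secret_access_key="<applicationKey>",
)

# Copies src-bucket/backup.zip to dst-bucket/backup.zip on the server side,
# so nothing is downloaded or re-uploaded by the client.
s3.copy_object(
    Bucket="dst-bucket",
    Key="backup.zip",
    CopySource={"Bucket": "src-bucket", "Key": "backup.zip"},
)
```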
Duplicati actually has two different S3 backends to choose from: Amazon's and, more recently, MinIO's:


Some supposedly “S3 compatible” storage is less than perfectly compatible with Amazon S3:

Problem using Oracle S3 compatible storage

‘s3 compatible’ storage does not work with Signature version 4 #3970

Some compatibility issues might be due to the Amazon SDK, but I haven't heard of any MinIO test results.
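
To make the Signature Version 4 point concrete, here's a rough boto3 sketch (not anything Duplicati uses internally) that forces SigV4 against an arbitrary S3-compatible endpoint. The endpoint and keys are placeholders:

```python
# Sketch: check whether an "S3 compatible" endpoint accepts Signature
# Version 4 requests. Endpoint and credentials are placeholders.
import boto3
from botocore.client import Config

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-provider.com",  # placeholder
    aws_access_key_id="<keyID>",
    aws_secret_access_key="<applicationKey>",
    config=Config(signature_version="s3v4"),  # force SigV4 instead of SigV2
)

# A simple ListBuckets call; an incompatible endpoint typically answers
# with a SignatureDoesNotMatch or similar error instead of a listing.
print(s3.list_buckets()["Buckets"])
```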

On a personal level, none; but for the community, if people transition over, it would presumably save maintenance effort if the B2 backend can be deprecated.

https://usage-reporter.duplicati.com/ shows more B2 API users than S3 API users, and I shudder at the idea of leading them all through a technically challenging migration. Keeping B2 may be easier, though possibly when someone tests their S3 API, some S3 benefits may emerge that aren’t currently clear…

One technical flaw in the current B2 code is that retries aren't done quite the way Backblaze wants. S3 may fix that.

That isn’t quite a valid comparison, right?

If I wanted to use B2 as storage, I had to choose the B2 backend (and if I was using some other backend, I couldn’t choose B2). The usage-reporter results just say there are a lot of backends out there that aren’t B2 compatible…

That being said, I agree: a switch might be painful, and I guess that is how legacy systems end up hanging around forever…

It seemed valid to me, otherwise I wouldn’t have said it. Looking at the “2020 week 17” results, I see:

70084 Backblaze
67045 S3 compatible

The numbers look valid and support the concern about pushing users of a more-used option onto a less-used one.

S3 is pretty clearly going to stick around, in spite of occasional issues with not-quite-compatible “S3”.
One good/bad thing about the S3 libraries is they’re third-party, so Duplicati can’t really change them.
I hope someone volunteers to try Backblaze S3 while it's still in beta, when they may be more receptive to feedback.

Exactly, although sometimes a storage vendor forces the issue. Amazon Cloud Drive caused 2.0.4.23, which was released just to issue a warning that it was going away. The end of the original OneDrive API was messy.

Anyway, let me give it a shot and see how it goes…


Well, it works. Duplicati throws a “bucket name should prepend username” prompt. Not sure what that means, but after answering ‘no’ things could proceed.

Backup speed was pretty amazing: 30 Mb/s. It's scheduled for an overnight incremental, but I don't expect any problems since the main backup is fairly standard, and I will test restores tomorrow morning.

Working with Amazon S3 Buckets

Backblaze has the same “globally unique” restriction for B2 API buckets. I’d expect the S3 API to follow.

What you need to know about B2 Bucket names

Oh, I knew about the global namespace already, so I had a longish pseudo-random name. I think the prepending caused an error because the resulting name was too long, which is why I said no to the prepending in order to proceed.
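
In case it helps anyone, generating that kind of pseudo-random name is trivial; here's a sketch in Python. The 63-character limit is the standard S3 one, while B2's own naming limits are something I'd double-check against Backblaze's docs, so the lengths below are just conservative guesses:

```python
# Sketch: generate a pseudo-random, lowercase bucket name short enough that
# a prepended prefix shouldn't overflow the name-length limit. The 63-char
# S3 limit is standard; B2's own limits should be checked in Backblaze docs.
import secrets
import string

def random_bucket_name(prefix: str = "duplicati", length: int = 24) -> str:
    alphabet = string.ascii_lowercase + string.digits
    suffix = "".join(secrets.choice(alphabet) for _ in range(length))
    return f"{prefix}-{suffix}"

print(random_bucket_name())  # e.g. duplicati-k3f9...
```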

Good Afternoon all,

I am running Duplicati - 2.0.4.5_beta_2018-11-28 on my Asustor NAS.

Currently I have a backup job running to Amazon S3 every night perfectly.

With the announcement of Backblaze being compatible with S3, I tried setting up a new S3-compatible bucket, and I can connect to this bucket successfully using S3 Browser on Windows with the credentials.

When I try to set up a new backup job in Duplicati, the test connection hangs and never connects. Below are the settings I use:

In the boxes I am specifying my credentials instead of “keyID” :slight_smile:

I am not setting any advanced options.

@Kelly_Trinh I am selecting no when asked “bucket name should prepend username”

Does anybody have any ideas?

Thanks!

Ian