Google Cloud costs

I want to use Google Cloud with Duplicati. Details of Google’s pricing are here: https://cloud.google.com/storage/pricing

You pay for at-rest storage, for operations, and for egress. So my question is: how many operations and how much egress can I expect Duplicati to generate? Can I optimize it?

Obviously I’m not including the cost of restoring data, just incremental backups.

For example using Archive storage in Europe:
Storage: $0.0012 per GB per month
Operations: $0.50 per 10,000
Egress: $0.17 per GB

Note that the egress cost includes both the network cost and the retrieval cost!

My understanding is that Duplicati will download one remote volume for verification each time the backup is run, so if I used a remote volume (dblock) size of, say, 512MB and ran the backup once a week, that would be roughly 2GB of downloads a month, costing about $0.34 in egress fees, plus negligible operation fees and the data storage fee.
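Here is my back-of-the-envelope math in Python. The 500GB backup size and the number of operations per run are guesses on my part; the prices are the Archive figures above:

```python
# Rough monthly cost estimate for a weekly Duplicati backup to Archive storage.
# Backup size and operations per run are assumptions, not measured values.
STORAGE_PER_GB_MONTH = 0.0012   # $/GB/month, Archive storage in Europe
OPS_PER_10K = 0.50              # $ per 10,000 operations
EGRESS_PER_GB = 0.17            # $/GB, network egress + retrieval

backup_size_gb = 500            # assumed total size at the backend
runs_per_month = 4              # backup once a week
test_download_gb = 0.5          # one 512MB remote volume verified per run
ops_per_run = 50                # rough guess: uploads, lists, the test download

storage = backup_size_gb * STORAGE_PER_GB_MONTH
egress = runs_per_month * test_download_gb * EGRESS_PER_GB
operations = runs_per_month * ops_per_run / 10_000 * OPS_PER_10K

print(f"storage    ${storage:.2f}/month")     # $0.60
print(f"egress     ${egress:.2f}/month")      # $0.34
print(f"operations ${operations:.2f}/month")  # ~$0.01
```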

That seems quite cheap. Have I got that right?

Why not use Jottacloud personal storage?
I’m using that. Pretty fast, unlimited storage. Works great with Duplicati.
It's a flat rate, so there are no surprise bills.

That looks interesting, but the website is rather vague. They say they can flag accounts for “extreme bandwidth/storage” usage but don't indicate what counts as extreme. They also throttle uploads after 5TB, although I probably don't need more than that.

How much data are you storing there? What kind of bandwidth are you using? If it's “unlimited” I'd be tempted to run backups on a much more frequent schedule, with more frequent data checks.

I'm currently storing 11TB at Jottacloud. I got the message about the bandwidth being lowered, but other than that, no issues.


Welcome to the forum @aestetix

Compacting files at the backend will cause some egress cost unless you set --no-auto-compact=true; however, setting that would raise storage cost, because wasted space from deleted backups will grow.

The COMPACT command can be tuned. Egress is unpredictable and depends on how the source files change. You can try to optimize it, but it's essentially a tradeoff between more egress and waste buildup…
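To make that tradeoff concrete, here is a rough sketch. The backend size, the monthly waste, and the guess that a compact downloads about twice the waste it reclaims are illustrative assumptions, not measured Duplicati behaviour; the 25% wasted-space limit is the threshold option you can tune:

```python
# Rough 12-month comparison of auto-compact vs. letting waste build up,
# using the Archive prices quoted earlier. Workload numbers are assumptions.
STORAGE = 0.0012     # $/GB/month
EGRESS = 0.17        # $/GB (network + retrieval)

monthly_waste_gb = 20    # assumed space orphaned each month by deleted versions
months = 12

# Option A: --no-auto-compact=true, waste just accumulates
waste, cost_a = 0.0, 0.0
for _ in range(months):
    waste += monthly_waste_gb
    cost_a += waste * STORAGE            # keep paying storage on the growing waste

# Option B: auto-compact when waste reaches 25% of an assumed 500GB backend.
# Assume each compact downloads roughly twice the waste it reclaims (the waste
# plus live blocks packed in the same volumes) -- a guess, not a Duplicati figure.
backend_gb, threshold = 500, 0.25
waste, cost_b = 0.0, 0.0
for _ in range(months):
    waste += monthly_waste_gb
    cost_b += waste * STORAGE
    if waste >= backend_gb * threshold:
        cost_b += 2 * waste * EGRESS     # download volumes in order to repack them
        waste = 0.0

print(f"no auto-compact: ${cost_a:.2f} over {months} months")   # ~$1.87
print(f"auto-compact:    ${cost_b:.2f} over {months} months")   # ~$48.63
```

With these made-up numbers the egress from compacting dwarfs the storage cost of the waste, so on Archive-class pricing it can pay to raise the threshold or even disable auto-compact, at the price of storing more waste.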

You get some default logging about compact, and you can set up a log file if you really want to study it.


2020-02-11 20:43:17 -05 - [Information-Duplicati.Library.Main.Database.LocalDeleteDatabase-CompactReason]: Compacting because there is 26.83% wasted space and the limit is 25%
...
2020-02-11 21:27:11 -05 - [Information-Duplicati.Library.Main.Operation.CompactHandler-CompactResults]: Downloaded 15 file(s) with a total size of 494.30 MB, deleted 30 file(s) with a total size of 494.87 MB, and compacted to 2 file(s) with a size of 9.28 MB, which reduced storage by 28 file(s) and 485.59 MB
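
If you do set up a log file, a few lines of Python can total what compact has downloaded over time. The regex below targets the CompactResults message format shown above, and the log path is just a placeholder:

```python
# Sum the "Downloaded N file(s) with a total size of X" figures from a
# Duplicati log to estimate compact-related egress. Path is a placeholder.
import re

PATTERN = re.compile(r"Downloaded (\d+) file\(s\) with a total size of ([\d.]+) ([KMG]B)")
TO_MB = {"KB": 1 / 1024, "MB": 1.0, "GB": 1024.0}

total_files, total_mb = 0, 0.0
with open("duplicati.log", encoding="utf-8") as log:
    for line in log:
        m = PATTERN.search(line)
        if m:
            total_files += int(m.group(1))
            total_mb += float(m.group(2)) * TO_MB[m.group(3)]

print(f"{total_files} file(s) downloaded by compact, {total_mb / 1024:.2f} GB total")
print(f"~${total_mb / 1024 * 0.17:.2f} in egress at $0.17/GB")
```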

If the elapsed time worries you, it's because this test is throttled to 200KB/sec, so things take a while…
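As a quick sanity check (assuming the throttle applies to the downloads and the re-uploads alike), the elapsed time between the two log lines above matches that limit:

```python
# Does ~44 minutes between the CompactReason and CompactResults lines fit a
# 200 KB/sec throttle? Figures are taken from the log excerpt above.
downloaded_mb = 494.30
uploaded_mb = 9.28          # the compacted volumes written back
throttle_kb_s = 200

minutes = (downloaded_mb + uploaded_mb) * 1024 / throttle_kb_s / 60
print(f"~{minutes:.0f} minutes at {throttle_kb_s} KB/s")   # about 43 minutes
```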


How slow are we talking?

I'm currently rebuilding a back-up, so I can't run it for you right now.
I have a 100 Mbit/s upload speed; at first Duplicati was using around 80 Mbit/s, if I remember correctly. (CPU/hard disk was the limit, I think. You can run a speed test to Jottacloud to see what you are able to send to it.)
I'd have to check, but I haven't noticed a huge reduction in the bandwidth cap. It's not brought down to a couple of Kbit/s or anything; I think it's more like 10-15 Mbit/s.

But I can check it later today for you after the rebuild of a different back-up is done.

edit:
According to this article, the bandwidth is reduced in proportion to the amount of data you have above the 5TB limit.

“The reduction in the upload speed depends on the total amount of storage. A user with just over five terabytes will hardly notice a difference, while a user with 20 terabytes will notice a larger reduction in the upload speed,” he adds.

Source: https://norwaytoday.info/news/jottacloud-5-tb/

Thanks. 10 Mbit/s would be great compared to what SpiderOak gives you all the time.

Just did a test. I created a 1.1GB file (just zipped a bunch of data together).
Still getting peaks at 80 Mbit/s. :smiley: :sunglasses: Haha
To put it in a time perspective: checking a whole 108GB archive (where basically nothing has changed) plus uploading the new 1.1GB file took only 5 minutes.

So no wonder I don't notice any difference…

PC specs:
Windows 10 Pro 1909
Intel i5-4440 @ 3.10GHz
File was on a SATA-SSD

:face_with_monocle: :nerd_face: Here’s the interesting part:

Jottacloud has multiple types of storage locations in one account:

  • Synced (I think for syncing photos from the phone app)
  • Backed up (back-ups from PCs made with the Jottacloud application; you just see plain files there)
  • Archive (this is where Duplicati drops its files)
  • Shared

So my guess is that Jottacloud is not reducing the upload speed when the destination is an Archive location! :thinking:
I tested the same file and uploaded it as a regular file into a folder, and there I got a 6-7 Mbit/s upload speed. So that upload speed is clearly reduced.

and pssssst: Don’t tell Jottacloud!!! :zipper_mouth_face:
