Recommended settings for Google Team Drive

OMG, there is a file count limit on Google Drive?

First of all, make sure not to go with the god-mode (full access) token: it is a real hurdle to migrate away from (basically you have to re-upload everything) once Google takes away that everything-goes token possibility.

You will get Google Drive 403 errors if you use the latest beta like I do.
So I recommend --number-of-retries=10 --retry-delay=20
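
For reference, that looks roughly like this on the command line (just a sketch; the backup URL, AuthID and source path are placeholders, not my real setup):

    duplicati-cli backup \
      "googledrive://backup-folder?authid=PLACEHOLDER" \
      /source/data \
      --number-of-retries=10 \
      --retry-delay=20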

You wrote that you never want to restore; still, I will share my experience and my decision. It is not just the restore time that has to be taken into consideration, but also the db you have to work with.

I have ~5.3 TB backed up to Google Drive.
I did extra testing before my full backup because I had read horror stories about restoration. Yes, 1-2 days is totally OK for me, but I saw 10-day horrors. I had also read about the horrible db performance hit after a certain number of items; and the more blocks, the bigger the db.
The db size itself (although proportional to the amount of backed-up data) is not the big contributor to the problem, but the restoration is.

BTW, if it comes to that, I don't recommend restoring directly from the remote Google Drive backend. The restore process is a two-pass thing: first the file contents, then the metadata. The metadata amounts to veeeery small changes, yet that pass consults all the remote blocks, so in the end every block gets downloaded twice.
=> strategically, I will download the backend to local storage before restoring, so I only have to care about backup performance and the internal blocksize
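
Something like this is what I mean (a sketch only: rclone is just one tool that can mirror the remote folder, and the remote name and paths are made up):

    # 1. pull the whole remote backend down to local disk first
    rclone copy gdrive:backup-folder /mnt/restore-staging

    # 2. restore from the local copy instead of from Google Drive
    duplicati-cli restore "file:///mnt/restore-staging" "*" --restore-path=/mnt/restored-files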

The other trap is not backing up the db itself, or not accounting for its size. It is not catastrophic: if you only have your blocks, you can recreate the db, but it takes significant time. Also, the bigger the db, the more "jumps" (random seeks, I guess) it needs. Sorry for the word, I am no db expert.
=> blocksize strategy: striving for a smaller db means choosing a bigger blocksize
=> Duplicati runtime strategy: keep the db on non-spinning disks; as my environment is a NAS, I invested in NVMe for an SSD cache
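
In Duplicati options, those two strategies translate to something like this (again a sketch; 5 MB is the blocksize I tested below, and the NVMe path is of course specific to my NAS):

    duplicati-cli backup \
      "googledrive://backup-folder?authid=PLACEHOLDER" \
      /source/data \
      --blocksize=5MB \
      --dbpath=/mnt/nvme-cache/duplicati/backup.sqlite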

So I wanted to test how problematic it is if I use blocks of 5 MB, and to see whether it penalizes restoration. You can see in the table I attach that a bigger blocksize basically helps.
Notable differences:

  • with a 10 kB blocksize the db size was 852 MB; with 5 MB it dropped to 54 MB (see the quick math after this list).
  • introducing the SSD cache reduced the db repair time (at a fixed blocksize) to 66%.
  • changing the blocksize from 10 kB to 5 MB reduced the db repair time to 50%.
  • blocksize does not affect the “Target file is patched with some local data” part of the restore or the “restore integrity” part, but the SSD cache does.
  • metadata recording (the second phase of the restore) performs equally badly regardless of SSD cache or blocksize. It is still better when restoring from a local backend than from Google Drive, although I didn't save the Google Drive restore log back then.
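
A quick sanity check on the first bullet: the number of blocks scales inversely with the blocksize, so going from 10 kB to 5 MB blocks means roughly 500× fewer block records. The db "only" shrank about 16× (852 MB → 54 MB), presumably because the per-file records (paths, metadata) don't depend on blocksize and start to dominate at the bigger blocksize.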

Also, after deciding on the final parameters, I had an 8-10 day long initial backup of the 5 TB. The backup is 3.38 TB (my wife duplicates like mad), and the db is 469 MB.
