This is the expected outcome, for the reasons explained: Duplicati has the best chance of uploading only a small set of changes when it can look at individual, mostly unchanged files rather than one big archive, where a small change can shift everything after it and dramatically increase the upload. Only you have the actual upload amounts, though; they show in a job log’s Complete log as "BytesUploaded", so you can compare.
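As a rough illustration of the shift problem (a sketch of fixed-block hashing in general, not Duplicati’s actual code): hash a file in 100 KB blocks, insert a single byte near the start, and almost none of the block hashes can be reused afterwards.

```python
import hashlib
import os

BLOCK_SIZE = 100 * 1024  # Duplicati's current default blocksize

def block_hashes(data: bytes) -> set[str]:
    """Hash fixed-size blocks the way a block-based deduplicator would."""
    return {
        hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
        for i in range(0, len(data), BLOCK_SIZE)
    }

original = os.urandom(10 * 1024 * 1024)           # stand-in for 10 MB of source data
shifted = original[:500] + b"X" + original[500:]  # one byte inserted near the start

old = block_hashes(original)
new = block_hashes(shifted)
print(f"blocks: {len(new)}, reusable: {len(new & old)}")
# Every block from the insertion point onward hashes differently,
# so almost the whole file gets uploaded again.
```

With many individual files, only the files that actually changed produce new block hashes; everything else matches the previous backup and is skipped.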
So this is a two-file backup, both .gz, which probably means deduplication can’t do much, for the reason explained above: everything shifts. The total is only slightly above a suitable size for the default blocksize (now 100 KB, going to 1 MB next), but you can probably speed things up a little with a larger blocksize anyway, since the main reason for a small blocksize is better deduplication in easier scenarios than this one.
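For a sense of scale, some back-of-the-envelope block arithmetic (purely illustrative, using the 278 GB figure below as a stand-in for the source size); fewer blocks means a smaller local database and less per-block bookkeeping:

```python
SOURCE_BYTES = 278 * 1024**3  # assumption: roughly the worst-case size discussed below

for blocksize in (100 * 1024, 1024 * 1024):  # current default vs. upcoming default
    blocks = SOURCE_BYTES // blocksize
    print(f"blocksize {blocksize // 1024:>5} KB -> about {blocks:,} blocks to track")
# 100 KB  -> roughly 2.9 million blocks
# 1 MB    -> roughly 285 thousand blocks
```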
So if the 278 GB is the worst-case upload size, that’s about 6 hours, so the math seems to match.
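Quick sanity check on that (just the arithmetic from the figures above; only you know the real link speed):

```python
# What upload rate does 278 GB in 6 hours imply?
upload_bytes = 278 * 1000**3   # worst-case upload from above
seconds = 6 * 60 * 60
mb_per_s = upload_bytes / seconds / 1e6
print(f"about {mb_per_s:.0f} MB/s, roughly {mb_per_s * 8:.0f} Mbit/s of sustained upload")
# ~13 MB/s, ~103 Mbit/s
```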
I use neither Immich nor PostgreSQL, but if the 160 GB tar.gz came from the database, you might be stuck with it. If it’s your own packaging, having more and smaller files might reduce the upload.