One 30GB copy with 50GB MEGA storage

I need to back up a 30 GB music folder to a 50 GB MEGA account, and I just need one copy to exist on MEGA. But will it work? I imagine it will upload the new copy first and then delete the older one, which would momentarily require 60 GB for new + old, and I only have 50.

Also, right now I have the folder uploaded directly to MEGA via MEGAsync. The issue with MEGAsync, and the reason I want to use Duplicati, is that MEGAsync is unreliable: there is no way to set up a one-way sync, so it keeps re-downloading files I deleted or changed.
So since I already have the files uploaded, is it possible to skip volumes and just keep working with the plain files that already exist on MEGA? There isn’t much use for volumes anyway, since MP3s are already compressed. That would also eliminate the issue where 50 GB isn’t enough to back up a 30 GB folder.

No, there is no way to configure Duplicati to replicate your files as-is. It’s backup only.

Welcome to the forum @carnap2

What older one? The MEGAsync copy? Duplicati won’t delete source files; it backs them up to the destination.
50 GB of free space should be enough to back up the 30 GB of source files if they’re not changing a lot.
Changes are uploaded at each backup and kept until they’re no longer needed. Space usage grows.
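To put rough numbers on that growth (purely illustrative assumptions, not measurements of your data):

```python
# Back-of-envelope estimate, assuming only changed blocks are added per run
# and that retention eventually prunes old versions. All numbers are made up;
# real growth depends on how much the music folder actually changes.
initial_backup_gb = 30      # first full backup of the source folder
changed_per_run_gb = 0.2    # assumed new/changed data per backup run
runs_kept = 50              # backup versions retained before pruning

used_gb = initial_backup_gb + changed_per_run_gb * runs_kept
print(f"Destination usage: roughly {used_gb:.0f} GB of the 50 GB quota")
```

If the folder mostly just gains new albums, the per-run change is basically the size of the new files.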

If you prefer sync over backup, some programs can deal with MEGA. Rclone can (very configurably).
There’s no GUI or scheduler though. If you want to look for a different sync program, one might exist.
Note that sync may propagate local damage (e.g. by malware) to the remote copy, so it’s not as safe.

What I mean is: let’s say Duplicati made a 30 GB backup to MEGA, and then something changed in my local folder, so it has to upload my updated 30 GB local folder to MEGA while the old 30 GB backup is still there. At some point it would be using 30 + 30 GB, so how will that work? I imagine it can’t delete the files first, because that would mean I lose my backup if my internet goes out.

That’s not how it works. Just changes are uploaded. Changes are detected with default 100 KB blocks.
“Something changed” could upload roughly as little as a new dlist file that IDs the files and the blocks.
Run a small test, and on the home page see if backup size doubles. Small change gives small change.
This is exactly why volumes are used. They hold a batch of the individual blocks changed since the last backup.
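As a rough sketch of the idea (fixed-size block deduplication in general, not Duplicati’s actual code or storage format):

```python
import hashlib

BLOCK_SIZE = 100 * 1024  # stand-in for Duplicati's default 100 KB block size

def block_hashes(path):
    """Yield the hash of each fixed-size block of the file."""
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            yield hashlib.sha256(block).hexdigest()

def blocks_to_upload(path, already_stored):
    """Return only the blocks whose hashes are not already at the destination."""
    return [h for h in block_hashes(path) if h not in already_stored]

# A tiny tag edit inside one file changes only the block(s) covering the tag;
# every unchanged block is referenced again instead of being uploaded again.
```

The changed blocks are what get packed into the uploaded volumes, alongside the small dlist.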

Features

Incremental backups
Duplicati performs a full backup initially. Afterwards, Duplicati updates the initial backup by adding the changed data only. That means, if only tiny parts of a huge file have changed, only those tiny parts are added to the backup. This saves time and space and the backup size usually grows slowly.

The backup process explained

When a backup is made, only changed parts of files are sent to the destination.

How the backup process works (more technical)

This is all block based. There are no file copies to delete. Often a changed file will have some old data along with some new data. The new data generates new blocks. The old data might use the old blocks.

If you don’t opt to keep all versions forever, version deletions may eventually make some blocks no longer needed.
Compacting files at the backend will eventually run to repackage the still-used blocks into new volumes.
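In illustrative pseudo-Python (not Duplicati’s implementation; the threshold and data structures here are assumptions):

```python
WASTE_THRESHOLD = 0.25  # assumed fraction of dead space that triggers a compact

def volumes_to_compact(volumes, used_blocks):
    """volumes: {volume_name: [block_hash, ...]};
    used_blocks: hashes still referenced by any remaining backup version."""
    wasteful = []
    for name, blocks in volumes.items():
        dead = sum(1 for h in blocks if h not in used_blocks)
        if blocks and dead / len(blocks) >= WASTE_THRESHOLD:
            wasteful.append(name)
    return wasteful

def compact(volumes, used_blocks):
    """Move still-used blocks out of wasteful volumes into one new volume."""
    kept = []
    for name in volumes_to_compact(volumes, used_blocks):
        kept.extend(h for h in volumes.pop(name) if h in used_blocks)
    if kept:
        volumes["repacked-volume"] = kept  # uploaded before old volumes are removed
    return volumes
```

As far as I know the repacked volume goes up before the old ones are deleted, so compacting briefly needs a bit of extra space at the destination, on the order of a volume or two rather than another full copy.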

Thank you, I am still a bit concerned about some use cases. If I mass-update idv3 tags, that will make tiny changes to most files in my folder; how would Duplicati react to that? And if I add a file to the middle, wouldn’t that shift all the blocks after it, so they have to be uploaded again? It’s like there is a sequence 123 456 789 0ab, and then I add a new thing in the middle and it becomes 123 4n5 678 90a, so all the blocks after it get shifted forward by one.

It depends on what that does to the file, and I’ll leave it to you to research further or test a sample file.
I’m not finding anything on idv3, but if you mean ID3v1 or ID3v2, the Wikipedia ID3 article says ID3v1 puts its changes at the end of the file (which Duplicati would like), while ID3v2 sits at the start, which throws off the alignment of everything after it, as would anything inserted into the actual middle. Either of those would cause a lot of “new” blocks.
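Here’s a toy way to see the difference, with the block size shrunk from 100 KB to 4 bytes so the effect is visible (purely illustrative, not Duplicati code):

```python
import hashlib

BLOCK = 4  # toy stand-in for the 100 KB block size

def block_hashes(data):
    return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()[:8]
            for i in range(0, len(data), BLOCK)]

audio = b"AAAABBBBCCCCDDDD"      # pretend this is the MP3 audio data
with_id3v1 = audio + b"TAG!"     # tag appended at the end (ID3v1-style)
with_id3v2 = b"ID3" + audio      # tag inserted at the start (ID3v2-style)

print(block_hashes(audio))       # baseline blocks
print(block_hashes(with_id3v1))  # same leading blocks, one new block at the end
print(block_hashes(with_id3v2))  # every block's content shifted: all hashes new
```

So an appended change costs one new block, while anything that shifts alignment from the start (or the middle) makes every following block look new, which is the 123 456 789 worry you described.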

I’m not familiar with MP3 with ID3. If this is basically audio editing of an album stored in a single MP3, with ID3 used to identify track boundaries, then I guess everything slides over and throws off deduplication. Possibly there’s a more subtle way: for example, one could add an entry in the middle of the track list while putting its data at the end of the MP3 (or maybe that would confuse MP3 players, I don’t know).