Doesn't it re-sync?

Yes, I know, I figured that out, and it works when using shares instead. The only thing that bothers me is how it uploads in small files; I prefer actual files instead, like HBS3. I wonder how Duplicati will perform when reaching 50 TB of 50 MB files. Also, the way it uploads, it puts everything in one folder. Maybe it would be lighter if each folder contained its own files instead, but then I'd have to do each folder separately. It would be good if Duplicati uploaded into subfolders like:

Fansubs > completed > a > show name > *.zip
instead of everything in the base folder: fansubs > *.zip

Right? I could be wrong. At the moment, the way HBS3 uploads is how it is in the shares.

No but you already found out that it’s 100 KB by default. Blocksize is the value that can’t change later.
If you think your maximum backup ever will be 30 TB, blocksize should be something like 10 - 30 MB.
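As a rough back-of-the-envelope (my arithmetic, not an official recommendation), the number of blocks Duplicati has to track scales with backup size divided by blocksize:

```python
# Rough count of blocks Duplicati's local database would have to track.
TB = 1024 ** 4
MB = 1024 ** 2
KB = 1024

def blocks(backup_bytes, blocksize_bytes):
    return backup_bytes // blocksize_bytes

print(f"{blocks(30 * TB, 100 * KB):,}")  # default 100 KB -> ~322 million blocks
print(f"{blocks(30 * TB, 10 * MB):,}")   # 10 MB blocksize -> ~3.1 million blocks
print(f"{blocks(30 * TB, 30 * MB):,}")   # 30 MB blocksize -> ~1 million blocks
```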

Choosing sizes in Duplicati talks about sizes; however, remote volume size can change later (though repackaging via compact may or may not be done on existing files), so I'm worrying about it less now.

If you don’t like 50 MB remote volume size, make it bigger. Might be a good idea anyway for big backup.
The above guide on sizes explains why making it larger can hurt by making restores inefficient; however, keeping it small does make a lot of files in the backup folder. Maybe subfolders will exist someday.
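To put a number on "a lot of files" (rough figures of mine, assuming a backup near the 50 TB mentioned earlier):

```python
# Approximate dblock file count at the destination for a given remote volume size.
TB = 1024 ** 4
MB = 1024 ** 2

def dblock_files(backup_bytes, volume_bytes):
    return backup_bytes // volume_bytes

print(f"{dblock_files(50 * TB, 50 * MB):,}")   # default 50 MB  -> ~1.05 million files
print(f"{dblock_files(50 * TB, 500 * MB):,}")  # 500 MB volumes -> ~105,000 files
```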

You could consider splitting your backup into a few smaller ones, if there’s a reasonable way to do that.
This will reduce the need to increase all the sizes in order to ease strain on Duplicati, Google Drive, etc.
You know about Google Drive’s 750 GB per day upload limit, right? If you have fast Internet, beware of it.
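For a sense of scale (simple arithmetic on my part, nothing Duplicati or Google enforces beyond the daily cap itself):

```python
# Minimum days for the initial upload under Google Drive's 750 GB/day cap.
backup_gb = 50 * 1024            # ~50 TB backup
daily_cap_gb = 750
print(backup_gb / daily_cap_gb)  # ~68 days, no matter how fast your connection is
```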

You might be missing the reason for blocks. Among other things, it gets you deduplication and small uploads when small changes are made, as opposed to, say, uploading an entire slightly changed large file again.
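A toy sketch of that idea, using fixed-size blocks and hashes (illustrative only, not Duplicati's actual code):

```python
# Toy fixed-size-block deduplication: only blocks never seen before would need uploading.
import hashlib

BLOCKSIZE = 100 * 1024       # Duplicati's default block size
seen = set()                 # hashes of blocks already stored at the destination

def new_blocks(path):
    out = []
    with open(path, "rb") as f:
        while block := f.read(BLOCKSIZE):
            h = hashlib.sha256(block).digest()
            if h not in seen:
                seen.add(h)
                out.append(block)
    return out

# First backup: every block is new. After a small edit to a big file,
# only the blocks that actually changed come back from new_blocks().
```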

Features talks about this and other things. Block-based backup is what most advanced backups do now. Drawback is the complexity. Some people prefer to use lots of space and keep many copies of their files.

The backup process explained explains simply. How the backup process works provides technical detail.

If you’re saying that you want a zip file per source file, that’s totally contrary to a deduplicated block-based backup where a single .zip file has a group of blocks that may be part of a number of different source files.

If Duplicati is ever able to add subfolders, these won't likely be source folder names. Source folders are stored in a similar way to files, so they get timestamps, security attributes, and no character sensitivities depending on the destination (you can back up file and folder names the destination can't write). How does HBS3 back up different versions of a file? Complete files? What about the rest of the unchanged files? Some backups at least don't upload the complete set of files for every backup, but they might upload complete changed files.

What you’re talking about with keeping the source tree shape at the destination is more what a sync does.

Overview

Duplicati is not:

  • A file synchronization program.

I've got unlimited Google Drive, some trick behind it. It costs money, yes, but cheaper than anyone else as far as I know, and it's cloud too.

HBS3 uploads the file as it is, whatever it is, MP3 or MKV. If I change a file in a subfolder, it syncs it with the backup once a week. It's mirrored; it doesn't pack it.

Let me show you some screenshots



As I said, HBS3 mirrors, so if a file exists it skips it. That's the nice thing about it: it checks pretty fast, like 5 seconds per existing folder against Google Drive, and for any changes made it will update only that change.

If it’s mirrored on top of the old one (is it?) then any damage, mistakes, or removal of any file on Unraid that stays around long enough to go in the mirror means you lost the good file. This isn’t much of a backup IMO.

You may prefer versions so that you can undo problems. Versioning is something Duplicati does very well.

Understanding Cloud Backup vs. Cloud Sync is an article from a backup provider that explains differences.

If you currently want mirroring software, some people with Unraid (and others, of course) find rclone useful, however you would probably want a command line and some scripting to tell it what to do and when to do it.

A GUI option which it appears some Unraid users use is FreeFileSync, which I suspect is slower, but has a better Versioning design than rclone, from what I can see. I would, however, say backups have advantages.

You can consider what you want to do with the remote copy, given different ordinary and disaster scenarios.

> If it’s mirrored on top of the old one (is it?) then any damage, mistakes, or removal of any file on Unraid that stays around long enough to go in the mirror means you lost the good file. This isn’t much of a backup IMO.

You've got a point there. I never thought of it that way, because so far it has never happened on my QNAP.

> If you currently want mirroring software, some people with Unraid (and others, of course) find rclone useful, however you would probably want a command line and some scripting to tell it what to do and when to do it.
> A GUI option which it appears some Unraid users use is FreeFileSync, which I suspect is slower, but has a better Versioning design than rclone, from what I can see. I would, however, say backups have advantages.
> You can consider what you want to do with the remote copy, given different ordinary and disaster scenarios.

Well, I'm not a typical command-line user; I'm more of a GUI user.

Also, I read that "Choosing sizes in Duplicati" is pretty difficult: on restoring, Duplicati has to download the whole backup, if I understand it correctly. My backup will be around 48 TB as it is now. Does that mean if I have to restore 1 file, I have to re-download all 48 TB, since Duplicati doesn't know subfolders?

No, it downloads what it needs. If you restore everything, of course, it downloads a lot.
If it says anything about “whole backup” as opposed to less-versus-more, please cite.

Subfolders have nothing to do with how much it downloads. It needs blocks, that’s it.

EDIT:

Here is another explanation of the backup process. There were several others cited.
How the restore process works explains that, though some details are omitted.

What I mean with subfolders is that it puts the RARs/ZIPs in the folder it zipped the files from.

for example
/fansubs/completed/a/show name/*.zip (or .rar), so the search will be much faster since it won't be so big, right?

from what i see it do is
/fansubs/completed/a/*.zip (or .rar); basically everything in folder “a” is packaged into one large database of small files, right?

I don't mind that it zips files; it's just that it could probably be less stressful, with faster searching.

That’s one way to do it. Duplicati doesn’t do it like that. To quote more of the original sentence for context:

> basically everything in folder “a” is packaged into one large database of small files, right?

The second part is true (small files). The first part is not true (there is no database at the destination).

There is no search of the destination. The information on where a file’s blocks are is in the local SQL database.

Database management

Duplicati makes use of a local database for each backup job that contains information about what is stored at the backend.

The only thing considered a database is the above. The destination folder has none. It’s mostly blocks packaged into dblock files, each with a dindex file to know what’s where, and a dlist to give file info.

The above is explained in the provided links. If you mean the backup makes mostly fixed-size files, yes; however, everything in every source folder that represents a change is also packaged in the above way.
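If it helps to picture the destination, here's a toy way to tally those three file kinds in a backup folder (the suffixes here are illustrative; real names differ in detail, e.g. an extra .aes suffix when encryption is on):

```python
# Toy tally of the three Duplicati file kinds in a destination folder.
from collections import Counter
from pathlib import Path

def tally(dest_folder):
    kinds = Counter()
    for p in Path(dest_folder).iterdir():
        if ".dblock." in p.name:
            kinds["dblock (packed blocks)"] += 1
        elif ".dindex." in p.name:
            kinds["dindex (which blocks are in which dblock)"] += 1
        elif ".dlist." in p.name:
            kinds["dlist (file names/metadata per backup version)"] += 1
    return kinds

# A large backup shows huge dblock/dindex counts, plus one dlist per backup version.
```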

The main reason the .zip file is used might be that it’s a standard format for a set of compressed files.
Each block is a separate file inside the .zip file, and may or may not be compressed (if the source file is already in a compressed format, judged by known file extension, compressing again is usually not effective).

The name of the block inside the .zip file is a seemingly random name based on the hash of the block. There’s no searching by source file name because it’s not there. The only file names are in the dlist files.
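A toy sketch of how a hash-derived name carries no trace of the source file (the exact hash and encoding Duplicati uses may differ from this):

```python
# Toy example: a block's name comes from its content hash, not from any file name.
import base64, hashlib

block = b"any 100 KB chunk of any source file"
name = base64.b64encode(hashlib.sha256(block).digest()).decode()
print(name)   # e.g. 'Jr0n8...=' - reveals nothing about which file the block came from
```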