How to back up files without splitting them?

In a test Duplicati job I turned off encryption and set the “remote volume size” to 1TB. After running the job, the files in the destination are not created 1:1 from the source (for example, source 1.txt to destination 1.txt). Instead, the destination contains many 7zip archives which I think hold my data, but the data is split up and doesn't look like the source data. So a few questions:

  1. How can I make the backup in the destination folder look the same as the source (1.txt to 1.txt, for example)?
  2. If I can't get identical files in the destination, how can I back up and later restore this data using only Duplicati? What is the minimal set of destination 7z files I need to restore my data? As I understand it, not all of the archives that are created contain data; some of them contain hashes or checksums.

Duplicati is not a sync tool. Your source files are split up into separate data blocks and packaged in ZIP or 7Z archives, to enable always-incremental backups, minimize used remote storage capacity, and provide other useful features. Read more about it here.
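As a very rough illustration of that design, here is a simplified sketch (illustrative only, not Duplicati's actual format or code): a block-based backup cuts every source file into fixed-size blocks, identifies each block by its hash, stores each unique block only once, and records in a manifest which blocks rebuild which file.

```python
# Simplified sketch of a block-based backup (illustrative only, NOT Duplicati's
# real format): cut files into fixed-size blocks, store each unique block once,
# and keep a manifest recording which blocks rebuild which file.
import hashlib
import json
import zipfile
from pathlib import Path

BLOCK_SIZE = 100 * 1024  # assumption for this sketch; roughly Duplicati's default

def backup(source_dir: str, volume_path: str) -> None:
    manifest = {}          # relative file path -> ordered list of block hashes
    stored_blocks = set()  # hashes already written to the volume (deduplication)

    with zipfile.ZipFile(volume_path, "w", zipfile.ZIP_DEFLATED) as volume:
        for path in sorted(Path(source_dir).rglob("*")):
            if not path.is_file():
                continue
            hashes = []
            with open(path, "rb") as f:
                while block := f.read(BLOCK_SIZE):
                    digest = hashlib.sha256(block).hexdigest()
                    hashes.append(digest)
                    if digest not in stored_blocks:   # store each block only once
                        volume.writestr(f"blocks/{digest}", block)
                        stored_blocks.add(digest)
            manifest[str(path.relative_to(source_dir))] = hashes
        # the manifest plays the role of a "file list": blocks -> files
        volume.writestr("manifest.json", json.dumps(manifest, indent=2))

# backup("C:/data", "volume-0001.zip")
```

Because identical blocks are stored once, unchanged files cost almost nothing on the next run; the price is that the destination no longer mirrors the source layout.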

With a 1TB remote volume size, this will result in a single archive (ZIP/7Z) of up to 1TB being uploaded to the remote location. I guess this is not what you want: restoring even a small file will result in downloading one or more very large (up to 1TB) archive files. Read this article for more information.
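To make that download cost concrete, here is a back-of-the-envelope sketch (the numbers are illustrative assumptions, not measurements): any volume containing a needed block must be downloaded in full, so huge volumes make small restores very expensive compared to the 50MB default.

```python
# Back-of-the-envelope sketch with assumed numbers: to read any block you must
# download the whole remote volume that contains it.
GiB = 1024 ** 3

def worst_case_download(file_size: int, volume_size: int) -> int:
    # ceiling division: how many volumes could the file's blocks be spread over
    volumes_touched = max(1, -(-file_size // volume_size))
    return volumes_touched * volume_size

# restoring a 4 KiB file from 1 TiB volumes vs. the 50 MB default volume size
print(worst_case_download(4 * 1024, 1024 * GiB) // GiB, "GiB downloaded")       # 1024 GiB
print(worst_case_download(4 * 1024, 50 * 1000**2) // 1000**2, "MB downloaded")  # 50 MB
```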

Did you set the upload volume format to 7Z? This format has a couple of known issues, so avoid 7Z and keep using the default ZIP format until the issues are addressed.

You need all DBLOCK and DLIST files to restore data. DINDEX files speed up local DB (re)creation dramatically. If one or more required files are lost, you can use the Recovery tool to restore as much data as possible from the remote files that you still have.
To restore, you need to install Duplicati, or you can restore files using this Python script without using Duplicati.
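If you want to sanity-check what is sitting in a destination folder, a small hedged sketch like the one below counts the three file types by the .dlist / .dblock / .dindex markers in their names (encrypted backups append an extra .aes extension; adjust if your file names differ).

```python
# Hedged sketch: count the three Duplicati file types in a destination folder.
# The ".dlist" / ".dblock" / ".dindex" markers are how Duplicati names its
# files; encrypted backups append an extra ".aes" extension.
from collections import Counter
from pathlib import Path

def inventory(destination: str) -> Counter:
    counts = Counter()
    for path in Path(destination).iterdir():
        if ".dlist." in path.name:
            counts["dlist (file lists - required for restore)"] += 1
        elif ".dblock." in path.name:
            counts["dblock (data blocks - required for restore)"] += 1
        elif ".dindex." in path.name:
            counts["dindex (indexes - speed up DB recreation)"] += 1
    return counts

# print(inventory(r"X:\backups\testjob"))   # hypothetical destination path
```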

2 Likes

But copying source 1.txt to destination 1.txt is not necessarily syncing. Software like Cobian, robocopy, xcopy, etc. just copies files, it doesn't sync them. Duplicati's approach makes the backup more complicated when you need to restore data.

My bad, I use 7-Zip to view the archives, but yes, I'm using the default *.zip for now for archiving in Duplicati.

Create incremental backups without .zip or any compression has my thoughts on this. Basically robocopy, xcopy, etc. have their approach (and can't do what Duplicati does), and (at least for now) vice versa. Programs typically do what they do, and try to do it well, without trying to offer huge variations on the design.

Possibly some of the programs I cite may help your use case, especially if you’re doing local backups, and NTFS links are an option. If you’re seeking completely independent multiple full copies, that’s a lot of space.
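For the local-backup, NTFS-links case, here is a hedged sketch of that idea (similar in spirit to rsync's --link-dest; the paths and function name are made up for illustration): each snapshot directory looks like a complete copy, 1.txt stays 1.txt, but unchanged files are hard-linked to the previous snapshot instead of copied, so they consume no additional space.

```python
# Hedged sketch of hard-link based "full copies" (similar in spirit to
# rsync --link-dest; paths and function name are hypothetical). Unchanged files
# are hard-linked to the previous snapshot, so each snapshot looks like a full
# copy while using space only for new or changed files. Requires a filesystem
# with hard-link support (e.g. NTFS); both snapshot trees must be on the same volume.
import filecmp
import os
import shutil
from pathlib import Path
from typing import Optional

def snapshot(source: str, prev_snap: Optional[str], new_snap: str) -> None:
    for src in Path(source).rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(source)
        dst = Path(new_snap) / rel
        dst.parent.mkdir(parents=True, exist_ok=True)
        prev = Path(prev_snap) / rel if prev_snap else None
        if prev and prev.is_file() and filecmp.cmp(src, prev, shallow=False):
            os.link(prev, dst)       # unchanged: hard link, no extra space
        else:
            shutil.copy2(src, dst)   # new or changed: real copy

# snapshot(r"C:\data", r"D:\backups\2024-01-01", r"D:\backups\2024-01-02")
```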

Duplicati heads heavily in the other direction by default, but increasing --blocksize can reduce deduplication, if desired; the main reason to do so would be performance. The core design of chunking the source for repackaging is always there…
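As a toy illustration of that blocksize/deduplication trade-off (made-up data, not Duplicati internals; the default block size is roughly 100KB), compare how much unique data two nearly identical file versions produce at different block sizes:

```python
# Toy experiment with made-up data (not Duplicati internals): two file versions
# that differ only in the first few bytes. Smaller blocks keep the unchanged
# part deduplicated; one huge block stores the whole file twice.
import hashlib
import os

def unique_bytes(versions, block_size):
    seen, total = set(), 0
    for data in versions:
        for i in range(0, len(data), block_size):
            block = data[i:i + block_size]
            digest = hashlib.sha256(block).hexdigest()
            if digest not in seen:
                seen.add(digest)
                total += len(block)
    return total

original = os.urandom(1024 * 1024)      # 1 MiB of incompressible data
edited = b"CHANGED!" + original[8:]     # same file with the first 8 bytes modified

for bs in (100 * 1024, 1024 * 1024):    # ~100 KB blocks vs 1 MiB blocks
    print(f"block size {bs:>8}: {unique_bytes([original, edited], bs):>8} bytes stored")
```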

Perhaps, but it also gives you something synchronization or copy tools cannot do: deduplication. This allows you to store a large number of backups efficiently. I have 450GB protected on one machine and now have 175 backups going back 1.5 years (when I started using Duplicati), yet it only takes 515GB of storage space to hold all these backups.

Each type of tool has its purpose. If you want basic copy/sync then Duplicati is not the right tool for that job.

2 Likes

I’m guessing that 1.5 year history also has multiple versions of the files (an actual HISTORY of the file changes) which is something else a sync or basic file copy tool can’t do.

@squidw, it sounds like you may be wanting something more like rsync (if using a basic destination) or Syncthing (if you can run Syncthing at the destination as well).

1 Like