I found a Great Tool

Hi, this great tool is what I have been looking for for a long time. Here is why:

  1. It’s Open Source!!!
  2. It has a small footprint AND an intelligent backup strategy, and is easy to use.
  3. It is multi-platform and multi-protocol capable.
  4. It saved me a lot of disk space.

I’m now using Duplicati 2 beta in production to back up my small-business machines, and those of relatives I feel responsible for, to my own NAS.

For 16 years I had simply been fully copying (yes) all my work directories, including system images, in a rotating job to six hard disks (1.5 to 2.0 TB each) and kept them in a remote safe. This sounds pretty old-fashioned, but I had bad experiences with paid commercial products like those from MS or Seagate, which were discontinued after a few years and left me with useless backups.
All these old hard disks with copied snapshots contained a lot of redundancy. Now with Duplicati the whole content fits safely in less than one TB of NAS space.
I created one backup job and changed the source for each hard disk’s backup run.
So I can retrieve any file of any snapshot in the respective directory structures I used at the time.
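
Just to illustrate why those redundant snapshots shrink so much: Duplicati deduplicates data at the block level, so identical content that shows up in many snapshots is stored only once. Here is a toy Python sketch of the general idea (made-up file data and a simplified fixed-size block scheme, not Duplicati’s real implementation):

```python
import hashlib, os

# Toy sketch of block-level deduplication (not Duplicati's actual code):
# files are split into fixed-size blocks and each unique block is stored
# only once, keyed by its hash, across all backup versions.

BLOCK_SIZE = 100 * 1024  # 100 KB, Duplicati's default block size

store = {}  # block hash -> block data, shared by all versions

def back_up(files: dict) -> None:
    """Split each file into blocks and keep only blocks not seen before."""
    for data in files.values():
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            store.setdefault(hashlib.sha256(block).hexdigest(), block)

photo = os.urandom(1_000_000)  # stands in for an unchanged photo file
back_up({"photo.jpg": photo})                                 # first snapshot
print(sum(len(b) for b in store.values()))                    # ~1,000,000 bytes
back_up({"photo.jpg": photo, "note.txt": b"small new file"})  # second snapshot
print(sum(len(b) for b in store.values()))                    # only a few bytes more
```

The second snapshot adds almost nothing, because the unchanged photo’s blocks are already in the store. That is essentially why many nearly identical hard-disk copies fit into so little NAS space.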
I’m already supporting Duplicati with $ and hope that this tool will have a long life!


Glad you like Duplicati 2.

A bit of advice before you hit this problem:
The default settings do not work that well for large backups (over a hundred gigabytes) when you try to restore them, so it might be wise to change your blocksize and volume size if you haven’t done so already.
https://duplicati.readthedocs.io/en/latest/appendix-c-choosing-sizes-in-duplicati/
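
To get a rough feel for what those two settings control, here is a back-of-the-envelope sketch in Python (the 500 GB total is just an example figure; blocksize and dblock-size are the advanced-option names for block size and remote volume size, if I’m not mistaken):

```python
# Back-of-the-envelope sketch: how block size and remote volume size scale.
# The 500 GB total is just an example figure, not anyone's real backup.

GB, MB, KB = 1024**3, 1024**2, 1024
total = 500 * GB

for blocksize, dblock_size in [(100 * KB, 50 * MB),   # the defaults
                               (1 * MB, 200 * MB)]:   # a larger choice
    blocks = total // blocksize        # entries the local database must track
    volumes = total // dblock_size     # dblock files stored on the backend
    print(f"blocksize={blocksize // KB:>4} KB, dblock-size={dblock_size // MB} MB"
          f" -> ~{blocks:,} blocks, ~{volumes:,} remote volumes")
```

With the defaults that is around 5 million blocks for 500 GB; a larger block size cuts that down an order of magnitude, which is what keeps the local database manageable.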

Search on this forum for “large backups” for more past discussion threads.

Thank you for your advice. By now I have already finished creating the backups of all those hard disks. I actually made the backups with the default settings, i.e.
Remote volume size: 50 MB
Backup retention: Keep all backups
The only extra I did was add a compression-extension-file to the standard list.

After I finished, the total backup size reached 568.99 GB / 29 versions. Among other things, the backup folder contains 11,659 *.dblock.zip.aes files of 50 MB each
and 11,688 smaller *.dindex.zip.aes files.
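
Quick arithmetic on those numbers, just as a sanity check (nothing Duplicati-specific, and assuming roughly one dindex file per dblock file):

```python
# Sanity check on the figures above: 11,659 dblock files of ~50 MB each.
dblock_files = 11_659
volume_size_mb = 50
backend_gb = dblock_files * volume_size_mb / 1024
print(f"~{backend_gb:.1f} GB of dblock data on the backend")  # ~569.3 GB
```

So the dblock files alone add up to roughly 569 GB on the backend, which is in the same ballpark as the reported total.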
The last few backup runs took quite some time (a few hours), but that does not matter to me.

To test the restore, I chose a version from the middle of the list and picked a folder with older photo files that had been in those backups almost from the beginning.
Again it took some time to restore, but the result was 100% accurate (I compared it with the originals), and only one warning appeared in the log, as follows:

Warnings:
2019-10-10 17:27:27 +02 - [Warning-Duplicati.Library.Main.Operation.RestoreHandler-MetadataWriteFailed]: Failed to apply metadata to file: “D:\Temp”, message: The process cannot access the file “D:\Temp” because it is being used by another process.
Errors:

It seems there was an access problem with the restore target folder, maybe caused by a second Duplicati process.
So I’m not worried, as I won’t back up more data with this job.
Of course I’m not a specialist in backup technology, but so far Duplicati makes a good impression on me.

Volume size usually only leads to problems if your back end has a limit on the number of stored files.

The block size default of 100 KB can lead to large backups having a huge number of blocks to track, which means a larger local database and slower queries against it, but other than that it does still work. I have a couple of 600-700 GB backups using the default 100 KB block size and everything works. (That being said, if I were to start over I’d probably choose a larger block size - you can only change that setting before your first backup.)
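
To put “huge number of blocks” into rough figures, here is a plain-division estimate (650 GB picked as a stand-in for a 600-700 GB backup, ignoring compression and overhead):

```python
# Rough count of block entries the local database has to track for a 650 GB
# backup at different block sizes (plain division, no overhead considered).
GB, KB = 1024**3, 1024
for blocksize_kb in (100, 500, 1024):
    blocks = 650 * GB // (blocksize_kb * KB)
    print(f"{blocksize_kb:>4} KB blocks -> ~{blocks:,} entries to track")
# 100 KB -> ~6.8 million entries; 1 MB (1024 KB) -> ~0.7 million
```

That difference in row count is basically where the slower database queries come from.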


Yes, in my case I won’t do it all again with a different setting.
For future jobs I have raised the blocksize value in the General Settings to 200 KB. This works as a reminder to me when I’m setting up a new backup job in the future :slight_smile: