Backing up Windows Server Folders to FTPS and Amazon S3/Glacier

I am looking for a solution to back up folders residing on a Windows Server 2012 machine to both an FTPS (i.e. FTP over TLS) server and also to Amazon S3.

The backups will need to run on a schedule and be encrypted both in transit and at rest (the files themselves).

Will Duplicati potentially do this?

Any reason not to choose this over CloudBerry (which will save $$)?

Yes, Duplicati works on Windows Server. You should run it as a service, which does require a couple of extra steps.

And yes, you can target two different locations. You would have to set up two different backup jobs though as a single job cannot target more than one destination.

Alternatively, you could set up just one backup job and then run a different process that synchronizes backup files in one target to the other. (If FTPS is on a server you manage, you could use rclone to sync to S3.)
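For example, a minimal sketch of that second approach, assuming rclone is installed on the FTPS server and an S3 remote named "s3" has already been configured with "rclone config" (the folder path and bucket name below are placeholders):

```python
# Hypothetical sketch: run on the FTPS server itself and mirror the folder that
# Duplicati writes to into an S3 bucket using rclone.
import subprocess

subprocess.run(
    ["rclone", "sync",
     r"D:\Backups\duplicati",            # local folder the FTPS server exposes (placeholder)
     "s3:my-backup-bucket/duplicati"],   # S3 bucket/prefix (placeholder)
    check=True,                          # raise if the sync fails
)
```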

I have CloudBerry Desktop and it does support a single backup job that can target two locations. I think they call it Hybrid Backup. But as far as I know it backs up locally first, then transfers the local backup to remote storage.

Thanks. Actually it's not that I wish to back up to two locations at the same time; it's one location on some days and the other on the other days. I want to back up as follows:
Full backups Mon/Wed/Sat, overwritten each week (i.e. no incremental backups, as the data size is quite small, probably 200 GB each time).
Presumably that is possible?

I’m reading the odd review saying that Duplicati is good when it works, but when something goes wrong (I assume they are talking about a corrupted file or database) the recreation of the database takes days and the backups beyond the corruption are not recoverable. Is there any truth to this, and is it still the case for the current versions?

(I will bear in mind that this is a free product, and even the paid ones certainly have their faults.)

Many thanks

The old backup terms “full”, “differential”, and “incremental” do not really apply to many modern backup systems, Duplicati included.

Every backup is effectively a “full” backup because you can restore any file from any backup.
But in another sense every backup is an “incremental” backup, because only changed blocks of files are transferred to backup storage; unchanged blocks are never sent again.

The very first backup you do will take some time, as it has to back up the entire 200 GB. But after that, backups will be quite fast, assuming the amount of data that changes is small. In fact, subsequent backups are so fast that it opens up the possibility of running backups much more frequently than once a day, if you want.
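To illustrate the idea, here is a toy sketch of block-based deduplication in general; it is not Duplicati's actual code, and the 100 KiB block size is just an example:

```python
# Toy illustration of block-based backup: split a file into fixed-size blocks,
# hash each one, and only "upload" blocks whose hashes haven't been seen in any
# earlier backup. Unchanged blocks cost nothing on later runs.
import hashlib

BLOCK_SIZE = 100 * 1024  # example block size; the real value is configurable

def backup_file(path, known_blocks, upload):
    """Send only the blocks of `path` that aren't already in backup storage."""
    with open(path, "rb") as f:
        while block := f.read(BLOCK_SIZE):
            digest = hashlib.sha256(block).hexdigest()
            if digest not in known_blocks:
                upload(digest, block)     # new or changed block: transfer it
                known_blocks.add(digest)
            # unchanged block: nothing is transferred, only a reference is kept
```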

So yes, in your situation you could set up two backup jobs. The data to back up in each job would be the same, but you could target different back-end storage with each job and give them slightly different schedules.
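As a rough sketch of what those two jobs could look like if driven from the command line (the install path, source folder, URLs, and passphrase are placeholders; in practice you would more likely configure both jobs in the Duplicati UI and let its scheduler handle which days each one runs):

```python
# Hypothetical sketch: the same source folder backed up as two separate
# Duplicati jobs, one per destination.
import subprocess

DUPLICATI = r"C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe"  # typical install path
SOURCE = r"D:\Data"                      # same source folders in both jobs

JOBS = {
    # FTPS job (e.g. Mon/Wed): ftp:// URL plus --use-ssl for TLS
    "ftps": ["ftp://ftps.example.com/backups/server1", "--use-ssl=true"],
    # S3 job (e.g. Sat): credentials omitted; normally stored in the job config
    "s3":   ["s3://my-backup-bucket/server1"],
}

def run_job(name):
    target, *extra = JOBS[name]
    subprocess.run(
        [DUPLICATI, "backup", target, SOURCE,
         "--passphrase=CHANGE-ME",       # encrypts the backup files themselves
         *extra],
        check=True,
    )

run_job("ftps")   # schedule each call on its own days, e.g. via Task Scheduler
```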

I use Duplicati on 10 computers and it works great. I have seen some people have issues with a database rebuild. I haven’t experienced that problem but on my largest backups I decided to try and mitigate that issue by backing up the database too (with a different backup job).

Hi @theFlash, welcome to the forum!

Yes and no. You are correct that for some users when a problem arises it appears to be catastrophic / unrecoverable. However different people have different ideas of those terms.

For example, if your local database becomes corrupted it can be rebuilt from the remote data but yes - that can take some time. While this is being improved in newer versions, there’s no hard numbers to say by how much.

If your remote files become corrupted (say your destination was a USB drive and it got dropped), then you would likely run into a situation where you couldn’t do any more backups. This is by design, due to how Duplicati does backups.

As @drwtsn32 described, only changed parts of files are uploaded with each backup run. If you choose to restore a particular version of a file Duplicati will know which versions of which blocks to restore to rebuild that version of the file.

But if a very old block of a file is corrupted and that block was never changed, then EVERY version of that file will have that corrupted block. Adding another backup of that file on top of ones that are sitting on a corrupted block will just result in a bad backup.

To avoid that happening, Duplicati chose to disallow backups when corrupted files are found on the destination. Recovery from this issue is possible (by deleting the bad versions of the files) but not as smooth as we’d like. For some people that is enough of a reason to not use Duplicati.
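For reference, that recovery path roughly corresponds to Duplicati's list-broken-files and purge-broken-files commands. A hypothetical sketch (the install path, target URL, and database path are placeholders):

```python
# Sketch of recovering from corrupted destination files: first see what is
# affected, then purge the broken file versions so backups can run again.
import subprocess

DUPLICATI = r"C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe"
TARGET = "ftp://ftps.example.com/backups/server1"
OPTS = ["--use-ssl=true", r"--dbpath=C:\Duplicati\server1.sqlite"]

# 1. List which file versions reference the corrupted destination files.
subprocess.run([DUPLICATI, "list-broken-files", TARGET, *OPTS], check=True)

# 2. Remove those broken versions from the backup.
subprocess.run([DUPLICATI, "purge-broken-files", TARGET, *OPTS], check=True)
```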

But even when a backup is in a corrupted state and Duplicati won’t let it be written to, it usually CAN be restored from. Granted, the corrupted blocks will still be corrupted in the restored files - but Duplicati will restore everything it can, filling in the bad blocks with zeros.


As far as backing up to Amazon S3 - yep, Duplicati can do that. But backing up to Amazon Glacier is more difficult. Because of how Glacier works, Duplicati will start thinking files have gone missing and complain. To get around this you have to disable many of the features that Duplicati uses to verify your backups and clean up old versions.
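As a rough illustration, the kind of advanced options involved look like the following. Treat the exact set as an assumption and test carefully before relying on it, since turning these off means Duplicati can no longer verify or compact the backup:

```python
# Example Duplicati advanced options of the sort needed for a Glacier-style
# destination, where files cannot be read back on demand (assumed set, not a
# verified recipe).
GLACIER_WORKAROUND_OPTIONS = [
    "--no-backend-verification=true",  # skip listing/checking remote files on each run
    "--no-auto-compact=true",          # never rewrite old volumes (would require downloads)
    "--backup-test-samples=0",         # don't download sample files for verification
    "--keep-versions=0",               # keep everything; pruning would also need downloads
]
```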

Personally, I think you’d be better off using your FTPS destination for all your backups and then setting up something to mirror that to Glacier. Of course, doing that means if you ever need to restore from Glacier you’ll have to copy ALL the Glacier files to somewhere Duplicati can see (such as back to your FTPS server) - but it’s doable.


Hey, @Pectojin - do you think it would be possible to support Glacier as a restore-only destination?

It should be possible, but it’s a little weird due to Glacier’s archiving nature. See: Using s3 Glacier for backups