Initial backup to USB then WebDAV

Hi guys,

Before I start my rather large backup (700 GB), I would like to ask whether my plan will work.

My plan is to make the initial backup with Duplicati onto an external USB 3.0 disk. When that is finished, I want to move the USB disk to a Synology DS1512+ NAS, which I will access via WebDAV since it is located outside my home.
Will this work?

Because of my slow upload speed of 10 Mbit/s I would choose the 50 MB volume size. This might slow down the initial USB backup a bit, right?

Thank you guys for your support!

That would indeed work. You would just have to change the destination location afterwards and that’s it.

The 50MB default volume size would be a good choice for your speed. That said, if you have lots of data that doesn’t change often, or you keep versions indefinitely, you could go for 100MB (but only if your connection is pretty stable).

It might slow down the initial backup a little bit, but it shouldn’t have that much impact.

You could also use a bigger volume size for the initial backup and change it afterwards. But again, that’s only a good idea if the backup data doesn’t change too much and you keep versions for a long time.

If the size of the backup doesn’t matter too much, you could run the initial backup with the zip compression level set to 0 or 1 and change it back to the default afterwards, so the later remote uploads still save bandwidth.
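
If you end up driving Duplicati from the command line instead of the web UI, the whole seed-and-switch could look roughly like this. This is only a sketch: the paths, hostname and credentials are placeholders, and the option names are as I remember them from Duplicati 2, so double-check them against your version.

    :: Seed run to the USB disk, with compression turned off for speed
    Duplicati.CommandLine.exe backup "file://E:\DuplicatiBackup" "C:\Photos" --dblock-size=50mb --zip-compression-level=0 --passphrase=mysecret

    :: Later, the same job pointed at the NAS over WebDAV, back at the default compression level
    Duplicati.CommandLine.exe backup "webdav://nas.example.com/usbshare1/DuplicatiBackup?auth-username=user&auth-password=pass" "C:\Photos" --dblock-size=50mb --passphrase=mysecret

In the web UI the same thing is just editing the destination of the existing job once the disk is attached to the NAS.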


Thank you Niels for your response and help!
I’ve tried your suggestion and set the zip compression to 0. It is much, much faster!
My files are mostly JPEG and RAW photos. Is there a big difference in file size if I don’t use compression?

And is it really no problem to switch the compression level back once the initial backup is done and I continue the backup over the internet?

I’m not really sure about that; probably the fastest way to find out the size difference is to run a small local test on a single folder.
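
Something like this would do as a quick test from the command line (a sketch only; the folder names and passphrase are placeholders):

    :: Back up the same sample folder twice with different compression levels
    Duplicati.CommandLine.exe backup "file://C:\test-max" "C:\Photos\Sample" --zip-compression-level=9 --passphrase=test
    Duplicati.CommandLine.exe backup "file://C:\test-none" "C:\Photos\Sample" --zip-compression-level=0 --passphrase=test

    :: Compare the total size reported for the two destination folders
    dir /s C:\test-max
    dir /s C:\test-none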

That’s no problem, just be aware that (obviously) it won’t compress existing data on the backup side. So only new data will benefit from the better compression.

[screenshot: local test results comparing backup size and duration with and without compression]

So the impact on the file size is not really worth mentioning. The impact on the backup time, however, is substantial, so I will do the initial backup without compression.

Many thanks!

As others have said, it will definitely work. I did exactly the same thing to seed the backups of my parents’ computers. They back up via WebDAV to a Synology NAS I have at my house.

Note that if you are using the beta version of Duplicati, there is a bug that was giving me problems with WebDAV over SSL. There is a workaround though:

https://forum.duplicati.com/t/ssl-errors-with-webdav-target-even-with-accept-all-cert-option/904


Thank you very much! Really great community here - I did not expect that 🙂


According to Fact Sheet • Duplicati, Duplicati should detect already compressed (or likely incompressible) files such as zip, 7z, mp3, jpeg, mkv, etc. and add them as they are - without compression.

If that’s actually happening there shouldn’t be much difference between compression levels if most of your content is photos / videos…
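
If I remember right, that detection is driven by a plain text list of file extensions that ships with Duplicati, and there is an option to point a job at your own list. Treat the file location and option name below as assumptions from memory and verify them on your install:

    :: Show the shipped list of "already compressed" extensions (install path may differ)
    type "C:\Program Files\Duplicati 2\default_compressed_extensions.txt"

    :: Run a backup with a custom extension list
    Duplicati.CommandLine.exe backup "file://E:\DuplicatiBackup" "C:\Photos" --compression-extension-file="C:\my_extensions.txt"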

@kenkendk, is there a way to tell whether or not a backed up file was compressed?

No, this is not reported anywhere, not even in the verbose logs. Maybe this should be added to the verbose output that already reports a few file details.

@seb According to this comparison post there is a significant speedup if you use --compression-level=1, and you do not lose all the compression benefits.


Thank you for the tip. Unfortunately my initial backup is already finished, and now I’m using the default compression settings since I will be backing up over my slow internet connection in the coming days!

Glad to hear the initial run is done - good luck with the shift to WebDAV!

Will most NASes let you just add a hard drive that has existing data? If the NAS has a RAID setup, that wouldn’t work, right? Even if it is set up as JBOD, doesn’t it still have to format the new drive when adding it? I was under the impression that you can’t just add a drive with existing data to a NAS, but maybe there is a way I’m not aware of.

It depends on the NAS. Some (such as unRAID and, I think, FreeNAS and a few Synology boxes) support a “misc” USB drive and expose it like any other share. Exactly how that works varies by vendor, and even by version.

Since you’ve got a DS1512+ box I’d suggest checking out the Synology page at External Devices | DSM - Synology Knowledge Center.

In summary, it says:

“By connecting an external disk to the system, you will be able to share its disk capacity through a system-created shared folder named usbshare[number] (for USB disks, if applicable) or satashare[number] (for eSATA disks, if applicable). The shared folder will be removed automatically when the external disk is ejected from the system.”

Thanks Jon.
I’m the one with the DS1512+, not @Spaulding 🙂
In my experience it is no problem to use a USB disk as a “usbshare” alongside an internal RAID. With a Synology DiskStation you can use the USB share for nearly all functionality except Synology Cloud (which I had planned to use earlier and was disappointed about).

Whoops - thanks for pointing that out.

I really have to learn to use the mobile interface just for reading, not replying. 😊

Is there a particular NAS about which you were curious? It’s possible somebody here has already done what you appear to be considering… 🙂

I’m in the process of replacing my peer-to-peer CrashPlan setup: I will be backing up to a friend and he will be backing up to me. Both of us will first seed our backups locally, as they are multiple terabytes. One option I considered was to use my existing QNAP NAS, which has a free bay. However, I don’t want to use a USB enclosure, and I don’t believe it is possible to add a hard drive to one of the bays without having to format it.

It’s not a big deal, as I also have a Plex server that I can easily add a drive to and just run Minio on. That’s where I am in the process now: I have installed the Minio server and am able to connect to it. I just need to figure out the exact steps to add the seeded drive and get it working with Minio. I think I first create a bucket pointing to a folder on the seeded drive and then move all of the Duplicati data into that bucket/folder…
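
For what it’s worth, in Minio’s simple filesystem mode a bucket is just a top-level folder under the data directory, so my rough plan looks like this (the drive letter and folder names are made up, and I still need to verify this actually works):

    :: Put the Duplicati files into a folder that Minio will expose as a bucket
    mkdir D:\seeded-drive\duplicati
    move D:\seeded-drive\duplicati-* D:\seeded-drive\duplicati\

    :: Serve the seeded drive with Minio; each top-level folder becomes a bucket
    minio server D:\seeded-drive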

That sounds about right to me. Good luck!

I’m back with bad news…
First of all, I had to get my internet provider to switch me from a DS-Lite (IPv6) connection to a real IPv4 connection… That took quite a long time.

Now I am able to establish a WebDAV connection to the USB drive on which I have stored my initial backup.
That’s where the problem starts…
The Duplicati folder contains 27625 files (647 GB). That seems to be a problem for Windows Explorer, which stops responding while trying to open the folder. I’ve waited more than 15 minutes without it opening.

Next I tried “CarotDAV” as a WebDAV client; there the folder opens successfully after 6 minutes.
So the connection should theoretically work.

That’s when I switched to Duplicati. As you told me, I changed the destination of my backup job to WebDAV.
The connection test against the remote folder that contains all the backup files takes some time (~30 s) but finishes successfully.
Then I tried to start the backup… after some time I get this error:

Found 27625 files that are missing from the remote storage, please run repair

… that seems to be every single file. I believe this is a timeout problem… What do you think? What can I do?

My guess is it’s more likely a destination path issue than a timeout, as timeouts usually throw an explicit timeout error.

I’d suggest double-checking that your destination path matches where your WebDAV connection actually drops you.
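
One way to check is to point the backend tool that ships with Duplicati at your destination URL and see whether the duplicati-*.dblock/dindex/dlist files actually show up. The tool name is from memory and the URL is a placeholder, so adjust both to your setup:

    :: Lists the files Duplicati can see at the destination
    Duplicati.CommandLine.BackendTool.exe list "webdav://your-nas:5005/usbshare1/DuplicatiBackup?auth-username=user&auth-password=pass"

If that list comes back empty, the path is wrong; if it shows all 27625 files, something else is going on.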

Note that there has been some discussion of large-file-list issues, including this one about a POTENTIAL subfolder system to better support many destination files:

Until something like that makes it into the codebase, some things you could consider include:

  • use a larger --dblock-size. For example, the default is 50MB, so if you only doubled it to 100MB you’d HALVE the number of destination files (at the expense of more time/bandwidth consumed during the post-backup validation stage; some quick arithmetic on your numbers follows this list). Note that changing the dblock size of an existing backup only applies to NEW archive files - it won’t affect the old-sized files until they become ready for “cleanup” - so in your current situation you’d have to start fresh to get this benefit

  • break your source up into multiple jobs, each with its own destination. You’d lose a bit of deduplication benefit and you’d probably have to bring your USB drive back local to implement the change, but you wouldn’t lose any of your existing history. Here’s a topic about somebody else who did this
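
As promised, some quick arithmetic on your numbers (assuming, as I understand the storage format, one small dindex file per dblock volume): 647GB at 50MB per volume is roughly 13,000 dblock files, doubled to ~26,000 by their dindex companions, plus a dlist file per backup version - which lines up nicely with the 27625 files you’re seeing. At 100MB volumes you’d be down to roughly half that.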

Thank you Jon for your help… That’s bad news you’re telling me!

But I had another idea 🙂

First I wanted to get my Windows Explorer connection working without a timeout. After changing the following registry value, I managed to get Explorer to show the folder contents (after ~6 min of loading):

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters 

BasicAuthLevel --> 2
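
For anyone finding this later, the same change can be made from an elevated command prompt (followed by restarting the WebClient service so it takes effect; these are standard Windows commands, nothing Duplicati-specific):

    reg add "HKLM\SYSTEM\CurrentControlSet\Services\WebClient\Parameters" /v BasicAuthLevel /t REG_DWORD /d 2 /f
    net stop WebClient
    net start WebClient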

After that I switched Duplicati’s target to the network volume I had created.
It worked! It took some time, but this time Duplicati did not run into any timeouts. It may not be the fastest way, but it’s fine for a run once a week overnight for about 2 hours… I think I will stick with this approach for now.
