Initial backup to USB then WebDAV

I’m not really sure about that; probably the fastest way to find out the size difference is to run a small local test on a single folder.
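For example, something quick along these lines would give a feel for the size/time trade-off (a hypothetical sketch using Python’s zlib; Duplicati’s own zip/deflate settings may behave differently, and the folder path is just a placeholder):

```python
import os
import time
import zlib

def test_folder(folder, levels=(1, 6, 9)):
    # Concatenate all files in the folder into one buffer (fine for a small test set)
    data = bytearray()
    for name in os.listdir(folder):
        path = os.path.join(folder, name)
        if os.path.isfile(path):
            with open(path, "rb") as f:
                data.extend(f.read())
    data = bytes(data)
    for level in levels:
        start = time.perf_counter()
        compressed = zlib.compress(data, level)
        elapsed = time.perf_counter() - start
        print(f"level {level}: {len(compressed) / len(data):.1%} of original size, {elapsed:.2f}s")

test_folder(r"C:\some\test\folder")  # placeholder path
```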

That’s no problem; just be aware that (obviously) it won’t compress existing data on the backup side, so only new data will benefit from the better compression.

(screenshot of the local compression test results)

So the impact on the file size is not really worth mentioning. The impact on the backup time is rather high, so I will do the initial backup without compression.

Many thanks!

As others have said, it will definitely work. I did exactly the same thing to seed the backups of my parents’ computers. They back up via WebDAV to a Synology NAS I have at my house.

Note that if you are using the beta version of Duplicati, there is a bug that gave me problems with secure WebDAV. There is a workaround though:

https://forum.duplicati.com/t/ssl-errors-with-webdav-target-even-with-accept-all-cert-option/904


Thank you very much! Really great community here - I did not expect that :slight_smile:


According to the Fact Sheet • Duplicati page, Duplicati should detect already compressed (or likely not compressible) files such as zip, 7z, mp3, jpeg, mkv, etc. and add them as they are - without compression.

If that’s actually happening there shouldn’t be much difference between compression levels if most of your content is photos / videos…
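As a rough illustration of the idea (a hypothetical sketch, not Duplicati’s actual code or its real extension list):

```python
import os

# Hypothetical list of extensions treated as "already compressed"
ALREADY_COMPRESSED = {".zip", ".7z", ".gz", ".mp3", ".jpg", ".jpeg", ".mkv", ".mp4"}

def compression_level_for(path, default_level=9):
    """Return 0 (store without compression) for files that are almost
    certainly already compressed, otherwise the normal deflate level."""
    ext = os.path.splitext(path)[1].lower()
    return 0 if ext in ALREADY_COMPRESSED else default_level

print(compression_level_for("holiday.jpg"))  # -> 0 (stored as-is)
print(compression_level_for("report.txt"))   # -> 9 (compressed)
```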

@kenkendk, is there a way to tell whether or not a backed up file was compressed?

No, this is not reported anywhere, not even in the verbose logs. Maybe this should be added to the verbose output that already reports a few file details.

@seb According to this comparison post there is a significant speedup if you use --compression-level=1, and you do not lose all the compression benefits.


Thank you for the tip. Unfortunately my initial backup is already finished, so I’m sticking with the default compression settings since I will back up over my slow internet connection in the next few days!

Glad to hear the initial run is done - good luck with the shift to WebDAV!

Will most NASes let you just add a hard drive that has existing data? If the NAS has a RAID setup that wouldn’t work, right? Even if it is set up with JBOD, doesn’t it still have to format the new drive when adding it to the NAS? I was under the impression that you can’t just add a drive with existing data to a NAS, but maybe there is a way that I’m not aware of.

It depends on the NAS. Some (such as unRAID and, I think, FreeNAS and a few Synology boxes) support attaching a “misc” USB drive and exposing it like any other share. Exactly how that works varies by vendor, and even by version.

Since you’ve got a DS1512+ box, I’d suggest checking out the Synology page at DiskStation Manager - Knowledge Base | Synology Inc.

In summary, it says:

“By connecting an external disk to the system, you will be able to share its disk capacity through a system-created shared folder named usbshare[number] (for USB disks, if applicable) or satashare[number] (for eSATA disks, if applicable). The shared folder will be removed automatically when the external disk is ejected from the system.”

Thanks Jon.
I’m the one with the DS1512+, not @Spaulding :slight_smile:
From my experience it is no problem to use a USB disk as “usbshare” in combination with an internal RAID. With a Synology DiskStation you can use the USB drive for nearly all functionality except Synology Cloud (which I had planned to use earlier and was disappointed about).

Whoops - thanks for pointing that out.

I really have to learn to use the mobile interface just for reading, not replying. :blush:

Is there a particular NAS about which you were curious? It’s possible somebody here has already done what you appear to be considering… :slight_smile:

I’m in the process of replacing my P2P CrashPlan setup: I will be backing up to a friend and he will be backing up to me. Both of us will first seed the backups locally, as they are multiple terabytes. One option I considered was to use my existing QNAP NAS, which has a free bay. However, I don’t want to use a USB enclosure, and I do not believe it is possible to add a hard drive to one of the bays without having to format it.

It’s not a big deal, as I also have a Plex server that I can easily add a drive to and just run Minio on. That’s where I am now in the process: I have installed the Minio server and am able to connect to it. I just need to figure out the exact steps to add the seeded drive and get it working with Minio. I think I first create a bucket pointing to a folder on the seeded drive and then move all of the Duplicati data into that bucket/folder…
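Once the files are moved in, I figure a quick check like this would confirm they are visible through Minio (hypothetical sketch; the endpoint, keys and bucket name are placeholders):

```python
from minio import Minio  # pip install minio

# Placeholder endpoint/credentials for the local Minio server
client = Minio("plex-server:9000",
               access_key="YOUR_ACCESS_KEY",
               secret_key="YOUR_SECRET_KEY",
               secure=False)

# Count the objects Minio can see in the seeded bucket
objects = client.list_objects("duplicati-seed", recursive=True)
print(sum(1 for _ in objects), "objects visible in the bucket")
```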

That sounds about right to me. Good luck!

I’m back with bad news…
First of all I had to get my internet provider to switch me from DS-Lite (IPv6) to a real IPv4 connection… That took me quite a long time.

Now I am able to establish a WebDAV connection to the USB drive on which I have stored my initial backup.
That’s where the problem starts…
The Duplicati folder contains 27625 files (647 GB). This seems to be a problem for Windows Explorer, which stops responding while trying to open the folder. I’ve waited more than 15 minutes without being able to open it.

Next I tried “CarotDAV” as a DAV client. There the folder opens successfully after 6 minutes.
So the connection should theoretically work.

That’s when I switched over to Duplicati. As you told me, I changed the destination of my backup job to WebDAV.
The connection test to the remote folder containing all the backup files takes some time (~30 sec) and finishes successfully.
Then I tried to start the backup… after some time I got this error:

Found 27625 files that are missing from the remote storage, please run repair

… that seems to be all of the files. I believe this is a timeout problem… What do you think? What can I do?

My guess is it’s more likely a destination path issue than a timeout, as timeouts usually throw a timeout error.

I’d suggest double checking that your destination path correctly aligns with where your WebDAV connection is dropping you.

Note that there has been some discussion of large-file-list issues, including this one, which talks about a POTENTIAL subfolder system to better support many destination files:

Until something like that makes it into the codebase, some things you could consider include:

  • use a larger --dblock-size. For example, the default is 50MB, so if you only doubled it to 100MB you’d HALVE the number of destination files (at the expense of more time/bandwidth consumed during the post-backup validation stage). Note that changing the dblock size of an existing backup will only apply to NEW archive files - it won’t affect the old-sized files until they become ready for “cleanup”, so in your current situation you’d have to start fresh to get this benefit (see the rough numbers after this list)

  • break your source up into multiple jobs, each with its own destination. You’d lose a bit of deduplication benefit and you’d probably have to bring your USB drive back local to implement the change, but you wouldn’t lose any of your existing history. Here’s a topic about somebody else who did this
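To put rough numbers on the dblock-size suggestion: each dblock volume is accompanied by a small dindex file, so a back-of-the-envelope estimate for the 647 GB backup mentioned above comes out close to the file count you’re seeing (compression shifts the exact figures):

```python
backup_size_gb = 647  # size reported earlier in the thread

for dblock_mb in (50, 100, 200):
    dblocks = backup_size_gb * 1024 / dblock_mb
    # each dblock volume is paired with a small dindex file
    total = 2 * dblocks
    print(f"--dblock-size={dblock_mb}MB: ~{dblocks:.0f} dblock files, ~{total:.0f} files total")
```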

Thank you Jon for your help… That’s bad news you’re telling me here!

But I had another idea :slight_smile:

First I wanted to get my Windows Explorer connection working without a timeout. After changing the following registry entry, I managed to get Explorer to show the folder contents (after ~6 min of loading):

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters 

BasicAuthLevel --> 2
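In case it helps anyone else, the same change can be scripted (a hypothetical sketch; it needs to run as Administrator, and as I understand it the WebClient service has to be restarted for the new value to take effect):

```python
import winreg

key_path = r"SYSTEM\CurrentControlSet\Services\WebClient\Parameters"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path, 0, winreg.KEY_SET_VALUE) as key:
    # 2 = allow Basic authentication over both HTTP and HTTPS
    winreg.SetValueEx(key, "BasicAuthLevel", 0, winreg.REG_DWORD, 2)
```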

After that I switched Duplicati’s target to the network volume I created.
It worked! It took some time, but this time Duplicati did not run into any timeouts. It may not be the fastest way, but it’s okay for a job that runs once a week overnight for 2 hours… I think I will try it this way next time.


Interesting solution!

Please let us know if it works for you next time. :slight_smile:

Hi guys,
here’s a short summary of the last few weeks using the solution described above.

It works, but it takes forever to compare the client and server side. Surely this is the previously mentioned problem of using WebDAV with this many individual files…

Is there any news on the possible option to split the backup files into subfolders?


At least the “it works” part is good news. :slight_smile:

Not that I’m aware of…but that doesn’t mean somebody isn’t working on it.