Initial backup to USB then WebDAV

Will most NASes let you just add a hard drive that already has data on it? If the NAS has a RAID setup that wouldn’t work, right? Even if it is set up as JBOD, doesn’t it still have to format a new drive when adding it to the pool? I was under the impression that you can’t add a drive with existing data to a NAS, but maybe there is a way I’m not aware of.

It depends on the NAS. Some (such as unRAID, and I think FreeNAS and a few Synology boxes) support attaching a “misc” USB drive and exposing it like any other share. Exactly how that works varies by vendor, and even by version.

Since you’ve got a DS1512+ box, I’d suggest checking out the Synology page at DiskStation Manager - Knowledge Base | Synology Inc.

In summary, it says:

“By connecting an external disk to the system, you will be able to share its disk capacity through a system-created shared folder named usbshare[number] (for USB disks, if applicable) or satashare[number] (for eSATA disks, if applicable). The shared folder will be removed automatically when the external disk is ejected from the system.”

Thanks Jon.
I’m the one with the DS1512+ not @Spaulding :slight_smile:
In my experience it is no problem to use a USB disk as a “usbshare” in combination with an internal RAID. With a Synology DiskStation you can use the USB share for nearly all functions except Synology Cloud (which I had originally planned to use, and was disappointed to find out).

Whoops - thanks for pointing that out.

I really have to learn to use the mobile interface just for reading, not replying. :blush:

Is there a particular NAS about which you were curious? It’s possible somebody here has already done what you appear to be considering… :slight_smile:

I’m in the process of replacing my P2P CrashPlan setup and will be backing up to a friend while he backs up to me. Both of us will first seed the backup locally, as the backups are multiple terabytes. One option I considered was to use my existing QNAP NAS, which has a free bay. However, I don’t want to use a USB enclosure, and I don’t believe it’s possible to add a hard drive to one of the bays without having to format it.

It’s not a big deal, as I also have a Plex server that I can easily add a drive to and just run MinIO on. That’s where I am now in the process: I have installed MinIO and am able to connect to the server. I just need to figure out the exact steps to add the seeded drive and get it working with MinIO. I think I first create a bucket pointing to a folder on the seeded drive and then move all of the Duplicati data into that bucket/folder…
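Something like this is what I have in mind, assuming MinIO runs in its simple filesystem mode with the seeded drive mounted at /mnt/seeded (the paths and bucket name are made up for illustration):

# start MinIO with the seeded drive as its data root
minio server /mnt/seeded --address :9000
# in filesystem mode a bucket is just a top-level folder under the data root,
# so create one and move the seeded Duplicati files into it
mkdir /mnt/seeded/backups
mv /mnt/seeded/duplicati-* /mnt/seeded/backups/

The Duplicati job would then point at that bucket via the S3-compatible destination type, with the MinIO server as a custom endpoint.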

That sounds about right to me. Good luck!

I’m back with bad news…
First of all I had to get my internet provider to switch me from a DS-Lite (IPv6) connection to IPv4… That took quite a long time.

Now I am able to establish a WebDAV connection to the USB drive on which I have stored my initial backup.
That’s where the problem starts…
The Duplicati folder contains 27625 files (647 GB). That seems to be too much for Windows Explorer, which stops responding while trying to open the folder; I’ve waited more than 15 minutes without it ever opening.

Next I tried “CarotDAV” as a WebDAV client; there the folder opened successfully after 6 minutes.
So the connection should theoretically work.

That’s when I switched over to Duplicati. As you told me, I changed the backup destination of my backup job to “WebDAV”.
The connection test to the remote folder containing all the backup files takes some time (~30 sec) but finishes successfully.
Then I tried to start the backup… after some time I get this error:

Found 27625 files that are missing from the remote storage, please run repair

… that seems to be every single file. I believe this is a timeout problem… What do you think? What can I do?

My guess is it’s more likely a destination path issue than a timeout, as timeouts usually throw an explicit timeout error.

I’d suggest double checking that your destination path correctly aligns with where your WebDAV connection is dropping you.
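One way to double check (assuming a standard Duplicati 2 install; the URL below is only a placeholder) is to list what Duplicati itself can see at the destination using the backend tool:

"C:\Program Files\Duplicati 2\Duplicati.CommandLine.BackendTool.exe" LIST "webdavs://your-server/path/to/backup?auth-username=USER&auth-password=PASS"

If the dlist / dindex / dblock files don’t show up in that listing, the job is pointing at a different folder than the one holding your seeded files.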

Note that there has been some discussion of large-file-list issues, including a topic about a POTENTIAL subfolder system to better support many destination files.

Until something like that makes it into the codebase, some things you could consider include:

  • use a larger --dblock-size. For example, the default is 50MB, so if you only doubled it to 100MB you’d HALVE the number of destination files (at the expense of more time/bandwidth consumed during the post-backup validation stage). Note that a change in dblock size on an existing backup only applies to NEW archive files; it won’t affect the old-sized files until they become ready for “cleanup”, so in your current situation you’d have to start fresh to get this benefit (see the example after this list)

  • break your source up into multiple jobs, each with its own destination. You’d lose a bit of deduplication benefit and you’d probably have to bring your USB drive back on site to implement the change, but you wouldn’t lose any of your existing history. Here’s a topic about somebody else who did this
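To illustrate the first option: the setting is “Remote volume size” on the job’s Options page in the GUI, or --dblock-size on the command line. A made-up example invocation (the destination URL, source path, and passphrase are placeholders) might look like:

"C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe" backup "webdavs://your-server/backup-folder?auth-username=USER&auth-password=PASS" "C:\Data" --dblock-size=100MB --passphrase=SECRET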

Thank you Jon for your help… but that’s bad news you’re telling me here!

But I had another idea :slight_smile:

First I wanted to get my Windows Explorer connection working without a timeout. After changing the following registry entry, I managed to get Explorer to show the folder contents (after ~6 minutes of loading):

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters 

BasicAuthLevel --> 2
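In case it helps anyone else, the same change can be made from an elevated command prompt (this is just the scripted form of the key and value above; I believe the WebClient service needs a restart before it takes effect):

reg add "HKLM\SYSTEM\CurrentControlSet\Services\WebClient\Parameters" /v BasicAuthLevel /t REG_DWORD /d 2 /f
net stop WebClient
net start WebClient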

After that I switched Duplicati’s target to the network volume I created.
It worked! It took some time, but this time Duplicati did not run into any timeouts. It may not be the fastest way, but it’s okay for a job that runs once a week, overnight, for about 2 hours… I think I will try it this way next time.
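For anyone wanting to reproduce the “network volume” step, one way is to map the WebDAV folder to a drive letter; the URL and drive letter below are placeholders for your own setup:

net use B: "https://your-nas.example.com/usbshare1" /user:yourusername /persistent:yes

Duplicati’s destination can then be set to a plain local-folder path on that drive letter instead of a WebDAV URL.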

Interesting solution!

Please let us know if it works for you next time. :slight_smile:

Hi guys,
here is a short summary of the last few weeks using the solution described above.

It works, but it takes forever to compare the client and server sides. Surely this is the previously mentioned problem when using WebDAV with this many individual files…

Is there any news on the possible option to split the backup files into subfolders?

At least the “it works” part is good news. :slight_smile:

Not that I’m aware of…but that doesn’t mean somebody isn’t working on it.

I have tried the local backup to USB, then copied those files to my Nextcloud. Changing the Duplicati parameters to WebDAV instead of local file tests OK, but then it lists all 3600 files as missing. Clearly, I’m missing a step. It’s almost as if the backup database is storing the local path to the files, and then when they are moved to an external WebDAV location, they are all ‘missing’. That’s a complete WAG on my part, as I’m neither a WebDAV nor a database person, but it’s the only thing I could come up with.

This isn’t the case… I have migrated between different back ends a few times. As long as you are very careful about transferring the files and making sure the paths are correct, it should be fine.

rclone is (in my opinion) the best command-line utility for syncing to/from/between cloud storage providers, and it does support WebDAV/Nextcloud.

I’d recommend testing the process/workflow with a tiny backup. Once you get the details figured out you can do it with your main backup. (Make sure each backup has its own unique folder of course.)
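As a rough sketch (the remote name "nc" and the paths below are placeholders; the WebDAV/Nextcloud URL and credentials get entered during the interactive config step):

# one-time interactive setup of a WebDAV/Nextcloud remote, named e.g. "nc"
rclone config
# copy the seeded Duplicati files into the job's folder on the remote
rclone copy /mnt/usb/duplicati-seed nc:duplicati/job1 --progress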

Doc - I probably wasn’t clear in my original query. I’m not changing back ends; I’m simply doing sneaker-net to avoid a transfer speed of 1.5 Mbps!!

I have a remote system with 300 GB of data. Using Duplicati on the remote system with a WebDAV destination would take 30 days (!!).
So I cable in an SSD locally and set my Duplicati destination to an NTFS file system on that SSD. A day later, I collect the SSD and mount it on my Nextcloud server, copying all of the files into the Nextcloud account for that remote user.

Then I remotely connect to the source system and change the destination to my WebDAV location, and all 3600 files are ‘missing’. The only option I’ve found is to delete the local database and recreate it, which, while somewhat faster than uploading all of the dblocks, is still painful.

I’ve read several stories online where what I’m trying to do seems easy. Thus far, this ‘easy’ exercise has defeated me.

To me it sounds like you are seeding the initial backup.

I have done the same thing a few times using a USB drive and then physically moving the drive to a remote location so I could quickly copy the data to the intended destination. Then I just adjust the backup set on the Duplicati machine to target the remote storage instead of the (formerly) local USB drive.

This is still a back end migration in a way - similar steps are involved.

If, after relocating the backup data and reconfiguring the backup job, it complains about missing remote data, it still sounds to me like it’s not looking in exactly the right path. But if that were the case I would think a database rebuild would fail.

I must be missing something…

Well, the database rebuild is taking forever and I have exactly zero confidence that it is going to work in any case. I’ve just been schooled on how to use some of the command-line tools (another frustration solved!), so let me play with those and I’ll get back to you.
Thank you!

Doc - I thought I had it figured out; the missing piece was running the test on the command line to verify the backend destination. That worked on one of my sneaker-net ops, in spite of needing a very specific Nextcloud directory name. Regardless, the next attempt failed like all of the others:

At the customer site, I set up a Duplicati job to back up to a local SSD/USB drive. That finished fine, with 2200+ files on it, including dlist and dindex files.

After sneaker-netting the files back to my shop and copying them into my Nextcloud:

I logged on remotely to the customer system, changed the destination parameters in the Duplicati config, extracted this command and ran it:

"C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe" test "webdavs://{nextcloud URL}/remote.php/dav/files/{cust.name}?auth-username={username}&auth-password={pw}" --dbpath="C:\Users\{cust.name}\AppData\Local\Duplicati\VIPBKRETPQ.sqlite" --encryption-module=aes --compression-module=zip --dblock-size=50mb --passphrase="{secret}" --disable-module=console-password-input

yields

May 7, 2020 2:32 AM: Missing file: duplicati-20200506T052415Z.dlist.zip.aes
(~2200 more like this)
Missing file: duplicati-ibd7f315925e845c4a4088d4a7f14c4ec.dindex.zip.aes
Missing file: duplicati-i12f1fca75f1f4405abc0c56cf3cda17d.dindex.zip.aes
Missing file: duplicati-bd721235088c744c0aecd484243d7a438.dblock.zip.aes
Missing file: duplicati-ifcc3bdd1533b45e7a82fa731f8405b4c.dindex.zip.aes
Missing file: duplicati-b2a9548eb7ba54465a17c50ca57a80ce4.dblock.zip.aes

I do an ‘ls -l’ on these and select others in my nextcloud location where I loaded the sneaker-net files:

-rwxrwxrwx 1 root root 14406717 May 5 22:26 duplicati-20200506T052415Z.dlist.zip.aes
-rwxrwxrwx 1 root root 26589 May 5 01:03 duplicati-ibd7f315925e845c4a4088d4a7f14c4ec.dindex.zip.aes
-rwxrwxrwx 1 root root 27021 May 5 01:03 duplicati-i12f1fca75f1f4405abc0c56cf3cda17d.dindex.zip.aes
-rwxrwxrwx 1 root root 52338877 May 5 01:03 duplicati-bd721235088c744c0aecd484243d7a438.dblock.zip.aes
-rwxrwxrwx 1 root root 27277 May 5 01:03 duplicati-ifcc3bdd1533b45e7a82fa731f8405b4c.dindex.zip.aes
-rwxrwxrwx 1 root root 52338413 May 5 01:03 duplicati-b2a9548eb7ba54465a17c50ca57a80ce4.dblock.zip.aes

Why can’t the Duplicati process see these files? What other tests or command-line arguments can I use to help troubleshoot this?

I did not run any command-line stuff on the remote system against the Windows local filesystem to see if the problem was there and I just transferred the headache. Next time I will be sure to do that.

Frustration continues because whether it works or not (at least at this point) appears random.

Thanks for your help

Rick

I’d probably try using a 3rd party WebDAV client, just to verify that the expected files are visible at the URL you’re using.

In addition, you can see what Duplicati itself sees (which might be nothing, in which case the question becomes why) under <job> --> Show log --> Remote, where you’ll find the file list from the start of the backup. Click on it to see what it saw there.
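For the “3rd party client” check, even a plain curl request will show what is actually visible at that URL (the server, user, password, and folder below are placeholders; the "Depth: 1" header limits the listing to the folder’s immediate contents):

curl -u USER:PASS -X PROPFIND -H "Depth: 1" "https://your-server/remote.php/dav/files/USER/backup-folder/"

Each file the server can see comes back as an entry (typically a <d:href> element) in the XML response, so you can quickly confirm whether the dlist/dindex/dblock files are really at the path Duplicati is using.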