According to Fact Sheet • Duplicati, Duplicati should detect already compressed (or likely incompressible) files such as zip, 7z, mp3, jpeg, mkv, etc., and add them as-is - without compression.
If that's actually happening, there shouldn't be much difference between compression levels if most of your content is photos / videos…
@kenkendk, is there a way to tell whether or not a backed up file was compressed?
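As an illustration of the idea (this is not Duplicati's actual code, and the extension list below is a made-up subset), detection like this is typically driven by a list of "already compressed" file extensions:

```python
# Illustrative sketch only (NOT Duplicati's implementation): choose whether to
# store a file as-is or deflate it, based on its extension.
import os

# Hypothetical subset of extensions treated as "already compressed"
ALREADY_COMPRESSED = {".zip", ".7z", ".gz", ".mp3", ".jpg", ".jpeg", ".mkv", ".mp4"}

def compression_method(path):
    """Return 'store' for likely-incompressible files, 'deflate' otherwise."""
    ext = os.path.splitext(path)[1].lower()
    return "store" if ext in ALREADY_COMPRESSED else "deflate"

print(compression_method("holiday.jpeg"))  # store
print(compression_method("notes.txt"))     # deflate
```

If something like this is in effect, raising the compression level mostly affects the small fraction of files that are still compressible.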
Will most NASes let you just add a hard drive that has existing data? If the NAS has a RAID setup, that wouldn't work, right? Even if it is set up as JBOD, doesn't it still have to format the new drive when adding it? I was under the impression that you can't just add a drive with existing data to a NAS, but maybe there is a way I'm not aware of.
It depends on the NAS. Some (such as unRAID and, I think, FreeNAS and a few Synology boxes) support a "misc" USB drive and expose it like any other share. Exactly how that works varies by vendor, and even by version.
“By connecting an external disk to the system, you will be able to share its disk capacity through a system-created shared folder named usbshare[number] (for USB disks, if applicable) or satashare[number] (for eSATA disks, if applicable). The shared folder will be removed automatically when the external disk is ejected from the system.”
I'm the one with the DS1512+, not @Spaulding.
From my experience it is no problem to use a USB disk as a "usbshare" in combination with an internal RAID. With a Synology DiskStation you can use the USB drive for nearly all functionality except Synology Cloud (which I had planned to use earlier, and was disappointed).
I'm in the process of replacing my P2P Crashplan setup: I will be backing up to a friend and he will be backing up to me. Both of us will first seed our backups locally, as they are multiple terabytes. One option I considered was my existing QNAP NAS, which has a free bay. However, I don't want to use a USB enclosure, and I don't believe it is possible to add a hard drive to one of the bays without formatting it.
It’s not a big deal as I also have a Plex server that I can easily add a drive to and just run Minio on that. That’s where I am now in the process. I have installed and am able to connect to the Minio server. I just need to figure out the exact steps to add the seeded drive and get it working with Minio. I think I first create a bucket pointing to a folder on the seeded drive and then move all of the Duplicati data to that bucket/folder…
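With an older single-disk ("filesystem") Minio setup, a bucket is essentially just a folder under the Minio data directory, so seeding can amount to moving the Duplicati archive files into that folder. A minimal sketch of that step, assuming such a setup (the function and all paths here are hypothetical, not Minio tooling):

```python
# Hedged sketch: move seeded Duplicati archive files (dblock/dindex/dlist)
# into a folder that a filesystem-backed Minio bucket would serve from.
# All paths in the demo are throwaway temp directories.
import pathlib
import shutil
import tempfile

def seed_to_bucket(seed_dir, bucket_dir):
    """Move Duplicati files from the seeded drive into the bucket folder."""
    seed_dir, bucket_dir = pathlib.Path(seed_dir), pathlib.Path(bucket_dir)
    bucket_dir.mkdir(parents=True, exist_ok=True)
    moved = 0
    for f in seed_dir.glob("duplicati-*"):  # Duplicati's archive file prefix
        shutil.move(str(f), bucket_dir / f.name)
        moved += 1
    return moved

# Self-contained demo with throwaway directories:
with tempfile.TemporaryDirectory() as tmp:
    seed = pathlib.Path(tmp, "seed")
    seed.mkdir()
    (seed / "duplicati-b0001.dblock.zip").write_bytes(b"data")
    print(seed_to_bucket(seed, pathlib.Path(tmp, "minio-data", "duplicati")))  # 1
```

Whether this works depends on the Minio version and backend; creating the bucket through the Minio console first and then moving the files into its folder is the safer order.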
I’m back with bad news…
First of all I had to get my internet provider to switch me from a DS-Lite (IPv6) connection to an IPv4 connection… that took quite a long time.
Now I am able to establish a WebDAV connection to the USB drive on which I have stored my initial backup.
That’s where the problem starts…
The Duplicati folder contains 27625 files (647 GB). That seems to be a problem for Windows Explorer, which stops responding while trying to open the folder; I waited more than 15 minutes without it opening.
Next I tried "CarotDAV" as a WebDAV client. There the folder opens successfully after 6 minutes.
So the connection should work, at least in theory.
That's when I switched over to Duplicati. As you suggested, I changed the destination of my backup job to "WebDAV".
The connection test to the remote folder containing all the backup files takes some time (~30 seconds) and finishes successfully.
Then I tried to start the backup… after some time I get this error:
Found 27625 files that are missing from the remote storage, please run repair
… that seems to be every file. I believe this is a timeout problem… What do you think? What can I do?
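One way to rule out the client would be to check whether the server really returns the full file list: WebDAV directory listings are depth-1 PROPFIND requests, and each entry comes back as a `<D:response>` element in the "multistatus" XML. A small sketch that counts entries in such a response (the XML here is a tiny canned example; a real check would send the PROPFIND to the server):

```python
# Hedged sketch: count entries in a WebDAV PROPFIND "multistatus" response,
# e.g. to verify the server returns all 27k+ files. The sample XML below is
# a made-up three-entry example, not real server output.
import xml.etree.ElementTree as ET

def count_entries(multistatus_xml):
    root = ET.fromstring(multistatus_xml)
    # Each <D:response> is one file or folder; subtract 1 for the listed
    # folder itself, which WebDAV servers include in the response.
    return len(root.findall("{DAV:}response")) - 1

sample = """<?xml version="1.0"?>
<D:multistatus xmlns:D="DAV:">
  <D:response><D:href>/backup/</D:href></D:response>
  <D:response><D:href>/backup/duplicati-b01.dblock.zip</D:href></D:response>
  <D:response><D:href>/backup/duplicati-i01.dindex.zip</D:href></D:response>
</D:multistatus>"""

print(count_entries(sample))  # 2
```

If a count like this comes back as 0 (or far below 27625) when pointed at the real server, the problem is the listing itself rather than Duplicati.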
My guess is it's more likely a destination path issue than a timeout, as timeouts usually throw a timeout error.
I’d suggest double checking that your destination path correctly aligns with where your WebDAV connection is dropping you.
Note that there has been some discussion of large file list issues including this one that talks about a POTENTIAL subfolder system to better support many destination files:
Until something like that makes it into the codebase some things you could consider include:
use a larger --dblock-size. For example, the default is 50MB; if you only doubled it to 100MB you'd HALVE the number of destination files (at the expense of more time/bandwidth consumed during the post-backup validation stage). Note that a change in dblock size on an existing backup will only apply to NEW archive files - it won't affect the old-sized files until they become ready for "cleanup", so in your current situation you'd have to start fresh to get this benefit
break your source up into multiple jobs, each with its own destination. You'd lose a bit of deduplication benefit and you'd probably have to bring your USB drive back locally to implement the change, but you wouldn't lose any of your existing history. Here's a topic about somebody else who did this
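For a sense of scale, here's the back-of-the-envelope arithmetic behind the --dblock-size suggestion, using the 647 GB figure from this thread (the observed 27625 remote files is roughly double the dblock count, plausibly because each dblock is accompanied by a small dindex file):

```python
# Back-of-the-envelope: how --dblock-size affects the number of remote files.
# Uses the 647 GB backup size mentioned in this thread; all figures are rough.
backup_mb = 647 * 1024  # backup size in MB

dblocks_at_50mb = backup_mb / 50    # default 50MB volumes
dblocks_at_100mb = backup_mb / 100  # doubling the volume size halves the count

print(round(dblocks_at_50mb))   # ~13251 dblock files
print(round(dblocks_at_100mb))  # ~6625 dblock files
```

Even at 100MB volumes you'd still have thousands of remote files, so a larger dblock size mitigates but doesn't eliminate slow directory listings.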
Thank you, Jon, for your help… though that's bad news you're telling me!
But I had another idea.
First I wanted to get my Windows Explorer connection working without a timeout. After changing the following Registry entry, I managed to get Explorer to show the folder contents (after ~6 minutes of loading).
After that I switched Duplicati’s target to the network volume I created.
It worked! It took some time, but this time Duplicati did not hit any timeouts. It may not be the fastest way, but it's okay for a job that runs once a week overnight for about 2 hours… I think I will stick with this approach for now.