thanks for sharing this, so I stand corrected: Duplicati does indeed support drive switching, and more completely than I was thinking. In fact it’s more advanced in this regard than some commercial software.
That’s one way to do it, where you tie the backup database to its drive by having a config per drive.
The new way being proposed has one config for all drives, because each drive’s database lives on that drive.
This potentially makes things vastly more convenient, but fails harder if a drive is yanked midway…
Keeping the database on C: at least leaves a database available to try to repair the backup files after a drive yank.
If the drive yank corrupts the database as well, that’s worse news, but with luck and time a Recreate would work.
The other issue is whether a drive letter will move. I don’t recall if those can be made to stay stable.
The Windows Drive Letters approach seems aimed at the destination data files, not at a database that is also stored there.
There are possibly some filesystem or other tricks to redirect the database, but how far to go here?
This is solved by using the destination drive for its database rather than keeping everything on the C: drive.
It’s an idea. I’m not sure how workable it is, especially if moving drive letters might get in the picture.
Don’t overlook the cost of labor in a time-is-money situation. It’s hard to say which products lead to the best mixture of client satisfaction and total cost of ownership, but some money spent might be worthwhile.
I don’t follow every product, of course, but this is one reason I mentioned Arq Backup. You can buy it.
Sure, you might want updates or new versions occasionally, but at least it doesn’t require a monthly payment.
Enterprise and MSP-type backup is not my area, which is why I suggested searching around the Internet, and after whatever discussions and testing, ultimately you have to pick what works for you and your clients.
Use cases differ. I was on CrashPlan Home before they decided to exit the home user market as they changed their focus (presumably in seeking revenue). Duplicati is kind of what it is, for better or worse.
This could improve dramatically if people volunteered, but volunteers are scarce lately. Any out there?
I agree, it works really great.
I’m not sure if I’m understanding you correctly, but I would not advise storing the live database on the backup drive itself if that drive is going to be removed. That would prevent you from browsing the backup data while the drive is disconnected; in that scenario you’d have to use the ‘no-local-db’ function. I suggested storing all 5 databases on a volume local to the machine, seeing as the OP mentioned that the jobs back up millions of files.
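For reference, a ‘no-local-db’ restore from the command line would look roughly like this (an untested sketch; the destination URL, passphrase and restore folder are just placeholders):

    Duplicati.CommandLine.exe restore "file://X:\Backups\Office" "*" ^
      --no-local-db --restore-path="C:\RestoredFiles" --passphrase="long-secret"

It rebuilds what it needs from the destination files into a temporary database first, so it’s slower than restoring with the local database in place.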
You don’t need to make the drive letter/mount point stable. Duplicati can check all mounted volumes for a file you specify to determine which volume to use. This is done using the ‘--alternate-destination-marker’ switch.
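On the command line that could look roughly like this (an untested sketch; folder names, the marker file name, dbpath and passphrase are placeholders). ‘--alternate-target-paths’ supplies the candidate locations, with * acting as a drive-letter wildcard, the marker file picks the right volume, and --dbpath keeps the database on the C: drive:

    Duplicati.CommandLine.exe backup "file://X:\Backups\Office" "C:\Data" ^
      --alternate-target-paths="*:\Backups\Office" ^
      --alternate-destination-marker="duplicati-marker.txt" ^
      --dbpath="C:\DuplicatiDb\office.sqlite" --passphrase="long-secret"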
You don’t need to ‘trick’ the software, but rather learn its capabilities. There is a way to ‘seed’ the initial backup and I use it all the time. It can be done in the following manner (a rough command-line sketch follows the list), if you can access the remote location or have an end-user plug in a drive for you:
- Create a backup job that backs up to a local portable drive for the initial backup. Do not enable the schedule. After the initial full backup completes, disconnect and ship the drive. It’s safe, as everything is encrypted with AES-256.
- On the remote device, copy all the files from the portable backup to the final destination, for example your cloud/FTP/SFTP server.
- Alter your backup job to back up to that remote location (the same cloud/FTP/SFTP server).
- Enable the schedule. The next backup will be a delta incremental. If you have millions of files on a Windows NTFS volume, don’t forget to enable the USN Journal option. From now on only new and changed files will be uploaded.
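The sketch mentioned above might look like this for an SFTP destination (server name, folder names, dbpath and passphrase are all placeholders; the same can be done in the GUI by editing the job’s destination):

    :: 1) Seed to the local portable drive; keep the job database on C:
    Duplicati.CommandLine.exe backup "file://E:\Seed\OfficePC" "C:\Data" ^
      --passphrase="long-secret" --dbpath="C:\DuplicatiDb\office.sqlite"

    :: 2) Ship the drive, then copy E:\Seed\OfficePC to /backups/OfficePC on the server

    :: 3) Point the same job (same dbpath and passphrase) at the remote location;
    ::    --usn-policy is the USN Journal option mentioned above
    Duplicati.CommandLine.exe backup "ssh://backup.example.com/backups/OfficePC" "C:\Data" ^
      --auth-username="backupuser" --auth-password="secret" ^
      --passphrase="long-secret" --dbpath="C:\DuplicatiDb\office.sqlite" --usn-policy=auto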
That’s really all there is to it. And you can also use a tool like Robocopy or FastCopy on Windows to check the historical rate of data change, so you can estimate the upload sizes.
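For example (untested; folder names are placeholders), Robocopy in list-only mode can compare the live data against an older copy and report in its summary how many bytes would be transferred, without copying anything:

    robocopy "C:\Data" "E:\LastMonthCopy" /L /E /NP /NFL /NDL /NJH /BYTES

Here /L lists only, /E includes subfolders, the /N* switches suppress the per-file output so only the summary remains, and /BYTES reports exact byte counts.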
Good point, although I’m not sure whether that impedes the (not fully known) planned use.
Possibly the only time the backup data is of interest is when somebody wants a restore.
Possibly the clients (especially for desktops) run backups more often than drive swaps happen.
The onsite drive would be preferred. If disaster occurs, there’s an older backup offsite.
Leaving a drive attached by default also makes that drive more vulnerable to damage.
It’s all tradeoffs. Multiple local jobs have drawbacks too (as you noted). Moving on:
I haven’t done much with that, so I’m not sure exactly what its limits are. It looks slow.
On the other hand, I’m not sure how well the other candidates do things with no drive.
That sounds like the temporary database that Direct restore from backup files builds for disaster recovery.
In the case of a missing local DB due to no drive, its backup files are missing too. Nothing is there…
If looking around the backup with no drive is important (I don’t know), the database on C: is needed.
This might be more useful someday as progress happens (all thanks to the developer volunteers…).
Volunteers in all areas including development on fixes and features, test, docs, etc. can help hugely.
There’s a pressing need to work on fixes, so features often must wait for special assistance such as volunteer help.
The reason I mention this is that, while it’s always nice to have a better view (including dates) of files,
physical retrieval of a drive might add some additional motivation to try to plan the restore in advance.
C: storage is especially useful if C: is an SSD (I don’t know). The portable drive is (I’d guess) a mechanical one.
There’s a C: space question, and there’s also a redundancy point. Sometimes keeping the working DB offsite is better for local disaster recovery, as it may avoid having to discover (late) a DB recreate issue.
Once again, tradeoffs.
As I mentioned, are you sure this could easily (without additional work) steer Duplicati’s database there?
That might be beyond its design intent (which seems more aimed at backup files), but I have not tested.
Avoiding DB-on-removable-drive (by keeping the DB on C:) sidesteps the issue, but I don’t know what you mean.
You seem to be rebutting my concern (from my lines above and below) about the database on a portable drive with your response.
By the way (before anyone else hits it), this is best done on Options screen 5. Screen 2 drops it (a bug).
Lots of good advice, but I don’t know if that’s still too much for the network, or if there’s a remote device.
To clarify, the USN journal improves scanning speed. The upload either way is only the source changes.
You can also directly view the historical upload sizes in the job’s log. Click on Complete log for the stats:
"BackendStatistics": {
"RemoteCalls": 16,
"BytesUploaded": 104392420,
"BytesDownloaded": 53284903,
If the rate of change is highly variable, one might need to read a lot of logs. Other tools might measure long-term change better, but client complaints about Duplicati slowing their other Internet work might track short-term load.
One can throttle network usage (it’s still a bit bursty; smooth throttling needs QoS on the router) if that helps.
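For instance (a hedged example; the value is arbitrary, format per the manual), adding something like this to the job’s advanced options caps the upload rate:

    --throttle-upload=500KB

There’s also a throttle control in the web UI’s top bar, if I recall correctly.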
YMMV, and I’m sorry if @Flea77’s head is spinning from all this, but it’s been a pretty thorough discussion.
Thanks to all who participated, but without knowing more about actual usage, it’s hard to say what’s best.