Moving to another storage provider

I use Dropbox both for synchronization and as a destination for my Duplicati backups, alongside Backblaze B2.

If I need to change providers in the future, can I simply copy the Duplicati files to the new provider (and keep using the same encryption password), or will I have to restart the backups from scratch and (what worries me most) lose all the versions stored with the “old” provider?

All you have to do is:

  1. copy / move the destination files from your old destination to the new one
  2. update your existing backup job to point to the new destination

All your version history should stay intact - no “starting from scratch” required. :slight_smile:

If you want to read up on other users’ experiences with this, you can check out these topics:



Thank you!

I had searched before asking, but it didn’t occur to me to search for “cloud to cloud”. :roll_eyes:


Do you recall what search you used? It might make sense to tag those topics with common terms to help make them easier to find…

I don’t remember the exact terms, but it was something similar to the title of this topic, “moving to another provider” or something like that.

rclone may help in such situations.

https://rclone.org/
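A cloud-to-cloud move with rclone might look something like this. This is only a sketch: it assumes remotes named `dropbox` and `b2` have already been set up with `rclone config`, and the folder paths are placeholders.

```shell
# Copy every Duplicati file (dlist/dindex/dblock) from the old destination to the new one.
# Nothing is deleted from the source; --checksum re-verifies each file after transfer.
rclone copy --checksum --progress \
  "dropbox:Apps/Duplicati/MyBackup" \
  "b2:my-bucket/MyBackup"
```

Afterwards you would re-point the Duplicati job at the new destination and run a test restore or verify before deleting anything from the old provider.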


One of the easiest ways to migrate from DigitalOcean to Wasabi. Thank you @JonMikelV, I was not expecting it to be that easy; I was worried about rebuilding databases, configuration, etc. to use my new hosting space on Wasabi. The nice thing is that Wasabi cloud hosting is pre-defined in Duplicati.

For the migration from DigitalOcean (DO) to Wasabi I used flexify.io, which is a paid service, but it initially offered a sign-up bonus that was enough for about 200 GB of transfer.

After trying Wasabi on the free trial I am back on DO, because Wasabi charges on a per-day basis, so it works out more costly. It was just as easy to bring my last month of updates back to DO through flexify.io.

Hi guys. Sorry for digging up this old thread, but I have a very related problem.
Goal: backing up my Synology (running Duplicati in Docker), ultimately to Google Drive.
Desired workflow:

  1. I create a backup locally.
  2. I copy all backup files to Google Drive.
  3. I point my backup configuration to Google Drive.
  4. I set the schedule on my backup config, carry on, etc.

My problem: after changing the backup config to point to Google Drive and starting another backup, I got an error that some files are missing. I figured the SQLite DB still thought the destination was the local path and had not reconciled the change yet, so I went to repair the DB.

Then (hitting Repair) I get an error stating “The backup storage destination is missing data files. You can either enable --rebuild-missing-dblock-files or run the purge command to remove these files.”, followed by a list of all the zip and dlist files. I check in Google Drive and they are there. I go to the destination config, test the connection, and it is successful.

TBH, I created the destination folder using the Duplicati backup config specifically to make sure it was generated with an application token and not manually (I saw in other threads that Google Drive treats user-created and app-created folders differently).

Could it be that the Synology Cloud Sync task (which got a different OAuth token for the same user I use in Duplicati) created these files, and the Duplicati token cannot see them?
Or what am I doing wrong?

I would rather not regenerate the DB from scratch: I have been playing with this for the last two months, and it is painfully slow in my case (7 TB of data spread across 5 backups, the biggest source being 5 TB).
I would like to reconcile the DB with the new destination.

I think I found the reason: Duplicati and Google Drive paths

It is what I feared. I pulled up the per-file information in the Google Drive GUI, and even though the parent folder was created by “Duplicati”, the files were created by “Synology Cloud Sync”.

Now the dilemma begins: do I go for the fully authorized token, or do I find another way to “re-own” those files?

Worries with that path include how generous it is about what the app can access, and Google’s wish to “fix” that:

Enhancing security controls for Google Drive third-party apps (which didn’t happen, but it could someday)

If Google offered a decent way to manage such access issues (maybe a web UI?), their goal would be more feasible.
Lacking that, you might need to script something (maybe with xargs) to let Duplicati do uploads as itself…

Duplicati.CommandLine.BackendTool.exe can put each file using the URL from Export As Command-line.
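The xargs idea above might look roughly like this. It is only a sketch: the staging path and the destination URL (including the authid) are placeholders, and the real URL should come from Export As Command-line.

```shell
# Upload each Duplicati file as Duplicati itself, one "put" per file.
# xargs appends the filename after the destination URL, giving:
#   Duplicati.CommandLine.BackendTool.exe put <url> <file>
find /volume1/duplicati-staging -name 'duplicati-*' -print0 |
  xargs -0 -n1 Duplicati.CommandLine.BackendTool.exe put \
    "googledrive://Backups/NAS?authid=PLACEHOLDER"
```

On a Linux or Docker setup the 2.0.x tool would typically be invoked through mono.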

One other Google Drive quirk is that it will happily let you create duplicate names, so don’t do that inadvertently.


I have played with this backend tool a little. My Windows beta installation included the async bug (CommandLine upload to GoogleDrive throws ObjectDisposedException · Issue #4556 · duplicati/duplicati · GitHub), which is why I downloaded the source and built it myself. Kudos to the team: the build was refreshingly easy. I was expecting thousands of warnings and caveats and was positively surprised. The locally built backend tool does the job just fine.

However, I did not want to implement retry, backoff, check-if-it-is-still-there, etc. mechanisms, so I decided to go another route. The real root cause behind my earlier decision to do the initial backup locally was uncontrollable network throttling: I have a 400 Mbps/96 Mbps connection, and during video conferences (WFH) my colleagues were tearing my head off for having such a bad connection. I found that others have also had issues with network throttling in general:

But it seems resolved in the version I am using (2.0.6.3_beta_2021-06-17): --throttle-download ignored, --throttle-upload throttles download too · Issue #4115 · duplicati/duplicati · GitHub

So I went ahead, set asynchronous-concurrent-upload-limit to 1 and throttle-upload to 8 MB/s, restarted the backup configs, and according to speed tests it looks fine; fingers crossed for the conf calls. :smiley:
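For reference, the same two options expressed as command-line arguments (a sketch only: the destination URL and source path are placeholders; in the web UI these live under the job's advanced options):

```shell
Duplicati.CommandLine.exe backup \
  "googledrive://Backups/NAS?authid=PLACEHOLDER" /volume1/data \
  --asynchronous-concurrent-upload-limit=1 \
  --throttle-upload=8MB
```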

Anyway, for the initial full backups, which I estimate will run for 7-9 days starting now, I have to respect the daily 750 GB Google Drive upload limit, so I could not move much faster anyway. Once the initial backups are done, I will switch to nightly scheduled runs and YOLO with no throttling.

It’s been a difficult spot, and possibly adding concurrent uploads made it worse. An investigation would be wonderful if you’re up to it; you seem very capable. Have you considered a QoS approach on the network side? Basically, have Duplicati’s uploads yield to the latency-sensitive video. There are forum posts on this, e.g.

There also seems to be a OneDrive-specific issue. After I took a guess, its code expert commented in

Throttle Not Working

I’m not the expert of course, but it seemed to me similar to the general burstiness problem, where Duplicati (not having the fine control over packet sending that a router has) throttles by sending timed bursts.

I don’t know whether parallel uploads (I’m no expert there either) just let bursts coincide or added a new bug.
There is no issue filed for development consideration, but a good-quality issue (steps, data) might help.

I think the throttling algorithm is below. I fixed one bug in it, but I don’t know the full surrounding context.

https://github.com/duplicati/duplicati/commits/master/Duplicati/Library/Utility/ThrottledStream.cs
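To illustrate the burstiness point, here is a minimal Python sketch of sleep-based throttling (my own illustration, not Duplicati’s actual C# ThrottledStream): each chunk is written at full line speed, then the writer sleeps long enough to bring the average rate down to the limit, so traffic leaves in bursts rather than as a smooth stream.

```python
import time

class ThrottledWriter:
    """Sleep-based throttling sketch: bursts at line speed, then pauses."""

    def __init__(self, write_fn, bytes_per_second):
        self.write_fn = write_fn
        self.rate = bytes_per_second
        self.start = time.monotonic()
        self.sent = 0

    def write(self, chunk):
        self.write_fn(chunk)                 # the burst: sent at full speed
        self.sent += len(chunk)
        expected = self.sent / self.rate     # how long this much data *should* take
        elapsed = time.monotonic() - self.start
        if expected > elapsed:               # ahead of schedule: sleep off the excess
            time.sleep(expected - elapsed)
```

Averaged over the whole transfer the limit holds, but within each chunk the line is saturated, which is exactly what competes with latency-sensitive traffic like video calls.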

From the results, this seems to be a workaround, but QoS might be a better one, and it is more self-adjusting.
That’s good for backups that run at off-hours; for your working-hour runs, fixed speeds might be fine.

Welcome to the forum, I’m glad your situation is at least a bit better, and I hope you can share your skills.
Duplicati would greatly benefit from skilled contributors in all areas, including forum, code, test, and so on.

Well, I can offer some testing in return for the pro bono work you have done. I really appreciate the availability of this (admittedly sometimes bumpy) software; I am preparing for the apocalypse, where paid software won’t have its debuggability edge. :smiley:

So, to the point. I have started my last chunk of backup (5 TB), which is about 1/3 through, and I would rather wait it out with the setup I have. It takes approx. 2 hours just to collect the file list, so quickly adjusting parameters and stopping/resuming the backup process wouldn’t be quick at all. Limiting the concurrent uploads to 1 did help with the video conference issue.

To conclude, I see there is room for growth, as the measured upload speed never reaches its dedicated 8 MB/s (it averages about 5 MB/s), but I will leave it as is for now. I suspect the limiting factor is actually the CPU of the NAS, so I couldn’t go much higher anyway. BTW, is there a setting to allow multiple threads?

(For future performance diggers: I run this on a Synology DS1520+, where I gave the Docker container full CPU and 4 GB of RAM; I have 700 GB of r/w NVMe SSD cache (a 24%-overprovisioned pair of EVO Pro 970 1 TB drives) and SHR-1 over 4x8 TB WD Red Pro.)

And to finish on a positive note: I am the set-and-forget kind of guy, but I think the software needs some manpower to automate reliability testing (at least before the bigger releases, once a year, we should battle-test it against the cloud), where I could help in the long term. I am even considering bringing in some software engineering skills to tackle the ambitious “why can’t we change blocksize” problem, as it might bite me some years down the road.

My wife produces humongous amounts of larger files as a photographer, and I sometimes produce a lot of small junk in the form of small files. So I feel that, even with the 10 MB blocksize I set now, I will definitely hit the ceiling with the internal DB, and I am willing to invest there - once I hit it. The kids don’t leave me much volunteer time for unpaid open source… I admire how you guys are able to dedicate so much effort just for the fame on these forums.
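As a rough back-of-the-envelope for why blocksize matters to the internal DB (plain arithmetic, not Duplicati internals): the block table holds on the order of one row per block, so the row count scales inversely with blocksize.

```python
# Rough arithmetic only; deduplication and compression change the real numbers.
TB = 1000 ** 4
MB = 1000 ** 2

source_bytes = 7 * TB  # total data across the 5 backups
for blocksize_mb in (1, 10, 100):
    blocks = source_bytes // (blocksize_mb * MB)
    print(f"{blocksize_mb:>3} MB blocksize -> ~{blocks:,} block rows")
```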

Once my current large backup finishes, I will be here for testing and producing data points.
Once I hit a ceiling I will be here to do improvements as well.
