Migrate from cloud to cloud?

LOL, I’m not familiar with any transfer method on Linux… I’ve just started digging into rclone but haven’t gotten very far yet (and I’m not even considering the ACD caveat yet).

Edit: Arrrg! You can’t even install it via apt-get (follow the install instructions instead).
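
For anyone else hitting this: rclone has its own install script (see rclone.org for the current instructions), something along these lines:

# install rclone via the official install script, then sanity-check it
curl https://rclone.org/install.sh | sudo bash
rclone version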

To be honest I’m not either - nor do I know much about ACD and B2. But if you can mount the source and destination locations, you could probably just use cp and it would work. There’s nothing magical about the destination files - they’re just zip (or 7-zip) files that may or may not be encrypted. I’m sure files like that get shuffled around in Linux all the time. :slight_smile:
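
If you can mount them with rclone, a rough sketch would be something like this (the remote names and paths are placeholders for whatever you configure; rclone mount needs FUSE):

# mount the source (read-only) and the destination under your home directory
mkdir -p ~/mnt/acd ~/mnt/b2
rclone mount ACD:Backup ~/mnt/acd --read-only &
rclone mount B2:MEET ~/mnt/b2 &

# the backup volumes are ordinary files, so a plain copy is enough
cp -a ~/mnt/acd/MEET/. ~/mnt/b2/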

In my work I use a tool called Beyond Compare which I really like. It’s not free, but there is a 30 day trial which should be plenty of time. But you could use FTP, FileZilla, mc (Midnight Commander), etc. if any of them feel more comfortable to you. And if ACD and/or B2 support FTP, SFTP, WebDAV, etc. you wouldn’t even need to know how to mount them in Linux, as some of the tools I mentioned support those transfer protocols as well.

I think the ACD problem was fixed with rclone; someone had an approved key and they are using that now.

I guess you are referring to this workaround: Proxy for Amazon Cloud Drive - Howto Guides - rclone forum

I’ve tried but failed for today. I have managed to configure rclone to access both B2 and ACD (!!) but when I do

rclone copy ACD:Backup/MEET B2:MEET

I get loads of this:

2017/10/01 21:46:11 ERROR : Kz(m#qlak/#_7`0#3I!,v(WuG6!%S'a~%YJBj3qFYTbG4H0xxPc`+Tdm5Ag~bhP^Dd%LL'N-6;Cn7k.s3250535805.part0002: Failed to copy: File names must not contain DELETE (400 bad_json)
2017/10/01 21:46:12 ERROR : zMZ(M)#j5h/&CY@`jKArL^We'tWU$G_sX]Ci}/v8=R$gHr!])2%CqqP$y8UG~bMf'q%;wwbi$@ejDu)OoI: Failed to copy: File names must not contain DELETE (400 bad_request)

Makes no sense to me :frowning:


UPDATE: I raised this on the rclone forum and it looks like “DELETE” does not refer to a delete operation but to the ASCII character DEL (127). And indeed, my backup files do include some strange invisible characters.

Is it possible that ACD can deal with these but B2 can’t? And doesn’t that mean that Duplicati should be prevented from using DEL (127) in filenames?
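
For anyone who wants to check their own archive for this, listing the remote and grepping for the DEL byte should do it (remote name taken from my copy command above):

# count file names that contain ASCII DEL (0x7F)
rclone lsf ACD:Backup/MEET | grep -c $'\x7f'

# show the affected names, with the invisible byte displayed as ^?
rclone lsf ACD:Backup/MEET | grep $'\x7f' | cat -v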

If you go the VPS way then…

Use software called Syncovery; I use the Windows version and the trial will last you for ages… they have a Linux version as well.

Supports B2 and ACD and is excellent sync software

That’s very odd - I’ve never seen a Duplicati backup file with a name like that. Then again I’ve not used ACD so maybe it’s something unique to it as a destination? (Just guessing.)

If you make a new mini-test backup job pointing to ACD, does it also give you file names including non-alphanumeric characters and ending in .part####?

I can’t write anything to that account because I’m on the 5GB plan but have 10TB of data on that account :neutral_face:

I cannot recommend Syncovery. I spent months testing it and reported dozens of bugs before I gave up on it.

I can only vouch for my own experience. I’ve used it for years at various companies to transfer quite large datasets between file servers, clouds, and FTP servers, and never had an issue. For my own data I had about 1 TB in Livedrive and transferred it to Amazon Cloud Drive; again no issue and job done.

There are other products that also worked, but I preferred this one…

I have no idea where those characters are from. They are not created by Duplicati; it only uses base64 characters (A-z, 0-9, +=-_) for the names. The .part00001 suggests that it is some multipart upload that broke (best wild guess I have!).

Are you using an intermediate tool that exposes your ACD in a different way (odrive or similar)?

So are you saying that duplicati is not producing such files? Or are you saying that duplicati leaves such files behind when something breaks?

Because I just checked and found that my backup archive on ACD has 11512 (!) files with .part000 in their filename. ACD doesn’t seem to be able to tell me how much space they use combined, but since many of these files are 1 or 2 GB in size, it’s probably huge.

Can I safely delete them?
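
rclone’s filter flags should be able to answer the size question (same remote as in my copy command above), something like:

# count and total size of everything with .part in the name
rclone size ACD:Backup/MEET --include "*.part*"

# or list them individually first
rclone ls ACD:Backup/MEET --include "*.part*"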

I did use odrive for some time and had my ACD connected to it, but I never really used it, i.e. I didn’t sync it with anything or use it to access the archive.

What seems more likely as a source is that I transferred parts of the archive from one Amazon account to another using Multcloud. I remember that it did not work very well and it took multiple attempts until the transfer completed successfully (I’m having the same trouble again now when trying to transfer from ACD to B2; I cannot recommend them at all). So if ken confirms that Duplicati doesn’t create such files, they must be from the failed Multcloud transfers, which means I can delete them, which means I might have a much, much smaller backup archive than I thought, which means I may not have to worry so much about costs.

I’ve never seen anything outside base64 characters in my backup files and he’s confirmed it, so I think it’s safe to say that whatever these are, they’re not related to Duplicati.

Your multcloud attempts theory makes sense to me - my suggestion would be to gather them up into a subfolder, run a backup, and see if Duplicati complains (my guess is it won’t).

Alternatively, you could look at only moving the for-sure Duplicati-based files to your new destination and see how things work there with all the .part#### files missing.
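
A rough sketch of both approaches with rclone filters (folder names are just examples; keep --dry-run until the listed files look right):

# option 1: move the .part#### files out of the way into a separate folder
rclone move ACD:Backup/MEET ACD:Backup/MEET-parts --include "*.part*" --dry-run

# option 2: only copy files that look like Duplicati volumes to the new destination
rclone copy ACD:Backup/MEET B2:MEET --include "duplicati-*" --dry-run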

Wait: we are talking about two different things here. ken confirmed that duplicati doesn’t produce filenames with the DEL character in it. But what I’m asking here is whether it produces files with a .part000* ending.

It will not complain. It parses filenames with a simple regular expression that only matches <prefix>-[A-z0-9]+.\w+(.\w)? where <prefix> is usually duplicati (the regex is not the actual one used, but similar).
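
So a rough way to see which file names would not even be considered is something like this (the pattern is my approximation of that description, not the actual regex, and the remote is the one from earlier in the thread):

# list files whose names don't roughly match <prefix>-<chars>.<ext>(.<ext>...)
rclone lsf ACD:Backup/MEET | grep -Ev '^duplicati-[A-Za-z0-9_+=-]+(\.[A-Za-z0-9]+){1,3}$'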

This topic has been going back and forth with various more or less related issues. Here I provide something like a conclusion of what I ended up doing.

  1. I gave up on Multcloud. It is just too buggy/unreliable and when it works, transfer is rather slow.
  2. I used the cheapest VPS from Scaleway to run rclone (their forum is at forum.rclone.org)
  3. Setting up rclone to use Amazon Cloud Drive is a bit of a pain, but thanks to these instructions, it is entirely possible.
  4. I ended up not migrating to B2 but to the German Amazon instead, since they still offer unlimited storage.
  5. Transferring from ACD (US) to ACD (DE) with this setup works extremely smoothly so far: 1 TB transferred in less than 12 hours (a rough sketch of the copy command is below).
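
Roughly what the copy command on the VPS looks like (remote names and the extra flags here are illustrative, not my exact setup):

# run inside screen/tmux so it survives logging out of the VPS
rclone copy ACDUS:Backup ACDDE:Backup -v --stats 60s --transfers 8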

Thanks to everyone helping me with this! :smiley: :+1:

There seems to be a little caveat with that: after 1.3 TB, transfer rates have fallen to 63 bytes/s. Amazon gave the following notice when I restarted the transfer:

amazon drive root 'Backup': Modify window not supported

I suppose that’s something I need to take up on the rclone forum when I have time. For now, I’ll just assume that one of the parties involved in this transfer is imposing some kind of rate limitation that will go away again in a day or so. We’ll see.

Edit: the above message is apparently meaningless and hence unrelated to the apparent rate limitation.
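
If it really is throttling, the only knobs I know of on the rclone side are something like these (just a guess, not a confirmed fix; remote names are illustrative):

# watch the actual transfer rate and back off the request rate a bit
rclone copy ACDUS:Backup ACDDE:Backup -v --stats 30s --tpslimit 4 --transfers 4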