Error trying to back up to Amazon Cloud Drive

Hi, thanks for all the work on this. It looks like just what I’m looking for - a way to back up my PC to my cloud storage. I’m using Amazon Cloud Drive and I’ve managed to get an Auth code and successfully tested it. However, when I tried a test backup I got a red message at the bottom of the screen:
Duplicati Error
Looking at the log, it seems to be missing three files:

Missing file: duplicati-20170903T185949Z.dlist.zip.aes
Missing file: duplicati-b24fee42474064c7db414082d6c9ee754.dblock.zip.aes
Missing file: duplicati-i80fce45cf87e4ab4be258fb86b7ae5ec.dindex.zip.aes

Culminating in the final message in the log:

Fatal error
Duplicati.Library.Interface.UserInformationException: Found 3 files that are missing from the remote storage, please run repair
   at Duplicati.Library.Main.Operation.FilelistProcessor.VerifyRemoteList(BackendManager backend, Options options, LocalDatabase database, IBackendWriter log, String protectedfile)
   at Duplicati.Library.Main.Operation.BackupHandler.PreBackupVerify(BackendManager backend, String protectedfile)
   at Duplicati.Library.Main.Operation.BackupHandler.Run(String[] sources, IFilter filter)

Can anyone please give me some pointers as to what I need to do to fix this? How do I “run repair”? Thanks,

Please try running repair and report back with what happens…

To run repair: open the backup set, “Advanced” -> “Database” -> “Repair”
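
If you prefer the command line, the same repair can be run there. This is just a sketch - the amzcd:// URL, auth id, and passphrase below are placeholders you would replace with your own backup’s settings:

    # placeholder storage URL and passphrase - substitute your own
    Duplicati.CommandLine.exe repair "amzcd://Backups/MyPC?authid=<your-authid>" --passphrase=<your-passphrase>

(On Linux or Mac, prefix the command with mono.)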

This is interesting, as I am having the same problem.

I am also new to Duplicati (another CrashPlan refugee) and I’m experiencing problems backing up to Amazon Cloud Drive as well. As a test I’ve set up two backup sets on the same data. One backup goes to a local network drive, the other goes to my Amazon Drive account. The local backup is working fine, but when I back up to Amazon Drive sometimes it works and sometimes I get the same “Files are missing” error that Prospector is getting. Here are the error messages:

Log output:

To try to isolate the problem, I created a third backup set using the exact same parameters and source file sets, only this time backing up to Google Drive. The GDrive backup runs without error every time (as does the local backup). File adds, deletes, and repeated runs all work fine.

Is there a problem with the way Duplicati interacts with Amazon Cloud Drive that might be causing these errors?

Yes. Amazon Cloud Drive is “eventually consistent”, so what can happen is that you upload a file, but when you then list the files present, the listing is not yet updated.

You can use the option --amzcd-consistency-delay=60s to make Duplicati wait 60 seconds before trying to list files. By default it waits 15 seconds.
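
For example, on the command line it would look something like this (a sketch - the storage URL, source path, and passphrase are placeholders; in the GUI the same option can be added under the job’s advanced options):

    # placeholder URL, source, and passphrase - substitute your own
    Duplicati.CommandLine.exe backup "amzcd://Backups/MyPC?authid=<your-authid>" "C:\Users\me\Documents" --passphrase=<your-passphrase> --amzcd-consistency-delay=60s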

Thanks kenkendk.

I tried --amzcd-consistency-delay=60s and am still getting errors. I’ve been increasing the delay value to see if it works better. I’m currently up to 180 seconds and still getting errors. I’ll keep testing, but at some point I’m going to have to try something else. I’ll let you know how it goes.

BTW: A good way of showing your appreciation for a post is to like it: just press the ♥ button under the post.

If you asked the original question, you can also mark the reply that solved your problem as the accepted answer, using the tick-box button you see under each reply.

All of this also helps the forum software distinguish interesting from less interesting posts when compiling summary emails.

So I upped the delay to 240 seconds and was still getting intermittent errors. I’ve since moved on to Backblaze B2, which turns out to be cheaper than Amazon anyway. If someone wants to follow up on this I’d be happy to serve as a test case.

Thanks for trying.

I have seen reports from other AmzCD users that say that the delay is not always enforced:
https://groups.google.com/forum/#!topic/duplicati/jV3K2h32NQ0

Not sure why that is; it seems to delay correctly in my test cases.

I believe I’m facing the same issue on Amazon Cloud Drive. I’m using Duplicati - 2.0.2.12_canary_2017-10-20 on Mint 18.2 with the latest mono. This is driving me nuts, as repairing does not fix it. Even recreating the database appears useless. Even a small file set such as /etc is enough to trigger the problem after some time.
I did not touch any option or delay yet.

If you use 2.0.2.12, it should have some fixes that prevent the delay from being reset.

Try setting the advanced option --amzcd-consistency-delay=60s and see if the problem goes away.

Thanks kenkendk. I added the option and will let it run for some time. It already seems better, but I think the problem may be linked to the fact that the set is very small (only /etc) and is backed up very quickly.

It still happens despite --amzcd-consistency-delay=60s, though less often than before, and the backup set “/etc” is only 30MB. Larger sets (on other computers) seem unaffected, or at least less affected.

I definitely still have the problem on small sets (/etc), which are 8 to 30MB. Larger sets seem unaffected.

Hi guys,

Pretty much in the same boat here - trying to test Duplicati with ACD as the target. Whilst Backblaze B2 seems to be an interesting alternative, I prefer ACD, as I am getting 3x the upload speed with it, maxing out my bandwidth (not trying to say B2 is bad; it’s most likely due to my location).

I’m using Duplicati - 2.0.2.12_canary_2017-10-20 on Windows 10 64-bit; the test backup set is ~50MB (though I tried bigger / smaller ones as well).

This is what happens when backing up to ACD:

  1. 1st run of backup job throws an error:
    Found 1 files that are missing from the remote storage, please run repair

  2. Running repair and getting this:
    Listing remote folder …
    Destination and database are synchronized, not making any changes
    Return code: 0
    However, on the home page backup status still says:
    Last successful run: Never
    Though I can restore the files from the restore option anyway.

  3. After running the same backup job again with no files changed / added - last successful run is properly displayed

  4. 3rd run, some files added - no errors, 1st and 3rd jobs properly displayed.

  5. 4th run, 4 files deleted and some added on the source - error stating this:
    Found 2 files that are missing from the remote storage, please run repair
    "Broken" version not shown on the home page, but again it is available for restores though

  6. Running repair again with the same effect - no changes to job status (4th run not visible)

  7. Running another “empty” job (no files changed/added/deleted) fixes it again.

So in general all backups actually work, but false-positive errors are generated during the baseline run and whenever a file is deleted on the source.

Can any of this be fixed? For the deleted files - is there any flag to set? With versioning, I’d say it’s obvious that one wants to keep deleted files in the backup images.

Kind regards,
radek

@radek, what value are you using for the --amzcd-consistency-delay parameter as mentioned here?

@kenkendk, if I’m understanding correctly, this seems to be happening because the ACD file list is somehow being cached / delayed. Does a file verification happen at the end of a job? Is there a way to disable the end-of-job verification so only the beginning-of-job check happens (hopefully after the ACD cache is all caught up)?

Hi @JonMikelV - I tried 60s, 180s and 240s in different tests with exactly the same results

Thanks for the verification. I’m not an expert in ACD, but it sounds like an oddity in how their system works, so I’m not sure exactly how to address the problem (other than the suggestion I made to kenkendk in my previous post).

Just out of curiosity, does enabling --no-backend-verification help at all? (This isn’t a fix so much as a test…)
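
For reference, a sketch of what that test would look like on the command line (placeholders again for the URL, source, and passphrase - and note that this disables the remote file check entirely, so treat it as a diagnostic, not a fix):

    # diagnostic only: skips the remote file list verification
    Duplicati.CommandLine.exe backup "amzcd://Backups/MyPC?authid=<your-authid>" "C:\Users\me\Documents" --passphrase=<your-passphrase> --no-backend-verification=true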

Yes, it does - no errors with the baseline, no errors when removing / adding files within the backup source.

Thanks for confirming that setting made a difference!

Now to figure out how to keep it working even with verification still enabled… 🙂

The problem seems to have “disappeared” for me. But I now have many versions of /etc on ACD. I will restart with a “fresh” backup and see what happens.