Large Backup keeps Failing

Hi Guys.

I am having issues trying to upload one of my backup jobs to Amazon Drive. It failed during the first upload due to an internet modem reboot or ISP issue. I tried to start it again, but it wouldn't work. I ran a database repair and verified the files, but neither helped, so I deleted and recreated the database. It's still failing, even though it spent 4 days scanning the files and uploading the rest. The job contains a lot of ISOs, so de-duping really helps since some images are mostly or exactly the same. They are mostly TechNet stuff for my home lab.
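(For a rough idea of why the dedup helps so much with near-identical ISOs, here's a toy sketch of fixed-size block deduplication. It's not Duplicati's actual code, and the file names and 100 KB block size are just illustrative, but the idea is the same: identical blocks hash to the same value and only get stored once.)

```python
# Toy sketch of fixed-size block deduplication (NOT Duplicati's code).
# Identical blocks across near-identical ISOs hash to the same value,
# so they only need to be stored/uploaded once.
import hashlib

BLOCK_SIZE = 100 * 1024  # 100 KB blocks, purely for illustration

def unique_blocks(paths):
    seen = set()
    total = 0
    for path in paths:
        with open(path, "rb") as f:
            while True:
                block = f.read(BLOCK_SIZE)
                if not block:
                    break
                total += 1
                seen.add(hashlib.sha256(block).hexdigest())
    return len(seen), total

# Hypothetical file names for illustration:
# uniq, total = unique_blocks(["win2016_lab1.iso", "win2016_lab2.iso"])
# print(f"{uniq} unique blocks out of {total} blocks read")
```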

I also get "connection lost" a lot while in Duplicati. It doesn't really bother me, but I'm not sure if it's affecting the backup.

Below is the last log entry for the backup job…

Any ideas? Is my dataset too large? I can split it in two if that would help.

System.Net.WebException: Error getting response stream (ReadDone1): ReceiveFailure ---> System.IO.IOException: Unable to read data from the transport connection: Connection reset by peer. ---> System.Net.Sockets.SocketException: Connection reset by peer
  at System.Net.Sockets.Socket.EndReceive (System.IAsyncResult result) [0x00033] in <5071a6e4a4564e19a2eda0f53e42f9bd>:0 
  at System.Net.Sockets.NetworkStream.EndRead (System.IAsyncResult asyncResult) [0x0005f] in <5071a6e4a4564e19a2eda0f53e42f9bd>:0 
   --- End of inner exception stack trace ---
  at Mono.Security.Protocol.Tls.SslStreamBase.EndRead (System.IAsyncResult asyncResult) [0x00057] in <1d0bb82c94e7435eb09324cf5ef20e36>:0 
  at Mono.Net.Security.Private.LegacySslStream.EndRead (System.IAsyncResult asyncResult) [0x00006] in <5071a6e4a4564e19a2eda0f53e42f9bd>:0 
  at System.Net.WebConnection.ReadDone (System.IAsyncResult result) [0x0002a] in <5071a6e4a4564e19a2eda0f53e42f9bd>:0 
   --- End of inner exception stack trace ---
  at Duplicati.Library.Main.Operation.BackupHandler.HandleFilesystemEntry (Duplicati.Library.Snapshots.ISnapshotService snapshot, Duplicati.Library.Main.BackendManager backend, System.String path, System.IO.FileAttributes attributes) [0x0000e] in <118ad25945a24a3991f7b65e7a45ea1e>:0 
  at Duplicati.Library.Main.Operation.BackupHandler.RunMainOperation (Duplicati.Library.Snapshots.ISnapshotService snapshot, Duplicati.Library.Main.BackendManager backend) [0x0018a] in <118ad25945a24a3991f7b65e7a45ea1e>:0 
  at Duplicati.Library.Main.Operation.BackupHandler.Run (System.String[] sources, Duplicati.Library.Utility.IFilter filter) [0x0063c] in <118ad25945a24a3991f7b65e7a45ea1e>:0 

Thanks
Will

By the way, I have uploads going in 200MB chunks… Not sure if I should make it 500MB or 1GB.

This sounds like a general internet connection issue to me. I doubt your dataset is too large as that usually results in a timeout error.

I’d recommend against larger dblock (upload chunk) size as that just means you have more to re-transfer if something happens to your connection during a transfer.

In fact, you might want to try a SMALLER dblock (archive) size, at least on a test backup - just to see if it works better for you. And in case you're worried, the --block-size parameter is the size of your deduplication chunks (and CAN'T be changed for an existing backup) while the --dblock-size mostly just sets how many blocks are grouped together into a single archive file before being uploaded (and can be changed without issue).

The biggest drawback of smaller dblock sizes is that if you have a LOT of them then your destination (in this case Amazon Drive) can slow down when trying to list them all, in which case you’d have to extend the timeout and/or shift to a larger dblock size.
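To picture the relationship, here's a toy sketch of the grouping (an illustration only, not Duplicati's implementation). The 936 GB source size and 100 KB block size are just example numbers:

```python
# Illustration of how --block-size and --dblock-size interact
# (conceptual sketch only, not Duplicati's implementation).
import math

def plan_volumes(source_bytes, block_size, dblock_size):
    """Return (dedup blocks, dblock volumes uploaded),
    assuming no deduplication savings for simplicity."""
    blocks = math.ceil(source_bytes / block_size)
    blocks_per_volume = dblock_size // block_size
    volumes = math.ceil(blocks / blocks_per_volume)
    return blocks, volumes

KB, MB, GB = 1024, 1024 ** 2, 1024 ** 3

# ~936 GB of source data split into 100 KB dedup blocks (example values):
for dblock in (50 * MB, 200 * MB):
    blocks, volumes = plan_volumes(936 * GB, 100 * KB, dblock)
    print(f"dblock-size {dblock // MB} MB -> ~{volumes} dblock files at the destination")

# Smaller dblock files mean less to re-upload after a dropped connection,
# but many more files for the destination (e.g. Amazon Drive) to list.
```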

OK, thanks for the info. It's running again now, I guess it's checking everything. It has most of it uploaded, so I will let it run again as is. Or should I change the dblock size down to 50MB, leaving the current 200MB files up there?

If it’s working, I’d say leave it alone. :slight_smile:


Will do. I'll see what happens and report back. Thanks :slight_smile:

I don’t know if this is the exact same thing, but I am also having huge difficulties getting a 600GB backup to Amazon to complete. It fails at least once a day. I suppose it’s a problem with Amazon. However, my gripe with Duplicati is that whatever Amazon does or does wrong, it should be able to handle it in a meaningful way. Currently, what happens for me is that it just stops uploading. No error message, nothing. The progress bar just stays where it was and nothing happens.


It finally uploaded everything… But I get the following message, which I will have to research.

“Failed to connect: SQLite error no such table: LogData”

Nice dedup… the backup is:
Source: 936.18 GB
Backup: 691.25 GB

:slight_smile:

I haven’t seen this error before, and the only place I’m finding it mentioned is this year-old GitHub post…

The GitHub post also mentions ACD but seems to have been running on an ARM BananaPi with Ubuntu.
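If you want to check whether the LogData table is really missing from the local database (as opposed to Duplicati opening the wrong file), something like this should list what's there. The path is just a placeholder, point it at your actual .sqlite file:

```python
# Quick check of which tables exist in a Duplicati local database.
# The path below is a placeholder - substitute your actual .sqlite file.
import sqlite3

db_path = "/path/to/your/backup-job.sqlite"  # hypothetical location

con = sqlite3.connect(db_path)
tables = [row[0] for row in con.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
con.close()

print("\n".join(tables))
print("LogData present:", "LogData" in tables)
```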


Thanks Jon… It seems to back up… I need to test restores… I moved the database location from the default path, so I'm not sure if that is causing issues. The default location was too small… So I'm wondering if there is something else I have to change…

If you moved the database from within Duplicati’s “Database …” menu then you should be fine.

But it couldn’t hurt to double check the path in there with where the file actually is, just to be sure.
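Something along these lines is all I mean by double-checking. It just confirms the file at the configured path exists and is still being updated; the path is a placeholder for whatever your job's Database screen shows:

```python
# Sanity-check that the configured database path points at a file that
# exists and is actually being written to.
import datetime
import os

db_path = "/path/to/your/backup-job.sqlite"  # placeholder - use the path from the job's Database screen

if os.path.isfile(db_path):
    mtime = datetime.datetime.fromtimestamp(os.path.getmtime(db_path))
    size_mb = os.path.getsize(db_path) / (1024 * 1024)
    print(f"Found: {size_mb:.1f} MB, last modified {mtime}")
else:
    print("No file at that path - Duplicati may be using a different location.")
```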

Thanks Jon… The paths seem to be OK, as the files are getting written to… but when the backup has error messages, I can't read them when clicking the Show button. I'll go ahead and open a new thread since this one is solved.