So I finally managed to get my backup into B2 without errors, but within a couple of days my nightly backup got stuck at “waiting for upload to finish”. I watched the network traffic in KSysGuard and saw no evidence that anything was actually being uploaded, and the live log didn’t show any activity either. After about 12 hours I stopped the backup and tried to repair the database to make sure there weren’t any issues, and for some reason that ended up triggering a full recreate.
The problem now is that it’s been recreating the database for a few days: it’s constantly downloading dblocks but doesn’t seem to be making any real progress. Now that ACD is shutting down and I’m on B2, this is concerning, since I’m paying for all this download bandwidth. The whole backup is only 150 GB, so I don’t quite understand why it needs to download so much data to recreate the DB.
Has anyone run into this and been able to fix it? I’m not really sure what to do at this point, but I’ve been running into nonstop issues with duplicati + B2 and am at the point where I’m beginning to look at other solutions.
I just checked again, and oddly enough, after 3 days it appears to be hanging again in the recreating-db phase. The last entry in the live log was a GET 4 hours ago, and there’s been no network activity in the system monitor since.
Here is a sampling of the errors I was having on B2, though it’s hard to know what’s B2 and what’s the network. The backup log will show you “RetryAttempts” under “BackendStatistics”. Retries can cover up problems, up to their limit, which can be raised with --number-of-retries and --retry-delay to ride through troubled periods.
I haven’t tested it, but it might be possible to set a safety timer with --http-operation-timeout at something longer than your network would ever normally take to upload or download a remote volume (50 MB by default).
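Putting those two suggestions together, a backup invocation could look something like the sketch below. The option names are Duplicati’s advanced options; the destination URL, B2 key placeholders, source path, and the specific values are made up for illustration, not tested recommendations.

```shell
# Illustrative only: raise retry limits and add an HTTP safety timeout so a
# flaky link gets retried instead of hanging. URL, keys, and paths are
# placeholders.
duplicati-cli backup \
  "b2://my-bucket/backups?auth-username=KEY_ID&auth-password=APP_KEY" \
  /home/user/data \
  --number-of-retries=10 \
  --retry-delay=30s \
  --http-operation-timeout=15m
```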
B2 is… OK. I see periodic retries, but no serious indications from the live logs of any major problems. I’m on day 3 of a DB repair now and I haven’t seen any hangs, so that could have been a freak accident. The bigger problem is that the recreate has already downloaded well over 100 GB of a ~150 GB backup set, but I’m trying to let it finish for science, since I’m curious whether it can fix itself after all that.
I’m having a similar problem with Duplicati 2.0.4.23_beta_2019-07-14 on Server 2016 Standard and Server 2008 R2. It looks like it takes about 5 minutes between blocks. CPU is low; disk and network are moderately busy. The network activity is mostly SMB and RDP, with a bunch between Firefox and Duplicati.GUI.TrayIcon.
Disk I/O varies, with a lot of traffic on the remote storage: a WD MyCloud EX4100 box on 1 Gb Ethernet.
Unfortunately that version suffers from the bug @ts678 is talking about a couple of posts up. That version is basically identical to 2.0.4.5 plus a warning about Amazon Cloud Drive.
That particular bug was presumably fixed in later Canary builds. One thing you could try is switching to version 126.96.36.199 and trying the recreate again. Hopefully your recreate will be faster. (Note that 188.8.131.52 is a Canary version, so in some other ways it MAY be less reliable than a Beta, but in my experience this particular Canary build is pretty solid.)
OK, thanks. That explains why I was having the same problem with v2.0.4.5-2.0.4.5_beta_2018-11-28. I was rebuilding the database because I was seeing “Detected non-empty blocksets with no associated blocks!” errors.
I’ll help where I can. The change to 18.104.22.168 seemed promising at first, but now it is back to the “stuck at 90%” level, processing one block every five minutes. I’ll investigate more later and see what I come up with.
I’m not super familiar with how the database recreate process works, but from what I understand it SHOULD only need the dindex and dlist files. If Duplicati detects that some info is missing from those two file types (I’m not sure exactly what kind of info), it will grab some dblocks.
I do remember seeing a special case in the code where Duplicati decides it has to download ALL dblocks. I don’t remember the circumstances.
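As a rough way to guess whether a recreate might need dblocks at all, one could compare the dindex and dblock counts at the destination. This is only my sketch, assuming Duplicati’s usual volume naming (filenames containing `dblock` or `dindex`) and a locally mounted or mirrored copy of the destination folder; it is not a check Duplicati itself provides, and a matching count doesn’t guarantee the dindex contents are complete.

```shell
# Rough sanity check (assumption: standard Duplicati volume naming). Each
# duplicati-*.dblock.* volume should have a companion duplicati-*.dindex.*;
# a mismatch in counts hints a recreate may fall back to downloading dblocks.
count_volumes() {
  dir="$1"
  dblocks=$(ls "$dir" | grep -c 'dblock')
  dindexes=$(ls "$dir" | grep -c 'dindex')
  echo "dblocks=$dblocks dindexes=$dindexes"
  [ "$dblocks" -eq "$dindexes" ] && echo "counts match" || echo "MISMATCH"
}

# Usage: count_volumes /mnt/mycloud/duplicati-backup
```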
Mine is taking 2-3 minutes after each dblock download to process it before moving on to the next. (My dblocks are 50 MB.) So… at 20-30 per hour, mine will take 5-7 hours to rebuild. If you have over 4000 blocks and they are taking 14-18 minutes each… well, yikes.
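For a sense of scale, the same arithmetic can be run against the 150 GB / 50 MB figures from earlier in the thread; the 3-minutes-per-volume rate is just my assumption based on the timings quoted here, so treat the result as a back-of-envelope worst case.

```shell
# Back-of-envelope estimate for a worst-case recreate that downloads every
# dblock: a 150 GB backup in 50 MB volumes, at ~3 minutes per volume.
backup_gb=150
dblock_mb=50
minutes_per_volume=3
volumes=$(( backup_gb * 1024 / dblock_mb ))     # ~3000 volumes
hours=$(( volumes * minutes_per_volume / 60 ))  # total processing time
echo "volumes=$volumes est_hours=$hours"
```

That works out to roughly 3000 volumes and about 150 hours, which would be consistent with a recreate grinding on for days.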
Thanks for all the information, guys. At this point, after wrestling with Duplicati and corrupt DBs for close to a month, I’ve decided to abandon it and try out Duplicacy. Feel free to keep discussing in this thread.