If your status bar says “Verifying backend data” for a long time, please check either the “Current action” in the job menu or the “lastPgEvent” in main menu “About” -> “System info” (button) -> “Server state properties” (page section).
If you see “Backup_ProcessingFiles” in either of those areas, then your backup is likely running just fine and it’s only the status bar text that is “stuck”.
I don’t know if this is a side effect of some recent UI updates, (more likely) some multi-threading changes that have recently arrived (I think in 2.0.3.6 canary), or something else entirely, but at least in my experience, the backup IS actually progressing despite what is (not) showing in the progress bar.
IDK. I see the same problem, and some backup sets also stopped with 36,000 warnings. I hope 2.0.3.6 hasn’t wrecked my backups. I haven’t made backups since the latest canary. Well, my fault, it’s bleeding edge.
I suspect your backups are just fine - there is a known performance issue that started with 2.0.3.6 (which I think has a fix in testing) but I haven’t heard of any corruption issues.
Though I haven’t heard of the 36,000 warnings problem yet. I’m a bit behind on topics, but if you haven’t posted about that yet you might want to.
Since the issue seems to be cosmetic (my backups otherwise are running fine) I’ve stuck with the latest canary versions and can confirm that 2.0.3.7 and 2.0.3.8 both still exhibit the issue (at least for me).
Oh, and I found it’s a bit easier to see if I just look at the “Current action” of the job details:
Don’t think that issue has been resolved at all. Just started a new backup on pCloud, I’m at the end of it, and this is what I’ve been seeing for about an hour and 30 minutes:
There’s been no activity whatsoever since the List command. I don’t think the verify task should take longer than the backup itself.
This is running on 2.0.3.5 (Linux), but I’ve observed similar behavior on 2.0.3.9 (Windows).
Agreed - but as far as I know it wasn’t supposed to have been fixed in any recent releases either.
My guess has been that it’s a side effect of the big concurrency rewrite, but if you’re seeing it in 2.0.3.5 then I could be wrong about that.
Luckily, I think it’s still a cosmetic issue so shouldn’t be affecting actual backups - unfortunately, that also means it’s likely lower on the priority list than some actual functional issues that have also popped up recently.
You’re saying that taking hours to verify is a cosmetic issue? Something else is going on. I suspect it’s related to that run-on query that I’ve seen reported here somewhere, or maybe this:
No, I’m saying I think it’s actually past the “Verifying backend data” step but isn’t updating the GUI appropriately - however I haven’t taken the time to confirm that’s what’s going on.
I see your example is using the “ExplicitOnly” filter of the Live log but I’m not sure exactly what parts of the code use explicit logging. If you look at the job level Remote log do you see any PUT items even while “Verifying backend data” is showing in the progress bar?
It hasn’t done anything for an hour and 45 minutes now.
It should be fairly easy to reproduce; the behavior has been consistent on all platforms I have tried. pCloud provides 20 GB for free; a 1 or 2 GB backup should be enough to trigger the behavior, maybe even less. I would try it, but all of my machines are now tied up in verifying stages.
So, to recap, two different machines, two different backup sets, two different OSs (Win and Linux), pCloud / WebDAV, same Duplicati version, same exact behavior.
I’m waiting on the backup on 2.0.3.9 to finish. But I expect to see the exact same behavior.
My apologies on the lateness of this - I typed it up yesterday but apparently forgot to hit the giant “Reply” button.
It sounds like there’s definitely something going on there, but the PUT commands showing in the Remote log don’t happen until after the Verifying step is finished so it’s likely something other than an overly long verify process.
I know some other users have reported “stalling” in situations such as:
temporary loss of connection to destination (provider or internet goes offline, even for a short bit)
inclusion of “synced” folders (such as a Dropbox or OneDrive folder) as part of Duplicati source
Both backups #1 and #2 finished after hours of verification, so that’s something.
I’m not sure I agree with your assessment of the PUT command, because as shown in both logs, the List command happened after it. There’s no doubt that Duplicati is doing something during that long verification process, but since there is nothing in the log (or visually), it raises the question of whether it is stuck in a loop, etc.
Considering how I saw the exact same behavior on different platforms, etc., I don’t think it is a connectivity issue or related to synced folders. I’m not backing up anything synced on Linux, for example.
FYI, Backup #3 running on the latest Canary had some errors, so it had to be restarted, still chugging along, so far.
You could be right - with all the multi-threading changes made recently I could easily be working off of an old linear understanding of the flow.
That might also explain what people are seeing - the progress bar really only shows one thing at a time, but if we’re now multi-threading steps then it’s likely that the progress bar isn’t accurately representing EVERYTHING being worked on, plus it’s possible the threading is causing the verification to actually take longer than it used to.
Of course if the verification step is actually happening alongside the start of the backup then that might be a problem as the verify should be done first to make sure the backend is viable. I could see verify and file scan being concurrent, but not actual uploads.
I’m not really sure where to go from here - about all I can offer is to take this set of posts to its own topic where we can make it clear 2.0.3.5 actually IS stuck at verifying and maybe get some other input…
I’m not sure whether this issue has been resolved in newer releases, but I would like to share my finding on this.
On a certain backup destination I’ve noticed long delays in the List() function, sometimes taking as much as 4 hours!
The report (debug output) says that 39 KB were transferred; however, a network monitoring tool measured 250 KB sent and 9.5 MB received.
While I’m not well versed in C#, I took a look at the code for the SSH List(), and I found a loop that does something like this:
while (GetNextFile())
    Process();
And it seems that the call to get the next file in the remote directory is causing a full listing every time it is called, thus transferring a large amount of data when the destination holds many files (e.g. thousands or more).
I wish I could fix it myself, but I’m not familiar with C# or GitHub, so what I can suggest is to load the directory listing once into an array, and loop through it locally.
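To make the suggestion concrete, here is a minimal sketch of the problem and the proposed fix. It is written in Java rather than Duplicati’s C# purely so it can run standalone, and `RemoteDir` / `fullListing()` are hypothetical stand-ins for the SFTP backend, not actual Duplicati code - each call to `fullListing()` simulates one complete directory listing over the wire.

```java
import java.util.Arrays;
import java.util.List;

class RemoteDir {
    // Counts simulated round trips to the remote server.
    int listingsPerformed = 0;
    private final List<String> files =
            Arrays.asList("a.dblock", "b.dblock", "c.dindex");

    List<String> fullListing() {
        listingsPerformed++;   // every call re-transfers the whole listing
        return files;
    }
}

public class ListOnce {
    public static void main(String[] args) {
        RemoteDir dir = new RemoteDir();

        // Problematic pattern: re-list the directory for every file
        // processed (the "GetNextFile()" equivalent), so n files cost
        // O(n) full listings.
        for (int i = 0; i < dir.fullListing().size(); i++) {
            String f = dir.fullListing().get(i);
        }
        System.out.println(dir.listingsPerformed); // 7 listings for 3 files

        // Suggested fix: fetch the listing once, then loop over the
        // local copy - exactly one round trip regardless of file count.
        dir.listingsPerformed = 0;
        List<String> cached = dir.fullListing();
        for (String f : cached) { /* process(f) */ }
        System.out.println(dir.listingsPerformed); // 1 listing
    }
}
```

With thousands of files on the destination, the difference between one listing per file and one listing total would easily account for the 9.5 MB received versus the 39 KB the report claims.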