Slow download speed Windows Duplicati

I’m asking about one file. Did it exist completely when backup began, and not change?
Reading the second part above worries me unless it means the backup began moving.

End of what? Do you mean end of the one file below?

One thing that is different this time is that upload is falling short of demand, so it’s less clear which side is stuck.

I think we heard earlier that the destination is Backblaze B2. I didn’t see signs it stopped, but the logs are lengthy and there might be some earlier indication of a problem. My initial look was just at the stop time.

I’m asking about one file. Did it exist completely when backup began, and not change?
Reading the second part above worries me unless it means the backup began moving.

It existed in its entirety, and at the beginning of the backup process, it was changing.

End of what? Do you mean end of the one file below?

Yes, at the end of the specified file.

The stoppage occurred three days ago (I restarted the operation today), and I have provided all the available logs. I have also highlighted the time intervals in the log when the stoppage occurred, but during that interval, there is nothing (empty); there is just a time jump from the 25th to the 28th.

So it’s more complex than existing and not changing. Can you wait until it stops changing?
Backing up something that’s changing may not restore well. It won’t be consistent.

VSS can get what’s known as crash consistent, if that helps. Without that, the result is a mix.
Some applications expect their files to have no mix of old pieces and new ones.
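For reference, Duplicati exposes this as the snapshot-policy advanced option. A minimal sketch, assuming a B2 destination; the bucket, folder, and source path below are placeholders, not values from this thread:

```shell
# Hypothetical example: ask Duplicati to take a VSS snapshot on Windows so
# open/changing files are read in a crash-consistent state. Requires running
# elevated. Accepted values for --snapshot-policy: off, auto, on, required.
Duplicati.CommandLine.exe backup "b2://bucket/folder" "C:\Data\" --snapshot-policy=required
```

In the GUI, the same option can be set per job under the Advanced options list on the Options screen.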

When was this? After backup completed, with some help (somehow) from your Edit and Save?

From some other run? Need the context. Some numbers don’t line up with the other posts, e.g.

would be at end of backup, but

doesn’t look like an end, where it would be downloading 3 files for verification. The retry counts are off too

and there is more than one in your live log postings, which look like the same times but different levels.
I looked through the Retry one and saw retries of multiple files after the uploading started up again.

Other than that, the uploads seemed to finish, except for the ones at the end. Did you just grab the various live logs at that time, or did it stop again? I can’t tell the situation and the timing from posts.

There is more that can be logged, but it might get pretty enormous. It’s easier to first try not backing up simultaneously with changing files, as that may produce a bad backup anyway, even if it completes.
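If deeper logging does become necessary, a sketch of the usual approach follows; the destination, source, and log path are placeholders:

```shell
# Hypothetical example: write a persistent log file at higher verbosity.
# --log-file-log-level accepts levels such as Warning, Information, Retry,
# and Profiling; Profiling is the most verbose and the file can grow huge.
Duplicati.CommandLine.exe backup "b2://bucket/folder" "C:\Data\" --log-file="C:\Temp\duplicati.log" --log-file-log-level=Retry
```

Unlike the live log, a log file survives restarts, so it can capture what happens across a multi-day stall.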

When was this? After backup completed, with some help (somehow) from your Edit and Save?
From some other run? Need the context. Some numbers don’t line up with the other posts, e.g.

No, that was at the moment of the stoppage.

Other than that, the uploads seemed to finish, except for the ones at the end. Did you just grab the various live logs at that time, or did it stop again? I can’t tell the situation and the timing from posts.

I was looking at the logs precisely during the stoppage.

The backup is actually scheduled to run at night, specifically when there are no file changes happening. However, at some point, it simply stops. Using Edit > Save helps resume the backup process, and it continues working.

Is it necessary to set the snapshot-policy in the settings? Do I need to choose something, or not?

The job log does not exist in the middle of the backup; it reports results. You quoted a small bit of the Complete log that has no timestamps. Please either post timestamps, or post an image of the job summary.

The stoppage was not a moment, right? It was about Feb 25, 2024 1:14 AM to Feb 28, 2024 1:24 PM

and if a job log managed to come out somewhere in that three day slow/stop time, it’d be significant…

This doesn’t fit either, as the logs went to Feb 28, 2024 1:28 PM, after “it starts working again”. Clarify?

then why did you say it was changing, below? Was that line ending supposed to say “not changing”?

It depends on what the file is, whether the application (can you say what sort?) would mind a file that is not consistent on a restore, and whether that will occur (the conflictingly answered file-change question).

That’s the answer (awaiting your answers) to the consistency question. The answer to the stoppage requires your answers as well, and it could take some experimentation and more logs, as mentioned.

EDIT 1:

from the Complete log whose time information is missing also looks kind of inconsistent with the idea:

which in itself is a little weird. What complete, clean job logs do you actually have in your jobs history?

EDIT 2:

I’d mentioned earlier in this post that the timestamps didn’t match, but to be specific, there were four uploads seemingly in progress at that time, which makes sense if it was after (not during) the stoppage. Below, I went through the posted log and indented the entries that completed. Four uploads had not, yet.

Data from the RETRY (log, live):

Feb 28, 2024 1:30 PM: Backend event: Put - Started: duplicati-b809fcb72a50c4116b7f328ae1b4aa856.dblock.zip.aes (99,96 MB)
 Feb 28, 2024 1:30 PM: Backend event: Put - Completed: duplicati-i9d87d8bfc0f54830be475c09595bb89c.dindex.zip.aes (35,18 KB)
 Feb 28, 2024 1:30 PM: Backend event: Put - Started: duplicati-i9d87d8bfc0f54830be475c09595bb89c.dindex.zip.aes (35,18 KB)
Feb 28, 2024 1:30 PM: Backend event: Put - Started: duplicati-bfc68475356ee40669df66dcae0392fab.dblock.zip.aes (99,97 MB)
 Feb 28, 2024 1:30 PM: Backend event: Put - Completed: duplicati-ie5a862e781494cfe9132356d72487857.dindex.zip.aes (35,22 KB)
 Feb 28, 2024 1:29 PM: Backend event: Put - Completed: duplicati-b2a07c987b10143d283616cb45427f469.dblock.zip.aes (99,91 MB)
 Feb 28, 2024 1:29 PM: Backend event: Put - Started: duplicati-ie5a862e781494cfe9132356d72487857.dindex.zip.aes (35,22 KB)
 Feb 28, 2024 1:29 PM: Backend event: Put - Completed: duplicati-b973b88fe488f4ae588e72a8a22e67e39.dblock.zip.aes (99,94 MB)
Feb 28, 2024 1:29 PM: Backend event: Put - Started: duplicati-bae304fcdac8c4738aeeca7f036f76788.dblock.zip.aes (99,93 MB)
 Feb 28, 2024 1:29 PM: Backend event: Put - Completed: duplicati-i86cbdc9b564f468e87d2e83e13824b2d.dindex.zip.aes (35,23 KB)
 Feb 28, 2024 1:29 PM: Backend event: Put - Started: duplicati-i86cbdc9b564f468e87d2e83e13824b2d.dindex.zip.aes (35,23 KB)
 Feb 28, 2024 1:29 PM: Backend event: Put - Completed: duplicati-b57afbaf745504f4bb22fb4127498c662.dblock.zip.aes (99,95 MB)
Feb 28, 2024 1:29 PM: Backend event: Put - Started: duplicati-b12926b6ceffa464592f19c59c318609b.dblock.zip.aes (99,93 MB)

Let’s try another angle (still hoping to clarify the previous situation reports, though).

What is different about this one backup? Are the others on the same system or some other?

Do they all back up a similar small number of large files produced by something else?

Since the current issue was once said to be timing sensitive, e.g. after a missed day, is it
potentially backup-volume sensitive? How long is the backup? Can the bad one be reduced?

If that’s 2008 R2 SP1, it should be able to use .NET Framework 4.8. Can you verify?
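One way to verify, as a sketch: the installed .NET Framework 4.x version is recorded in the registry, per Microsoft’s documented version-detection scheme.

```shell
REM Hypothetical check, run in an elevated command prompt on the server:
REM a Release DWORD of 528040 or higher indicates .NET Framework 4.8.
reg query "HKLM\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full" /v Release
```

If the key is missing entirely, no .NET Framework 4.5-or-later version is installed.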

  1. This backup is no different. Previously, there were 4 of them, and now there is only one left. Out of the four copies, 3 were working fine, but this one had issues.

  2. I didn’t understand what you meant. They create backups of the MSSQL database.

  3. No, it doesn’t depend on the size. Currently, the monitoring shows a volume of 32.80GB. How can I reduce the number of problematic files? I have installed .NET Framework 4.8, and the update package is also installed. The task takes approximately 3-4 hours to complete.

  4. I don’t understand what other data is required from me? I have gathered all the information we discussed earlier to ensure accurate information. Please let me know what else I need to correctly present again.
    Screenshot_1

Not quite answered. Still looking for the below:

Because possibly different systems behave differently for some reason. Still searching for any clues.

which works as follows? MSSQL makes the large database file for Duplicati to back up?

“small number of large files” refers to this statistic. I wanted to confirm that’s typical of them all:

I meant the backup’s uploading size. Skipping days will likely mean more change, so a larger upload. Presumably you’re talking about static storage size, which is surprisingly small for a 105 GB source.

I don’t know what the source actually looks like, of course, nor do I know what monitoring you run.
The Duplicati home page will have the source size, and on the next line the backup size. Logs have more.

I don’t have MSSQL, but I think Duplicati’s approach is to use a VSS snapshot to get consistency.
Yet another MS SQL related topic - SQL Databases vs. file backup
MSSQL backup how’s it work exactly?