Using 2.1.0.4_stable_2025-01-31 on a Win11 desktop machine from the web interface.
After repeated problems with my formerly working backup set (it was missing some files), I have now created a new backup set.
If this is relevant: the backup target is a WebDAV folder (and I am not ruling out a WebDAV error - but I WOULD want to see error messages lighting up if something fails).
In short: the first backup run reports:
Source: 2.36 GB
Backup: 429.22 KB
In the target WebDAV folder, I can actually see one file xxxxx.dlist.zip.aes
Full disclosure: I have, as a result of my prior frustration, added the following non-standard parameters:
backup-test-percentage = 100
full-remote-verification = True
log-file-log-level = Profiling
upload-verification-file = true
(and I can see duplicati-verification.json, as well)
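In case it helps to reproduce my setup: I configure all of this through the web UI's advanced options, but I believe the rough command-line equivalent would look something like the following (host, folder, source path, log path and credentials are placeholders, not my real values):

Duplicati.CommandLine.exe backup ^
  "webdav://example-host/backup-folder?auth-username=USER&auth-password=PASS" ^
  "C:\Users\me\Documents" ^
  --backup-test-percentage=100 ^
  --full-remote-verification=true ^
  --log-file="C:\temp\duplicati-profiling.log" ^
  --log-file-log-level=Profiling ^
  --upload-verification-file=true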
Another oddity on my side: I have set a 100-character, all-capital-letters password for the archive - let me know if that is not so much paranoid as just plain stupid.
Let me know if there is anything I should do to support my case, and please point me to a different place if this is not the best place to get the level of support I am seeking.
Let me know if I should jump to a different version - if so, I will do that right away.
Thanks a gazillion - from a happy-for-many-moons user.
Are you planning to look into that log file? Profiling logs are pretty big files. If you want to, look for things like:
Including source path or Excluding path.
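If you want to scan a big profiling log for those quickly, something along these lines should work from a Windows command prompt (the log file path is only an example; use whatever you set for log-file):

findstr /C:"Including source path" /C:"Excluding path" "C:\temp\duplicati-profiling.log"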
About → Show log → Live → Verbose works too, if you want to try another backup run with it.
These often don't have logs, because if Duplicati sees a path name in a log entry, it deletes that entry for privacy.
You can also look in your job log. What did yours say in the upper right corner?
These look disturbingly similar to the ones I found for another issue.
It looks like a bunch of uploads were started but never completed, and the backup proceeds anyway, ending up in a broken state. The only difference in your case is that this happens right from the first backup.
I am getting a bit suspicious that this is a significant error that shows up in different setups. So far we have seen it on Dropbox and now WebDAV, perhaps also Backblaze, so it does not appear to be backend dependent.
If all you have is the single dlist file on the remote storage, could you delete that and the local database, and then run the backup again? If the same thing happens again, could you try downgrading to 2.1.0.2 and running it once more?
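If it is easier than clicking around, here is a rough sketch of that cleanup (the URL, credentials, file name, and database path are all placeholders; the actual local database path is shown on the job's Database page, and it can also be deleted from there):

:: delete the single dlist file from the WebDAV target (a plain HTTP DELETE)
curl -u USER:PASS -X DELETE "https://example-host/backup-folder/duplicati-YYYYMMDDTHHMMSSZ.dlist.zip.aes"

:: delete the job's local database (path taken from the job's Database page)
del "C:\Users\me\AppData\Local\Duplicati\ABCDEFGHIJ.sqlite"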
I am pretty confident that the issue is resolved for 2.1.0.108+ but it is still some time before we are at a stable 2.2 release, so if we can somehow figure out what is causing this problem, I would like to get an updated stable out.
The claimed size of the dlist is 439517 bytes, which seems large if all source files got filtered out. The attempts to upload dblock files also suggest that source data was found to upload:
February 27, 2025 8:29:35 PM put of the first of many dblocks (oddly, no dindex)
February 27, 2025 8:32:55 PM put of duplicati-20250227T202908Z.dlist.zip.aes
February 27, 2025 8:32:59 PM list shows no files, neither the dlist nor the earlier dblocks
2,039,204,856 dblock bytes maybe uploaded in the roughly 3 minutes 20 seconds between the first dblock put and the dlist put, so a speed of about 10 MB/s (roughly 80 Mbit/s)?
That sounds high. Even if one could get a line that fast, the backend might not keep up.
On the other hand, what should be a list of all source files has no entries (in the FixedFile table).
I wonder if the files were there initially, but discovering that their data was lost removed them?
I’d still like a peek at the log before the database is deleted. Maybe the Complete log is best.
It now turns out my WebDAV connection apparently went sour.
However, both when mounting the folder in Windows Explorer and when using an ancient DAV Explorer Java utility, it throws error messages at me left and right.
So, I don't mind that I didn't get a backup. I am concerned that this wasn't caught along the way.
Either way: I do not plan to pursue this any further, unless there is academic interest on your side.
Thanks for sharing the bug report data with me; it was pivotal in locating the problem.
After examining this and related issues, I believe the cause is a timeout that is not handled correctly. Set --read-write-timeout=10m to work around it until the next stable is out, or use canary build 2.1.0.108 or later.
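For a GUI-defined job that means adding read-write-timeout as an advanced option on the job's Options page; a rough command-line equivalent (target URL and source path are placeholders) would be:

Duplicati.CommandLine.exe backup ^
  "webdav://example-host/backup-folder?auth-username=USER&auth-password=PASS" ^
  "C:\Users\me\Documents" ^
  --read-write-timeout=10m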