If the backup previously gave 40 warnings (the 20 in the log are access errors, so they may be fixable), then that's OK.
Personally I try to keep backups warning-free, because I get a popup and an email whenever a warning happens.
It is interesting that it worked, because that seems to disprove the assumption that the broken files were originally there and somehow deleted in the last version. Maybe the blocks that were deleted from the database were still there on the destination, so the recreate was able to find them again. The backup should be fine, because your test only showed Extra blocks (those are unpreventable at the moment). It would be nice to find out what originally caused this, but I don’t think we can get any further there.
I would have liked to compare the Block table right after the recreate. The backup log says there are now 5 versions, down from 11, with 326 files deleted. Maybe the large drop was aided by the passage of time and the retention plan.
"CompactResults": {
"DeletedFileCount": 326,
"DownloadedFileCount": 0,
"UploadedFileCount": 0,
At least it didn’t repackage files, which would have made comparing the old to the new packaging awful.
There were enough broken filesets that one of them is probably still around. The previous “Unexpected” error has gone away, so we might be able to find the file entry, then the blocklist, then the blocks, for a comparison.
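As a rough sketch of that lookup chain against the local database (table and column names are from memory and may differ between Duplicati versions; the path and the 12345 BlocksetID are just placeholders):

-- the file entry (File may be a view over FileLookup and PathPrefix in newer versions)
SELECT ID, BlocksetID FROM File WHERE Path = '/some/broken/file';
-- its blocklist hashes, in order
SELECT "Index", Hash FROM BlocklistHash WHERE BlocksetID = 12345 ORDER BY "Index";
-- its blocks and the remote volume each one lives in
SELECT be."Index", b.Hash, b.Size, rv.Name
FROM BlocksetEntry be
JOIN Block b ON b.ID = be.BlockID
JOIN Remotevolume rv ON rv.ID = b.VolumeID
WHERE be.BlocksetID = 12345
ORDER BY be."Index";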
All of this hinges on getting a database bug report and being willing to look. I’m getting a bit tired of that, but opportunities don’t come up often, even though this problem has sadly been common recently.
I think I’ll request a database bug report from @titilesoleil; at least the recovery removes the pressure.
Extra files mean that Duplicati has recreated the mapping for all backed-up files (referenced in the dlist files) and found enough data in the block files, but it has also found more data that is not needed to recreate (restore) the backed-up files.
This is usually the result of interrupted backups, since Duplicati uploads the data first and finishes with the file lists (the dlist files). An interruption can leave behind this extra data, which still has to be parsed and examined while rebuilding, so useful data and less useful data can end up mixed in the same remote file.
Anyway, extra files are by definition not a blocker to restoring files, and as such are more an annoyance than a big problem.
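As a side note, if someone wants to see which blocks the local database itself attributes to a given remote dblock file (to compare against what test flags as Extra), a query roughly like this should do; it assumes the usual Block/Remotevolume layout, and the file name is a placeholder:

SELECT b.Hash, b.Size
FROM Block b
JOIN Remotevolume rv ON rv.ID = b.VolumeID
WHERE rv.Name = 'duplicati-b0000000000000000000000000000000.dblock.zip'
ORDER BY b.Hash;

Anything that is in the remote file but absent from that list is what test reports as Extra.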
Here is the latest bug report:
I have taken a quick look at your latest (post 85 already!) bug report, and as expected the strange inconsistencies in your database have been resolved by the recreate. The duplicated-blocks table has 22 entries; I guess that can be expected, although I have not investigated its full relationship with the extra blocks found by the test. Some are in common, but not all, it seems.
It has been interesting and will probably lead to more enhancements to Duplicati in the coming months, thanks.
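If it helps the comparison, one way to line up the 22 DuplicateBlock rows with the remote volumes involved is something like the following; the column names are from memory, so they may need adjusting for your schema version:

SELECT b.Hash, b.Size,
       pv.Name AS PrimaryVolume,
       dv.Name AS DuplicateVolume
FROM DuplicateBlock d
JOIN Block b ON b.ID = d.BlockID
JOIN Remotevolume pv ON pv.ID = b.VolumeID
JOIN Remotevolume dv ON dv.ID = d.VolumeID
ORDER BY b.Hash;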
Thanks for the bug report. I’m still struggling to write the right SQL to get some answers that I’d like.
This may take a while, but at least we now have better material to try to slowly make some sense of.
I’ve also been looking at where the extra blocks come from. An interrupt is definitely not needed, but I can’t prove that an interrupt can’t also cause it. There might be multiple paths to it.
If anyone remembers where we were last working with the interrupt tester, we can continue there, as this:
Running backup at 2023-11-07 10:35:19.070606
Running test recreate at 2023-11-07 10:35:31.525090
test db recreate: error code 0
Running test --full-remote-verification all at 2023-11-07 10:35:52.599936
test db: error code 3
===========> duplicati-ib8fa825870284779a7080924560dac3f.dindex.zip: 1 errors
===========> Extra: 6dM/+8ObwlZ3ScP9SSV/3y6opkaIKPlOEpwS2+PQNzs=
===========> duplicati-bba8d51b7aaf24ca79cece3f287a4b4e9.dblock.zip: 1 errors
===========> Extra: 6dM/+8ObwlZ3ScP9SSV/3y6opkaIKPlOEpwS2+PQNzs=
stderr was:
error running test db, exiting...
Statistics: started: 18 timeout: 0 missing: 0 extra: 0 marked: 0
occurs if I pull the timeout, add a test all --full-remote-verification step, and randomize just 2 bytes at the start of a block.
One can also see that most of the Extra entries in the first bug report were already in place before the first backup error.
SELECT OperationID,Path FROM RemoteOperation WHERE Path IN (
'duplicati-i65e21c89978948a780b1bde730945fa7.dindex.zip',
'duplicati-i67c638cbc73c4fd89817b6b1a7686a2d.dindex.zip',
'duplicati-i77cd37af3c01421781ed01373822b3cd.dindex.zip',
'duplicati-i93a412f43ca848e5a7be53721053fbd6.dindex.zip',
'duplicati-i9f5dd8c3f0694dbe90f1c8f9e5e1cb08.dindex.zip',
'duplicati-ibcbb6783c9d842b7830912db970cc251.dindex.zip',
'duplicati-ibd4f149ffa6a4777b22dfd36ab03575e.dindex.zip',
'duplicati-ic450fbd21fba444bad5abc6650769dfb.dindex.zip',
'duplicati-ic735b48c995e4390ad9017299763eb30.dindex.zip',
'duplicati-icae0f4af5b0944328bb6c23c917c2d24.dindex.zip',
'duplicati-ice998ae5dcaa4267b5242a6a8c7bb1ff.dindex.zip',
'duplicati-id13ed4ad86ab4aea8d43ee5dc20b9d47.dindex.zip',
'duplicati-id1a07527554d465689c3354d62ce8887.dindex.zip',
'duplicati-id64adb4228ac4df6b76adfcca90fbd87.dindex.zip',
'duplicati-ie21014469ccd44bf979fdfbc8c57388a.dindex.zip',
'duplicati-ief1f4ed59bfb4f1e8fa7de0ded8dc846.dindex.zip',
'duplicati-ifbb7bc15c09e420eb6fcd2a94a06445a.dindex.zip',
'duplicati-b093ddaa3c6d04d239e1051609fab162d.dblock.zip',
'duplicati-b20e8459f3cd240bd86ffd30114d968d4.dblock.zip',
'duplicati-b21b4276853194de78e9e5341b8878b02.dblock.zip',
'duplicati-b40cc6e82b92846e2a83b03b980d85501.dblock.zip',
'duplicati-b48403b9f51534d62bba567f5f178af86.dblock.zip',
'duplicati-b521699ae52da462f9147746af35503e7.dblock.zip',
'duplicati-b5b8119f043b14c04bc6c8d3a1d61288d.dblock.zip',
'duplicati-b63cfcfb14f0e4476b9e80582bead185c.dblock.zip',
'duplicati-b6b4f89a58f7c45ec9dbd57c50644b25d.dblock.zip',
'duplicati-b71a96fff45654cdbb6c6751367d44d7a.dblock.zip',
'duplicati-b92206f0de0274d35be83e88f4b346d5b.dblock.zip',
'duplicati-b940ba82bb0034ed784ed1d32bb80adb7.dblock.zip',
'duplicati-b98ae9719a556448eb2a1874bbd097efb.dblock.zip',
'duplicati-ba2c9a7973f434f92815b475bc5f03c6a.dblock.zip',
'duplicati-ba8c9bd98e3054c08a740f1f7c681b091.dblock.zip',
'duplicati-bb6925d190d90421b88a576531c1e6454.dblock.zip',
'duplicati-bc142744f1f9c4bc28146dd758419c0da.dblock.zip'
) ORDER BY OperationID
The output is too long to post, but it shows lots of old operations, except for the last pair, which is in Backup 29 before the network died.
I’m also noticing that the database has no information on Backup 28: October 21, 2023 11:00:00 AM UTC.
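To double-check which operations the database actually recorded (and confirm Backup 28 really is absent), a listing of the Operation table should be enough; this assumes its usual ID/Description/Timestamp columns, with timestamps stored as Unix seconds:

SELECT ID, Description, datetime(Timestamp, 'unixepoch') AS StartedUTC
FROM Operation
ORDER BY ID;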
EDIT:
recreate “Remote file referenced as …” error after interrupted backup then compact #5023
is probably the version I modified, but after the modification it probably fits better with this issue:
test all with full-remote-verification shows “Extra” hashes from error in compact #4693
which also has interrupt-free “Steps to reproduce”. Its associated draft PR thinks the fix part is:
Fix “test” error: extra hashes #4982
Reuse existing deleted blocks to prevent uploading duplicated copies that can cause problems later
I’m not sure where best to post the modified script and the resulting DB (and prior one) now.
This is kind of a secondary, thought-to-be-harmless issue. One suggestion for anyone who’s really not interested in the technical talk is to adjust their notification level, if it’s currently set to this:
The first one seems to have other problems. Comparing the bug report to the bug report after Recreate:
SELECT COUNT(*) FROM FilesetEntry WHERE FilesetID = 1
281605 versus 281670, so the usual 65 files short, but a “clean” loss without the inconsistencies of the others.
Previously I was hunting for some code that could pull things out of FileLookup, but now it’s different.
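For the curious, the comparison can also be done in one pass by attaching the recreated database and diffing the paths of the two filesets. This is only a sketch under some assumptions: the attached file name is a placeholder, FilesetID 1 may not refer to the same version in both databases after a Recreate, and if File is a view it may need to be replaced by an explicit FileLookup/PathPrefix join:

ATTACH DATABASE 'bugreport-after-recreate.sqlite' AS recreated;
-- paths present in the recreated fileset but not in the original one
-- (swap main and recreated to look in the other direction)
SELECT f.Path
FROM recreated.FilesetEntry fe
JOIN recreated.File f ON f.ID = fe.FileID
WHERE fe.FilesetID = 1
  AND f.Path NOT IN (
      SELECT f2.Path
      FROM main.FilesetEntry fe2
      JOIN main.File f2 ON f2.ID = fe2.FileID
      WHERE fe2.FilesetID = 1
  );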
I’m also attempting a repro so I can get more debug info, but no luck so far, given how little information there is to go on.
