Database recreation not really starting?

As explained, it’s not a large download and store, but it’s a download (time and maybe money).

There are some relatively tiny downloads that would give some info. I’m tempted to study dlists.
These are what should match the fileset, and one theory is the filesets got fewer files somehow.

The dlists for you are about 25 to 30 MB each, so should be very fast and easy to get manually.

The idea is to compare the file list from the dlist with the file list from the fileset in the database.
A rough compare is pretty easy, using a spreadsheet and counting lines. A good one could use
a Python script that I have that normally looks at dlist and dindex files. It’d change to look in DB
instead of dindex to see what blocks are there. If needed blocks are missing, it will reveal them.
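
For the database side of that comparison, a minimal hedged sketch (using the FilesetEntry table that appears in the queries later in this thread) could be as simple as counting one fileset’s entries:

-- Count what the local database records for fileset 5, to compare against the
-- number of entries in the matching dlist's filelist.json.
SELECT FilesetID, COUNT(*) AS EntriesInDatabase
FROM FilesetEntry
WHERE FilesetID = 5
GROUP BY FilesetID;

The dlist side is just the number of entries in filelist.json, so even a rough line count gets close.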

A good starting point may be one of the middle ones on the list, as I think the end ones are OK.
Fileset 5 is duplicati-20230929T110000Z.dlist.zip, and we have looked at it some. Maybe that?

Features

Incremental backups
Duplicati performs a full backup initially. Afterwards, Duplicati updates the initial backup by adding the changed data only. That means, if only tiny parts of a huge file have changed, only those tiny parts are added to the backup. This saves time and space and the backup size usually grows slowly.

New backups build on prior ones. If old ones get discarded, new ones have more changes to grab.

As mentioned, we could try hand-cleaning the DB using SQL, but the destination might still be bad.
You would then think you have a backup, perhaps have a disaster, and find your backup is broken.

There are several commands that do what you ask, but they’re for known damages, not unknown.
Did list-broken-files have anything to say? If so, there are follow-on commands for cleanups.

At the moment, you might have a couple of intact backups – the first and last, so that’s protection.

Sorry, forgot to clear that point up: when I mentioned crashes, it was internet connection crashes due to my partner fiddling with our router configuration. As far as I know, all my PC’s data is fine. I also have copies on local hard drives. A lot of them. Yes, I am a bit paranoid. I lost data a few years ago due to bad backup habits. If I had not sent a particular file to my partner via a messaging service, it would have been lost forever. It was a very emotional and unique memory related to a traumatic loss. Since then I’ve been very anxious about this particular subject and it makes it difficult for me to think clearly about it and make good decisions. The distant backup service is the only thing that gives me real peace of mind, which is why I wish I did not have to delete everything, because it means waiting a few days to be at peace again…

list-broken-files returns
Listing remote folder …
Return code: 0

I’m assuming that means that I have no broken file.

This is definitely not a risk I am willing to take :slight_smile: Right now I just would like to be able to back up my current data. I don’t really care about losing the history of my files. Anytime I make major changes in anything I make a copy anyway…

The strange thing is that this backup is dated 29th September. The problem started around 21st October. Between the 29th September and the 21st October, Duplicati performed daily backups without issue. I can connect to the swift server and download this particular dlist if you like?

It might be looking at just the files that remain, though. We still have 65 unaccounted for in the database.
The assumption has been they got removed but the backup destination still has them. A dlist will tell, however an even easier check is for you to look at that log and see how many files it thinks it has (in the top right).

That could be the next step after seeing whether the log info agrees with the higher “expected” file count. Remember that this has file names (but not data), so either you study it there, or decide how to share it.
PM is also possible, or maybe the paths aren’t sensitive to you. Another user posted a log of paths recently.

EDIT 1:

How the backup process works shows what’s in a dlist. You can easily look at your own filelist.json, which (aside from paths) is mostly numbers that tell Duplicati how to restore the files correctly from the blocks in the dblock files, with some help from the dindex files to know the contents of the dblocks. During a good DB recreate, only the dindex files are read in, and ideally they say where all blocks are. Losing a dindex means having to look in dblock files directly. BTW, your dindex/dblock files “appear” healthy, based on my writing and running some consistency checks on the database. No new issues found yet.
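
As a rough cross-check of that, here is a hedged sketch (assuming the Remotevolume table with Type and State columns, as in a normal job database) that tallies the remote volumes the database believes exist:

-- Tally remote volumes by type (Files = dlist, Index = dindex, Blocks = dblock)
-- and by state, as recorded in the local database.
SELECT Type, State, COUNT(*) AS Volumes
FROM Remotevolume
GROUP BY Type, State
ORDER BY Type, State;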

EDIT 2:

Example of looking at log for my last backup, then checking in my database to see if I can match count:

[screenshot: job log file count alongside the matching database Fileset count query]

so things look consistent at Fileset level. Now see if there’s actually a File entry:

SELECT FileID FROM FilesetEntry WHERE FileID NOT IN (SELECT ID FROM File)

finds no misses in mine. The simple query above found that yours somehow ended up with too few File entries.
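
A per-fileset version of the same check (a sketch using the File and FilesetEntry tables from the queries above; on a bug report database, use FixedFile instead of File) would be:

-- Count, per fileset, the FilesetEntry rows whose FileID no longer exists in
-- the File table; a healthy database should return no rows at all.
SELECT FilesetID, COUNT(*) AS MissingFileRows
FROM FilesetEntry
WHERE FileID NOT IN (SELECT ID FROM File)
GROUP BY FilesetID;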

EDIT 3:

The counts seen in “Unexpected difference” include folders and symlinks as well.
The counts in the log are just files, however I can adjust for that if I can get the value.
The “BlocksetID > 0” shown above will count only files, and not those other types.
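
To illustrate that adjustment, here is a hedged sketch of the breakdown, using the same -100 (folder) and -200 (symlink) BlocksetID markers that appear in the longer query below (swap File for FixedFile on a bug report database):

-- Break one fileset's entries down into files, folders, and symlinks.
SELECT
    SUM(CASE WHEN F.BlocksetID >= 0 THEN 1 ELSE 0 END) AS Files,
    SUM(CASE WHEN F.BlocksetID = -100 THEN 1 ELSE 0 END) AS Folders,
    SUM(CASE WHEN F.BlocksetID = -200 THEN 1 ELSE 0 END) AS Symlinks
FROM FilesetEntry FE
JOIN File F ON F.ID = FE.FileID
WHERE FE.FilesetID = 5;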

I confirmed in the bug report database that the difference in verification count is the same as the broken file name links with this query:

SELECT COUNT(*) FROM (SELECT FileID FROM FilesetEntry  WHERE FilesetID = 5) LEFT OUTER JOIN "FixedFile" F ON F.ID = FileID WHERE F.Path IS NULL;
-- COUNT(*) = 65

This is the case for all filesets except the very first one. It is strange that the verify error did not appear earlier. This suggests that the file table was broken at a later point and was correct initially.

I also came up with this query (merging both verification queries), which should output file paths (if they are not null and assuming the expected number is higher); maybe that will help in a different case in the future:

Query extra paths
SELECT FileID, F.Path FROM (SELECT FileID FROM "FilesetEntry"  A  WHERE A.FilesetID = 5)
LEFT OUTER JOIN "FixedFile" F ON F.ID = FileID WHERE F.Path NOT IN (SELECT DISTINCT "Path" FROM (
SELECT
    "L"."Path", 
    "L"."Lastmodified", 
    "L"."Filelength", 
    "L"."Filehash", 
    "L"."Metahash", 
    "L"."Metalength",
    "L"."BlocklistHash", 
    "L"."FirstBlockHash",
    "L"."FirstBlockSize",
    "L"."FirstMetaBlockHash",
    "L"."FirstMetaBlockSize",
    "M"."Hash" AS "MetaBlocklistHash"
FROM
    (
    SELECT 
        "J"."Path", 
        "J"."Lastmodified", 
        "J"."Filelength", 
        "J"."Filehash", 
        "J"."Metahash", 
        "J"."Metalength",
        "K"."Hash" AS "BlocklistHash", 
        "J"."FirstBlockHash",
        "J"."FirstBlockSize",
        "J"."FirstMetaBlockHash",
        "J"."FirstMetaBlockSize",
        "J"."MetablocksetID"
    FROM 
        (
        SELECT 
	        "A"."Path" AS "Path", 
	        "D"."Lastmodified" AS "Lastmodified", 
	        "B"."Length" AS "Filelength", 
	        "B"."FullHash" AS "Filehash", 
	        "E"."FullHash" AS "Metahash", 
	        "E"."Length" AS "Metalength",
	        "A"."BlocksetID" AS "BlocksetID",
	        "F"."Hash" AS "FirstBlockHash",
	        "F"."Size" AS "FirstBlockSize",
	        "H"."Hash" AS "FirstMetaBlockHash",
	        "H"."Size" AS "FirstMetaBlockSize",
	        "C"."BlocksetID" AS "MetablocksetID"
        FROM 
	        "FixedFile" A	
        LEFT JOIN "Blockset" B
          ON "A"."BlocksetID" = "B"."ID" 
        LEFT JOIN "Metadataset" C  
          ON "A"."MetadataID" = "C"."ID"
        LEFT JOIN "FilesetEntry" D
          ON "A"."ID" = "D"."FileID"
        LEFT JOIN "Blockset" E
          ON "E"."ID" = "C"."BlocksetID"
        LEFT JOIN "BlocksetEntry" G
          ON "B"."ID" = "G"."BlocksetID"
        LEFT JOIN "Block" F 
          ON "G"."BlockID" = "F"."ID"  
        LEFT JOIN "BlocksetEntry" I
          ON "E"."ID" = "I"."BlocksetID"
        LEFT JOIN "Block" H 
          ON "I"."BlockID" = "H"."ID"
        WHERE 
          "A"."BlocksetId" >= 0 AND
          "D"."FilesetID" = 5 AND
          ("I"."Index" = 0 OR "I"."Index" IS NULL) AND  
          ("G"."Index" = 0 OR "G"."Index" IS NULL)
        ) J
    LEFT OUTER JOIN 
        "BlocklistHash" K 
    ON 
        "K"."BlocksetID" = "J"."BlocksetID" 
    ORDER BY "J"."Path", "K"."Index"
    ) L

LEFT OUTER JOIN
    "BlocklistHash" M
ON
    "M"."BlocksetID" = "L"."MetablocksetID"
) UNION SELECT DISTINCT "Path" FROM (
SELECT
    "G"."BlocksetID",
    "G"."ID",
    "G"."Path",
    "G"."Length",
    "G"."FullHash",
    "G"."Lastmodified",
    "G"."FirstMetaBlockHash",
    "H"."Hash" AS "MetablocklistHash"
FROM
    (
    SELECT
        "B"."BlocksetID",
        "B"."ID",
        "B"."Path",
        "D"."Length",
        "D"."FullHash",
        "A"."Lastmodified",
        "F"."Hash" AS "FirstMetaBlockHash",
        "C"."BlocksetID" AS "MetaBlocksetID"
    FROM
        "FilesetEntry" A, 
        "FixedFile" B, 
        "Metadataset" C, 
        "Blockset" D,
        "BlocksetEntry" E,
        "Block" F
    WHERE 
        "A"."FileID" = "B"."ID" 
        AND "B"."MetadataID" = "C"."ID" 
        AND "C"."BlocksetID" = "D"."ID" 
        AND "E"."BlocksetID" = "C"."BlocksetID"
        AND "E"."BlockID" = "F"."ID"
        AND "E"."Index" = 0
        AND ("B"."BlocksetID" = -100 OR "B"."BlocksetID" = -200) 
        AND "A"."FilesetID" = 5
    ) G
LEFT OUTER JOIN
   "BlocklistHash" H
ON
   "H"."BlocksetID" = "G"."MetaBlocksetID"
ORDER BY
   "G"."Path", "H"."Index"

));

Replace 5 with the correct fileset ID.
To work with normal (non-bugreport) databases, replace FixedFile with File in the query.

Due to the way NOT IN works with NULL, this outputs nothing for this current case.
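
For anyone puzzled by that remark, a tiny standalone illustration of the NOT IN pitfall (plain SQLite, nothing Duplicati-specific):

-- "x NOT IN (list)" evaluates to NULL rather than TRUE whenever the list
-- contains a NULL, so such rows are silently filtered out.
SELECT 1 WHERE 1 NOT IN (2, 3);     -- returns one row
SELECT 1 WHERE 1 NOT IN (2, NULL);  -- returns no rows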

The last one (30) is good too, for my query and this query. How it became good is a good question, although I’d be happy if somebody could find how all of the middle Filesets went bad in a similar way.

Welcome to the chase.

My guess is that a file was removed in version 30, and for some reason the whole cleanup process was run as if the entries (blocksets, metadata, etc.) were no longer needed in any version, even though the files are still in the other versions. This could also happen if compact deleted one volume and simply did not create a new one. Maybe it is linked to the partial backup before. From the RemoteOperation table it looks like operation 30 ran a compact (there are 29 deletes of dblock/dindex). 29 didn’t delete any files, so it should not be an interrupted compact.
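
A hedged way to see that pattern directly (using the same RemoteOperation columns as the queries elsewhere in this thread) would be to tally operation 30 by action and file type:

-- Tally the remote actions recorded for operation 30, split by file type,
-- to show the delete-heavy pattern described above.
SELECT Operation,
       CASE
           WHEN Path LIKE '%.dlist.%'  THEN 'dlist'
           WHEN Path LIKE '%.dindex.%' THEN 'dindex'
           WHEN Path LIKE '%.dblock.%' THEN 'dblock'
           ELSE 'other'
       END AS FileType,
       COUNT(*) AS N
FROM RemoteOperation
WHERE OperationID = 30
GROUP BY Operation, FileType;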

I’m not sure what that was. A compact that repackages dblock files would do a get first, but it didn’t. Completely wasted files can get deleted without a get, though, and then there would be no repackaged files to put.

SELECT * FROM RemoteOperation WHERE Path IN (SELECT Path FROM RemoteOperation WHERE Operation = 'delete' AND OperationID = 30)

gives the history of the dblock/dindex of interest. For some reason they’re all from Backup 20 or 21.

I’m suspicious of that as part of the cause. Maybe @titilesoleil can look in the Complete log of the job logs, because there’s a growing number of things we need, and all were erased from the DB bug report…

The job log often has a Compact section in it, so we can see if any Compact was run, and what it did.
I’m also interested in the cleanup activity after the interrupted backup. A job Complete log formats it, except the result below is from the other topic where only some of the job logs in LogData got erased:

  "Messages": [
    "2023-10-15 11:53:30 +01 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Backup has started",
    "2023-10-15 11:54:13 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started:  ()",
    "2023-10-15 11:54:13 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Completed:  (3.91 KB)",
    "2023-10-15 11:54:13 +01 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-KeepIncompleteFile]: keeping protected incomplete remote file listed as Uploading: duplicati-20231014T135524Z.dlist.zip.aes",
    "2023-10-15 11:54:13 +01 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-SchedulingMissingFileForDelete]: scheduling missing file for deletion, currently listed as Uploading: duplicati-bd2722d010ec748b58425f8a98c0a07a7.dblock.zip.aes",
    "2023-10-15 11:54:13 +01 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-SchedulingMissingFileForDelete]: scheduling missing file for deletion, currently listed as Uploading: duplicati-ba6332017268d4c9dadc0eea95a05a083.dblock.zip.aes",
    "2023-10-15 11:54:13 +01 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-SchedulingMissingFileForDelete]: scheduling missing file for deletion, currently listed as Uploading: duplicati-b035645e1ba5b44019bbb8e4b9e3a4ada.dblock.zip.aes",
    "2023-10-15 11:54:13 +01 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: removing file listed as Temporary: duplicati-b668e45677e9b4a63a34ae349df445f05.dblock.zip.aes",
    "2023-10-15 11:54:13 +01 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: removing file listed as Temporary: duplicati-b86ba3f6b522943cfbca7a126e153d8c9.dblock.zip.aes",
    "2023-10-15 11:54:13 +01 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: removing file listed as Deleting: duplicati-b178bc563fec146e29150c34b5881af4c.dblock.zip.aes",
    "2023-10-15 11:54:13 +01 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: removing file listed as Deleting: duplicati-bad55f0e957ea46f383ea80d0f6631efa.dblock.zip.aes",
    "2023-10-15 11:54:13 +01 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: removing file listed as Deleting: duplicati-b976063604c42429595ec7fc2d7e81119.dblock.zip.aes",
    "2023-10-15 11:54:13 +01 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: removing file listed as Deleting: duplicati-bac69f4529b2642c29173566138f7356e.dblock.zip.aes",
    "2023-10-15 11:54:13 +01 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: removing file listed as Deleting: duplicati-be71cdb6bec414d95bc097820f75d0112.dblock.zip.aes",
    "2023-10-15 11:54:13 +01 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: removing file listed as Deleting: duplicati-ba2f8108c05df4b0fa3addf23ea4b72b3.dblock.zip.aes",
    "2023-10-15 11:54:13 +01 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: removing file listed as Deleting: duplicati-bd82c252457964df68af57268b9e28967.dblock.zip.aes",
    "2023-10-15 11:54:13 +01 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: removing file listed as Deleting: duplicati-bb1a85b7fa0a14fc4a24caf258311004c.dblock.zip.aes",
    "2023-10-15 11:54:13 +01 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: removing file listed as Deleting: duplicati-b2e1e98f014da4c8ea553c83b4ed7c5ce.dblock.zip.aes",
    "2023-10-15 11:54:13 +01 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: removing file listed as Deleting: duplicati-b98ce13ac37e44b0187b31a57ac41523b.dblock.zip.aes",
    "2023-10-15 11:54:13 +01 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: removing file listed as Deleting: duplicati-b55568858a2d64f4aac7286036c44deda.dblock.zip.aes"
  ],

In the current topic we have no job logs. An easy fix, depending on SQLite tool behavior, might be to do this:

sqlite> SELECT * FROM LogData;
1|1|1695549001|Result|ERASED!|
2|2|1695642217|Result|ERASED!|
3|3|1695662576|Result|ERASED!|
4|4|1695823396|Result|ERASED!|
5|5|1695994460|Result|ERASED!|
6|6|1696075817|Result|ERASED!|
7|7|1696093478|Result|ERASED!|
8|8|1696245434|Result|ERASED!|
9|9|1696331801|Result|ERASED!|
10|10|1696426823|Result|ERASED!|
11|11|1696504585|Result|ERASED!|
12|12|1696591685|Result|ERASED!|
13|13|1696677461|Result|ERASED!|
14|14|1696763765|Result|ERASED!|
15|15|1696852548|Result|ERASED!|
16|16|1696936771|Result|ERASED!|
17|17|1697029274|Result|ERASED!|
18|18|1697116618|Result|ERASED!|
19|19|1697262765|Result|ERASED!|
20|20|1697282181|Result|ERASED!|
21|21|1697388801|Result|ERASED!|
22|22|1697454914|Result|ERASED!|
23|23|1697541372|Result|ERASED!|
24|24|1697627814|Result|ERASED!|
25|25|1697714264|Result|ERASED!|
26|26|1697802557|Result|ERASED!|
27|27|1697803988|Result|ERASED!|
28|30|1698004699|Result|ERASED!|
sqlite>

except on a non-sanitized database there would be useful text (which could be redacted if necessary). The formatting would stink, but that’s better than no data… I could also stop asking for the log file count.
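
If dumping the whole table is awkward, a narrower request could be a sketch like this (column names assumed from the dump above; on the real database, Message should hold the result JSON rather than ERASED!):

-- Pull just the result text for the operation of interest (30 here)
-- instead of dumping all of LogData.
SELECT Message
FROM LogData
WHERE OperationID = 30;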

I don’t plan to ask for actual database because even though I think it’s safe, logs are easier to check.

If we can’t get logs, the more interesting pieces so far are the file count (mentioned in a past post), the probable FilelistProcessor work (subject to the limit of 20 messages) to clean up after the backup 29 interruption,
and now the question of whether or not 30 did a compact, as seen by the Compact section at the top left.

I can provide you with whatever you want from the original database, if you give me the SQL request to do so, because there are 20+ years and two pregnancies between the times when I was working in the field and able to use SQL :slight_smile:


Compact can be triggered by wasted space, in which case it would do a get, but also by reclaim (when deleting old filesets).

I’m not sure how to parse that (double negations are tricky), but in this case (operation 30) there was no put interspersed with the deletes: after putting 2 dlist files and deleting 2 dlist files, there is only a bunch of deletes and no put, so it’s a compact triggered by removal of old version(s), I think.

A slightly enhanced version of the earlier query shows most of the deletes were from backups 20 and 21, except for the dlist from 27. The dlist from 21 was the other early delete before dblock/dindex deletes.

SELECT * FROM RemoteOperation WHERE Operation = 'put' AND Path IN (SELECT Path FROM RemoteOperation WHERE Operation = 'delete' AND OperationID = 30)

Delete of the 21 dlist caused removal of 21’s data. Deletion of 20’s data when the 27 dlist was deleted seemed odd; however, maybe the 20 dlist in 21 set things up, or maybe 21 was using 20’s data and so kept those files around?

[screenshot: query results showing the deleted dblock/dindex files were originally put during backups 20 and 21]

If nobody has other requests, I’m going to ask @titilesoleil again to post a SELECT * FROM LogData, if the SQL tool can do it (I was using the Command Line Shell For SQLite for that one because it just writes to the screen rather than making tidy GUI boxes that are too narrow for the message), or the Complete log of 30, which is around duplicati-20231022T070808Z.dlist.zip. That should give us more history to look through. Optionally expand the Compact phase (if it’s present), and it will save us a bit of deciphering the raw log.

Example of the areas I’m talking about:

Here is the complete CSV of LogData


note that
select filesetid, min(lastmodified), max(lastmodified) from filesetentry where lastmodified!=0 group by filesetid;

gives strange results

1|600527520000000000|650390688000000000
5|600527520000000000|650390688000000000
12|600527520000000000|650390688000000000
19|600527520000000000|650390688000000000
23|600527520000000000|650390688000000000
24|600527520000000000|650390688000000000
25|600527520000000000|650390688000000000
26|600527520000000000|650390688000000000
29|600527520000000000|650390688000000000
30||658857404020000000

600527520000000000 is 1904-01-01 00:09:21
650390688000000000 is 2062-01-03 01:00:00
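
For reference, a hedged SQL version of that conversion, assuming Lastmodified is stored as 100-nanosecond ticks counted from 0001-01-01 (62135596800 is the number of seconds between 0001-01-01 and the Unix epoch):

-- Convert the assumed .NET-style ticks to readable dates per fileset.
SELECT FilesetID,
       datetime(MIN(Lastmodified) / 10000000 - 62135596800, 'unixepoch') AS Earliest,
       datetime(MAX(Lastmodified) / 10000000 - 62135596800, 'unixepoch') AS Latest
FROM FilesetEntry
WHERE Lastmodified != 0
GROUP BY FilesetID;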

I found this stuff while searching to identify the 65 records, which are supposed to date from 1980.

@titilesoleil
is the clock working correctly on the computer where Duplicati is installed ?

The clock seems to be working fine (right now). I see two things that might have caused this:

  • corrupted data from the USB keys
  • the automatic date/time thingy on Linux Mint tried to connect to the date/time server when my partner was doing one of his deliciously timed configuration tests on our modem router, and the protocol used was one of those that couldn’t get through (spoiler alert: nothing was working at some point, except YouTube, go figure).

The simplest explanation is something like an install program or archive extract just giving weird dates.

Why? It’s filesystem-dependent, but ext4 supposedly handles roughly 1901 to 2446 at nanosecond precision. Looking on my Windows NTFS drive, I see files from 1979 that look like they were created by some software installs.
Testing on ext4 suggests that an old date is fine, but an out-of-range old date gets pulled into the range:

$ touch --date 1901-01-01 old
$ ls -ln --full-time old
-rw-rw-r-- 1 1000 1000 0 1901-12-13 15:45:52.000000000 -0500 old
$ 

The stat command shows the nanosecond precision:

Access: 1901-12-13 15:45:52.000000000 -0500
Modify: 1901-12-13 15:45:52.000000000 -0500
Change: 2023-11-03 11:35:46.350102066 -0400

EDIT 1:

On actual database, change FixedFile to File, and also expect some actual paths that can be checked.

EDIT 2:

With the seemingly damaged second Fileset (5), I can now compare the computed size with an old log:

5,5,1695994460,Result,"{""DeletedFiles"":43230,""DeletedFolders"":1856,""ModifiedFiles"":1515,""ExaminedFiles"":298257,

so, as suspected, it has lost 65. Hard to explain, but adding 65 in so many middle filesets is even worse.

The log for backup 30 shows the expected FilelistProcessor messages when it tries to clean up after 29; there would have been 2034 messages if an external log were used, but the job log only records the first 20. Unfortunately there is no record of the database that the interrupted backup left. In a debug environment these are handy (along with a profiling log), but that’s more than a user would typically do. I did try a little to reproduce this: run a complete backup, add a file and interrupt that backup, then add a file and let that finish. This gets a certain amount of FilelistProcessor cleanups, but not a compact. Maybe the error needs that?

""2023-10-22 09:11:59 +02 - [Information-Duplicati.Library.Main.Operation.Backup.UploadSyntheticFilelist-PreviousBackupFilelistUpload]: Uploading filelist from previous interrupted backup""

was also expected, and the log confirms it. It also confirms a compact ran, if I’m reading the messy log right:

""CompactResults"":{""DeletedFileCount"":58

58 would be correct if that means dblock and dindex files, not counting the two dlist deletes that set off the compact.

It would be helpful to know when the error was first seen. The network failure was posted earlier like this:

21 Oct would be backup 29, the interrupted one. It uploaded files for a while, then started doing put retries, which one can recognize by the repetitions of size and hash. A new name is used for a retry.
After some file gets to the default 5 retries, Duplicati calls it quits on the backup and logs the failure into the server log.
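
A hedged sketch of spotting those retries in the database, assuming RemoteOperation.Data holds the size/hash JSON for put operations (as it appears to):

-- Group the put attempts of the interrupted backup (operation 29 here) by their
-- recorded size/hash data; more than one name per group suggests retries.
SELECT Data, COUNT(*) AS Attempts, GROUP_CONCAT(Path, ', ') AS Names
FROM RemoteOperation
WHERE Operation = 'put' AND OperationID = 29
GROUP BY Data
HAVING COUNT(*) > 1;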

Final put retry after network outage was October 21, 2023 9:04:23 PM UTC, so nearly 23:05 local.
@titilesoleil were the 23 and 31 lines just not expanded in that server log? Backup 30 was 22 Oct. Backup 31 was 23 Oct 13:00 local time, and made no job log, so it might have failed. Maybe clicking
the 23 oct. line above in the server log would show the first view of that “Unexpected difference” error?

I collapsed some of the dates because of the character limit of the forum. The collapsed dates from the 23rd onward are all the same “Duplicati.Library.Interface.UserInformationException: Unexpected difference in fileset version 8” errors.

Here it is:
23 oct. 2023 13:01: Failed while executing “Backup” with id: 2
Duplicati.Library.Interface.UserInformationException: Unexpected difference in fileset version 8: 29.09.2023 13:00:00 (database id: 5), found 317826 entries, but expected 317891
at Duplicati.Library.Main.Database.LocalDatabase.VerifyConsistency (System.Int64 blocksize, System.Int64 hashsize, System.Boolean verifyfilelists, System.Data.IDbTransaction transaction) [0x0036e] in <9cab5b8f1b4f49ec980acae8e278968b>:0
at Duplicati.Library.Main.Operation.Backup.BackupDatabase+<>c__DisplayClass34_0.b__0 () [0x00000] in <9cab5b8f1b4f49ec980acae8e278968b>:0
at Duplicati.Library.Main.Operation.Common.SingleRunner+<>c__DisplayClass3_0.b__0 () [0x00000] in <9cab5b8f1b4f49ec980acae8e278968b>:0
at Duplicati.Library.Main.Operation.Common.SingleRunner.DoRunOnMain[T] (System.Func`1[TResult] method) [0x000b0] in <9cab5b8f1b4f49ec980acae8e278968b>:0
at Duplicati.Library.Main.Operation.BackupHandler.RunAsync (System.String[] sources, Duplicati.Library.Utility.IFilter filter, System.Threading.CancellationToken token) [0x01048] in <9cab5b8f1b4f49ec980acae8e278968b>:0
at CoCoL.ChannelExtensions.WaitForTaskOrThrow (System.Threading.Tasks.Task task) [0x00050] in <9a758ff4db6c48d6b3d4d0e5c2adf6d1>:0
at Duplicati.Library.Main.Operation.BackupHandler.Run (System.String[] sources, Duplicati.Library.Utility.IFilter filter, System.Threading.CancellationToken token) [0x00009] in <9cab5b8f1b4f49ec980acae8e278968b>:0
at Duplicati.Library.Main.Controller+<>c__DisplayClass14_0.<Backup>b__0 (Duplicati.Library.Main.BackupResults result) [0x0004b] in <9cab5b8f1b4f49ec980acae8e278968b>:0
at Duplicati.Library.Main.Controller.RunAction[T] (T result, System.String[]& paths, Duplicati.Library.Utility.IFilter& filter, System.Action`1[T] method) [0x0026f] in <9cab5b8f1b4f49ec980acae8e278968b>:0
at Duplicati.Library.Main.Controller.Backup (System.String inputsources, Duplicati.Library.Utility.IFilter filter) [0x00074] in <9cab5b8f1b4f49ec980acae8e278968b>:0
at Duplicati.Server.Runner.Run (Duplicati.Server.Runner+IRunnerData data, System.Boolean fromQueue) [0x00349] in <30a34d71126b48248d040dda634ddad9>:0

Thanks. This check is done at start of backup, so I guess that shows that Backup 30 worked but broke things so that Backup 31 (and later work – the Verify files I see might have hit this too) didn’t work.

At least one developer seemed to have another idea to try, but if no conclusion is reached, having an idea about how this happens is extremely valuable if an effort is made to set up conditions to reproduce it.

There aren’t that many places that delete File table entries. I didn’t see an old Repair specifically asked for, but auto-cleanup in Advanced options might do one automatically. Do you have that option set?

I do see what might be a recent manual repair, but it’s after the problems began, so can’t be the cause:

""2023-11-01 10:50:05 +01 - [Information-Duplicati.Library.Main.Operation.RepairHandler-DatabaseIsSynchronized]: Destination and database are synchronized, not making any changes""]

I attempted a “Repair” of the database; it said it was successful, but it gives the same error if I try to back up.

I never changed the auto-cleanup setting, so it was probably set to false. I tried to run the backup command from the Command line function with auto-cleanup activated (it failed with the same error), and I copied everything that resulted from this attempt in the PROFILING live log into this file. I don’t know if there is something useful in it:

The Duplicati repair is a confusing operation, as it does two totally unrelated things: if it detects a discrepancy between the database and the backend (more or fewer files than expected, different sizes), it assumes that the database is fine and attempts to bring the backend in line with it; and if the database is missing, it assumes that the backend is fine and attempts to recreate the database from the backend.

As far as I know, that’s all, and there is no function to actually try to repair a defective database in Duplicati.

That’s why I advised testing the backend – a function that does NOT depend on the database being fine – and then, if the backend is found to be all right (and after saving the database in any case), recreating the database. If the backend is not found fully all right, post the messages here; the database recreation may be advisable if the backend problems are not too bad.

The backup process begins by testing whether the database is sane, running a few coherence tests, and if something is found awry, it aborts.

how do I test the backend ?