SQLite error no such column: S.BlocksetID

After upgrading from 2.0.5.107_canary_2020-05-26 to 2.0.5.111_canary_2020-09-26, I’m getting this strange error:
SQLite error no such column: S.BlocksetID

I have no idea where it comes from or what to do.
I tried a DB restore and a file check; nothing seems to work.

Hope somebody can help me with this :slight_smile:

Welcome to the forum @Freemann

When is it seen? Did 2.0.5.107 work?

SQLite Error No Such Column S.BlocksetID has more descriptions, except version history isn’t said.

Maybe the message is from last line below. Code was changed for 2.0.5.111, and I’m asking about it.

Thanks for your reply!

2.0.5.107 was “working” weeks ago, but gave me errors about restoring no files.
The restore reported success, but noted that no files were restored.

That was my trigger to upgrade, in case this was a bug that had since been fixed.

Now on 2.0.5.111 it’s doing “nothing” except giving me this error after a restore.

I’m running in a Docker container. Do I need to run/trigger some sort of upgrade script to fix the database schema?

[Edit]
OK, I did some more testing in relation to the linked topic.
It looks like I have the same problem as the other topic: when I choose a restore location, it goes wrong and I get the S.BlocksetID error.

When choosing to restore to the original location, it doesn’t work either: I get a successful-restore warning without any files being restored.

When I remove the corrupted file from the destination filesystem and then restore, it works.

So restoring is now possible for me, but only if I first remove the corrupted file from the destination filesystem. It looks like Duplicati isn’t able to overwrite files on the destination filesystem.

So this looks even more like the other issue, and I’m trying to attract Docker or other help from that topic to here.

This issue seems to involve a newly added temporary table. How’s your SQL? The line I cited might run this:

SELECT DISTINCT "Fileset-C409B04BE8434E439BEAE376862253E4"."TargetPath"
	,"Fileset-C409B04BE8434E439BEAE376862253E4"."Path"
	,"Fileset-C409B04BE8434E439BEAE376862253E4"."ID"
	,"Blocks-C409B04BE8434E439BEAE376862253E4"."Index"
	,"Blocks-C409B04BE8434E439BEAE376862253E4"."Hash"
	,"Blocks-C409B04BE8434E439BEAE376862253E4"."Size"
FROM "Fileset-C409B04BE8434E439BEAE376862253E4"
	,"Blocks-C409B04BE8434E439BEAE376862253E4"
	,"LatestBlocksetIds-4327E7D8F93D2342BD9A31132B45EA18" S
	,"Block"
	,"BlocksetEntry"
WHERE "BlocksetEntry"."BlocksetID" = "S"."BlocksetID"
	AND "BlocksetEntry"."BlockID" = "Block"."ID"
	AND "Blocks-C409B04BE8434E439BEAE376862253E4"."Hash" = "Block"."Hash"
	AND "Blocks-C409B04BE8434E439BEAE376862253E4"."Size" = "Block"."Size"
	AND "S"."Path" = "Fileset-C409B04BE8434E439BEAE376862253E4"."Path"
	AND "Blocks-C409B04BE8434E439BEAE376862253E4"."Index" = "BlocksetEntry"."Index"
	AND "Fileset-C409B04BE8434E439BEAE376862253E4"."ID" = "Blocks-C409B04BE8434E439BEAE376862253E4"."FileID"
	AND "Blocks-C409B04BE8434E439BEAE376862253E4"."Restored" = 0
	AND "Blocks-C409B04BE8434E439BEAE376862253E4"."Metadata" = 0
	AND "Fileset-C409B04BE8434E439BEAE376862253E4"."TargetPath" != "Fileset-C409B04BE8434E439BEAE376862253E4"."Path"
ORDER BY "Fileset-C409B04BE8434E439BEAE376862253E4"."ID"
	,"Blocks-C409B04BE8434E439BEAE376862253E4"."Index"

and before that might be:

CREATE TEMPORARY TABLE "LatestBlocksetIds-4327E7D8F93D2342BD9A31132B45EA18" AS

SELECT "File"."Path"
	,"File"."BlocksetID"
	,MAX("Fileset"."Timestamp")
FROM "Fileset"
	,"FilesetEntry"
	,"File"
WHERE "FilesetEntry"."FileID" = "File"."ID"
	AND "FilesetEntry"."FilesetID" = "Fileset"."ID"
	AND "File"."Path" IN (
		SELECT DISTINCT "Fileset-C409B04BE8434E439BEAE376862253E4"."Path"
		FROM "Fileset-C409B04BE8434E439BEAE376862253E4"
			,"Blocks-C409B04BE8434E439BEAE376862253E4"
		WHERE "Fileset-C409B04BE8434E439BEAE376862253E4"."ID" = "Blocks-C409B04BE8434E439BEAE376862253E4"."FileID"
			AND "Blocks-C409B04BE8434E439BEAE376862253E4"."Restored" = 0
			AND "Blocks-C409B04BE8434E439BEAE376862253E4"."Metadata" = 0
			AND "Fileset-C409B04BE8434E439BEAE376862253E4"."TargetPath" != "Fileset-C409B04BE8434E439BEAE376862253E4"."Path"
		)
GROUP BY "File"."Path"
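For reference, `CREATE TEMPORARY TABLE ... AS SELECT` gives the new table the column names of the SELECT, so under normal conditions the temp table should end up with a `BlocksetID` column that the alias `S` can resolve. Here is a minimal sketch using Python’s sqlite3 module; the table and column names are simplified stand-ins, not Duplicati’s real schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Simplified stand-in for the "File" table (hypothetical schema, for illustration only).
con.execute('CREATE TABLE "File" ("ID" INTEGER, "Path" TEXT, "BlocksetID" INTEGER)')
con.execute('INSERT INTO "File" VALUES (1, \'/a\', 10)')

# CREATE TEMPORARY TABLE ... AS SELECT copies the selected column names,
# so the temp table gets "Path" and "BlocksetID" columns.
con.execute('CREATE TEMPORARY TABLE "LatestBlocksetIds" AS '
            'SELECT "Path", "BlocksetID" FROM "File" GROUP BY "Path"')

# Referencing the column through an alias, as the restore query does, works:
rows = con.execute('SELECT "S"."BlocksetID" FROM "LatestBlocksetIds" S').fetchall()
print(rows)  # [(10,)]
```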

Output is from a successful “Pick location” restore on Windows 10 without Docker. I don’t have Docker.

This often confuses people: restoring to a folder where the file is already identical to the backed-up version restores no file, because there is no need (the specified file is already as it should be), yet it warns you. Maybe you saw that?

I’m unclear on your situation. If there is a corrupted file where you said to restore to, it should get patched. “Destination” to me often means the backup destination as set up on job screen 2. Assuming it means the answer to “Where do you want to restore the files to?”, perhaps Docker really is key to this, but I don’t use it.

You can diagnose somewhat by watching your Restore in About --> Show log --> Live --> Verbose to see what it does. Or use Advanced options to set up a log file at appropriate log level, if that is easier to follow.

Where is it from? Some are third-party. Duplicati produces https://hub.docker.com/r/duplicati/duplicati

There seems to be a fair amount of setup to giving the container the correct mappings to host folders. Information is sometimes on the page at docker, sometimes given in forum. I can’t explain much here.

One possible (though a stretch) tie-in from the 2.0.5.111 change to Docker might be the new temp table.
2.6. TEMP Databases suggests that the temporary table is stored in a file at one of the spots listed in 5. Temporary File Storage Locations.

Improve query performence used in GetFilesAndSourceBlocksFast #4297 has a list of changes that named this:

Move inner selection S to a temporary table

I wonder where that landed on your system? With my limited SQL knowledge, it seemed like the created table should have had your missing column, even if it had no rows; but if the table couldn’t get made at all, what then?
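One detail supporting that reasoning: SQLite reports a missing table and a missing column differently, so “no such column: S.BlocksetID” implies the temp table existed but lacked the expected column. A small sketch of the two error messages with Python’s sqlite3 module (the `Missing` table name and the minimal one-column schema are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# A temp table that exists but has no "BlocksetID" column (invented schema):
con.execute('CREATE TEMPORARY TABLE "LatestBlocksetIds" ("Path" TEXT)')

def error_of(sql):
    """Run a query and return SQLite's error message, or None on success."""
    try:
        con.execute(sql)
        return None
    except sqlite3.OperationalError as exc:
        return str(exc)

# If the table had never been created, SQLite would complain about the table:
print(error_of('SELECT "S"."BlocksetID" FROM "Missing" S'))
# no such table: Missing

# A table that exists but lacks the column gives the error from this thread:
print(error_of('SELECT "S"."BlocksetID" FROM "LatestBlocksetIds" S'))
# no such column: S.BlocksetID
```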

The tempdir option follows the SQLite page above in saying the TMPDIR environment variable can move SQLite’s temp folder; however, I’m calling it a long shot because I’m pretty sure other temporary tables are in use too.

A workaround might be to set no-local-blocks option true. That might avoid the SQL code that saw error.

duplicati-log.zip (1.7 KB)
Wow, those are big replies with a lot of questions!! Thanks for that!!

Where to begin… I attached a file with a few log lines.
The log says my database is locked, but I don’t know how that could have happened.
Duplicati is running in the container and doing nothing else than that.

First: I use the Duplicati/Duplicati container.

It sounds a bit strange that an Overwrite doesn’t overwrite when there’s a file on the destination. It shouldn’t matter whether a file is there, or what state that file is in.
The same goes for the timestamp-version option, which also needs to write a file regardless of the version on the destination.

On my database I ran “.tables”, with the following output:
sqlite> .tables

Block Configuration Fileset Operation
BlocklistHash DeletedBlock FilesetEntry PathPrefix
Blockset DuplicateBlock IndexBlockLink RemoteOperation
BlocksetEntry File LogData Remotevolume
ChangeJournalData FileLookup Metadataset Version

I ran the above query:

sqlite> SELECT DISTINCT "Fileset-C409B04BE8434E439BEAE376862253E4"."TargetPath"
   ...> ,"Fileset-C409B04BE8434E439BEAE376862253E4"."Path"
   ...> ,"Fileset-C409B04BE8434E439BEAE376862253E4"."ID"
   ...> ,"Blocks-C409B04BE8434E439BEAE376862253E4"."Index"
   ...> ,"Blocks-C409B04BE8434E439BEAE376862253E4"."Hash"
   ...> ,"Blocks-C409B04BE8434E439BEAE376862253E4"."Size"
   ...> FROM "Fileset-C409B04BE8434E439BEAE376862253E4"
   ...> ,"Blocks-C409B04BE8434E439BEAE376862253E4"
   ...> ,"LatestBlocksetIds-4327E7D8F93D2342BD9A31132B45EA18" S
   ...> ,"Block"
   ...> ,"BlocksetEntry"
   ...> WHERE "BlocksetEntry"."BlocksetID" = "S"."BlocksetID"
   ...> AND "BlocksetEntry"."BlockID" = "Block"."ID"
   ...> AND "Blocks-C409B04BE8434E439BEAE376862253E4"."Hash" = "Block"."Hash"
   ...> AND "Blocks-C409B04BE8434E439BEAE376862253E4"."Size" = "Block"."Size"
   ...> AND "S"."Path" = "Fileset-C409B04BE8434E439BEAE376862253E4"."Path"
   ...> AND "Blocks-C409B04BE8434E439BEAE376862253E4"."Index" = "BlocksetEntry"."Index"
   ...> AND "Fileset-C409B04BE8434E439BEAE376862253E4"."ID" = "Blocks-C409B04BE8434E439BEAE376862253E4"."FileID"
   ...> AND "Blocks-C409B04BE8434E439BEAE376862253E4"."Restored" = 0
   ...> AND "Blocks-C409B04BE8434E439BEAE376862253E4"."Metadata" = 0
   ...> AND "Fileset-C409B04BE8434E439BEAE376862253E4"."TargetPath" != "Fileset-C409B04BE8434E439BEAE376862253E4"."Path"
   ...> ORDER BY "Fileset-C409B04BE8434E439BEAE376862253E4"."ID"
   ...> ,"Blocks-C409B04BE8434E439BEAE376862253E4"."Index";
Error: no such table: Fileset-C409B04BE8434E439BEAE376862253E4


.tables isn’t showing any of the above tables, so I don’t know what the query is doing or how I should see the results.

Also, I can’t find the location of the temp store in the Docker container; maybe somebody can tell me so I can check it?

I think this is an issue with the Docker container. I am unable to replicate this on Linux (and can verify that the query works in a breakpoint). However, I do get the error when running the official Docker.


I see these often enough that I wonder if they’re a false alarm or an artifact of reporting. Regardless, the first of those came after the problem you reported, but it does confirm the idea that ScanForExistingSourceBlocksFast ran GetFilesAndSourceBlocksFast, meaning the no-local-blocks workaround (not tried yet?) would avoid going down that path.

There is no gain in wasting time overwriting a file that is already exactly as it should be.
The overwrite question controls what happens if the file is different: overwrite it, or as it says,
“Save different versions with timestamp in file name”. Feel free to suggest better words.

You want it always written? That is not the option presented. This is not the time for debate.
You can make feature requests on behaviors, but changes have impacts on existing users.

Please don’t run random examples of SQL, unless requested, or you fully understand them.
That was sample output from the middle of a long series of SQL with unique IDs for the run.
Fortunately it’s just some SELECT statements that couldn’t find anything, so no harm done.
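There is also a structural reason .tables showed none of them: temporary tables are visible only to the connection that created them and are dropped when that connection closes, so a later sqlite3 shell session against the same database file finds no trace of them. A minimal sketch with Python’s sqlite3 module (the table name is borrowed from the thread; the one-column schema is invented):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.sqlite")

# Connection 1 creates a temporary table, as Duplicati's restore run does.
con1 = sqlite3.connect(path)
con1.execute('CREATE TEMPORARY TABLE "LatestBlocksetIds" ("Path" TEXT)')

# Temp tables are listed in sqlite_temp_master, visible only to this connection:
during = [r[0] for r in con1.execute(
    "SELECT name FROM sqlite_temp_master WHERE type='table'")]
print(during)  # ['LatestBlocksetIds']
con1.close()  # temporary tables are dropped here

# A later connection (e.g. an interactive sqlite3 shell) finds no trace of it:
con2 = sqlite3.connect(path)
after = [r[0] for r in con2.execute(
    "SELECT name FROM sqlite_temp_master WHERE type='table'")]
print(after)  # []
```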

Thanks to warwickmm for stepping in with a Docker try. That may give something to look at.
I’m not sure how one debugs inside Docker, but at least seeing it reproduce is great.
I’m probably going to step back if the people who know about Docker can step in for a while.

I created a new issue to track this problem


This fixes it for me.

@ts678 Thanks for warning me about the queries. I had already seen that they couldn’t do any harm to my database. As a precaution, I made a backup of my original DB and ran the queries on that copy, so no need for you to worry :wink:

@warwickmm Wow, great!! Is there a way I can test this code in my own or a separate Docker container?

You can build the image from the source and create a container from that. Otherwise, you’ll have to wait for the next canary release.