Gary and Jerry

I have been working on my paper ‘Consciousness, Higher-Order Thoughts, and What It’s Like’, which I will be presenting in a couple of weeks; parts of it have appeared here and over at Brains. I was reading through it today and something interesting occurred to me. It has been a project of mine for a while now to show that all and only mental states have qualitative properties, and so that the qualitative is the mark of the mental. To that end I have been developing a model of the propositional attitudes that treats the mental attitudes as a distinctive way of feeling about some represented proposition (I give an introduction to the account in my award-winning 😉 paper The Mark of the Mental).

In this current paper I am trying to show that one prominent theory of consciousness requires that thoughts be modeled as qualitative states, and that this view that I have independently worked out fits very nicely with the higher-order account. But I am also interested in ways of getting people to see that they already think that the attitude of belief has a distinctive qualitative feel. I point out what I think are good ways of seeing this in the paper, one of which is an intuition pump that Alvin Goldman came up with in his 1993 paper “The Psychology of Folk Psychology”. Here is what I say.

Goldman offers us a nice intuition pump. Imagine a Mary-like thought experiment with a super-scientist called Gary. Gary has never had a desire; now imagine that he suddenly does have one. Won’t he have learned something new? Namely, won’t he now know what it is like for him to have a desire? It seems to me that this suggests that there is a qualitative aspect to this mental attitude. But what about beliefs?

What occurred to me was a way to extend Goldman’s intuition pump to the case of beliefs. Given that we think that there could be unconscious beliefs, consider the following super-scientist, Jerry. Imagine that Jerry has been raised in a special room, much like Mary and Gary, but instead of never seeing red (Mary) or never having a desire (Gary), Jerry has never had a conscious belief. He has had plenty of unconscious beliefs, but none of them have been conscious. Let us imagine that we have finally discovered the difference between conscious and unconscious beliefs and that we have fitted Jerry with a special implant that keeps all of his beliefs unconscious, no matter how much he introspects. Let us also imagine that this device is selective enough that it affects only the beliefs, and so Jerry has plenty of other conscious experiences. He consciously sees red, has pain, wants food, fears that he will be let out of his room one day, wonders what the molecular structure of Einsteinium is, etc.

Now imagine that one of Jerry’s occurrent, unconscious beliefs suddenly becomes a conscious belief. For the first time in Jerry’s life he has a conscious belief. Won’t he learn something new? Won’t he learn what it is like for him to have the belief that he has always had? Doesn’t this suggest that it is part of what we ordinarily think about beliefs that they are qualitative states? Consider a Jerry-like Mary experiment. Let us suppose that Mary has never had a conscious experience of red, though she has had all kinds of unconscious red experiences and all kinds of other conscious experiences (perhaps, though, no conscious color experiences?). Now imagine that an unconscious, occurrent experience of red suddenly becomes conscious…it seems to me that these two cases are identical.

HOT Fun in the Summertime 2

Given that higher-order theories of consciousness are committed to the claim that there are unconscious sensory states (like pains, seeings of red, etc.) and that such unconscious states are not like anything for the creature that has them, they need a way to identify the sensory qualitative properties independently of our access to those properties (i.e., independently of their being conscious). This is where homomorphism theory comes in.

Rosenthal begins by noting that we characterize our sensory qualities in terms of their resemblances and differences within families of properties. These families of properties are in turn specified by reference to the perceptible properties of things in the world. For example, we can characterize red as more similar to pink than to brown, and so on, and these resemblances and differences are homomorphic to the family of perceptible properties (presumably wavelength-reflectance properties) that give rise to the mental qualities. What we get from doing this systematically is a ‘quality space’ which is homomorphic to the quality space of the perceptible properties. Our being aware of the qualitative properties of sensory states explains how it is that we have mental access to the perceptible properties. An unconscious pain state, then, will be one that resembles and differs from other pain states in ways that are homomorphic to a family of perceptible properties, and via which we gain mental access to those properties. Though there may be other ways to independently specify the qualitative properties, all higher-order theories need some way to do it, and homomorphism theory looks promising. It is, at the very least, an illustration that it can be done. How can we extend this to cover the requirement that there is something that it is like for a creature to have a conscious thought?
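The structure-preservation at issue can be made vivid with a toy sketch. This is my own illustration, not anything from Rosenthal: the property names and similarity scores are invented, and the check below tests only whether two spaces rank every pair of similarity comparisons the same way, which is one minimal way of cashing out ‘homomorphic’.

```python
# Toy sketch: a 'quality space' as pairwise similarity scores, and a
# check that one space preserves the similarity ordering of another.
# All names and numbers are invented for illustration.
from itertools import combinations

# Hypothetical mental-quality similarities (higher = more similar).
mental = {
    ("red", "pink"): 0.9,
    ("red", "brown"): 0.4,
    ("pink", "brown"): 0.3,
}

# Hypothetical similarities among the corresponding perceptible
# properties (e.g. surface reflectance profiles).
perceptible = {
    ("red", "pink"): 0.8,
    ("red", "brown"): 0.5,
    ("pink", "brown"): 0.2,
}

def preserves_ordering(space_a, space_b):
    """True if the two spaces agree on every comparison of the form
    'pair p is more similar than pair q' (a structure-preserving map)."""
    for p, q in combinations(space_a, 2):
        if (space_a[p] > space_a[q]) != (space_b[p] > space_b[q]):
            return False
    return True

print(preserves_ordering(mental, perceptible))  # True: red is closer to
# pink than to brown in both spaces, and likewise for every other pair.
```

On this toy picture, extending the theory to the attitudes would amount to finding a second pair of spaces — cognitive qualities on one side, some family of independently specifiable properties on the other — that stand in the same ordering-preserving relation.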

I have elsewhere argued (The Qualitative Character of Conscious Thoughts) that the propositional attitudes can be modeled as taking some specific mental attitude towards some represented proposition, and that the mental attitude just is some particular way of feeling about the represented proposition. So, for instance, having a belief consists in feeling convinced; that is, it is the subjective feeling of certainty that one has with respect to the truth of the represented proposition. This model of the propositional attitudes actually fits very nicely with homomorphism theory. In the sensory case we become aware of the sensory qualities, which are the properties that mental states have in virtue of which they resemble and differ from each other, and whose resemblances and differences are homomorphic to the resemblances and differences that hold between the family of perceptible worldly properties. Our being conscious of these properties explains how it is that we have mental access to colors. So too in thought: we become conscious of the cognitive qualities and this gives us access to our thoughts. To have a conscious belief is to be conscious of oneself as having a certain cognitive quality with respect to some content. And these cognitive qualities (that is, the mental attitudes themselves) will stand in various patterns of resemblances and differences from each other in just the same way that the sensory qualities do.

What are we to say about the actual homomorphism to perceptible properties? Is there any set of properties that the mental attitudes are homomorphic to? That is, is there a set of properties that have similarities and differences which resemble and differ in a way that preserves the similarities and differences between the mental attitudes? This is important since we need a way to specify the attitudes apart from their qualitative component. Yes; we can hypothesize that the homomorphic properties are the illocutionary forces of utterances. So the differences between beliefs that p and desires that p are homomorphic to the differences between the illocutionary force of the utterance of some linguistic item in the process of expressing the belief or desire.

This may even turn out to be an explanation of why having language allows us to have more fine-grained thoughts, if we could defend the claim that being conscious of our thoughts in respect of their qualitative attitude towards some represented content gives us mental access to the properties of the language that we would use to express that thought. If this were the case then the cognitive qualities would be exactly like the sensory qualities, and our theory of one could be used to explain the other. Obviously more work needs to be done to flesh this out completely, but this line of thought seems to be a promising way of extending homomorphism theory to cover the propositional attitudes, and so this account of the propositional attitudes should be very attractive to anyone who accepts a higher-order theory of consciousness.

HOT Fun in the Summertime 1

I have been working on my paper ‘Consciousness, Higher-Order Thoughts, and What It’s Like’, which I will present as a poster at the SPP and as a talk at the ASSC in June. This paper is basically the first half of a longer paper of mine, Consciousness on my Mind: Implementing the Higher-Order Strategy for Explaining What It’s Like, which I wrote in my spare time while trying to avoid working on my dissertation 🙂. Parts of this paper are adapted in various posts around here, e.g. Explaining What It’s Like, Two Concepts of Transitive Consciousness, Kripke, Consciousness, and the ‘Corn, and As ‘Corny as I Want to Be. At any rate, I thought it might be helpful/interesting to post the basics of the paper.

The paper has two parts. In the first part I give the argument that all higher-order theories of consciousness are committed to the claim that there is something that it is like for an organism to have conscious propositional attitudes (like beliefs, desires, etc).  In the second part of the paper I suggest a model of the propositional attitudes that treats them as qualitative states and show that it actually fits nicely with Rosenthal’s homomorphism theory of sensory qualities.

Given that the transitivity principle says that a conscious mental state is a mental state that I am conscious of myself as being in, the argument for the commitment to the qualitative nature of conscious beliefs is pretty simple and straightforward.

  1. The transitivity principle commits you to the claim that any mental state can occur unconsciously and so to the claim that pains can occur unconsciously
  2. An unconscious pain is a pain that is in no way painful for the creature that has it (the transitivity principle commits you to this as well, on pain of failing to be able to give an account, as promised, of the nature of conscious qualitative states)
  3. It is the higher-order state, and solely the higher-order state, that is responsible for there being something that it is like to have a conscious pain.
  4. So, when a higher-order state of the appropriate kind is directed at a belief it should make it the case that there is something that it is like for the creature that has the belief; otherwise there is more to conscious mental states than just higher-order representation.

I will post on the second part of the paper a little later.

Do Thoughts Make Us Conscious of Things?

The transitivity principle says that a conscious state is a mental state that we are conscious of ourselves as being in; thus an account of transitive consciousness is key for implementing a higher-order theory. Rosenthal is clear that he thinks that thoughts can sometimes make us conscious of things. Here is what he says in the introduction to Consciousness and Mind:

We are conscious of things when we are in mental states that represent those things in some suitable way. We might be conscious of something by seeing it or sensing it in some other way, or by having an appropriate thought about it (p 4)

In particular, Rosenthal argues that when we think of some object as present we become conscious of that object. This claim is crucial for anyone who wants to hold a higher-order thought version of higher-order theory.

In some recent arguing over at the Brain Hammer, Pete has denied that thoughts can make us conscious of things. Here is the example that I gave:

You get up in the middle of the night to take a leak, it is pitch dark in your room, you can’t see a thing, you think to yourself “there’s a table in this room by the door, I better be careful not to stub my toe”.

I claim that I am conscious of the table. Or consider another case. Suppose that for some reason you think ‘John is here, in this room’ with your eyes closed and where John is in fact in the room. I claim that I would be conscious of John.

Now Pete seems to think that it is obvious that I am NOT conscious of the table or of John in these cases, whereas I think that it is equally obvious that I am. Does anyone have an argument/intuitions either way?

UPDATE: I think I have actually found an argument for the claim that thoughts make us conscious of things, other than the claim that it is intuitive in the above examples that I am. Rosenthal argues that we can be conscious of one and the same experience in various ways, and these ways can be more or less exact. So, I could be conscious of an experience of red as a particular shade of red or just as a generic shade of red, but presumably the first-order state is in fact a determinate shade. This means that there is more to my conscious experience than the first-order experiences that I have. We need a higher-order state that is able to capture these kinds of differences, and the intentional/conceptual content of thought is arguably the only way to do this. I rather like this argument…

Swimming Vegetables? Fish, Pain, and Consciousness

There has been for some time now a debate between fishing enthusiasts and animal rights activists over whether or not fish feel pain. A recent study by scientists in Scotland has reopened this debate by claiming to have demonstrated that fish in fact do feel pain.

They claim that fish have nociceptors and a part of the brain that responds to them, which is to say that they have a pain pathway. Also, when trout had bee venom injected into their lips they exhibited a rocking motion that is similar to pain behavior seen in other animal species (see http://news.bbc.co.uk/2/hi/science/nature/2983045.stm for a report on the study). It has already been known for some time that fish have endogenous opioids, and so it really looks like the preponderance of evidence suggests that fish do feel pain (see http://www-phil.tamu.edu/~gary/awvar/lecture/pain.html for a table comparing various vertebrates and invertebrates on what we take to be requirements for feeling pain). When you think about it this is what we should expect, seeing as how fish are vertebrates and all. Of course not all fishes are vertebrates, and the study I was just talking about used trout, so when I talk about fishes I will be talking about fish like trout.

These findings are disputed by some. The standard claim made by people who want to deny that fish feel pain is that fish lack the cerebral cortex that would allow them to experience the psychological state of being in pain. Pain behavior is not enough, nor is nociception; pain is a psychological state distinct from the awareness of tissue damage. The problem with this response is that it is not the case that trout lack a cerebral cortex altogether; rather, they have a very primitive one. Their cortex is so simple, in fact, that it does not require a thalamus to relay information to it but is instead directly hooked up to the sensory neurons. Thus we cannot conclude that they do not have pains at all, but only that they have some primitive form of pain.

Also, notice that the question ‘do fish feel pain?’ is an empirical question, not a philosophical one: both parties recognize that the answer depends on the particular brain structures that fish have. This supposes that we can tell, by looking at the brain of the fish, whether or not it experiences pain. Notice also, though, that this objection assumes that something is not a pain unless it is felt as painful by the organism that has it, that is, unless it is a conscious pain. So, for example, consider a fish like a trout except that its nociceptors are not connected to the brain. This fish will be in the very same states as the one that does have this connection. They will even behave in all the same ways, because the brain stem and spinal cord are where most of the action in fish occurs anyway. If the higher-order theory turns out to be right, then the way to characterize this situation is as one where the latter fish has an (in principle) unconscious pain.

This brings out three important points. 1. It is likely that some fish do have conscious pains and therefore there is reason for thinking that sport fishing is immoral, and that eating fish is as immoral, or moral, as eating other kinds of animals. 2. Fish look like good candidates for helping us to empirically test the higher-order theory of consciousness. And 3. It raises an interesting question for Utilitarians; Do unconscious pains matter? Is it wrong to torture a zombie?

Brain Reading, Brain States, and Higher-order Thoughts

Recently there has been a lot of progress in brain reading; for instance, here is a nice piece done by CNN, here is a nice article on brain reading video games, and here is a link to Frank Tong’s lab, who may be familiar to those who regularly attend the ASSC or the Tucson conferences. This stuff is important to me because it will ultimately help to solve the empirical question of whether or not animals, or for that matter whether we, have the higher-order states necessary to implement the higher-order strategy for Explaining What It’s Like, so I am very encouraged by this kind of progress. The technology involved is mostly fMRI, though in the video game case it is scalp EEG. But though this stuff is encouraging, fMRI and scalp EEG are the wrong tools for decoding neural representation, or so I argued in my paper “What is a Brain State?” (2006) Philosophical Psychology 19(6) (which I introduced over at Brains a while ago in my post Brain States Vs. States of the Brain). Below is an excerpt from that paper where I introduce an argument from Tom Polger’s (2004) book Natural Minds and elaborate on it a bit.

Polger argues that thinking

that an fMRI shows how to individuate brain states would be like thinking that the identity conditions for cricket matches are to pick out only those features that, statistically, differentially occur during all the cricket games of the past year. (p 56)

The obvious difficulty with this is that it leaves out things that may be important for cricket matches but unique (injuries, unusual plays (p 57)) as well as including things that are irrelevant to them (number of fans, snack purchasing behavior (ibid)). The same problems hold for fMRIs: they may include information that is irrelevant and exclude information that is important but unusual. Irrelevant information may be included because fMRIs show brain areas that are statistically active during a task, while relevant information may be excluded because researchers subtract out patterns of activation observed in control images.

I would add that at most what we should expect from fMRI images are pictures of where the brain states we are interested in can be found, not pictures of the brain states themselves. They tell us that there is something in THAT area of the brain that would figure in an explanation of the task, but they don’t offer us any insight into what that mechanism might be. Knowing that a particular area of the brain is (differentially) active does not allow us to explain how the brain performs the function we associate with that brain area. We need to know more about the activity. Consider an analogy: we have a simple water pump and want to know how it works. We know that pumping the handle up and down gets the water flowing, but ‘activity in the handle area’ does not explain how the pump works. Finding out that the handle is active every time water flows out of the pump would lead us to examine the handle with an eye towards trying to see how and why moving it pumps the water.

And, as I go on to argue, after examining those areas to find what the actual mechanisms are, neuroscience suggests that it is synchronized neural activity in a specific frequency that codes for the content, both perceptual and intentional, of brain states. So multi-unit recording technology (recording from several different neurons in the brain at the same time) is the right kind of technology for looking at brain states. This is not to say, of course, that fMRI and EEG technology is not valuable and useful. It is, and we can learn a lot about the brain from studying it, but it must be acknowledged that it is ultimately, explanatorily, useless. To find higher-order thoughts or perceptions we will need to use advanced multi-unit recordings.

As ‘Corny As I Want To Be

As some of you may know, I have been mounting an offensive against Pete Mandik’s Unicorn argument against higher-order theories of consciousness. We have been having quite a bit of discussion over at the Brain Hammer (Me So ‘Corny) about whether or not my proposed answer works, and so I thought I would take this opportunity to sum up the debate so far.

The Argument

Pete’s argument is actually quite simple. Here is the way that he puts it:

First, some quick and dirty definitions of my targets:

[Higher-order Representationalism] – The property of being a conscious state consists in being a represented state.

P1. Things that don’t exist don’t instantiate properties.

P2. We represent things that don’t exist.

P3. Representing something does not suffice to confer a property to that thing.

C1. Representing a state does not suffice to confer the property of being conscious to that state (so [higher-order representationalism] is false).

There is another conclusion (C2) that first-order representationalism is false, but I already knew that and so will ignore it.

Two Ways to Kill a ‘Corn

Now it is no secret that I think this is a bad argument that rests on several misunderstandings of the higher-order theory. It is not a threat to Rosenthal’s version of higher-order theory because he would deny the assumption needed to get P3, and hence C1. Here is the way I put it in Kripke, Consciousness and the ‘Corn.

[T]his argument does not threaten Rosenthal’s version of higher-order theory because for him the higher-order thought does not ‘transfer’ or ‘confer’ the property of consciousness to the first order state. For him the property of being a conscious state consists solely in my representing myself as being in a certain state. The first-order state is not changed in any way by the higher-order thought. The only thing that has changed is that the creature is now aware of itself as being in the state.

Now, it may be counter-intuitive to say that the higher-order state in no way changes the first-order state, but intuition is not argument. Also, the transitivity principle commits you to this claim, as I detailed in Explaining What It’s Like, and as Rosenthal is well aware. Here is his response to the problem posed by P2 (the interviewer is Uriah Kriegel):

Ephilosopher: Professor Rosenthal, let me raise one final difficulty for your theory. According to your theory, what it is like for the subject to be in a conscious state is determined by how that state is represented by the second-order state. But what happens when there is a misrepresentational second-order state, with no first-order state at all? It seems your theory commits you to saying that, in such cases, the subject is under the false impression that she is having a particular kind of conscious experience, when in fact she is not. Doesn’t that strike you as absurd, though?

David Rosenthal: Answering this question requires a lot of care in how we put things. We can get a feel for what’s at issue by considering a case that actually occurs. Dental patients sometimes seem to themselves to feel pain even when the relevant pain nerve endings are dead or anaesthetized. The widely held explanation is that these patients feel sensations of fear and vibration as though those sensations were pain. We certainly have no trouble understanding this explanation. But how should we describe what’s happening specifically in terms of the patient’s conscious states? It’s undeniable that the patient is in some conscious state, but what kind of conscious state is it? From the patient’s subjective, first-person point of view, the conscious state is a pain, but we have substantial independent reason to say that there simply is no pain. How we describe this case depends on whether we focus primarily on the state of which the patient is actually conscious or on the way the patient is conscious of it. The trouble is that these two things come apart; the patient is conscious of sensations of fear and vibration, but conscious of them as pain. So it’s not at all absurd, but only unexpected, that one be conscious of oneself as being in a state that one is not actually in. It’s worth noting that this divergence between the state of which somebody is actually conscious and how that person is conscious of it has practical importance. The area of so-called dental fear is of interest to dentists and to theorists because patients who understand what’s happening readily come to be conscious of their sensations as sensations of vibration and fear, which is not especially bothersome. How one represents one’s experiences does determine what those experiences are like for one. Is this really the kind of case you asked about? You asked about what happens when one has a higher-order thought that one is in a state that doesn’t occur. 
But maybe we should treat the dental case rather as a higher-order thought that misdescribes its target; it misdescribes sensations of fear and vibration as a sensation of pain. But I think it will never matter which way we describe things. When a higher-order thought occurs, there are always other mental states, as well. So whenever a higher-order thought doesn’t accurately describe any state that actually occurs, we can say either that it misdescribes some actual state or that it’s about some nonexistent state; it won’t make any difference which way we characterize the situation.

So on Rosenthal’s view there simply is no difference between saying that the HOT represents a state that does not exist and saying that it misrepresents a state that does exist. So Rosenthal’s version of higher-order theory is completely unaffected by the unicorn argument.

Even so, it does commit him to saying some strange-sounding things. But there is another way to think of the relation between the higher-order state and the first-order state, one that gives rise to the distinction between what I call K-HOTs and Q-HOTs (Two Concepts of Transitive Consciousness). A K-HOT is caused by the first-order state that it represents, whereas a Q-HOT simply ‘accompanies’ the first-order state it represents. Rosenthal used to endorse K-HOTs but has since moved to Q-HOTs; but, as I argued in ‘Two Concepts’, there is no reason to abandon K-HOTs, and they give us a second, more convincing, way to kill the ‘corn. Here’s how.

A K-HOT represents its target state via the concepts at the disposal of the creature in question, in just the same way that Rosenthal has spent so long arguing is the case. The difference is that the K-HOT is (theoretically) required to be caused by some first-order state or other, and it is that causal link that determines which first-order state the higher-order state is about. So K-HOTs will NEVER represent a first-order state that does not exist; they will rather ALWAYS represent (or misrepresent) a state that does in fact exist. So the property of being represented is none other than the property of causing a higher-order state. This means that while it may be true that WE represent things that do not exist, K-HOTs do not. So again, P2 and P3 are blocked.

So whether you like Quine and Q-HOTs or Kripke and K-HOTs, the unicorn is no threat to higher-order theories. Of course, having said that, I think there are reasons to prefer K-HOTs, but that is another story.

A Tale of Two T’s

A while ago over at the Brain Hammer Pete asked the question ‘What are you conscious of when you have conscious experiences?’ (I think he asked the same question over at Brains a little later). His basic idea was to solicit people’s intuitions about Transitivity and Transparency, which he defined as follows.

“The Transparency Thesis”: When one has a conscious experience all that one is conscious of is what the experience is an experience of.

“The Transitivity Thesis”: When one has a conscious experience one must be conscious of the experience itself.

Given these two claims he was in particular interested to ask

Since each of these claims is alleged to be obvious, and since they are in opposition, I’d be interested in hearing what others think of the matter: Which is more obvious than the other?

These two claims both seem obvious to me, and so I am interested in finding out why people seem to think, as Pete clearly does, that they are in opposition. Part of the problem is the way in which Pete defines Transitivity. He claims that it says that we must be conscious of the experience itself, but this is actually wrong. What Transitivity claims is that we must be conscious of ourselves as having the experience (or conscious of ourselves as being in a certain state). Once we see that this is the right way to construe transitivity, it is no longer the case that these two claims are in opposition. When I have a conscious experience (say, as Pete does, of a leafy tree) then it will be the case that I am conscious of myself as having a leafy-tree experience (transitivity), and because of that it will also be true that it seems to me that all that I am conscious of is the leafy tree. So where is the opposition?

Kripke, Consciousness, and the ‘Corn

Most of you guys probably know Pete Mandik as the bassist and/or singer for the world’s premier Zombies Blues Band, NC/DC and the Devastating Objections.

He has also been hammering out an argument against higher-order theories of consciousness that he calls The Unicorn, which tries to show that theories that implement the higher-order strategy via a higher-order thought theory, as Rosenthal does, can’t be right. This is because, Pete argues, these kinds of theories claim that conscious states are states that come to have the property of being represented by a higher-order thought. This would be fine, but there is no such property, for if there were that would mean that unicorns could have it, since we represent them in thought. Closely related to the unicorn argument is the objection to higher-order theories from the possibility of the occurrence of the higher-order state in the absence of the first-order state. What state is it that has the property of being conscious?
 
I have argued that this argument does not threaten Rosenthal’s version of higher-order theory because for him the higher-order thought does not ‘transfer’ or ‘confer’ the property of consciousness to the first-order state. For him the property of being a conscious state consists solely in my representing myself as being in a certain state. The first-order state is not changed in any way by the higher-order thought. The only thing that has changed is that the creature is now aware of itself as being in the state. In the last post I relied on this in making my argument that the higher-order strategy commits people like Rosenthal to the claim that thoughts are qualitative. Now, if one wants to, one can say that the creature has gained a new property, that of being aware of itself as being in a certain state, but Rosenthal certainly wouldn’t say that a state’s being conscious is a matter of its acquiring a property that it did not have before.

Now, though Rosenthal says this, and it is an answer to the unicorn argument, it is held by most to be puzzling. Intuitively what most people think when they think of the higher-order strategy is that the higher-order state makes the first-order state a conscious state. But now we are told that that is not the case. The higher-order thought makes us conscious of ourselves as being in a certain state, but the first-order state plays no role in determining what it is like for us to have the state in question.

But there is another way of thinking about the relationship between the first-order state and the higher-order thought that represents it. Rosenthal thinks of this relationship as descriptive. The higher-order state describes the person as being in a certain first-order state which is located at such and such a place in some quality space, in the case of sensory experiences. In the case of beliefs and desires the higher-order thought describes the person as having a belief or desire with such and such a content. I call these kinds of higher-order thoughts Q-HOT’s. The other way of thinking about this relationship is along the lines of the causal theory of reference. On this way of construing the higher-order thought theory, a first-order state is a conscious state if and only if it causes, in the right sort of way, a higher-order state to the effect that one is in that state. I call these K-HOT’s.

Now, I don’t really know if the higher-order thought theory of consciousness is true or not, but it seems to me that it has got a good shot at being true. It is not obviously false. But I think that if one is going to have a higher-order thought theory then it is better to cast it in terms of K-HOT’s. It solves the problem of the unicorn as well as the problem of the non-existent state. We will never have K-HOT’s about non-existent states. Every K-higher-order thought is caused by some first-order state and if all we mean by ‘has the property of being represented’ is this causal reference relation then the first-order state will have the property of being represented. There are other advantages that recommend this modification to higher-order thought theory, but I have to go grade papers and so I will come back to them.