Consciousness, Relational Properties, and Higher-Order Theories

Greetings from Kennebunkport!

Jen and I went whale watching yesterday (saw a few humpbacks and a couple of finbacks)…but all I could think about was consciousness! That is, aside from trying not to get seasick 🙂

In an earlier post (The Function of Consciousness in Higher-Order Theories) I argued that higher-order theories were committed to saying that consciousness had very little function, but not, as Uriah suggested, to saying that consciousness was epiphenomenal. This was found to be puzzling by some, and today I was thinking about why this is.

Intuitively, people think that higher-order theories construe a mental state’s being conscious as a relational property of the first-order state. This is not surprising, since Rosenthal has said in numerous places that on his view consciousness is a relational property. But this is actually not right.

In Sensory Qualities, Consciousness, and Perception he is very clear that consciousness is not a relational property of the first-order state (I do not have my copy of Consciousness and Mind with me, but the passage is in section 5). This is because, on his view, the higher-order state can occur in the absence of the first-order state.

So, a state’s being conscious is not an intrinsic property of the state, nor is it strictly speaking a relational property of the state. It simply isn’t a property that the first-order state has at all. Any given first-order state is conscious when a suitable higher-order state represents the creature as being in that state.

Now, I agree that this is odd sounding, but this is what Rosenthal’s view is…this is yet another reason to prefer K-HOTs to Q-HOTs. On this view the first-order state does come to have the relational property of being conscious in virtue of causing a K-HOT that represents the creature as being in that state, and since there is no real difference between representing a state that does not exist and misrepresenting a state that does exist (on Rosenthal’s view), the K-HOT account captures everything that Rosenthal wants to say without the odd-sounding results.

OK, so back to the pool for me!

On Hallucinating Pain

OK, so one more for the road…

I was recently re-reading one of Ned Block’s papers (‘Bodily Sensations as an Obstacle for Representationism’) where he denies that there is an appearance/reality distinction when it comes to pain. This is a common view to have about pain (held, for instance, by Kripke in his argument against the Identity Theory). Here is what he says

My color experience represents colors, or colorlike properties. (In speaking of colorlike properties, I am alluding to Sydney Shoemaker’s “phenomenal properties” or “appearance properties” or Michael Thau’s nameless properties.) But, according to me, there is no obvious candidate for an objectively assessable property that bears to pain experience the same relation that color bears to color experience. But first, let us ask a prior question: what in the domain of pain corresponds to the tomato, namely, the thing that is red? Is it the chair leg on which I stub my toe (yet again), which could be said to have a painish or painy quality to it in virtue of its tendency to cause pain experience in certain circumstances, just as the tomato causes the sensation of red in certain circumstances? Is it the stubbed toe itself, which we experience as aching, just as we experience the tomato as red? Or, given the fact of phantom-limb pain, is it the toeish part of the body image rather than the toe itself? None of these seems obviously better than the others.

Now if one has adopted a higher-order theory of consciousness one will think that there is indeed an appearance/reality distinction to be made here. Since it is the higher-order state, and only the higher-order state, that accounts for there being something that it is like to have a conscious pain it follows that there is the real possibility that one may misrepresent oneself as being in pain when one is not, or as not being in pain when one is.

So it is no surprise to find David Rosenthal saying stuff like this

Just as perceptual sensations make us aware of various physical objects and processes, so pains and other bodily sensations make us aware of certain conditions of our own bodies. In standard cases of feeling pain, we are aware of a bodily condition located where the pain seems phenomenologically to be located. It is, we say, the foot that hurts when we have the relevant pain. And in standard cases we describe the bodily condition using qualitative words, such as painful, burning, stabbing, and so forth. Descartes’s famous Sixth Meditation appeal to phantom pains reminds us that pains are purely mental states. But we need not, on that account, detach them from the bodily conditions they reveal in the standard, nonhallucinatory cases. (from Sensory Quality and the Relocation Story)

So Rosenthal seems to be saying that it is bodily conditions that play the role that the tomato does, and that it is the first-order states which constitute an awareness of those conditions that play the role Block calls ‘representing color or colorlike properties’. If these are all distinct states, then we should expect them to come apart.

I have addressed the issue of unconscious pains in some previous posts. An unconscious pain, for Rosenthal and those like him, is a state that makes us conscious of some bodily condition and which resembles and differs from other pain states in ways that are homomorphic to the resemblances and differences between these bodily states. But what about the other case mentioned? Is it even possible to think that one is in pain and be wrong?

Rosenthal cites what he calls ‘the dental fear phenomenon’ as evidence for this claim. Here is what he says (in the same article as before)

Dental patients occasionally report pain when physiological factors make it clear that no pain could occur. The usual explanation is that fear and the non-painful sensation of vibration cause the patient to confabulate pain. When the patient learns this explanation, what it’s like for the patient no longer involves anything painful. But the patient’s memory of what it was like before learning the explanation remains unchanged. Even when what it’s like results from confabulation, it may be no less vivid and convincing than a nonconfabulatory case.

Now, I have always felt that this dental fear stuff was a really convincing way of showing that there really is a reality/appearance distinction for pains, but when I have tried to research this I have not been able to find very much on it (and Rosenthal offers no citations). Still, it does seem to be a relatively common phenomenon. Here is an excerpt from a paper on dental fear in children that tells dentists how to deal with it

Problems that a dentist is convinced are associated with misinterpretation of pain may be addressed by explaining the gate theory of pain. A very basic explanation which is suitable for children as young as five is as follows. ‘You have lots of different types of telephone wires called nerves going from your mouth to your brain (touch appropriate body parts). Some of them carry “ouch!” messages and the others carry messages about touch (demonstrate) and hot and cold. The sleeping potion stops the ouch messages being sent, but not the touch and the hot and cold messages. So you will still know that I am touching the tooth and you will still feel the cold of the water. Your brain looks out for messages all the time. If you are convinced that it will hurt, it will. This is because if I make the ouch nerves go off to sleep and I touch you, a touch message gets sent. But your brain is looking for ouch messages and it says to itself, ‘There’s a message coming. It must be an ouch message.’ So you go ‘ouch’ and it hurts, but all I did was to touch you. It’s just that your brain was confused.’ (The language may, of course, be adjusted for older children.) If this fails to work, then active treatment should be stopped. (from Dental Fear in Children)

This is clearly a pain hallucination, as evidenced by the fact that the way they treat it is not with more medication, but with an explanation, pitched at the kid’s level, of why what they are feeling is not pain.

Now this is very different from what is called neuropathic pain, which is pain caused by a misinterpretation of an innocuous stimulus, like touch, or pains like phantom limb pain. This is the result of one kind of stimulus, for one reason or another, causing the bodily state that gives rise to the perception of pain.

Peripheral nociceptive fibers located in tissues and possibly in the nervi nervorum can become hyperexcitable by at least 4 major mechanisms: a) nociceptor sensitization (“irritable nociceptors”); b) spontaneous ectopic activity; c) abnormal connections between peripheral fibers; and d) hypersensibility to catecholamines. This peripheral sensitization results in increased pain responses from noxious stimuli (primary hyperalgesia) and previously innocuous stimuli elicits pain (peripheral allodynia). Central nociceptive second order neurons in the spinal cord dorsal horn can also be sensitized when higher frequency inputs activate spinal interneurons. This results in the release of neuromodulators that activate glutamate receptors and voltage-gated calcium channels with a net effect of an increase of intracellular calcium that windup action potential discharges. Degeneration of peripheral nociceptive neurons may trigger changes in the properties of low-threshold sensitive neurons and axonal sprouting of the central processes of these fibers that connect with central nociceptive interneurons. (from Neuropathic Pain Treatment: The Challenge)

So it does look like we can distinguish the three states, and that we do in fact find cases of one without the other.

Sheesh! That turned out to be longer than I expected…but what the hell? I’m outta here!

The Function of Consciousness in Higher-Order Theories

I was recently reading through a new paper of Uriah Kriegel’s called The Same-Order Monitoring Theory of Consciousness where he says this

If consciousness were indeed a relational property, M’s being conscious would fail to contribute anything to M’s fund of causal powers. And this would make the property of being conscious epiphenomenal (see Dretske 1995: 117 for an argument along these lines).

This is, by all appearances, a serious problem for HOMT [higher-order monitoring theory a.k.a. Higher-order thought theory]. Why have philosophers failed to press this problem more consistently? My guess is that we are tempted to slide into a causal reading of HOMT, according to which M* produces the consciousness of M, by impressing upon M a certain modification. Such a reading does make sense of the causal efficacy of consciousness: after M* modifies M, this intrinsic modification alters M’s causal powers. But of course, this is a misreading of HOMT. It is important to keep in mind that HOMT is a metaphysical, not causal, thesis. Its claim is not that the presence of an appropriate higher-order representation yields, or gives rise to, or produces, M’s being conscious. Rather, the claim is that the presence of an appropriate higher-order representation constitutes M’s being conscious. It is not that by representing M, M* modifies M in such a way as to make M conscious. Rather, M’s being conscious simply consists in its being represented by M*.

So far this is all right (notice how Uriah has Rosenthal’s account correctly formulated in such a way as to be immune from certain unicorn arguments). I have also pointed out how this implicit assumption about what the higher-order thought theory is keeps people from thinking the theory is anti-Cartesian in certain important respects.

But Uriah goes on to say that

When proponents of HOMT have taken this problem into account, they have responded by downplaying the causal efficacy of consciousness. But if the intention is to bite the bullet, downplaying the causal efficacy is insufficient – what is needed is nullifying the efficacy. The charge at hand is not that HOMT may turn out to assign consciousness too small a fund of causal powers, but that it may deny it any causal powers. To bite the bullet, proponents of HOMT must embrace epiphenomenalism. Such epiphenomenalism can be rejected, however, both on commonsense grounds and on the grounds that it violates what has come to be called Alexander’s dictum: to be is to be causally effective. Surely HOMT would be better off if it could legitimately assign some causal powers to consciousness. But its construal of consciousness as a relational property makes it unclear how it might do so.

Now Rosenthal will be speaking about this issue at the ASSC, and Uriah is right that Rosenthal does not think that there is much, if any, function to consciousness qua consciousness, so I don’t want to get into that stuff. What I want to question is whether or not anyone who agrees with Rosenthal is committed, in the way that Uriah seems to think that they are, to saying that consciousness is epiphenomenal.  

The brunt of the challenge seems to come from the claim that our being conscious of a mental state, and hence that mental state’s being conscious, does not change or modify the first-order state in any way, and so its causal powers are unaffected by its being conscious. I think that this is right; in fact I use this as a premise in my argument that higher-order theories are committed to there being something that it is like for a creature to have a conscious thought. But does this claim entail that consciousness is epiphenomenal? I am not sure that it does.

I think that someone who likes the higher-order theory could say that, while the first-order state does not come to have any new causal properties when it becomes conscious, the creature in which the state occurs does. So, at the very least, even by Rosenthal’s lights, we get the ability to report (as opposed to express) our mental states when they are conscious, and we get the ability to introspect our mental states and thereby come to know what it is like for us to have them.

Now whether they can say there is more to the function of consciousness than this is another question, but at the very least, one does not have to dine on the bullet that Uriah has prepared.

Gary and Jerry

I have been working on my paper ‘Consciousness, Higher-Order Thoughts, and What It’s Like’ which I will be presenting in a couple of weeks, parts of which have appeared here and over at Brains. I was reading through it today and something interesting occurred to me. It has been a project of mine for a while now to show that all and only mental states have qualitative properties, and so that the qualitative is the mark of the mental. To that end I have been developing a model of the propositional attitudes that treats the mental attitudes as a distinctive way of feeling about some represented proposition (I give an introduction to the account in my award winning 😉 paper The Mark of the Mental).

In this current paper I am trying to show that one prominent theory of consciousness requires that thoughts be modeled as qualitative states, and that this view that I have independently worked out fits very nicely with the higher-order account. But I am also interested in ways of trying to get people to see that they already think that the attitude of belief has a distinctive qualitative feel. I point out what I think are good ways of seeing that in the paper, one of which is an intuition pump that Alvin Goldman came up with in his 1993 paper “The Psychology of Folk Psychology”. Here is what I say.

Goldman offers us a nice intuition pump. Imagine a Mary-like thought experiment with a super-scientist called Gary. Gary has never had a desire; now imagine that he suddenly does have one. Won’t he have learned something new? Namely, won’t he now know what it is like for him to have a desire? It seems to me that this suggests that there is a qualitative aspect to this mental attitude. But what about beliefs?

What occurred to me was a way to extend Goldman’s intuition pump to the case of beliefs. Given that we think that there could be unconscious beliefs, consider the following super-scientist, Jerry. Imagine that Jerry has been raised in a special room, much like Mary and Gary, but instead of never seeing red (Mary) or never having a desire (Gary), Jerry has never had a conscious belief. He has had plenty of unconscious beliefs, but none of them have been conscious. Let us imagine that we have finally discovered the difference between conscious and unconscious beliefs and that we have fitted Jerry with a special implant that keeps all of his beliefs unconscious, no matter how much he introspects. Let us also imagine that this device is selective enough that it affects only the beliefs, and so Jerry has plenty of other conscious experiences. He consciously sees red, has pain, wants food, fears that he will be let out of his room one day, wonders what the molecular structure of Einsteinium is, etc.

Now imagine that one of Jerry’s occurrent, unconscious beliefs suddenly becomes a conscious belief. For the first time in Jerry’s life he has a conscious belief. Won’t he learn something new? Won’t he learn what it is like for him to have the belief that he has always had? Doesn’t this suggest that it is part of what we ordinarily think about beliefs that they are qualitative states? Consider a Jerry-like Mary experiment. Let us suppose that Mary has never had a conscious experience of red, though she has had all kinds of unconscious red experiences and all kinds of other conscious experiences (perhaps, though, no conscious color experiences?). Now imagine that an unconscious, occurrent experience of red suddenly becomes conscious…it seems to me that these two cases are identical.

HOT Fun in the Summertime 2

Given that higher-order theories of consciousness are committed to the claim that there are unconscious sensory states (like pains, seeings of red, etc.) and that such unconscious states are not like anything for the creature that has them, they need a way to identify the sensory qualitative properties independently of our access to those properties (i.e. independently of their being conscious). This is where homomorphism theory comes in.

Rosenthal begins by noting that we characterize our sensory qualities in terms of their resemblances and differences within families of properties. These families of properties are in turn specified by reference to the perceptible properties of things in the world. For example, we can characterize red as more similar to pink than to brown, and so on; these resemblances and differences are homomorphic to the family of perceptible properties (presumably wavelength reflective properties) that give rise to the mental qualities. What we get from doing this systematically is a ‘quality space’ which is homomorphic to the quality space of the perceptible properties. Our being aware of the qualitative properties of sensory states explains how it is that we have mental access to the perceptible properties. An unconscious pain state, then, will be one that resembles and differs from other pain states in ways that are homomorphic to a family of perceptible properties, and via which we gain mental access to those properties. Though there may be other ways to independently specify the qualitative properties, all higher-order theories need some way to do it, and homomorphism theory looks promising. It is, at the very least, an illustration that it can be done. How can we extend this to cover the requirement that there is something that it is like for a creature to have a conscious thought?

I have elsewhere argued (The Qualitative Character of Conscious Thoughts) that the propositional attitudes can be modeled as taking some specific mental attitude towards some represented proposition and that the mental attitude just is some particular way of feeling about the represented proposition. So, for instance, having a belief consists in feeling convinced; that is, it is the subjective feeling of certainty that one has with respect to the truth of the represented proposition. This model of the propositional attitudes actually fits very nicely with homomorphism theory. In the sensory case we become aware of the sensory qualities, which are the properties that mental states have in virtue of which they resemble and differ from each other, and whose resemblances and differences are homomorphic to the resemblances and differences that hold between the family of perceptible worldly properties. Our being conscious of these properties explains how it is that we have mental access to colors. So too in thought we become conscious of the cognitive qualities, and this gives us access to our thoughts. To have a conscious belief is to be conscious of oneself as having a certain cognitive quality with respect to some content. And these cognitive qualities (that is, the mental attitudes themselves) will stand in various patterns of resemblances and differences from each other in just the same way that the sensory qualities do.
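Since the work in both the sensory and the cognitive case is being done by the idea of one family of resemblances and differences being homomorphic to another, a toy sketch may help make that structural notion concrete. This is only my own illustration of a similarity-preserving mapping in the sensory case, not Rosenthal’s formal apparatus; the items and the similarity facts are invented for the example.

```python
# A minimal, illustrative sketch (mine, not Rosenthal's formalism): treat a
# "quality space" as a set of items plus a comparative-similarity relation,
# and check that a mapping from mental qualities to perceptible properties
# preserves that relation -- which is all "homomorphic" is doing above.
from itertools import permutations

# Comparative similarity among mental color qualities: (a, b, c) means
# "a resembles b more than it resembles c". Toy data, for illustration only.
mental_sim = {
    ("red*", "pink*", "brown*"),
    ("pink*", "red*", "brown*"),
    ("brown*", "red*", "pink*"),
}

# Comparative similarity among the corresponding perceptible (surface) properties.
physical_sim = {
    ("red", "pink", "brown"),
    ("pink", "red", "brown"),
    ("brown", "red", "pink"),
}

# The candidate mapping from mental qualities to the perceptible properties
# they give us mental access to.
f = {"red*": "red", "pink*": "pink", "brown*": "brown"}

def is_homomorphism(mapping, source_sim, target_sim):
    """True iff the mapping preserves every comparative-similarity fact."""
    for a, b, c in permutations(mapping, 3):
        if (a, b, c) in source_sim and (mapping[a], mapping[b], mapping[c]) not in target_sim:
            return False
    return True

print(is_homomorphism(f, mental_sim, physical_sim))  # True
```

The point of casting it this way is that a state’s place in the mental similarity ordering, and its mapping onto the perceptible properties, are fixed without any appeal to the state’s being conscious, which is exactly what the higher-order theorist needs.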

What are we to say about the actual homomorphism to perceptible properties? Is there any set of properties that the mental attitudes are homomorphic to? That is, is there a set of properties whose similarities and differences preserve the similarities and differences between the mental attitudes? This is important, since we need a way to specify the attitudes apart from their qualitative component. Yes: we can hypothesize that the homomorphic properties are the illocutionary forces of utterances. So the differences between beliefs that p and desires that p are homomorphic to the differences between the illocutionary forces of the utterances used to express the belief or the desire.

This may even turn out to be an explanation of why having language allows us to have more fine-grained thoughts, if we could defend the claim that being conscious of our thoughts in respect of their qualitative attitude towards some represented content gives us mental access to the properties of the language that we would use to express that thought. If this were the case then the cognitive qualities would be exactly like the sensory qualities and our theory of one could be used to explain the other. Obviously more work needs to be done to flesh this out completely, but this line of thought seems to be a promising way of extending homomorphism theory to cover the propositional attitudes, and so this account of the propositional attitudes should be very attractive to anyone who accepts a higher-order theory of consciousness.

HOT Fun in the Summertime 1

I have been working on my paper ‘Consciousness, Higher-Order Thoughts, and What It’s Like’ which I will present as a poster at the SPP and as a talk at the ASSC in June. This paper is basically the first half of a longer paper of mine, Consciousness on my Mind: Implementing the Higher-Order Strategy for Explaining What It’s Like, which I wrote in my spare time while trying to avoid working on my dissertation 🙂 Parts of this paper are adapted in various posts around here…e.g. Explaining What It’s Like, Two Concepts of Transitive Consciousness, Kripke, Consciousness, and the ‘Corn, and As ‘Corny as I Want to Be. At any rate, I thought it might be helpful/interesting to post the basics of the paper.

The paper has two parts. In the first part I give the argument that all higher-order theories of consciousness are committed to the claim that there is something that it is like for an organism to have conscious propositional attitudes (like beliefs, desires, etc).  In the second part of the paper I suggest a model of the propositional attitudes that treats them as qualitative states and show that it actually fits nicely with Rosenthal’s homomorphism theory of sensory qualities.

Given that the transitivity principle says that a conscious mental state is a mental state that I am conscious of myself as being in, the argument for the commitment to the qualitative nature of conscious beliefs is pretty simple and straightforward.

  1. The transitivity principle commits you to the claim that any mental state can occur unconsciously and so to the claim that pains can occur unconsciously
  2. An unconscious pain is a pain that is in no way painful for the creature that has it (the transitivity principle commits you to this as well, on pain of failing to be able to give an account, as promised, of the nature of conscious qualitative states)
  3. It is the higher-order state, and solely the higher-order state, that is responsible for there being something that it is like to have a conscious pain.
  4. So, when a higher-order state of the appropriate kind is directed at a belief, it should make it the case that there is something that it is like for the creature that has the belief; otherwise there is more to conscious mental states than just higher-order representation.

I will post on the second part of the paper a little later.

Do Thoughts Make Us Conscious of Things?

The transitivity principle says that a conscious state is a mental state that we are conscious of ourselves as being in; thus an account of transitive consciousness is key for implementing a higher-order theory. Rosenthal is clear that he thinks that thoughts can sometimes make us conscious of things. Here is what he says in the introduction to Consciousness and Mind

We are conscious of things when we are in mental states that represent those things in some suitable way. We might be conscious of something by seeing it or sensing it in some other way, or by having an appropriate thought about it (p 4)

In particular, Rosenthal argues that when we think of some object as present we become conscious of that object. This claim is crucial for anyone who wants to hold a higher-order thought version of higher-order theory.

 In some recent arguing with Pete over at the Brain Hammer, he has denied that thoughts can make us conscious of things. Here is the example that I gave

You get up in the middle of the night to take a leak, it is pitch dark in your room, you can’t see a thing, you think to yourself “there’s a table in this room by the door, I better be careful not to stub my toe”.

I claim that I am conscious of the table. Or consider another case. Suppose that for some reason you think ‘John is here, in this room’ with your eyes closed and where John is in fact in the room. I claim that I would be conscious of John.

Now Pete seems to think that it is obvious that I am NOT conscious of the table or of John in these cases, whereas I seem to think that it is equally obvious that I am. Does anyone have an argument/intuitions either way?

UPDATE: I think I have actually found an argument for the claim that thoughts make us conscious, other than the claim that it is intuitive in the above examples that I am. Rosenthal argues that we can be conscious of one and the same experience in various ways, and these ways can be more or less exact. So, I could be conscious of an experience of red as a particular shade of red or just as a generic shade of red, but presumably the first-order state is in fact a determinate shade. This means that there is more to my conscious experience than just the first-order experiences that I have. We need a higher-order state that is able to capture these kinds of differences, and the intentional/conceptual content of thought is arguably the only way to do this. I rather like this argument…

Applying Frigidity

As commonly understood, Kripke’s notion of rigidity is a property that some terms have and that others lack. I argue that there is no such property that is had by some terms and lacked by others; hence there is no rigidity as commonly construed. Recent discussions of rigidity have, I claim, forgotten the importance that stipulation plays in Kripke’s original account. In short, the argument is that the truth-conditions of sentences with supposed rigid designators in them can vary depending on the stipulative act of the speaker. But if rigidity were a property of the terms themselves the truth-conditions should not vary! I introduce the notion of frigidity, which is not a property that terms have, but something that we do; it is a tool that we use to evaluate counterfactuals (Introducing Frigidity). We decide to ‘freeze’ the referent of a term and then try to evaluate counterfactual statements in terms of the constant referent. The ‘freezing’ is accomplished by a stipulative act on the part of the speaker.

Thus there are two ways to perform the thought experiment of frigid stipulation, corresponding to taking one or the other of the terms flanking the identity sign as frigid and asking ‘what about that in another possible world?’ We decide that we are going to stipulate, trivially as Kripke says, that we want to find out about X in a possible world. So for water=H2O we can ask ‘what if H2O, this very chemical substance, were in a world that was different from ours?’ If it turns out that H2O is not ‘watery’ that is OK. We can then also ask ‘what about water? Stuff that acts like this, fills our lakes, etc.? What if we found a world that had watery stuff that was not H2O?’ And that is OK as well. This has the advantage of explaining why people’s intuitions vary about whether twater is water.
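To put the two stipulations side by side, here is a rough paraphrase. The ‘watery role’ gloss and the predicate-style rendering are my own way of putting it, offered only as an illustration:

\[
\begin{aligned}
&\text{Freezing `H}_2\text{O': } x \text{ is water in } w \iff x \text{ is that very chemical substance, H}_2\text{O}.\\
&\text{Freezing `water': } x \text{ is water in } w \iff x \text{ plays the watery role in } w \text{ (fills the lakes, etc.)}.
\end{aligned}
\]

Twin-Earth XYZ satisfies the second condition but not the first, which is one way of seeing why intuitions about whether twater is water pull in different directions depending on which stipulation has tacitly been made.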

However, Kripke (1980) makes the claim that when it comes to mental kinds we cannot do this, because in the case of pains and whatnot their properties are not separable in this way. But once we switch from rigidity to frigidity this is less obvious. We can hold the brain state frigid and ask ‘what is it like to have this brain state in a world that is different from ours?’ It may turn out that that very brain state is not like anything to have at all. On the other hand, we can hold the sensation of pain frigid and ask questions about worlds with that sensation. It certainly seems logically possible that some of those worlds will have that sensation and yet not have any brain states at all!

This is just what Kripke’s objection to the identity theory is. He says “this notion seems to me self-evidently absurd. It amounts to the view that the very pain I now have could have existed without being a mental state at all” (p. 147). Well, yes, this is true if what he means is that the very brain state he is in, and which is his pain, might have existed but was not painful for the creature that had it. This is to do no more than admit that there might exist an unfelt pain. He is wrong if he means that a pained creature, one that felt pain, would not be in pain.

As ‘Corny As I Want To Be

As some of you may know, I have been mounting an offensive against Pete Mandik’s Unicorn argument against higher-order theories of consciousness. We have been having quite a bit of discussion over at the Brain Hammer (Me So ‘Corny) about whether or not my proposed answer works, and so I thought I would take this opportunity to sum up the debate so far.

The Argument

Pete’s argument is actually quite simple. Here is the way that he puts it:

First, some quick and dirty definitions of my targets:

[Higher-order Representationalism] – The property of being a conscious state consists in being a represented state.

P1. Things that don’t exist don’t instantiate properties.

P2. We represent things that don’t exist.

P3. Representing something does not suffice to confer a property to that thing.

C1. Representing a state does not suffice to confer the property of being conscious to that state (so [higher-order representationalism] is false).

There is another conclusion (C2) that first-order representationalism is false, but I already knew that and so will ignore it.

Two Ways to Kill a ‘Corn

Now it is no secret that I think this is a bad argument, one that rests on several misunderstandings of the higher-order theory. It is not a threat to Rosenthal’s version of higher-order theory because he would deny the assumption needed to get P3 and hence C1. Here is the way I put it in Kripke, Consciousness and the ‘Corn.

[T]his argument does not threaten Rosenthal’s version of higher-order theory because for him the higher-order thought does not ‘transfer’ or ‘confer’ the property of consciousness to the first order state. For him the property of being a conscious state consists solely in my representing myself as being in a certain state. The first-order state is not changed in any way by the higher-order thought. The only thing that has changed is that the creature is now aware of itself as being in the state.

Now it may be counterintuitive to say that the higher-order state in no way changes the first-order state, but intuition is not argument. Also, the transitivity principle commits you to this claim, as I detailed in Explaining What It’s Like, and as Rosenthal is well aware. Here is his response to the problem posed by P2 (the interviewer is Uriah Kriegel)

Ephilosopher: Professor Rosenthal, let me raise one final difficulty for your theory. According to your theory, what it is like for the subject to be in a conscious state is determined by how that state is represented by the second-order state. But what happens when there is a misrepresentational second-order state, with no first-order state at all? It seems your theory commits you to saying that, in such cases, the subject is under the false impression that she is having a particular kind of conscious experience, when in fact she is not. Doesn’t that strike you as absurd, though?

David Rosenthal: Answering this question requires a lot of care in how we put things. We can get a feel for what’s at issue by considering a case that actually occurs. Dental patients sometimes seem to themselves to feel pain even when the relevant pain nerve endings are dead or anaesthetized. The widely held explanation is that these patients feel sensations of fear and vibration as though those sensations were pain. We certainly have no trouble understanding this explanation. But how should we describe what’s happening specifically in terms of the patient’s conscious states? It’s undeniable that the patient is in some conscious state, but what kind of conscious state is it? From the patient’s subjective, first-person point of view, the conscious state is a pain, but we have substantial independent reason to say that there simply is no pain. How we describe this case depends on whether we focus primarily on the state of which the patient is actually conscious or on the way the patient is conscious of it. The trouble is that these two things come apart; the patient is conscious of sensations of fear and vibration, but conscious of them as pain. So it’s not at all absurd, but only unexpected, that one be conscious of oneself as being in a state that one is not actually in. It’s worth noting that this divergence between the state of which somebody is actually conscious and how that person is conscious of it has practical importance. The area of so-called dental fear is of interest to dentists and to theorists because patients who understand what’s happening readily come to be conscious of their sensations as sensations of vibration and fear, which is not especially bothersome. How one represents one’s experiences does determine what those experiences are like for one. Is this really the kind of case you asked about? You asked about what happens when one has a higher-order thought that one is in a state that doesn’t occur. But maybe we should treat the dental case rather as a higher-order thought that misdescribes its target; it misdescribes sensations of fear and vibration as a sensation of pain. But I think it will never matter which way we describe things. When a higher-order thought occurs, there are always other mental states, as well. So whenever a higher-order thought doesn’t accurately describe any state that actually occurs, we can say either that it misdescribes some actual state or that it’s about some nonexistent state; it won’t make any difference which way we characterize the situation

So on Rosenthal’s view there simply is no difference between saying that the HOT represents a state that does not exist and saying that it misrepresents a state that does exist. So Rosenthal’s version of higher-order theory is completely unaffected by the unicorn argument.

Even so, it does commit him to saying some strange-sounding things. But there is another way to think of the relation between the higher-order state and the first-order state, one which gives rise to the distinction between what I call K-HOTs and Q-HOTs (Two Concepts of Transitive Consciousness). A K-HOT is caused by the first-order state that it represents, whereas a Q-HOT simply ‘accompanies’ the first-order state it represents. Rosenthal used to endorse K-HOTs but has since moved to Q-HOTs, but as I argued in ‘Two Concepts’ there is no reason to abandon K-HOTs, and they give us a second, more convincing, way to kill the ‘corn. Here’s how.

A K-HOT represents its target state via the concepts at the disposal of the creature in question, in just the same way that Rosenthal has spent so long arguing is the case. The difference is that the K-HOT is (theoretically) required to be caused by some first-order state or other, and it is that causal link that determines which first-order state the higher-order state is about. So K-HOTs will NEVER represent a first-order state that does not exist; rather, they will ALWAYS represent (or misrepresent) a state that does in fact exist. So the property of being represented is none other than the property of causing a higher-order state. This means that while it may be true that WE represent things that do not exist, K-HOTs do not. So again, P2 and P3 are blocked.
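To make the structural point vivid, here is a toy data-model sketch. It is my own illustration, not anything from Rosenthal or from the posts it summarizes, and the class names are invented: the only point is that a K-HOT can be built only from the first-order state that causes it, so the state it is about always exists, even when the K-HOT’s descriptive content gets that state wrong (as in the dental-fear case).

```python
# A toy illustration (mine): a K-HOT carries both a descriptive content and a
# causal link to the first-order state that produced it. The causal link, not
# the description, fixes which state the K-HOT is about, so the target always
# exists even when the description misdescribes it.
from dataclasses import dataclass

@dataclass
class FirstOrderState:
    quality: str                      # e.g. "vibration", "pain", "red*"

@dataclass
class KHot:
    cause: FirstOrderState            # the state that caused this K-HOT
    described_as: str                 # how the creature is represented as being

    @property
    def target(self) -> FirstOrderState:
        # What the K-HOT is about is fixed by its cause, not its description.
        return self.cause

    @property
    def misrepresents(self) -> bool:
        return self.described_as != self.cause.quality

# The dental-fear case: a vibration sensation causes a K-HOT that describes
# the creature as being in pain. The target exists; it is merely misdescribed.
vibration = FirstOrderState(quality="vibration")
hot = KHot(cause=vibration, described_as="pain")
print(hot.target.quality, hot.misrepresents)   # vibration True
```

A Q-HOT, by contrast, would carry no required causal link, which is exactly why it can end up being ‘about’ a state that is not there at all.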

So whether you link Quine and Q-HOTs or Kripke and K-HOTs, the unicorn is no threat to higher-order theories. Of course, having said that, I think there are reasons to prefer K-HOTs, but that is another story.