Implementing the Transitivity Principle

A conscious mental state, for Pete, is a complex state made up of two interacting states: a first-order sensory state that carries information about the world, and a higher-order representation that characterizes the first-order state in terms of the concepts available to the creature and that also has ‘egocentric’ content, which is content to the effect that the state in question belongs to the creature in question. Recently I have been arguing that theories of consciousness like Pete’s, Prinz’s, and Churchland’s are really just implementations of the transitivity principle, even though they do not think that they are implementing it (Is There Such a Thing as a Neurophilosophical Theory of Consciousness?).

In Ch. 5 of Pete’s book-in-progress The Subjective Brain he addresses this concern by saying the following.

Aren’t mental representations with conceptualized egocentric contents automatically implementations of the Transitivity Principle?

Nope. According to Transitivity, a state is conscious only if one is conscious of it. However, according to the theory to be further fleshed out in the next chapter, one set of mental representations that would suffice for consciousness would include the following. I have a sensational state that carries the information that, among other things, there is a coffee cup to my left which triggers the conceptualization that there is a coffee cup to my left which in turn (the conceptualization) exerts (yet to be specified) causal influences on the sensational state. What I would be conscious of, on this view, is a coffee cup as being to my left. I would not be conscious of either the sensational state or the conceptual state or their mutual causal interaction. I need not be conscious of any mental state of me. (There being a coffee cup to my left is arguably a state of me, but it is pretty clearly not one of my mental states.) Therefore, the conceptual egocentric representations that suffice for consciousness need not implement Transitivity.

Now one way of responding to this claim, and the way that is currently being debated over at Brain Hammer (Contents, Vehicles, and Transitive Consciousness and more here), is to argue, as Robert Lurz does, that I can be conscious of my mental states by being conscious of what those states represent. If this is true then it is obvious that Pete and company are just offering an alternative way of implementing the transitivity principle. I do not want to talk about this issue here, as it is being debated at Brain Hammer and I am content to let it continue there.

What I do want to talk about is the claim that I have made that everything that Pete says is something that Rosenthal can agree with, and so nothing that he has said shows that there is anything wrong with transitivity or that his theory doesn’t implement it (A Tale of Two T’s). So, I was reading Ch. 4 of Consciousness and Mind, entitled ‘Introspection and Self-Interpretation’, while following up on my Introspective HOT Zombie of the previous post (more on that later) when I found this nice passage.

When one has a thought that one’s own experience visually represents a red physical object, that thought need not be in any way consciously inferential or based on theory; it might well be independent of any inference of which one is conscious. From a first person point of view, any such thought would seem unmediated and spontaneous. And it is the having of just such thoughts that makes one conscious of one’s experiences. Such a thought, moreover, by representing the experience as itself visually representing a red physical object, makes one conscious of the experience as being of the type that qualitatively represents red objects. And being an experience of that type simply is having the relevant mental quality. So, being conscious of oneself as having a sensation of that type is automatically being conscious of oneself as having a sensation with the quality of mental red, and thus of the mental quality itself. (p. 119)

This is interesting because Rosenthal seems to be arguing, in the reverse of Lurz, that being conscious of myself as being in a certain mental state just is being conscious of what the state represents.

So for Rosenthal it will be true that when we introspect we will be conscious of the tomato. That is, from the first-person point of view it will seem to us that we are conscious only of the properties of the tomato. How is this possible? He makes this a little clearer on the next page, where he says,

When one shifts one’s attention from the tomato to one’s visual experience of it, it does not seem, subjectively, that some new qualities arise in one’s stream of consciousness. This may well seem to underwrite Harman’s insistence that the only quality one is aware of in either case is that of the tomato. But that is too quick. As noted earlier, we can be conscious of a particular thing in particular ways. When one sees a red tomato consciously but unreflectively, one conceptualizes the quality one is aware of as a property of the tomato. So that is how one is conscious of that quality.

So again, we conceptualize the mental quality as a property of the tomato when the state is conscious, and so we are conscious of it as a property of the tomato; to us it will seem as though all we are conscious of is the property of the tomato. When we introspect we conceptualize the quality as a property of the experience, not of the tomato. So Rosenthal can agree that what we are conscious of is the coffee cup or the tomato, and yet all the while this is just an implementation of the transitivity principle.
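Rosenthal’s move here has a simple structure, and it can be regimented in a toy sketch (purely my own illustration; the function and the names in it are invented, not drawn from Rosenthal). The very same mental quality figures in consciousness differently depending on how it is conceptualized:

```python
# Toy illustration (invented names): the very same mental quality is
# conceptualized one way in ordinary conscious perception and another
# way in introspection.

def conscious_of(quality: str, introspecting: bool) -> str:
    if introspecting:
        # Introspection: the quality is conceptualized as a property
        # of the experience itself.
        return f"{quality} as a property of my experience"
    # Ordinary conscious perception: the quality is conceptualized
    # as a property of the external object.
    return f"{quality} as a property of the tomato"

print(conscious_of("mental red", False))  # unreflective perception
print(conscious_of("mental red", True))   # introspection
```

In both cases the input quality is the same; only the conceptualization differs, which is why, unreflectively, all we seem to be conscious of is the tomato.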

Varieties of Higher-Order Zombie

A philosophical zombie is supposed to be a creature that is functionally/physically identical to you but which lacks qualitative consciousness. Although there is something that it is like for you to drink orange juice just after brushing your teeth, there is nothing that it is like for your zombie twin to do the same thing. Of course there is a huge debate over whether these things are really possible or not and if so what they show about consciousness. I don’t really want to get into this traditional problem (my own view is that the answers are ‘no’ and ‘nothing’), but rather want to discuss some kinds of higher-order zombies.

Disregarding the ‘functionally/physically identical’ bit, a zombie on the higher-order theory of consciousness is a creature that has all of my first-order states but none of my higher-order states. There will be nothing that it is like for this creature to have any of its mental states, even though he and I will be pretty much behaviorally indistinguishable (since conscious mental states have very little function on the higher-order theory (but not ‘no function’, as I argued in The Function of Consciousness in Higher-Order Theories)).

I was recently reading Rosenthal’s Metacognition and Higher-Order Thoughts, which is a response to several commentaries on his 2000 Consciousness & Cognition piece. In it Rosenthal addresses the possibility of a HOT zombie, which is a creature “whose inner life is subjectively indistinguishable from ours despite the lack of sensory states.” A HOT zombie is a creature who has all of my higher-order states but none of my first-order states. This is, of course, a radical version of the objection from the ‘empty HOT’, and while it is wildly implausible, it is a theoretical possibility and so something must be said about it.

Now some may find the possibility of a HOT zombie to be paradoxical (in fact one of the commentators does). Rosenthal’s response to this is his usual one. He says,

[T]he intuitive paradox rests on an ambiguity in ‘sensory state.’ The sensory states the HOT zombie would lack are only nonconscious states. Since conscious states are states one is conscious of oneself as being in, notional states are all that matter for the purposes of consciousness.

So my HOT zombie twin and I will have indistinguishable conscious experiences but, as Rosenthal notes, we will behave in very different ways. This is because the first-order states that the HOT zombie lacks are the states that have most of the causal efficacy.

Now this is all very interesting in its own right (but I don’t want to discuss it now…Pete and I have argued over this stuff before, like here), but last night, as I was introspecting while listening to some live jazz music, I started thinking about another kind of higher-order zombie: an introspective HOT zombie. Introspection, on the higher-order theory, is the occurrence of a suitable higher-order state that is about one’s higher-order states. A conscious experience occurs when one is conscious of oneself as being in a certain first-order state, and in introspection one becomes conscious of oneself as being conscious of that first-order state. Since introspection is simply the occurrence of some third-order state about my second-order states, all of the issues about misrepresentation come up again at this higher level.

So we could (theoretically) have a creature who lacked all of my first-order states and all of my second-order states but which had all of my third-order states. This is the introspective HOT zombie. This creature has no conscious states even though it seems to him as though he does. When I see red I will be conscious of the red and conscious of myself as seeing red, and were I to introspect I would be conscious of myself as being conscious of myself as seeing red, but the introspective HOT zombie is just conscious of itself as being conscious of itself as seeing red. What will it be like for this creature? It will be like consciously and introspectively seeing red.

As if this wasn’t bizarre enough, we could (again theoretically) have a case of a creature who had a first-order state that was a seeing of red and a HOT misrepresenting this first-order state as a seeing of green. Since what it is like for this creature is determined by the higher-order content, it will be like seeing green for this creature. Now suppose that this creature introspects its conscious mental states and (for some reason) has a third-order state that represents the second-order state as a seeing of red (that is, it accidentally gets things right). What will it be like for this creature? Are we to say that this creature is conscious of itself as seeing red and not conscious of itself as seeing red? That what it is like for this creature is like seeing red and not seeing red?
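The bookkeeping across these cases is easy to lose track of, so here is a toy model (entirely my own regimentation, with invented names; nothing like it appears in the higher-order literature) that reads what it is like for a creature off its higher-order contents alone, as the theory dictates:

```python
# Toy model of the higher-order taxonomy; all names are invented for
# this sketch. 'What it is like' is fixed by the higher-order contents
# alone; the first-order states contribute nothing directly.

from dataclasses import dataclass

@dataclass
class Creature:
    first_order: frozenset = frozenset()   # sensory states, e.g. a seeing of red
    second_order: frozenset = frozenset()  # HOTs: contents one attributes to oneself
    third_order: frozenset = frozenset()   # introspective HOTs about one's HOTs

    def what_it_is_like(self):
        # A third-order state makes it seem both that one is in the
        # state and that one is conscious of being in it.
        return self.second_order | {
            x for c in self.third_order
            for x in (c, f"being conscious of {c}")
        }

red = frozenset({"seeing red"})
me = Creature(red, red, red)                      # veridical, introspecting
introspective_zombie = Creature(third_order=red)  # only third-order states
mixed = Creature(red, frozenset({"seeing green"}), red)  # HOT misrepresents;
                                                         # introspection 'corrects'
```

On this regimentation the introspective HOT zombie’s phenomenology comes out identical to mine, as the post claims, and the mixed creature wears the puzzle on its sleeve: its phenomenology contains both ‘seeing green’ (from the misrepresenting HOT) and ‘seeing red’ (from the introspective state that accidentally gets things right).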

I will have to think about this some more…

Is There Such a Thing as a Neurophilosophical Theory of Consciousness?

Pete has Ch. 4 of his book-in-progress up over at the Brain Hammer, entitled The Neurophilosophy of Consciousness. His stated goal is to discuss

philosophical accounts of state consciousness, transitive consciousness, and phenomenal character that make heavy use of contemporary neuroscientific research in the premises of their arguments.

This is because he defines ‘neurophilosophy’ as the bringing to bear of concepts from neuroscience to solve problems in philosophy, as he says

neurophilosophical work on consciousness proceeds largely by bringing neuroscientific theory and data to bear on philosophical questions such as the three questions of consciousness.

But it is unclear to me in what sense a theory of consciousness can be neurophilosophical at all.

For instance, here is how he characterizes Churchland’s account of what a conscious state is:

Paul Churchland articulates what he calls the “dynamical profile approach” to understanding consciousness (2002). According to the approach, a conscious state is any cognitive representation that is involved in (1) a moveable attention that can focus on different aspects of perceptual inputs, (2) the application of various conceptual interpretations of those inputs, (3) holding the results of attended and conceptually interpreted inputs in a short-term memory that (4) allows for the representation of temporal sequences.

How is this neurophilosophical? To be sure, Churchland goes on to talk about how this could be implemented in a connectionist neural architecture, but the actual theory of what a conscious state is isn’t much different from standard higher-order accounts. It involves being aware of myself as being in a certain state. Nothing neurophilosophical here! And his account of what-it-is-like-ness just involves appeal to the representational content of sensory states; again, nothing specifically neurophilosophical about this.

The same can be said about Prinz’s AIR model, which Pete quotes a summary of,

When we see a visual stimulus, it is propagated unconsciously through the levels of our visual system. When signals arrive at the high level, interpretation is attempted. If the high level arrives at an interpretation, it sends an efferent signal back into the intermediate level with the aid of attention. Aspects of the intermediate-level representation that are most relevant to interpretation are neurally marked in some way, while others are either unmarked or suppressed. When no interpretation is achieved (as with fragmented images or cases of agnosia), attentional mechanisms might be deployed somewhat differently. They might ‘‘search’’ or ‘‘scan’’ the intermediate level, attempting to find groupings that will lead to an interpretation. Both the interpretation-driven enhancement process and the interpretation-seeking search process might bring the attended portions of the intermediate level into awareness. This proposal can be summarized by saying that visual awareness derives from Attended Intermediate-level Representations (AIRs). (p. 249)

Again, it is difficult to see how Prinz is doing anything more than discussing a possible implementation of the transitivity principle, which is not neurophilosophical. Pete does note that Prinz does not WANT his theory to be an implementation of the transitivity principle, but the challenge is to explain how it isn’t, not merely indicate that one wants it to be different.

Pete himself makes this clear in his summary of the three positions.

Churchland, Prinz, and Tye agree that conscious states are representational states. They also agree that what will differentiate a conscious representation from an unconscious representation will involve relations that the representation bears to representations higher in the processing hierarchy. For both Churchland and Prinz, this will involve actual interactions, and further these interactions will constitute relations that involve representations in processes of attention, conceptual interpretation and short term memory. Tye disagrees on the necessity of actually interacting with concepts or attention. His account is dispositional meaning that the representations need only be poised for uptake by higher levels of the hierarchy.

So, in so far as these are theories of consciousness, they are the standard ones. Now, I am not denying that these guys are neurophilosophers in the sense that Pete means; they do appeal to detailed neuroscience in the premises of their arguments. But I don’t see how the neuro stuff is supposed to be a theory of consciousness. As I have said, it looks like spelling out ways of implementing the two standard (first-order/higher-order) representational theories of consciousness.

The challenge, then, is to spell out a neurophilosophical theory of consciousness that is distinct from these standard theories, which are not themselves neurophilosophical.

Consciousness is Not a Relational Property

I’m Back! At least for the next five days until I go to Vegas for the ASSC on Friday for some more HOT Fun in the Summertime!

Wow, what a trip!!! Toronto is much nicer than I thought it would be, and the East Coast is truly beautiful this time of year (the highlight for me was the saltwater pool in Kennebunkport…almost like being in the ocean in Hawaii, or Jamaica or something, nice!)…but it is good to be back in Brooklyn…

Anyways, here is the passage from p. 211 of Consciousness and Mind that I mentioned in the previous post (Consciousness, Relational Properties, and Higher-Order Theories)

Since there can be something it’s like for one to be in a state with particular mental qualities even if no such state occurs, a mental state’s being conscious is not strictly speaking a relational property of that state. A state’s being conscious consists in its being a state one is conscious of oneself as being in. Still, it is convenient to speak loosely of the property of a state’s being conscious as relational so as to stress that it is in any case not an intrinsic property of mental states.

’nuff said? This is the real reason that Rosenthal’s view is not targeted by objections like Pete Mandik’s Unicorn argument, or the common objection from the possibility of the HOT occurring in the absence of the first-order state, or, as I argued, from Uriah’s charge that higher-order theories, like Rosenthal’s, that claim that the first-order state does not acquire a new property (i.e. that of being a conscious state) are committed to the claim that consciousness is epiphenomenal.

I agree that the confusion is due mostly to Rosenthal’s ‘loose way of speaking’ and his reluctance to disabuse people of this intuitive picture of the higher-order thought theory. This is at least in part because this way of thinking of the theory agrees better with our common sense conception of how things like this should work. This, as I have already said, is yet another reason to prefer K-HOTs to Q-HOTs. 

Consciousness, Relational Properties, and Higher-Order Theories

Greetings from Kennebunkport!

Jen and I went whale watching yesterday (saw a few Hump-backs and a couple of Fin-backs)…but all I could think about was consciousness! That is, aside from trying not to get sea-sick 🙂

In an earlier post (The Function of Consciousness in Higher-Order Theories) I argued that higher-order theories were committed to saying that consciousness had very little function, but not, as Uriah suggested, to saying that consciousness was epiphenomenal. This was found to be puzzling by some, and today I was thinking about why this is.

Intuitively, people think that higher-order theories construe a mental state’s being conscious as a relational property of the first-order state. This is not surprising, since Rosenthal has said in numerous places that on his view consciousness is a relational property. But this is actually not right.

In Sensory Qualities, Consciousness, and Perception he is very clear that consciousness is not a relational property of the first-order state (I do not have my copy of Consciousness and Mind with me, but the passage is in section 5). This is because, on his view, the higher-order state can occur in the absence of the first-order state.

So, a state’s being conscious is not an intrinsic property of the state, nor is it strictly speaking a relational property of the state. It simply isn’t a property that the first-order state has at all. Any given first-order state is conscious when a suitable higher-order state represents the creature as being in that state.

Now, I agree that this is odd sounding, but this is what Rosenthal’s view is…this is yet another reason to prefer K-HOTs to Q-HOTs. On this view the first-order state does come to have the relational property of being conscious in virtue of causing a K-HOT that represents the creature as being in that state, and since there is no real difference between representing a state that does not exist and misrepresenting a state that does exist (on Rosenthal’s view), the K-HOT account captures everything that Rosenthal wants to say without the odd-sounding results.

OK, so back to the pool for me!

On Hallucinating Pain

OK, so one more for the road…

I was recently re-reading one of Ned Block’s papers (‘Bodily Sensations as an Obstacle for Representationism’) where he denies that there is an appearance/reality distinction when it comes to pain. This is a common view to have about pain (held, for instance, by Kripke in his argument against the Identity Theory). Here is what he says:

 My color experience represents colors, or colorlike properties. (In speaking of colorlike properties, I am alluding to Sydney Shoemaker’s “phenomenal properties”  or “appearance properties” or Michael Thau’s nameless properties.) But, according to me, there is no obvious candidate for an objectively assessable property that bears to pain experience the same relation that color bears to color experience. But first, let us ask a prior question: what in the domain of pain corresponds to the tomato, namely, the thing that is red? Is it the chair leg on which I stub my toe (yet again), which could be said to have a painish or painy quality to it in virtue of its tendency to cause pain–experience in certain circumstances, just as the tomato causes the sensation of red in certain circumstances? Is it the stubbed toe itself, which we experience as aching, just as we experience the tomato as red? Or, given the fact of phantom-limb pain, is it the toeish part of the body image rather than the toe itself? None of these seems obviously better than the others.

Now if one has adopted a higher-order theory of consciousness one will think that there is indeed an appearance/reality distinction to be made here. Since it is the higher-order state, and only the higher-order state, that accounts for there being something that it is like to have a conscious pain it follows that there is the real possibility that one may misrepresent oneself as being in pain when one is not, or as not being in pain when one is.

So it is no surprise to find David Rosenthal saying stuff like this:

Just as perceptual sensations make us aware of various physical objects and processes, so pains and other bodily sensations make us aware of certain conditions of our own bodies. In standard cases of feeling pain, we are aware of a bodily condition located where the pain seems phenomenologically to be located. It is, we say, the foot that hurts when we have the relevant pain. And in standard cases we describe the bodily condition using qualitative words, such as painful, burning, stabbing, and so forth. Descartes’s famous Sixth Meditation appeal to phantom pains reminds us that pains are purely mental states. But we need not, on that account, detach them from the bodily conditions they reveal in the standard, nonhallucinatory cases. (from Sensory Quality and the Relocation Story)

So Rosenthal seems to be saying that it is bodily conditions that play the role that the tomato does, and it is first-order states, which constitute an awareness of those conditions, that play the role that Block calls ‘representing color or colorlike properties’. If these are all distinct states, then we should expect them to come apart.

I have addressed the issue of unconscious pains in some previous posts. An unconscious pain, for Rosenthal and those like him, is a state that makes us conscious of some bodily condition and which will resemble and differ from other pain states in ways that are homomorphic to the resemblances and differences between these bodily states. But what about the other case mentioned? Is it even possible to think that one is in pain and be wrong?

Rosenthal cites what he calls ‘the dental fear phenomenon’ as evidence for this claim. Here is what he says (in the same article as before)

Dental patients occasionally report pain when physiological factors make it clear that no pain could occur. The usual explanation is that fear and the non-painful sensation of vibration cause the patient to confabulate pain. When the patient learns this explanation, what it’s like for the patient no longer involves anything painful. But the patient’s memory of what it was like before learning the explanation remains unchanged. Even when what it’s like results from confabulation, it may be no less vivid and convincing than a nonconfabulatory case.

Now, I have always felt that this dental fear stuff was a really convincing way of showing that there really is a reality/appearance distinction for pains, but when I have tried to research this I have not been able to find very much on it (and Rosenthal offers no citations). It does, however, seem to be a relatively common phenomenon. Here is an excerpt from a paper on dental fear in children that tells dentists how to deal with this:

Problems that a dentist is convinced are associated with misinterpretation of pain may be addressed by explaining the gate theory of pain. A very basic explanation which is suitable for children as young as five is as follows. ‘You have lots of different types of telephone wires called nerves going from your mouth to your brain (touch appropriate body parts). Some of them carry “ouch!” messages and the others carry messages about touch (demonstrate) and hot and cold. The sleeping potion stops the ouch messages being sent, but not the touch and the hot and cold messages. So you will still know that I am touching the tooth and you will still feel the cold of the water. Your brain looks out for messages all the time. If you are convinced that it will hurt, it will. This is because if I make the ouch nerves go off to sleep and I touch you, a touch message gets sent. But your brain is looking for ouch messages and it says to itself, ‘There’s a message coming. It must be an ouch message.’ So you go ‘ouch’ and it hurts, but all I did was to touch you. It’s just that your brain was confused.’ (The language may, of course, be adjusted for older children.) If this fails to work, then active treatment should be stopped. (from Dental Fear in Children)

This is clearly a pain hallucination, as evidenced by the fact that the way they treat it is not with more medication, but with an explanation, pitched at the child’s level, of why what they are feeling is not pain.

Now this is very different from what is called neuropathic pain, which is pain that is caused by a misinterpretation of an innocuous stimulus, like touch, or pains like phantom limb pain. This is the result of one kind of stimulus, for one reason or another, causing the bodily state that gives rise to the perception of pain.

Peripheral nociceptive fibers located in tissues and possibly in the nervi nervorum can become hyperexcitable by at least 4 major mechanisms: a) nociceptor sensitization (“irritable nociceptors”); b) spontaneous ectopic activity; c) abnormal connections between peripheral fibers; and d) hypersensibility to catecholamines. This peripheral sensitization results in increased pain responses from noxious stimuli (primary hyperalgesia) and previously innocuous stimuli elicits pain (peripheral allodynia). Central nociceptive second order neurons in the spinal cord dorsal horn can also be sensitized when higher frequency inputs activate spinal interneurons. This results in the release of neuromodulators that activate glutamate receptors and voltage-gated calcium channels with a net effect of an increase of intracellular calcium that windup action potential discharges. Degeneration of peripheral nociceptive neurons may trigger changes in the properties of low-threshold sensitive neurons and axonal sprouting of the central processes of these fibers that connect with central nociceptive interneurons. (from Neuropathic Pain Treatment: The Challenge)

So it does look like we can distinguish the three states, and that we do in fact find cases of one without the other.

Sheesh! That turned out to be longer than I expected…but what the hell? I’m Outa Here!

The Function of Consciousness in Higher-Order Theories

I was recently reading through a new paper of Uriah Kriegel’s called The Same-Order Monitoring Theory of Consciousness, where he says this:

If consciousness were indeed a relational property, M’s being conscious would fail to contribute anything to M’s fund of causal powers. And this would make the property of being conscious epiphenomenal (see Dretske 1995: 117 for an argument along these lines).

This is, by all appearances, a serious problem for HOMT [higher-order monitoring theory a.k.a. Higher-order thought theory]. Why have philosophers failed to press this problem more consistently? My guess is that we are tempted to slide into a causal reading of HOMT, according to which M* produces the consciousness of M, by impressing upon M a certain modification. Such a reading does make sense of the causal efficacy of consciousness: after M* modifies M, this intrinsic modification alters M’s causal powers. But of course, this is a misreading of HOMT. It is important to keep in mind that HOMT is a metaphysical, not causal, thesis. Its claim is not that the presence of an appropriate higher-order representation yields, or gives rise to, or produces, M’s being conscious. Rather, the claim is that the presence of an appropriate higher-order representation constitutes M’s being conscious. It is not that by representing M, M* modifies M in such a way as to make M conscious. Rather, M’s being conscious simply consists in its being represented by M*.

So far this is all right (notice how Uriah has Rosenthal’s account correctly formulated in such a way as to be immune from certain unicorn arguments). I have also pointed out how this implicit assumption about what the higher-order thought theory is keeps people from thinking the theory is anti-Cartesian in certain important respects.

But Uriah goes on to say that

When proponents of HOMT have taken this problem into account, they have responded by downplaying the causal efficacy of consciousness. But if the intention is to bite the bullet, downplaying the causal efficacy is insufficient – what is needed is nullifying the efficacy. The charge at hand is not that HOMT may turn out to assign consciousness too small a fund of causal powers, but that it may deny it any causal powers. To bite the bullet, proponents of HOMT must embrace epiphenomenalism. Such epiphenomenalism can be rejected, however, both on commonsense grounds and on the grounds that it violates what has come to be called Alexander’s dictum: to be is to be causally effective. Surely HOMT would be better off if it could legitimately assign some causal powers to consciousness. But its construal of consciousness as a relational property makes it unclear how it might do so.

Now Rosenthal will be speaking about this issue at the ASSC, and Uriah is right that Rosenthal does not think that there is much, if any, function to consciousness qua consciousness, so I don’t want to get into that stuff. What I want to question is whether or not anyone who agrees with Rosenthal is committed, in the way that Uriah seems to think that they are, to saying that consciousness is epiphenomenal.  

The brunt of the challenge seems to come from the claim that our being conscious of a mental state, and hence that mental state’s being conscious, does not change or modify the first-order state in any way, and so its causal powers are unaffected by its being conscious. I think that this is right; in fact I use this as a premise in my argument that higher-order theories are committed to there being something that it is like for a creature to have a conscious thought. But does this claim entail that consciousness is epiphenomenal? I am not sure that it does.

I think that someone who likes the higher-order theory could say that, while the first-order state does not come to have any new causal properties when it becomes conscious, the creature in which the state occurs does. So, at the very least, even by Rosenthal’s lights, we get the ability to report (as opposed to express) our mental states when they are conscious, and we get the ability to introspect our mental states and thereby come to know what it is like for us to have them.

Now whether they can say there is more to the function of consciousness than this is another question, but at the very least, one does not have to dine on the bullet that Uriah has prepared.