The New New Dualism

Yesterday I attended Miguel Angel Sebastian’s cogsci talk, entitled “The Subjective Character of Experience: Against HOR and SOR Theories,” which was very interesting. Miguel was primarily trying to show that higher-order and same-order representationist theories of consciousness cannot account for the subjective character of an experience, by which he means the thing that accounts for the experience being for the subject. His main complaint seemed to be that in order to account for this we need some notion of the self, and so he suggested that we need a model where we have representations of the self interacting with representations of objects, so that we end up with a representation of the form “x for-me”. There were several interesting themes in the discussion, and if I have time I will probably come back to some of them, but I thought I’d start with this one.

In response to the mis-match problem David has settled on the following view. The phenomenology goes with the HOT. The sensory qualities of the first-order state play no role (other than that of concept acquisition) in determining the phenomenal character of a conscious experience. So in the case of Dental Fear the subject has a first-order state with vibration sensory qualities and a HOT that they are in pain, so their conscious phenomenology is like having pain for them. The first-order sensory qualities play a perceptual role in the mental economy of the subject, so having them is important, but they don’t play a role as far as consciousness is concerned. In fact, even if there is no first-order state at all (as may perhaps be the case in Anton’s syndrome), the phenomenology goes with the HOT. Now, in the cases where there is no first-order state one still counts as being in a conscious state. The mental state that is conscious is just the one that the HOT represents oneself as being in, and so in this case the conscious mental state is a notional state, which is to say that it doesn’t exist. It follows from this that there are conscious mental states that have no neural correlates. We thus end up with a dualism about consciousness of a new variety: there are some conscious mental states that exist physically in the brain, and there are other conscious mental states that exist only notionally as the content of a HOT.

What should our reaction to this be? When this first became clear at David’s Mind and Language seminar it prompted Steve Stich to shout ‘he’s worse than a dualist!’ Miguel seemed to think that at the very least this is a cost of the theory, and that if you can have a theory that explains all the data without it, that is preferable. David refused to say that this was even a cost for the theory; in fact he seemed to suggest that it wasn’t even counter-intuitive. His reasons seemed to be as follows. I can have thoughts about things which are not present, and those notional objects can have properties. So, if I think about a squirrel I might think of it as brown and bushy; even if there is no squirrel around, the squirrel has properties: it is brown and bushy. Thus it is simply a fact about intentional states like thoughts that their contents can be notional and that those notional objects can be said to have properties. If that is right then there is nothing fundamentally mysterious about notional mental states having properties. The second step in his defense seemed to involve an appeal to hallucinations. We hallucinate regularly enough for it to be a commonplace of folk psychology. Why doesn’t it make sense to say that we can hallucinate mental states? On this line the notional state is just like my hallucination of a pink elephant: it seems like it is there from my point of view but it isn’t really there. This isn’t mysterious, since it simply means that I represent myself as being in a state that I am not in. Now, given various theoretical assumptions this will indeed turn out not to be counter-intuitive, and since those who do find it counter-intuitive will do so because of different theoretical assumptions, I suppose I can see why David thinks that this is not a cost to the theory.

But suppose that one had different theoretical assumptions? Suppose that one wanted to avoid this kind of existence dualism and so endorsed some principle like this: for every conscious mental state there is a corresponding brain state. But suppose one also wanted to remain a higher-order theorist…what are the options? The most obvious option is to identify the phenomenally conscious state with the HOT. The HOT is not introspectively conscious (for that it would need to have a third-order state targeting it) but it is phenomenally conscious. It is the state in virtue of which there is something that it is like for the subject, and so it seems natural to identify the property of phenomenal consciousness with having the HOT. Ned Block has argued that if one does this then one has falsified the higher-order theory. Why? The transitivity principle says that a conscious mental state is one which I am conscious of myself as being in, but on the previous analysis we have a phenomenally conscious mental state (the HOT itself) which we are not conscious of ourselves as being in (there is no third-order HOT); thus adopting this view falsifies the transitivity principle. But this may be too quick. This way of formulating the transitivity principle leads us to the view that the HOT transfers or confers the property of being conscious to the first-order state, but as we have seen, what the transitivity principle really says is that a conscious mental state consists in my being conscious of myself as being in some first-order state. That is, the transitivity principle is a hypothesis about the nature of conscious mental states. It is a mis-reading of the transitivity principle that takes it to postulate consciousness as resulting from a relation between the first-order state and the higher-order state. That this is the dominant way of interpreting the transitivity principle is not in doubt; it most certainly is. However, it is misleading and causes way too many problems.
I think higher-order theorists need to be more explicit about this mis-reading of the transitivity principle.

To me the second is the best option. However, lots of people seem to think that if one adopts a same-order theory one can avoid these kinds of issues. Since one takes the conscious mental state to be a complex of a first-order content and a second-order content that represents the first-order content, we don’t have to worry about notional states. But it is far from obvious that this theory has any advantages over the HOT theory. First, it is unclear why the higher-order content cannot occur without the first-order content. This seems like an empirical issue that can’t be settled by definitional fiat (I guess I think Anton’s syndrome might be a problem here). Second, even if it turns out that you can’t have one without the other, it is still not clear why there cannot be a content mis-match. Why can’t a red first-order state be coupled with a higher-order content that represents the first as green?

Does the Zombie Argument Rest on a Category Mistake?

Re-reading Ryle’s “Descartes’ Myth” I was struck by the following passage:

…the Dogma of the Ghost in the Machine does just this. It maintains that there exist both bodies and minds; that there occur physical processes and mental processes; that there are mechanical causes of corporeal movement and mental causes of corporeal movement. I shall argue that these and other analogous conjunctions are absurd…the phrase ‘there occur mental processes’ does not mean the same sort of thing as ‘there occur physical processes,’ and, therefore, that it makes no sense to conjoin or disjoin the two. (this is from page 37 in the Chalmers anthology)

I have always been sympathetic to the category mistake move and have viewed it as a precursor to the claim that it is simply question begging to treat mental terms as synonymous for ‘non-physical’. I also think that a lot of my complaints about the intelligibility of substance dualism originate in Ryle’s discussion of the origin of the category mistake.

Re-reading this today I started thinking that maybe one could use this kind of claim to cause problems for the zombie argument. The first premise of the zombie argument employs the conjunction (P & ~Q), where P is all of the physical facts and processes and Q is some qualitative fact, like that I feel pain. If it is really logically illegitimate to conjoin these terms then the zombie argument cannot even get off the ground. So what is the response that the dualist will make here? It seems to me that all of the examples of category mistakes involve concepts that have fairly straightforward conceptual entailment relations between them. So, a pair of gloves just is a left glove and a right glove, and we can tell this just by analyzing the concept of PAIR OF GLOVES. The same can be said for the University and the battalion. But of course it is not obvious, to say the least, that the same is true for PAIN or SEEING BLUE. To many, myself included, it seems as though there are no conceptual entailment relations between my “pure” phenomenal concept of pain and physical processes (for me the ‘seems as though’ part is especially important). But maybe it is at just this point that I myself, as well as the dualist, commit the category mistake!

Whoa…I’ll have to come back to that because now I’m off to Miguel’s CogSci talk.

Higher-Order Mental Pointing

I recently re-watched the footage of the discussion from Hakwan’s actual talk at NYU. One interesting issue that came up (there were others I may talk about later) was whether a higher-order theory can avoid the mis-match problem.

The problem is this. Suppose you have a first-order state that is a seeing of red while one has a higher-order state to the effect that one is seeing green. David R. argues that the phenomenology goes with the higher-order representation, and so in this case the person would have green visual phenomenology. They would be consciously seeing green. A first-order theorist will argue that the phenomenology goes with the first-order state. Block suggests this when he says that it is just the first-order state getting above a certain threshold that makes it phenomenally conscious. Hakwan wants to avoid this and so adopts a pointing view. On his view we have a higher-order confidence judgement to the effect that I am such and such % sure that I am in this or that sensory state. Since the higher-order state is just pointing at the first-order state, Hakwan suggests that there is no mis-match problem for his view.

But the question then arises: what is mental pointing? On my view mental pointing is just having the right causal connection. That is, I have a purely causal theory of reference for higher-order thoughts. However, these are complex demonstratives and have the form I AM IN THAT-RED* STATE, where the THAT-red* term has its reference fixed by the causal connection between the states (sometimes I think it might be because it has the function to do so, sometimes I don’t…) but the phenomenal character is determined by the conceptual content of the complex demonstrative. What are the other candidates for mental pointing? When asked later in discussion Hakwan offered the following. Suppose that each sensory state that the brain can be in is labeled 1-n. Suppose that the state labeled ‘1’ has a very good signal, but something goes wrong and one has a higher-order confidence judgement that the state labeled ‘4’ is true; then one will hallucinate 4 and fail to consciously see 1. But what are these labels if not the kind of complex demonstratives I talked about above?

Interestingly, later in the discussion, Hakwan proposed a nice empirical test that might help to decide between the higher-order view and the first-order view. The higher-order view predicts that one can have a conscious experience of green even when one has a first-order representation of red. Given what we know about the brain, this might translate into having certain kinds of activity in the pre-frontal cortex that is different from the activity in V4. Suppose that we could identify, or read out, stimulus color from the activity in V4 and we were also able to read out the color from activity in the pre-frontal cortex. Suppose that when the stimulus was unconsciously presented we saw only the activity in V4 and not in the PFC. Suppose that in the Sperling-type cases we got evidence that the stimulus was represented unconsciously (activity in V4) but the read-out from the PFC told us only that the subjects saw some letters arranged in a grid. This would do what Hakwan suggests: take a prediction that no one in the world believes, do an experiment, and see what happens.
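The decision logic of the proposed experiment can be put schematically. This is purely my own toy sketch, not anyone’s actual analysis pipeline; the function name and the idea of cleanly decoding a color label from V4 or PFC activity are simplifying assumptions for illustration.

```python
# Toy sketch of the experimental logic: decode the stimulus color
# independently from V4 and from PFC, then ask which read-out the
# reported conscious percept should track on each theory.
# (Illustrative only; real decoding is nothing like this clean.)

def predicted_report(v4_decode, pfc_decode, theory):
    """Return the color the subject should consciously report, per theory."""
    if theory == "first-order":
        # Phenomenology goes with the first-order (V4) representation.
        return v4_decode
    if theory == "higher-order":
        # Phenomenology goes with the higher-order (PFC) representation.
        return pfc_decode
    raise ValueError("unknown theory")

# Mismatch case: V4 carries 'red' while PFC carries 'green'.
assert predicted_report("red", "green", "first-order") == "red"
assert predicted_report("red", "green", "higher-order") == "green"
```

The point of the sketch is that the theories only come apart in the mismatch case; when the two read-outs agree, both predict the same report, which is why the mismatch condition is the one worth engineering.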

I think that until we are in a position to do these kinds of experiments, or someone thinks of a clever way to get at the issue in a different way, we cannot rule the higher-order theory out. It may turn out to be false, but it may turn out to be true. Conceptual objections cannot help us as they only serve to tell us what we find intuitive.

More HOTter, More Better

In an earlier post I outlined the case for qualia realism from the higher-order perspective as I see it. Dave Chalmers worried that one of the moves was too quick. The move in question is the move from concepts making a difference to phenomenal experience to their determining phenomenal experience. Basically the line I was pushing was that if it is the case that applying concepts changes our phenomenal experience then “perhaps it is not too crazy to think that applying concepts is what results in phenomenal feel in the first place,” but Dave is right that there is a lot more that needs to be said.

As I also said, I think that a crucial step in securing this premise in the argument is showing that there can be unconscious states with qualitative character which are not like anything for the creature that has them. If we established that, then we would have evidence that it is solely applying concepts that constitutes phenomenal consciousness. There is another line of argument which might show this as well, which is given by David Rosenthal in a few different places (see page 155 in Consciousness and Mind for a representative example). Basically it is a subtraction argument. Take some phenomenally conscious experience, like listening to music. We already agree that applying new concepts will change the character of the experience. So, if I were to learn what a bass clarinet was, then listening to Herbie Hancock’s Chameleon would sound different to me. Now suppose that we subtract this concept. My experience will change. More specifically, it will lack the bass-clarinetiness that my experience had when I applied that concept. Now we can continue subtracting out concepts one by one without altering the first-order state in any way. Since subtracting a concept produces a phenomenal experience that lacks precisely the element corresponding to the concept, we can conclude that subtracting these concepts will produce phenomenal consciousness that is sparser and sparser. What are we to say when we have reached the point where there is just one concept characterizing the first-order state? Suppose that we are at the point where we are only applying the concept SOUND to the experience. Phenomenally it will be like hearing a sound for me, but not any particular sound. Now suppose we subtract that concept. What will it be like for the creature?
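The structure of the subtraction argument can be modeled very crudely. This is my own illustration, not Rosenthal’s formalism: phenomenal character is represented as nothing but the set of concepts applied to an unchanging first-order state, which is precisely the assumption the argument is meant to motivate.

```python
# Toy model of the subtraction argument: phenomenal character = the set
# of concepts applied; the first-order state itself is never altered.

def subtract_concepts(applied_concepts):
    """Yield the successively sparser phenomenal characters as concepts
    are removed one by one."""
    concepts = list(applied_concepts)
    while concepts:
        concepts.pop()          # subtract one concept
        yield set(concepts)     # what the experience is like now

experience = ["SOUND", "MUSIC", "BASS CLARINET"]
stages = list(subtract_concepts(experience))
# Each stage lacks exactly the element the removed concept contributed...
assert stages[0] == {"SOUND", "MUSIC"}
assert stages[1] == {"SOUND"}
# ...until no concept is applied and, on the higher-order view,
# nothing is left that it is like for the creature.
assert stages[-1] == set()
```

The disputed question is what the final, empty stage corresponds to: the higher-order theorist says nothing phenomenal remains, the other side says something does.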

The higher-order theorist says that at that point it is no longer like anything for the creature. The other side says that there is still something that it is like (though it may not be like anything for the creature), but what argument could show this? What reason is there for thinking that there is anything phenomenal left over?

Summa Contra Plantinga

I recently reread Alvin Plantinga’s paper Against Materialism and, needless to say, I am less than impressed. Plantinga presents two “arguments” against materialism, each of which is utterly ridiculous.

The first is what he calls the replacement argument (sic). It is possible, Plantinga tells us, that one could have one’s body replaced while one continues to exist; therefore one is not one’s body. Of course the obvious problem with this argument is that it at best shows that I am not identical to a particular body, but it does not show that minds are not physical, for it does not show that the mind exists without any body whatsoever. To show that, Plantinga needs to appeal to disembodiment, and he doesn’t.

It is also clearly possible that one could have one’s immaterial substance replaced and continue to exist; thus one is not an immaterial substance. This is because there is nothing contradictory in supposing that materialism is true and what this shows, as I have argued at length before, is that these a priori arguments are of no use to us at this point.

Now Plantinga, to his credit, realizes that these kinds of intuitions are ultimately question begging, so his second argument appeals to an alleged impossibility, which turns out to be none other than the problem of intentionality. The argument turns on our ability to ‘just see’ that it is impossible that a physical thing can think. Just as the number 7 cannot weigh 5 pounds, neither can a brain think. Never mind computers and naturalized theories of content; those couldn’t be belief contents. Oh, I see…wait, I don’t.

But of course the real problem here is that it is even more mysterious how an immaterial substance could think. Plantinga spends some time in the paper responding to Van Inwagen’s argument along these lines. Plantinga focuses on Van Inwagen’s claim that we can’t imagine an immaterial substance. The response should be obvious: we can’t imagine lots of stuff (like what a number looks like) but that doesn’t show that they are impossible. Van Inwagen’s second swipe at immaterial substances is that we cannot see how an underlying reality that is immaterial can give rise to thinking any more than we can see how an underlying physical reality can. Plantinga’s response to this is to claim that the soul is a simple and has thinking as an essential attribute in much the same way as an electron is said to be simple and have its charge essentially.

But all of this seems to me to miss the fundamental point that Van Inwagen wants to make. The very concept of an immaterial substance is unintelligible. Attempts to make them intelligible render them into ordinary physical substances at the next level up, so to speak. And it is of course out of the question to simply say that an immaterial substance is perfectly intelligible since they are just minds (as Plantinga seems to do). It is obvious that there is thinking but it is not at all obvious that an immaterial substance could think. What would that even mean?  The upshot then is that substance dualism is not a viable theory.

Two Concepts of Transitive Consciousness

In celebration of my three years in the Blogosphere I will be reposting some of my earlier posts that I am particularly fond of. This piece was originally published May 10th, 2007.
——————————-

In his youthful exuberance Rosenthal argued that for a first-order state to count as a conscious state, the first-order state had to cause the higher-order state to occur. But he has come to explicitly reject this causal requirement. He now talks about the higher-order thought ‘accompanying’ the target state. It need not have any causal connection to the first-order state at all. What this amounts to is that there are at least two different ways of thinking about the relation between the first-order state and the higher-order state, depending on whether you think intentionality is at bottom a matter of description, functional role, and holism, or a matter of word-world relations, causation, and compositionality. This leads us to what I have called Q-higher-order thoughts and K-higher-order thoughts.

A K-higher-order thought is a higher-order thought that is caused by its target state and so picks it out in something like a causal, complex-demonstrative way. Something like ‘I am, myself, in (dthat) red state.’ In order to count some first-order state as a conscious state, it has to be the cause of the higher-order state that targets it. On the other hand, a Q-higher-order thought need not be caused by the state that it represents in order to be about it and for us to be conscious of it. It picks out the target state purely by description. The Q-higher-order thought characterizes the first-order state in terms of its resemblances and differences to and from other sensory states like it. Something like ‘I, myself, am in a state that is more like pink than it is like blue and more like orange than it is like green…etc.’ So which of these should we prefer? I have been arguing here, and in response to Pete over at Brain Hammer, that this kind of higher-order theory allows us to answer the ubiquitous objection from the so-called empty higher-order thought, and, more recently, that it gives us a nice response to Pete’s Unicorn argument against higher-order theories. These, I think, are already powerful reasons to think this is the right way to cast the theory, but one may wonder what else speaks in its favor.

Rosenthal gives two very quick arguments against his former K-higher-order view, both in a footnote that he added in 2005 (p. 56). His first argument is that requiring the causal connection between the first-order state and the higher-order state, in order for the first-order state to count as a conscious state, is theoretically unmotivated. The idea behind this is that the transitivity principle requires only that one be conscious of being in the first-order state; it seems to be silent on what actually causes you to become so conscious. However, the causal antecedents of the higher-order state will seem to matter very much if one is influenced by the Grice-Kripke-Fodor picture of the mind. So the claim that the causal requirement is theoretically unmotivated by the transitivity principle is a more revealing fact about Rosenthal than about the higher-order theory. A causal theory of reference is itself powerfully motivated, and if it turns out to be correct, then we had better have a higher-order theory that incorporates it (that is, if we want to have a higher-order theory in the first place).

The footnote continues by pointing out that one reason why the idea that the first-order states causes the higher-order state is so intuitive is because it is a way of saving the Cartesian insight that there is an intimate connection between mental states and consciousness. If first-order states are in the business of causing higher-order states about themselves we could easily explain why so many philosophers have thought that being conscious is essential to being a mental state. It also explains why we are conscious of our mental states in an immediate, non-inferential way, which is required by higher-order theories. This looks like some kind of theoretical motivation, so what is it that he finds so problematic?

He argues that if we require that the first-order state cause the higher-order state in order for it to be a conscious state, we end up having to say that being conscious is the ‘normal condition’ for mental states. The reason that we do not want to say that being conscious is the normal condition for mental states is that it obscures the important fact that they may occur unconsciously, and that seems like a pretty normal condition for mental states to be in as well. If the normal condition of a mental state were to be conscious, then it is only if some special causal mechanism intervened in the normal procedure that we would end up with unconscious mental states. But this is wrong because, as we saw in the first part of the paper, the transitivity principle predicts that any state can occur unconsciously. One is not more normal than the other.

But it is natural to think that some kinds of states are more normally conscious than others. For instance, it is natural to think that the sensory states, and other kinds of states that we most likely share with other animals, do normally cause higher-order states about them. Or, in other words, it is natural to think that sensory states more naturally occur consciously, though there are plenty of times when they do not. In the case of thoughts and other more complex forms of mental phenomena, it is natural to think that they would be less likely to have to occur consciously, being newer perhaps and less in the business of day-to-day survival. And there are all kinds of stories we can tell about why that is the case and how it would be implemented in a complex system like neural representation. There may be filters, thresholds, feedback networks, both inhibitory and excitatory, and who knows what else. We can do all this without falling into the trap of thinking that the sensory states must always be conscious.

Containing Phenomenological Overflow

I am going to the Association for the Scientific Study of Consciousness meeting in Toronto to do a poster presentation of the higher-order response to Block’s phenomenological overflow argument. This is important since it is a crucial step in the argument for the naturalization of qualia. The core argument is in this video.

This shows that phenomenological overflow is no threat to the higher-order theory. Is there any reason to prefer it? I was rereading Huxley’s On the Hypothesis that Animals are Automata, and Its History, and I came across this very interesting passage,

If the spinal cord is divided in the middle of the back, for example, the skin of the feet may be cut, or pinched, or burned, or wetted with vitriol, without any sensation of touch, or of pain, arising in consciousness. So far as the man is concerned, therefore, the part of the central nervous system which lies beyond the injury is cut off from consciousness. It must be admitted, that, if any one think fit to maintain that the spinal cord below the injury is conscious, but that it is cut off from any means of making its consciousness known to the other consciousness in the brain, there is no means of driving him from his position by logic. But assuredly there is no way of proving it, and in the matter of consciousness, if anything, we may hold the rule, “De non apparentibus et de non existentibus eadem est ratio.”

As far as I can tell the Latin phrase there means something like “things that can’t be detected don’t exist,” though my Latin is rusty. If this is roughly right then Huxley seems to be making an argument similar to the one I was pushing at the Online Consciousness Conference. If the mesh argument doesn’t decide between a Blockian or a Rosenthalian view, then we should decide the issue on philosophical grounds. One way of reading the Huxley passage is as a semi-verificationist move: since there can be no empirical test of the matter, we may treat it as a meaningless hypothesis. I would read this passage differently.

A state is phenomenally conscious when there is something that it is like for the creature that has the state. When there is nothing that it is like for the creature then there is no phenomenal consciousness. Thus when there is no what-it-is-likeness around we can assume that there is no phenomenal consciousness hanging about. To imagine otherwise is to imagine that there is something that it is like for me that is not like anything for me…and that sounds like a contradiction.

Importantly, none of this is to deny that unconscious pains have qualitative properties. These qualitative characters, when unconscious, do not have any phenomenal feel, but they do resemble and differ from other qualitative characters in the right ways, and they have causal connections as usual. It is only when we are conscious of them that they have the phenomenology we associate with pain. True, this seems to violate our common-sense thinking about pains, though there are some platitudes that cut the other way, which just again illustrates that folk theory is often inconsistent.

As Aristotle recommended we must try to save as many of the most basic pre-theoretical platitudes as we can but it may be the case that some will have to go; perhaps the common sense idea that there are unconscious pains that are phenomenally conscious is one of them. The claim turns out to be either paradoxical or merely terminological.

HOT Block

In celebration of my three years in the Blogosphere I will be reposting some of my earlier posts that I am particularly fond of. This piece was originally published July 11th, 2007.
——————————-

I was recently reading Block’s forthcoming BBS paper Consciousness, Accessibility, and the Mesh between Psychology and Neuroscience. It is an interesting paper and I am looking forward to seeing the commentary. The basic puzzle is one that I have heard him talk about before: how could we tell whether the transitivity principle is right or not? What would empirically decide whether there can be a phenomenally conscious state that we are unconscious of having? So, to take Block’s example, suppose that we have a person who is subliminally perceiving a face and there is activation in that person’s fusiform face area. Since the subject sincerely reports that they do not see a face, it seems we can agree that this is the sensory state in the absence of the higher-order state.

How do we describe this situation? Do we say that the face experience is phenomenally unconscious? That there is nothing that it is like to see the face? Does it, as Rosenthal would say, have unconscious qualitative properties? Or do we say that there is something that it is like for the person to see the face but that they are unconscious of what it is like for them? The puzzle is that both theories make the same prediction about what the person will report (they don’t see a face), and so we need to find some other way to distinguish the two claims empirically. I don’t really want to talk about Block’s argument that phenomenology overflows our access to it (unless someone does want to talk about it), as all I could do is to repeat the Rosenthal line that the evidence that Block presents (i.e. the change blindness stuff) isn’t good evidence, because the subjects can report, as Block acknowledges, that they saw some letters or ‘a rectangle’. Rosenthal can explain this on his account in the following way. In one case we are conscious of the first-order experience as just some rows of letters, or as just a rectangle, while in the other we are conscious of the experience as being a row of some specific letters or shapes. So the fact that subjects report that they have some phenomenally conscious experience, as Block rightly points out, needn’t be evidence for his claim that there is phenomenology without Awareness.

I think that if one steps far enough back from this debate one can see that it is the distinction between analytic and psycho-functionalism that is causing a lot of the local flare-ups, and that this has some bearing on the empirical testability issue and the debate with Mandik that I have been suffering through, but I will leave that for another day.

What I do want to talk about is Block’s dismissal of Rosenthal’s kind of higher-order theory. He makes it very clear that he thinks that the higher-order thought theory is not even a candidate for a serious theory of phenomenal consciousness. As I have said many times before, I do not know if the higher-order thought theory is true or not, but it is at least not obviously false. It is a well formulated theory that could turn out to be right. So what’s Block’s problem?

He makes his case at the beginning of the paper in this rather longish quote.

We may suppose that it is platitudinous that when one has a phenomenally conscious experience, one is in some way aware of having it. Let us call the fact stated by this claim – without committing ourselves on what exactly that fact is – the fact that phenomenal consciousness requires Awareness. Sometimes people say Awareness is a matter of having a state whose content is in some sense “presented” to the self or having a state that is “for me” or that comes with a sense of ownership or that has “meishness” (as I have called it; Block 1995a).

Very briefly, three classes of accounts of the relation between phenomenal consciousness and Awareness have been offered. Ernest Sosa (2002) argues that all there is to the idea that in having an experience one is necessarily aware of it is the triviality that in having an experience, one experiences one’s experience just as one smiles one’s smile or dances one’s dance. Sosa distinguishes this minimal sense in which one is automatically aware of one’s experiences from noticing one’s experiences, which is not required for phenomenally conscious experience. At the opposite extreme, David Rosenthal (2005) has pursued a cognitive account in which a phenomenally conscious state requires a higher order thought to the effect that one is in the state. That is, a token experience (one that can be located in time) is a phenomenally conscious experience only in virtue of another token state that is about the first state. (See also Armstrong 1977, 1978; Carruthers 2000; Lycan 1996 for other varieties of higher order accounts.) A third view, the “Same Order” view says that the consciousness-of relation can hold between a token experience and itself. A conscious experience is reflexive in that it consists in part in an awareness of itself. (This view is discussed in Brentano 1874/1924; Burge 2006; Byrne 2004; Caston 2002; Kriegel 2005; Kriegel & Williford 2006; Levine 2001, 2006; Metzinger 2003; Ross 1961; Smith 1986).

So he is telling us here that his target in the paper is people who think that there is no phenomenology without awareness. Now we could (and should) quibble with the way that Block casts Rosenthal’s theory; for instance, when he says that it is the view that a token experience located in time is a conscious state in virtue of a higher-order thought that is about it. That is not quite right, as I have spent a lot of time arguing (for instance, in Consciousness, Relational Properties, and Higher-Order Theories, Consciousness is Not a Relation Property, and The Function of Consciousness in Higher-Order Theories). But waive that for the moment.

He goes on in the next paragraph to say,

The same order view fits both science and common sense better than the higher order view. As Tyler Burge (2006) notes, to say that one is necessarily aware of one’s phenomenally conscious states should not be taken to imply that every phenomenally conscious state is one that the subject notices or attends to or perceives or thinks about. Noticing, attending, perceiving, and thinking about are all cognitive relations that need not be involved when a phenomenal character is present to a subject. The mouse may be conscious of the cheese that the mouse sees, but that is not to say that the mouse is conscious of the visual sensations in the visual field that represent the cheese or that the mouse notices or attends to or thinks about any part of the visual field. The ratio of synapses in sensory areas to synapses in frontal areas peaks in early infancy, and likewise for relative glucose metabolism. (Gazzaniga et al. 2002, p. 642–43). Since frontal areas are likely to govern higher-order thought, low frontal activity in newborns may well indicate lack of higher-order thoughts about genuine sensory experiences.

The relevance of these points to the project of the paper is this: the fact of Awareness can be accommodated by either the same order view or the view in which Awareness is automatic, or so I will assume. So, there is no need to postulate that phenomenal consciousness requires cognitive accessibility of the phenomenally conscious state. Something worth calling “accessibility” may be intrinsic to any phenomenally conscious state, but it is not the cognitive accessibility that underlies reporting.

He is making it very clear that he thinks that he has given decisive reasons for dismissing the higher-order thought theory. Has he? Not surprisingly, I don’t think that he has. Instead he displays a curious prejudice against the higher-order thought theory.

Let us look at what he says. In the first sentence he says that the same-order view, a view like Kriegel’s, is better suited to common sense and science. What follows that remark then looks like what he takes to be common sense evidence against the higher-order thought view and in favor of the same-order view, followed by some scientific evidence that illustrates the same point. The common sense evidence, evidently, rests on our intuition that “[t]he mouse may be conscious of the cheese that the mouse sees, but that is not to say that the mouse is conscious of the visual sensations in the visual field that represent the cheese or that the mouse notices or attends to or thinks about any part of the visual field.” But that is certainly true, and neither Rosenthal, nor any other higher-order theorist, denies it! The mouse is conscious of the cheese by having a first-order sensory state that represents the cheese, so it can be conscious of the cheese without any higher-order thoughts at all.

Presumably, though, what Block means here is that the mouse can have a phenomenally conscious experience of the cheese without having a thought about its first-order mental states. But that is simply to beg the question against Rosenthal. He has a story about why you wouldn’t notice the higher-order thoughts were they there, and yet how we can still have some evidence that they do occur, and also a story about how the concepts that occur in the higher-order thoughts about sensory states would be easy to come by. So easy to come by, in fact, that animals could probably get them. So it is not crazy or absurd to think that the mouse might have a conscious experience of the cheese by having a higher-order thought to the effect that it is seeing cheese. So the common sense evidence against the higher-order thought theory isn’t any good.

What about the scientific evidence? The suggestion here is that there is empirical evidence that newborns have very low frontal activity, that this would mean that they do not have higher-order thoughts, and so that they do not have any conscious experiences at all. Therefore the higher-order thought theory is at odds with scientific evidence. But there is a suppressed premise in Block’s argument: namely, the premise that it is obvious that newborn infants do in fact have conscious experiences. Now, granted, it does seem obvious, what with all the kicking and screaming and facial gesticulation and all, but that is really just more question begging. According to Rosenthal, if it turned out that babies lack the part of the brain that we KNOW is responsible for higher-order thoughts, then he would be committed to saying that newborn infants lack phenomenally conscious states. And if we could show that that was absurd then his theory would be a bust. He would pack it in. But he challenges the claim on both counts.

First, there is some evidence that babies lack the right part of the brain for higher-order thoughts, but Rosenthal also claims that there is some evidence that they do have it, and we are not ABSOLUTELY sure about the role that the frontal cortex plays. The science is not in, or at least it is not the lock that Block thinks it is. Secondly, it is not an absurd claim to say that newborn infants lack phenomenally conscious experience. According to Rosenthal an unconscious pain will play all of the same roles that the conscious one does. It will cause kicking and screaming and hootin’ and a-hollerin’ and facial contortions and the whole nine. We can even say that it is a bad thing and be motivated to stop it, all the while maintaining that there is nothing that it is like for the infant to have the pain. Of course Block finds this implausible, and the point of the paper is to show that this doesn’t happen, but the point is that the baby stuff does not cut against Rosenthal in the way that Block thinks. Or at least he hasn’t made it clear here why it does. So neither the common sense nor the scientific evidence merits such a quick dismissal of Rosenthal’s view.

Finally, why does Block think that this evidence is more favorable for the same-order view? Block seems to assume that the same-order view does not posit a thought-like Awareness and so is more in line with his intuition about the mouse, but, at least for people like Kriegel and Gennaro, the higher-order content is thought-like. So if Rosenthal’s view is too cognitive, then so is the same-order view. Or at least there is no reason to think otherwise. And what about the scientific evidence? Block seems to assume that since the first-order and higher-order contents are parts of the same state, the frontal cortex will not play a role, and so the same-order view would not be affected by the experimental evidence showing that infants have low activity there. But that isn’t obvious. On Kriegel’s view, for instance, the two contents are bound together by a ‘psychologically real’ process. But this does not require that the two contents be in the same part of the brain. In fact he explicitly appeals to synchrony as a candidate for the psychologically real process and points out, as one of its virtues, that it allows for binding of contents in segregated parts of the brain.

So either Block’s list of positions to consider just got reduced to one (Sosa’s) or it is back up to three.

3rd Birthday

Tomorrow marks the third anniversary of my starting Philosophy Sucks! I started my blogging career over at Brains and had my first post on April 12, 2007. I had several posts there before I was compelled to start my own blog and as people may know I continue to contribute to Brains and am very pleased to have seen it grow in recent times. I continue to post here as well and limit my posts at Brains to ones that directly relate to philosophy of mind and consciousness.

In these three years I have had over 100,000 hits, nearly 350 posts, and almost 2,000 comments…and next week I will be hosting my third Philosopher’s Carnival (I hosted the 58th and the 50th); not bad! I have had some rough experiences adapting to online discussion (there are some crazies out there, as people well know) but all in all the discussion has been extremely helpful and challenging. I have had two papers and numerous presentations (two at the APA Pacific) develop out of discussions that started here. So thanks to everyone and I hope it continues in the future!

The year is still young but here are the most viewed posts so far (see also the best of all time).

10. HOT Qualia Realism
9. Am I a Type-Q Materialist?
8. Why I am not a Type-Z Materialist
7. Consciousness, Consciousness, and More Consciousness
6. More on Identity
5. The Singularity, Again
4. HOT Damn! It’s a HO Down-Showdown
3. Attention & Mental Paint
2. Part-Time Zombies
1. The Identity Theory in 2-D

Pain Asymbolia and A Priori Defeasibility

I listened to the first lecture in David Chalmers’ Locke Lectures currently taking place at Oxford and I was intrigued by the argument he gave in defense of the claim that we can have a priori knowledge and do conceptual analysis even if we cannot give definitions of the concepts that we are analyzing. The argument appealed to the claim that any counter-example to a definition involved reasoning about possible cases and so we could give an account of the a priori in terms of our capacity to think about possible scenarios and our judgments about whether certain sentences are true in those scenarios.

I wanted to find the text of the talk to check on the details of the argument, and in the lecture Dave mentioned that he was putting manuscripts up online, so I went to his website to see if I could find them…sadly I couldn’t. But I did find this paper, which, if I am right, is probably the text that the fourth lecture will center on. Anyway, I read the paper and now want to say something about it. As I read it, the central point is very simple: one can accept Quinean arguments about conceptual revisability and still have robust a priori/a posteriori and analytic/synthetic distinctions. One does this by simply stipulating that something is a priori if it is knowable independently of experience without conceptual change. That is, given that we hold the conceptual meanings fixed, is the statement knowable a priori? Much of the paper is spent fleshing out a suggestion made by Carnap, updated with 2-D semantics and Bayesian probability theory, aimed at giving an account of conceptual change.

So, to put it overly simply, one can say to Quine: “sure, my concept may change, and if so this wouldn’t be true, but given that my concepts don’t change we can see that this would be the case.” So take pain as an example. When we are reasoning a priori about what we would say about pain (can there be pain/pleasure inversion, for instance) we can admit that if we change what we mean by pain, this or that will be different. But as long as our concept of pain doesn’t change, we can say that this or that would be true in this or that scenario, and thereby bypass the entire Quinean argument altogether. This would seem to give Dave a response to the type-Q materialist who has been getting so much attention around here lately. This is because the type-Q materialist seems to be saying that since our concept of pain might change we cannot know a priori whether zombies are conscious or not. Dave responds by saying that as long as we do not have to change our concept of pain we can see that zombies are not conscious. I think that this response to the Quinean argument is quite good, but I would respond to it differently. I would argue that as of right now we do not know which scenarios are ideally conceivable, because we have cases of disagreement about decisive scenarios.

To fill this in with a particular example that I have talked about before, let us focus on the notion of pain and Pain Asymbolia. Many philosophers hold that it is a priori that if something is a pain then it will be painful (and, conversely, that if something is painful then it will be a pain). Now suppose that one of these philosophers finds out about pain asymbolia and denies that asymbolics are in pain. Suppose further that this person comes to change their mind and instead thinks that asymbolics are in pain but that pain and painfulness are (contrary to appearances) only contingently related. What are we to say? In the paper Dave says,

A fifth issue is the worry that subjects might change their mind about a possible case without a change of meaning. Here, one can respond by requiring, as above, that the specifications of a scenario are rich enough that judgments about the scenario are determined by its specification and by ideal reasoning. If so, then if the subject is given such a specification and is reasoning ideally throughout, then there will not be room for them to change their mind in this way. Changes of mind about a fully specified scenario will always involve either a failure of ideal reasoning or a change in meaning.

I can agree with this in principle, but since I can clearly conceive of pain and painfulness being only contingently related, it cannot be the case that we are in a position to determine which concept of pain is the one that will be employed in ideal reasoning. We may have our favorite, but there are arguments on both sides and it is not clear where the truth lies. So though we can know a priori that either pain is necessarily painful or that it is contingently painful, we cannot know which is true now. To know that we would have to settle the pain asymbolia case; but that case is hotly contested (pun sadly intended :()

The upshot, then, is that whether or not Dave has a response to Quinean worries about the a priori in principle, he has not done enough to show that we are currently in a position to make use of this apparatus, and so we are forbidden any of its fruits.