Explaining Consciousness & Its Consequences

Yesterday I presented Explaining Consciousness and its Consequences at the CUNY Cognitive Science Speaker Series, which was a lot of fun and prompted a very fruitful discussion. A narrated PowerPoint rehearsal of the talk is at the end of this post for those who are interested, but here I want to discuss some of the things that came up in yesterday's discussion.

The core of the puzzle that I am pressing lies in asking why it is that conscious thoughts are not like anything for the creature that enjoys them. My basic claim is that if one started with the theory of phenomenal consciousness and qualitative character and came to understand and accept it, but one hadn’t yet thought about conscious thoughts, one would expect that the theory would produce cognitive phenomenology. Granted it wouldn’t be like the phenomenology of our sensations –seeing blue consciously is very different from consciously thinking that there is something blue in front of one– but why is it so different that in one case there is nothing that it is like whatsoever while in the other case there is something that it is like for the creature? The only difference between the contents of HOTs about qualitative states and HOTs about intentional states is that one employs concepts of mental qualities whereas the other employs concepts about thoughts and their intentional contents, yet in one case conscious phenomenology –which is to say that there is something that it is like for the creature to have those conscious mental states– is produced in all its glory while in the other case nothing happens. As far as the creature is concerned it is a zombie when it has conscious thoughts. But what could account for this very dramatic difference? It looks like we haven’t really explained what phenomenal consciousness is; all we have done is re-locate the problem to the content of the higher-order thought. This is because no answer can be given to my question except “that’s how phenomenal concepts work,” and so we have admitted that they are special.

Now one thing that came up in the discussion, raised by David Pereplyotchik, was what I meant by ‘special’ in the above. David P. suggested that qualitative properties may be distinctive without being special. I agree that they are distinctive and that is the reason that thinking that p and seeing blue are different. We move from distinctive to special when we deny that conscious thoughts have a phenomenology because we can’t explain why they don’t.

One detail that came out was that the way I formulated the HOTs and their contents was misleading. Instead of “I think I see blue*” the HOT has the content “I am in a blue* state.”

At some point David said that when he had a conscious thought, what it was like for him was like feeling one was about to say the sentence which would express the thought. So when one thinks that there is something blue in front of one, what it is like for that creature is like feeling that they were about to say “there is something blue in front of me”. When I said ‘aha, so there is something that it is like for you to have a conscious mental state’ he responded “what does that mean?” This challenge to my use of the phrase “what it’s like for one” was a main theme of the discussion. A lot of the time I ask whether or not there is something that it is like for one to have a conscious thought and, if not, why not, but David objected that the phrase is multiply ambiguous and is used to confuse the issue more than anything else. One way this came out was in his challenging me to explain what was at stake. What difference is made if we say that there is something that it is like for one to have a conscious thought, and what is lost if we deny it? I responded that it is obvious what the reference of the phrase ‘what it is like for one’ is. It is the thing that would be missing in the zombie world. David responded that the zombie world was impossible, which I agree with at the end of a long theoretical journey, but we can still intuitively make sense of the zombie world, even if only seemingly. That is, even if it is the case that zombies are inconceivable we still know what it would mean for there to be zombies, and that still helps us home in on what the explanatory problem is. I take it that the whole point of the ambitious higher-order theory is that it tries to explain how this property, the one we single out via the phrase ‘what it is like for one’ and the zombie and Mary cases, could be a perfectly respectable natural property.
So what is at stake is whether or not I really am like a zombie when I have a conscious thought and what that means for the higher-order thought theory. If we cannot account for the difference between intentional conscious states and qualitative conscious states then we have not explained anything.

David’s main response to my argument seemed to be to appeal to the different ways in which the concepts that figure in our HOTs are acquired. In the case of the qualitative states we acquire the concepts that figure in our HOTs roughly by noticing that our sensations misrepresent things in the world. So, if I mistakenly see some surface as red and then come to find out that it isn’t red but is, say, under a red light and is really white, this will cause me to have a thought to the effect that the sensation is inaccurate, and this requires that I have the concept of the mental quality that the state has. In the case of intentional states the story is different. We are to imagine that there is a creature that has concepts for intentional states but only applies them on the basis of third-person behavior. This creature will have higher-order thoughts but they will be mediated by inference and will not seem subjectively unmediated. Eventually this creature will get to the point where it can apply these concepts to itself automatically, at which point it will have conscious thoughts. This difference is offered as a way of saying what is different about the concepts that figure in HOTs about qualitative states and those that figure in HOTs about intentional states. It amounts to an elaboration of David Pereplyotchik’s suggestion early on that the qualitative properties are distinctive without being mysterious. They are distinctive in the way that the concepts are acquired. But, as before, how can this be an answer to the question I pose? I agree, for the sake of argument, that there is this difference. What seems to me to follow from this is what I said before; namely, that the phenomenology of thought and the phenomenology of sensations are not the same…but this should be obvious already. So, the claim is not that having a conscious thought should be like seeing blue for me or feel like a conscious pain for me, only that it should be like something for me.
Basically, then, my response is that this will make a difference in what it is like for the creature but doesn’t explain so drastic a difference as the complete absence of anything that it is like for one in the case of thought.

Another way I like to put the argument is in terms of mental appearances. David Rosenthal often says that what it is like for one is a matter of mental appearances, at which point I argue that the HOT is what determines the mental appearances, and so in the case of thinking that p it should appear to me as though I am thinking that p. In response to this David said that while it is the case that phenomenology is a matter of mental appearances it might not be the case that all mental appearances are phenomenological. At this point I have the same response as before…viz. what reason do we have to think that there are these two kinds of appearances? It looks like one is just inserting this into the theory by fiat to solve an unexpected problem. There is no theoretical machinery which explains why we have this disparity. When we ask why applying starred concepts results in the appearance of qualitative phenomenology while the application of intentional concepts does not result in intentional phenomenology, we are simply told that this is the way phenomenology works. It is as mysterious as ever.

At the close of the talk I touched briefly on Ned Block’s recent paper “The Higher-Order Theory is Defunct” which raises a new objection to the higher-order theory based on the consequences of explaining consciousness as outlined here. The problem that Ned sees is that when one has an empty HOT one has an episode of phenomenal consciousness that is real but that is not the result of a higher-order thought. David’s response seems to be to fall back on his denial that there are ever actually cases of empty higher-order thoughts. I brought up Anton’s syndrome and David responded that in Anton’s syndrome we don’t have any evidence that they actually have visual phenomenology. They don’t want to admit that they are blind but when we ask them to tell us what they see they can’t. If there are never empty higher-order thoughts then Block’s problem goes away.

My response to this problem is to identify the property of p-consciousness with the higher-order thought while still identifying the conscious mental state as the target of the HOT, but at that point we adjourned to Brendan’s for some beer and further discussion.

During the discussion at Brendan’s we talked a little bit about my suggestion that we develop a homomorphism theory of the mental attitudes. David and Myrto wanted to know how many similarities there were between sensory homomorphisms and the mental attitudes. In the sensory case we build up the quality space by presenting pairs of stimuli and noting what kinds of discriminations the creature can make. What we end up doing is constructing the quality space from these kinds of discriminatory abilities. So, what kind of discriminations would happen in the mental attitude case? I suggested that maybe we could present pairs of sentences and ask subjects whether they expressed the same thought or different thoughts. Dan wanted to know what the dimensions of the quality space for mental attitudes would be. I suggested that one would be degree of conviction, so that whether one doubts something or believes it firmly or just barely will be one dimension of difference, but I have yet to think of any others. This has always been a project I hope to get to at some point…right now it’s just a pretty picture in my head…
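For what it’s worth, the construction described above can be sketched computationally. The following is only an illustration, under the assumption (mine, not anything settled in the discussion) that pairwise “same thought or different thought?” judgments are tallied into a dissimilarity matrix; classical multidimensional scaling then recovers a low-dimensional quality space from those discriminations, just as one does for sensory qualities. The data matrix and the one-dimensional reading are entirely hypothetical.

```python
import numpy as np

# Hypothetical data: entry (i, j) is the proportion of trials on which
# subjects judged sentences i and j to express *different* thoughts.
# Sentences 0 and 1 are rarely discriminated; 2 almost always is.
D = np.array([
    [0.0, 0.2, 0.9],
    [0.2, 0.0, 0.8],
    [0.9, 0.8, 0.0],
])

def classical_mds(D, dims=1):
    """Recover coordinates whose pairwise distances approximate the
    dissimilarities D (classical MDS via double-centering)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)     # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]        # largest first
    scale = np.sqrt(np.clip(eigvals[order[:dims]], 0, None))
    return eigvecs[:, order[:dims]] * scale

# One recovered dimension -- read, hypothetically, as degree of conviction.
coords = classical_mds(D, dims=1)
```

On this toy matrix, sentences 0 and 1 land close together on the recovered dimension while sentence 2 lands far away, mirroring the way a sensory quality space is read off discriminatory abilities.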

Ok well I feel like I have been writing this all day so I am going to stop…


The New New Dualism

Yesterday I attended Miguel Angel Sebastian’s cogsci talk entitled “The Subjective Character of Experience: Against HOR and SOR Theories” which was very interesting. Miguel was primarily trying to show that higher-order and same-order representationist theories of consciousness cannot account for the subjective character of an experience, by which he means the thing that accounts for the experience being for the subject. His main complaint seemed to be that in order to account for this we need some notion of the self, and so he suggested that we need a model where we have representations of the self interacting with representations of objects and we thus end up with a representation of the form “x for-me”. There were several interesting themes of the discussion and if I have time I will probably come back to some of them but I thought I’d start with this one.

In response to the mis-match problem David has settled on the following view. The phenomenology goes with the HOT. The sensory qualities of the first-order state play no role –other than that of concept acquisition– in determining the phenomenal character of a conscious experience. So in the case of Dental Fear the subject has a first-order state with vibration sensory qualities and a HOT that they are in pain, so their conscious phenomenology is like having pain for them. The first-order sensory qualities play a perceptual role in the mental economy of the subject, so having them is important, but they don’t play a role as far as consciousness is concerned. In fact even if there is no first-order state at all (as may perhaps be the case in Anton’s syndrome) the phenomenology goes with the HOT. Now in the cases where there is no first-order state one still counts as being in a conscious state. The mental state that is conscious is just the one that the HOT represents oneself as being in, and so in this case the conscious mental state is a notional state, which is to say that it doesn’t exist. It follows from this that there are conscious mental states that have no neural correlates. We thus end up with a dualism about consciousness of a new variety. There are some conscious mental states that exist physically in the brain and there are other conscious mental states that exist only notionally as the content of a HOT.

What should our reaction to this be? When this first became clear at David’s Mind and Language seminar it prompted Steve Stich to shout ‘he’s worse than a dualist!’ Miguel seemed to think that at the very least this is a cost of the theory and that if you can have a theory that explains all the data without it, that is preferable. David refused to say that this was even a cost for the theory; in fact he seemed to suggest that it wasn’t even counter-intuitive. His reasons seemed to be as follows. I can have a thought about things which are not present and those notional objects can have properties. So, if I think about a squirrel I might think of it as brown and bushy even if there is no squirrel around, yet the squirrel has properties; it is brown and bushy. Thus it is simply a fact about intentional states like thoughts that their contents can be notional and that those notional objects can be said to have properties. If that is right then there is nothing fundamentally mysterious about notional mental states having properties. The second step in his defense seemed to involve an appeal to hallucinations. We hallucinate regularly enough for it to be a commonplace of folk psychology. Why doesn’t it make sense to say that we can hallucinate mental states? On this line the notional state is just like my hallucination of a pink elephant: it seems like it is there from my point of view but it isn’t really there. This isn’t mysterious since it just means that I represent myself as being in a state that I am not in. Now given various theoretical assumptions this will indeed turn out not to be counter-intuitive, and since those who do find it counter-intuitive will do so because of different theoretical assumptions I suppose I can see why David thinks that this is not a cost to the theory.

But suppose that one had different theoretical assumptions? Suppose that one wanted to avoid this kind of existence dualism and so endorsed some kind of principle like this: for every conscious mental state there is a corresponding brain state. But suppose one also wanted to remain a higher-order theorist…what are the options? The most obvious option is to identify the phenomenally conscious state with the HOT. The HOT is not introspectively conscious –for that it would need to have a third-order state targeting it– but it is phenomenally conscious. It is the state in virtue of which there is something that it is like for the subject, and so it seems natural to identify the property of phenomenal consciousness with having the HOT. Ned Block has argued that if one does this then one has falsified the higher-order theory. Why? The transitivity principle says that a conscious mental state is one which I am conscious of myself as being in, but on the previous analysis we have a phenomenally conscious mental state (the HOT itself) which we are not conscious of ourselves as being in (there is no third-order HOT); thus adopting this view falsifies the transitivity principle. But this may be too quick. This way of formulating the transitivity principle leads us to the view that the HOT transfers or confers the property of being conscious to the first-order state, but as we have seen what the transitivity principle really says is that a conscious mental state consists in my being conscious of myself as being in some first-order state. That is, the transitivity principle is a hypothesis about the nature of conscious mental states. It is a mis-reading of the transitivity principle that takes it to postulate consciousness as resulting from a relation between the first-order state and the higher-order state. That this is the dominant way of interpreting the transitivity principle is not in doubt; it most certainly is. However, it is misleading and causes way too many problems.
I think higher-order theorists need to be more explicit about this mis-reading of the transitivity principle.

To me the second is the best option. However, lots of people seem to think that if one adopts a same-order theory one can avoid these kinds of issues. Since one takes the conscious mental state to be a complex of a first-order content and a second-order content that represents the first-order content, we don’t have to worry about notional states. But it is far from obvious that this theory has any advantages over the HOT theory. First, it is unclear why the higher-order content cannot occur without the first-order content. This seems like an empirical issue that can’t be settled by definitional fiat (I guess I think Anton’s syndrome might be a problem here). Second, even if it turns out that you can’t have one without the other, it is still not clear why there cannot be a content mis-match. Why can’t a red first-order state be coupled with a higher-order content that represents the first as green?

Does the Zombie Argument Rest on a Category Mistake?

Re-reading Ryle’s “Descartes’ Myth” I was struck by the following passage:

…the Dogma of the Ghost in the Machine does just this. It maintains that there exist both bodies and minds; that there occur physical processes and mental processes; that there are mechanical causes of corporeal movement and mental causes of corporeal movement. I shall argue that these and other analogous conjunctions are absurd…the phrase ‘there occur mental processes’ does not mean the same sort of thing as ‘there occur physical processes,’ and, therefore, that it makes no sense to conjoin or disjoin the two. (this is from page 37 in the Chalmers anthology)

I have always been sympathetic to the category mistake move and have viewed it as a precursor to the claim that it is simply question begging to treat mental terms as synonymous for ‘non-physical’. I also think that a lot of my complaints about the intelligibility of substance dualism originate in Ryle’s discussion of the origin of the category mistake.

Re-reading this today I started thinking that maybe one could use this kind of claim to cause problems for the zombie argument. The first premise of the zombie argument employs the conjunction (P & ~Q), where P is all of the physical facts and processes and Q is some qualitative fact, like that I feel pain. If it is really logically illegitimate to conjoin these terms then the zombie argument cannot even get off the ground. So what is the response that the dualist will make here? It seems to me that all of the examples of category mistakes involve concepts that have fairly straightforward conceptual entailment relations between them. So, a pair of gloves just is a left glove and a right glove, and we can tell this just by analyzing the concept of PAIR OF GLOVES. The same can be said for the University and the battalion. But of course it is not obvious, to say the least, that the same is true for PAIN or SEEING BLUE. To many, myself included, it seems as though there are no conceptual entailment relations between my “pure” phenomenal concept of pain and physical processes (for me the ‘seems as though’ part is especially important). But maybe it is at just this point that I myself, as well as the dualist, commit the category mistake!
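To make the target explicit, the zombie argument in its familiar form (roughly the standard conceivability formulation) runs:

```latex
\begin{enumerate}
  \item $P \wedge \neg Q$ is conceivable.
  \item If $P \wedge \neg Q$ is conceivable, then $P \wedge \neg Q$ is metaphysically possible.
  \item If $P \wedge \neg Q$ is metaphysically possible, then physicalism is false.
  \item Therefore, physicalism is false.
\end{enumerate}
```

The Rylean move aims at premise 1: if the mental and physical vocabularies belong to different categories, then the conjunction $P \wedge \neg Q$ is not even well-formed, and the question of its conceivability never arises.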

Whoa…I’ll have to come back to that because now I’m off to Miguel’s CogSci talk

The Unintelligibility of Substance Dualism

Over at Siris Brandon offers some interesting criticism of my argument against substance dualism. He distinguishes two senses in which we may say that a theory is viable. In one sense we simply mean to be asking what reasons someone might have for believing in that kind of thing. In that sense a viable theory is one which there is reason to believe. In another sense we may be asking not what the reasons are to believe it but instead what the thing in question is in the first place. A viable theory in this sense is one that can tell us what the thing is. Brandon then goes on to show that this distinction corresponds to a distinction between things that are a problem for a theory and things that are a problem within the theory. Brandon then argues that my complaint is not a problem for the theory that there are immaterial substances but is rather a problem within the theory of immaterial substances itself, and so should be answered by more research into immaterial substances and not with a dismissal of the theory.

The picture that Brandon seems to have is this. We decide whether or not there are good theoretical/common sense reasons to believe that there are immaterial substances and if we decide that there are we then try to construct a theory of what they are. Naturally in doing so we do not know very much about the immaterial substances and so one of the projects of the theory is to say more about what they are. Given this it is a mistake to think that our lack of understanding about what immaterial substances are is any reason to think that they don’t exist.

I completely agree with the spirit of Brandon’s comments but I do not agree with his conclusions. First, to where I agree. We clearly must recognize the kind of distinction that Brandon draws. And while I disagree that there are any real reasons or evidence for immaterial substances I agree that if there were, or if one thought there were, one should then go on to try and give a theoretical account of what they are.

Let us be generous and grant that there are reasons to think that some kind of substance dualism is true. When we then ask what an immaterial substance is we get told that it is the immaterial substrate of thinking and consciousness and that it is not located in space-time as we know it. David Chalmers has offered one way of making sense of this in terms of the matrix, and I won’t rehash it here but it seems clear that this kind of move makes the immaterial substance material outside of the matrix and so isn’t really a threat. What else can we do? At this point we have no further ideas. All we can say is that it is an X we know not what which underlies thinking and consciousness. If the theory never progresses past this point then we may start to think that it is in trouble.

So, to take Brandon’s example of evolution in biology, people had proposed accounts that looked evolution-ish as far back as Democritus, who seems to have proposed that life as we know it was built up over time from simpler parts, but this was not the theory of evolution because he did not have the right mechanism (natural selection). If the theory of evolution had stayed at the level of “evolution is whatever it is that underlies speciation and isn’t God doing it” no one would care about it. So too if the best that substance dualism can do is to say that an “immaterial substance is whatever it is that underlies consciousness and thinking and isn’t physical” it seems uninteresting. One might think this shouldn’t be a problem because lots of theories have been like that in the past (gravity seems to be a notable one) but the problem is that it has been this way since its inception and not one step forward has been taken in 3,000 years. The most significant advance, if one were to call it that, has been the post-Humean nonchalance about the issue of physical/non-physical causation. If all there is to causation is constant conjunction, and the non-physical events are constantly conjoined with the physical ones, then voila! mind-body problem (dis)solved!!

The upshot then is that fleshing out the theory will ultimately shed some light on the reasons for believing it. If we seem in principle unable to advance in specifying what an immaterial substance is, and we have physicalist alternatives that are relatively well understood, substance dualism starts to look impossible and we seem to lose our reason to believe it, which will in turn cause us to re-evaluate the reasons we used to have for believing it.

More HOTter, More Better

In an earlier post I outlined the case for qualia realism from the higher-order perspective as I see it. Dave Chalmers worried that one of the moves was too quick. The move in question is the move from concepts making a difference to phenomenal experience to their determining phenomenal experience. Basically the line I was pushing was that if it is the case that applying concepts changes our phenomenal experience then “perhaps it is not too crazy to think that applying concepts is what results in phenomenal feel in the first place,” but Dave is right that there is a lot more that needs to be said.

As I also said, I think that a crucial step in securing this premise in the argument is showing that there can be unconscious states with qualitative character which are not like anything for the creature that has them. If we established that then we would have evidence that it is solely applying concepts that constitutes phenomenal consciousness. There is another line of argument which might show this as well, which is given by David Rosenthal in a few different places (see page 155 in Consciousness and Mind for a representative example). Basically it is a subtraction argument. Take some phenomenally conscious experience, like listening to music. We already agree that applying new concepts will change the character of the experience. So, if I were to learn what a bass clarinet was then listening to Herbie Hancock’s Chameleon would sound different to me. Now suppose that we subtract this concept. My experience will change. More specifically it will lack the bass-clarinetiness that my experience had when I applied that concept. Now we can continue subtracting out concepts one by one without altering the first-order state in any way. Since subtracting a concept produces a phenomenal experience that lacks precisely the element corresponding to that concept, we can conclude that subtracting these concepts will produce phenomenal consciousness that is sparser and sparser. What are we to say when we have reached the point where there is just one concept characterizing the first-order state? Suppose that we are at the point where we are only applying the concept SOUND to the experience. Phenomenally it will be like hearing a sound for me but not any particular sound. Now suppose we subtract that concept. What will it be like for the creature?

The higher-order theorist says that at that point it is no longer like anything for the creature. The other side says that there is still something that it is like (though it may not be like anything for the creature), but what argument could show this? What reason is there for thinking that there is anything phenomenal left over?

Summa Contra Plantinga

I recently reread Alvin Plantinga’s paper Against Materialism and needless to say I am less than impressed. Plantinga presents two “arguments” against materialism each of which is utterly ridiculous.

The first is what he calls the replacement argument (sic). It is possible, Plantinga tells us, that one could have one’s body replaced while one continues to exist; therefore one is not one’s body. Of course the obvious problem with this argument is that at best it shows that I am not identical to a particular body, but it does not show that minds are not physical, for it does not show that the mind exists without any body whatsoever. To show that, Plantinga needs to appeal to disembodiment, and he doesn’t.

It is also clearly possible that one could have one’s immaterial substance replaced and continue to exist; thus one is not an immaterial substance. This is because there is nothing contradictory in supposing that materialism is true and what this shows, as I have argued at length before, is that these a priori arguments are of no use to us at this point.

Now Plantinga, to his credit, realizes that these kinds of intuitions are ultimately question begging, so his second argument appeals to an alleged impossibility, which turns out to be none other than the problem of intentionality. The argument turns on our ability to ‘just see’ that it is impossible that a physical thing can think. Just as the number 7 cannot weigh 5 pounds, neither can a brain think. Never mind computers and naturalized theories of content; those couldn’t be belief contents. Oh, I see…wait, I don’t.

But of course the real problem here is that it is even more mysterious how an immaterial substance could think. Plantinga spends some time in the paper responding to Van Inwagen’s argument along these lines. Plantinga focuses on Van Inwagen’s claim that we can’t imagine an immaterial substance. The response should be obvious: we can’t imagine lots of stuff (like what a number looks like) but that doesn’t show that they are impossible. Van Inwagen’s second swipe at immaterial substances is that we cannot see how an underlying reality that is immaterial can give rise to thinking any more than we can see how an underlying physical reality can. Plantinga’s response to this is to claim that the soul is a simple and has thinking as an essential attribute in much the same way as an electron is said to be simple and have its charge essentially.

But all of this seems to me to miss the fundamental point that Van Inwagen wants to make. The very concept of an immaterial substance is unintelligible. Attempts to make them intelligible render them into ordinary physical substances at the next level up, so to speak. And it is of course out of the question to simply say that an immaterial substance is perfectly intelligible since they are just minds (as Plantinga seems to do). It is obvious that there is thinking but it is not at all obvious that an immaterial substance could think. What would that even mean?  The upshot then is that substance dualism is not a viable theory.

HOT Block

In celebration of my three years in the Blogosphere I will be reposting some of my earlier posts that I am particularly fond of. This piece was originally published July 11th, 2007.
——————————-

I was recently reading Block’s forthcoming BBS paper Consciousness, Accessibility, and the Mesh between Psychology and Neuroscience. It is an interesting paper and I am looking forward to seeing the commentary. The basic puzzle is one that I have heard him talk about before: how could we tell whether the transitivity principle is right or not? What would empirically decide whether there can be a phenomenally conscious state that we are unconscious of having? So, to take Block’s example, suppose that we have a person who is subliminally perceiving a face and there is activation in that person’s fusiform face area. Since the subject sincerely reports that they do not see a face, it seems we can agree that this is the sensory state in the absence of the higher-order state.

How do we describe this situation? Do we say that the face experience is phenomenally unconscious? That there is nothing that it is like to see the face? Does it, as Rosenthal would say, have unconscious qualitative properties? Or do we say that there is something that it is like for the person to see the face but that they are unconscious of what it is like for them? The puzzle is that both theories make the same prediction about what the person will report (they don’t see a face) and so we need to find some other way to distinguish the two claims empirically. I don’t really want to talk about Block’s argument that phenomenology overflows our access to it (unless someone does want to talk about it), as all I could do is repeat the Rosenthal line that the evidence that Block presents (i.e. the change blindness stuff) isn’t good evidence because the subjects can report, as Block acknowledges, that they saw some letters or ‘a rectangle’. Rosenthal can explain this on his account in the following way. In one case we are conscious of the first-order experience as just some rows of letters or as just a rectangle, while in the other we are conscious of the experience as being a row of some specific letters or shapes. So the fact that subjects report that they have some phenomenally conscious experience, as Block rightly points out, needn’t be evidence for his claim that there is phenomenology without Awareness.

I think that if one steps far enough back from this debate one can see that it is the distinction between analytic and psycho-functionalism that is causing a lot of the local flare-ups, and that this has some bearing on the empirical testability issue and the debate with Mandik that I have been suffering through, but I will leave that for another day.

What I do want to talk about is Block’s dismissal of Rosenthal’s kind of higher-order theory. He makes it very clear that he thinks that the higher-order thought theory is not even a candidate for a serious theory of phenomenal consciousness. As I have said many times before, I do not know if the higher-order thought theory is true or not, but it is at least not obviously false. It is a well-formulated theory that could turn out to be right. So what’s Block’s problem?

He makes his case at the beginning of the paper in this rather longish quote.

We may suppose that it is platitudinous that when one has a phenomenally conscious experience, one is in some way aware of having it. Let us call the fact stated by this claim – without committing ourselves on what exactly that fact is – the fact that phenomenal consciousness requires Awareness. Sometimes people say Awareness is a matter of having a state whose content is in some sense “presented” to the self or having a state that is “for me” or that comes with a sense of ownership or that has “meishness” (as I have called it; Block 1995a).

Very briefly, three classes of accounts of the relation between phenomenal consciousness and Awareness have been offered. Ernest Sosa (2002) argues that all there is to the idea that in having an experience one is necessarily aware of it is the triviality that in having an experience, one experiences one’s experience just as one smiles one’s smile or dances one’s dance. Sosa distinguishes this minimal sense in which one is automatically aware of one’s experiences from noticing one’s experiences, which is not required for phenomenally conscious experience. At the opposite extreme, David Rosenthal (2005) has pursued a cognitive account in which a phenomenally conscious state requires a higher order thought to the effect that one is in the state. That is, a token experience (one that can be located in time) is a phenomenally conscious experience only in virtue of another token state that is about the first state. (See also Armstrong 1977, 1978; Carruthers 2000; Lycan 1996 for other varieties of higher order accounts.) A third view, the “Same Order” view says that the consciousness-of relation can hold between a token experience and itself. A conscious experience is reflexive in that it consists in part in an awareness of itself. (This view is discussed in Brentano 1874/1924; Burge 2006; Byrne 2004; Caston 2002; Kriegel 2005; Kriegel & Williford 2006; Levine 2001, 2006; Metzinger 2003; Ross 1961; Smith 1986).

So he is telling us here that his target in the paper is people who think that there is no phenomenology without Awareness. Now we could (and should) quibble with the way that Block casts Rosenthal’s theory. For instance, he says that it is the view that a token experience that is located in time is a conscious state in virtue of a higher-order thought that is about it. But that is not quite right, as I have spent a lot of time arguing (for instance, in Consciousness, Relational Properties, and Higher-Order Theories; Consciousness is Not a Relational Property; and The Function of Consciousness in Higher-Order Theories). But let us waive that for the moment.

He goes on in the next paragraph to say,

The same order view fits both science and common sense better than the higher order view. As Tyler Burge (2006) notes, to say that one is necessarily aware of one’s phenomenally conscious states should not be taken to imply that every phenomenally conscious state is one that the subject notices or attends to or perceives or thinks about. Noticing, attending, perceiving, and thinking about are all cognitive relations that need not be involved when a phenomenal character is present to a subject. The mouse may be conscious of the cheese that the mouse sees, but that is not to say that the mouse is conscious of the visual sensations in the visual field that represent the cheese or that the mouse notices or attends to or thinks about any part of the visual field. The ratio of synapses in sensory areas to synapses in frontal areas peaks in early infancy, and likewise for relative glucose metabolism. (Gazzaniga et al. 2002, p. 642–43). Since frontal areas are likely to govern higher-order thought, low frontal activity in newborns may well indicate lack of higher-order thoughts about genuine sensory experiences.

The relevance of these points to the project of the paper is this: the fact of Awareness can be accommodated by either the same order view or the view in which Awareness is automatic, or so I will assume. So, there is no need to postulate that phenomenal consciousness requires cognitive accessibility of the phenomenally conscious state. Something worth calling “accessibility” may be intrinsic to any phenomenally conscious state, but it is not the cognitive accessibility that underlies reporting.

He is making it very clear that he thinks that he has given decisive reasons for dismissing the higher-order thought theory. Has he? Not surprisingly, I don’t think that he has. Instead he displays a curious prejudice against the higher-order thought theory.

Let us look at what he says. In the first sentence he says that the same-order view, a view like Kriegel’s, is better suited to common sense and science. What follows that remark then looks like what he takes to be common sense evidence against the higher-order thought view and in favor of the same-order view, followed by some scientific evidence that illustrates the same point. The common sense evidence, evidently, rests on our intuition that “[t]he mouse may be conscious of the cheese that the mouse sees, but that is not to say that the mouse is conscious of the visual sensations in the visual field that represent the cheese or that the mouse notices or attends to or thinks about any part of the visual field.” But that is certainly true and neither Rosenthal, nor any other higher-order theorist, denies it! The mouse is conscious of the cheese by having a first-order sensory state that represents the cheese, so it can be conscious of the cheese without any higher-order thoughts at all.

Presumably, though, what Block means here is that the mouse can have a phenomenally conscious experience of the cheese without having a thought about its first-order mental states. But that simply begs the question against Rosenthal. He has a story about why you wouldn’t notice the higher-order thought were it there, and yet how we can still have some evidence that such thoughts do occur, and also a story about how the concepts that occur in the higher-order thoughts about sensory states would be easy to come by. So easy to come by, in fact, that animals could probably get them. So it is not crazy or absurd to think that the mouse might have a conscious experience of the cheese by having a higher-order thought to the effect that it is seeing cheese. So the common sense evidence against the higher-order thought theory isn’t any good.

What about the scientific evidence? The suggestion here is that there is empirical evidence that newborns have very low frontal activity, which would mean that they do not have higher-order thoughts and so do not have any conscious experiences at all. Therefore the higher-order thought theory is at odds with scientific evidence. But there is a suppressed premise in Block’s argument: namely, the premise that it is obvious that newborn infants do in fact have conscious experiences. Now, granted, it does seem obvious, what with all the kicking and screaming and facial gesticulation and all, but that is really just more question begging. According to Rosenthal, if it turned out that babies lack the part of the brain that we KNOW is responsible for higher-order thoughts then he would be committed to saying that newborn infants lack phenomenally conscious states. And if we could show that that was absurd then his theory would be a bust. He would pack it in. But he challenges the claim on both counts.

First, there is some evidence that babies lack the right part of the brain for higher-order thoughts, but Rosenthal claims that there is some evidence that they do have it as well, and we are not ABSOLUTELY sure about the role that the frontal cortex plays. The science is not in, or at least it is not the lock that Block thinks it is. Secondly, it is not an absurd claim to say that newborn infants lack phenomenally conscious experience. According to Rosenthal an unconscious pain will play all of the same roles that the conscious one does. It will cause kicking and screaming and hootin’ and a-hollerin’ and facial contortions and the whole nine. We can even say that it is a bad thing and be motivated to stop it, all the while maintaining that there is nothing that it is like for the infant to have the pain. Of course Block finds this implausible, and the point of the paper is to show that this doesn’t happen, but the point is that the baby stuff does not cut against Rosenthal in the way that Block thinks. Or at least he hasn’t made it clear here why it does. So neither the common sense nor the scientific evidence merits such a quick dismissal of Rosenthal’s view.

Finally, why does Block think that this evidence is more favorable for the same-order view? Block seems to assume that the same-order view does not posit a thought-like Awareness and so is more in line with his intuition about the mouse, but, at least for people like Kriegel and Gennaro, the higher-order content is thought-like. So if Rosenthal’s view is too cognitive, then so is the same-order view. Or at least there is no reason to think otherwise. And what about the scientific evidence? Block seems to assume that since the first-order and higher-order content are part of the same state the frontal cortex will not play a role, and so the same-order view would not be affected by the experimental evidence showing that infants have low activity there. But that isn’t obvious. On Kriegel’s view, for instance, the two contents are bound together by a ‘psychologically real’ process. But this does not require that the two contents be in the same part of the brain. In fact he explicitly appeals to synchrony as a candidate for the psychologically real process and points out that its allowing for the binding of contents in segregated parts of the brain is one of its virtues.

So either Block’s list of positions to consider just got reduced to one (Sosa’s) or it is back up to three.

108th Philosophers’ Carnival

Welcome to the 108th edition of the Philosophers’ Carnival! I don’t know what is going on with the Carnival, but the last few editions have not had very many interesting submissions and I did not get a lot of acceptable submissions for this issue…but I know that there are interesting posts out there, so I scoured the internets to find the best that the philosophy blogosphere has to offer…I also checked a few other disciplines for some food for thought.
Submitted:
  1. Tuomas Tahko presents Draft: The Metaphysical Status of Modal Statements posted at ttahko.net.
  2. Andrew Bernardin presents Beneath Reason: An Iceburg of Unconscious Processes posted at 360 Degree Skeptic.
  3. Eric Michael Johnson presents Chimpanzees Prefer Fair Play To Reaping An Unjust Reward posted at The Primate Diaries.
  4. Terrance Tomkow presents Means and Ends posted at Tomkow.com, saying, “If your only available means of doing something are impermissible, does it follow that it is impermissible for you to do that thing? Judith Jarvis Thomson says, “yes”. Tomkow argues, “no”.”
  5. Thom Brooks presents The Brooks Blog: Thom Brooks on “A New Problem with the Capabilities Approach” posted at The Brooks Blog.
Found:
  1. Over at Conscious Entities Peter discusses Justin Sytsma’s recent JCS paper in Skeptical Folk Theory Theory Theory
  2. Over at Alexander Pruss’s Blog said blogger discusses Video Games as Art
  3. Not too long ago we had a very interesting post over at Brains on breeding pain-free livestock. Anton Alterman has a somewhat polemical but interesting response at Brain Scam in Pains in the Brain: On Liberating Animals from Feeling
  4. Over at Siris we are reminded how malleable language is and the effect it has on reading past philosophers in Every Event Has a Cause
  5. Over at Practical Ethics Toby Ord asks Is It Wrong to Vote Tactically? I don’t want to spoil it for you but he thinks the answer is ‘no’
  6. Over at Evolving Thoughts John Wilkins discusses Plantinga’s argument that naturalism is self-refuting in You and Me, Baby, Ain’t Nothing But Mammals
  7. Did you know that a Quine is a computer program that can print its own code? It’s true and over at A Piece of Our Mind John Ku discusses them in Meta Monday: Ruby Quines
  9. Over at Neurochannels Eric sums up his current views on perception and consciousness in Consciousness (13): The Interpreter versus the Scribe
  9. Over at Specter of Reason there is a discussion of Pete Mandik’s Swamp Mary thought experiment in Swamp Deviants, Part II
  10. Over at the Arche Methodology Blog Derek Ball asks Do Philosophers Seek Knowledge? Should They?
  11. Over at Philosophy on the Mesa Nina Rosenstrand wonders if Neanderthals raped early humans in They Are Us? News from the Primate Research Front
  12. Is the idea that the mind is in the head an a priori prejudice? Ken Aizawa thinks not in So, why does common sense say the mind is in the head?
  13. Over at Inter Kant Gary Benham discusses Free Speech and Twitter
  14. Over at The Ethical Werewolf Neil Sinhababu discusses his recent run on Bloggingheads and Hedonism
  15. Over at Logical Matters Peter Smith talks about Squeezing Arguments and comments on Field’s characterization of them in Saving Truth from Paradox
  16. Over at In Living Color Jean Kazez discusses just how outrageous espousing moral realism really is in Torturing Babies Just for Fun is Wrong
  17. Over at Philosophy Talk: The Blog Ken Taylor discusses Culture and Mental Illness
  18. Over at In the Space of Reasons Tim Thornton discusses Aesthetic Self-Knowledge
  19. Over at the Philosophy North Blog Aidan McGlynn discusses The Problem of Vanishing Warrant
  20. Finally, have you heard about this Philosopher’s Football match? Virtual Philosopher has a nice report of the madness in Philosopher’s Football -Match Report from the Ref.
That concludes this edition. Submit your blog article to the next edition of the Philosophers’ Carnival using our carnival submission form. Past posts and future hosts can be found on our blog carnival index page.

3rd Birthday

Tomorrow marks the third anniversary of my starting Philosophy Sucks! I started my blogging career over at Brains and had my first post on April 12, 2007. I had several posts there before I was compelled to start my own blog and as people may know I continue to contribute to Brains and am very pleased to have seen it grow in recent times. I continue to post here as well and limit my posts at Brains to ones that directly relate to philosophy of mind and consciousness.

In these three years I have had over 100,000 hits, nearly 350 posts, and almost 2,000 comments…and next week I will be hosting my third Philosophers’ Carnival (I hosted the 58th and the 50th); not bad! I have had some rough experiences adapting to online discussion (there are some crazies out there, as people well know) but all in all the discussion has been extremely helpful and challenging. I have had two papers and numerous presentations (two at the APA Pacific) develop out of discussions that started here. So thanks to everyone and I hope it continues in the future!

The year is still young but here are the most viewed posts so far (see also the best of all time).

10. HOT Qualia Realism
9. Am I a Type-Q Materialist?
8. Why I am not a Type-Z Materialist
7. Consciousness, Consciousness, and More Consciousness
6. More on Identity
5. The Singularity, Again
4. HOT Damn! It’s a HO Down-Showdown
3. Attention & Mental Paint
2. Part-Time Zombies
1. The Identity Theory in 2-D

Pain Asymbolia and A Priori Defeasibility

I listened to the first lecture in David Chalmers’ Locke Lectures currently taking place at Oxford and I was intrigued by the argument he gave in defense of the claim that we can have a priori knowledge and do conceptual analysis even if we cannot give definitions of the concepts that we are analyzing. The argument appealed to the claim that any counter-example to a definition involved reasoning about possible cases and so we could give an account of the a priori in terms of our capacity to think about possible scenarios and our judgments about whether certain sentences are true in those scenarios.

I wanted to find the text of the talk to check on the details of the argument, and in the lecture Dave mentioned that he was putting manuscripts up online, so I went to his website to see if I could find them…sadly I couldn’t. But I did find this paper, which if I am right is probably the text that the fourth lecture will center on. Anyways, I read the paper and now want to say something about it. As I read it the central point is very simple: one can accept Quinian arguments about conceptual revisability and still have robust a priori/a posteriori and analytic/synthetic distinctions. One does this by simply stipulating that something is a priori if it is knowable independently of experience without conceptual change. That is, given that we hold the conceptual meanings fixed, is the statement knowable a priori? Much of the paper is spent fleshing out a suggestion made by Carnap, updated with 2-D semantics and Bayesian probability theory, aimed at giving an account of conceptual change.

So, to put it overly simply, one can say to Quine “sure, my concept may change, and if so this wouldn’t be true, but given that my concepts don’t change we can see that this would be the case.” So take pain as an example. When we are reasoning a priori about what we would say about pain (can there be pain/pleasure inversion, for instance) we can admit that if we change what we mean by pain this or that will be different. But as long as our concept of pain doesn’t change we can say that this or that would be true in this or that scenario and thereby bypass the entire Quinian argument altogether. This would seem to give Dave a response to the type-Q materialist who has been getting so much attention around here lately. This is because the type-Q materialist seems to be saying that since our concept of pain might change we cannot know a priori whether zombies are conscious or not. Dave responds by saying that as long as we do not have to change our concept of pain we can see that zombies are not conscious. I think that this response to the Quinian argument is quite good, but I would respond differently. I would argue that as of right now we do not know which scenarios are ideally conceivable, because we have cases of disagreement about decisive scenarios.

To fill this in with a particular example that I have talked about before, let us focus on the notion of pain and Pain Asymbolia. Many philosophers hold that it is a priori that if something is a pain then it will be painful (and, conversely, that if something is painful then it is a pain). Now suppose that one of these philosophers finds out about pain asymbolia and denies that these people are in pain. Next suppose that this person comes to change their mind and instead thinks that they are in pain but that pain and painfulness are (contrary to appearances) only contingently related. What are we to say? In the paper Dave says,

A fifth issue is the worry that subjects might change their mind about a possible case without a change of meaning. Here, one can respond by requiring, as above, that the specifications of a scenario are rich enough that judgments about the scenario are determined by its specification and by ideal reasoning. If so, then if the subject is given such a specification and is reasoning ideally throughout, then there will not be room for them to change their mind in this way. Changes of mind about a fully specified scenario will always involve either a failure of ideal reasoning or a change in meaning.

I can agree with this in principle, but since I can clearly conceive of pain and painfulness being only contingently related, it cannot be the case that we are in a position to determine which concept of pain is the one that will be employed in ideal reasoning. We may have our favorite, but there are arguments on both sides and it is not clear where the truth lies. So though we can know a priori that pain is either necessarily painful or contingently painful, we cannot know which is true now. To know that we would have to settle the pain asymbolia case; but that case is hotly contested (pun sadly intended :()

The upshot, then, is that whether or not Dave has a response to Quinian worries about the a priori in principle, he has not done enough to show that we are currently in a position to make use of this apparatus, and so we are forbidden any of its fruits.