HOT (still) Implies PAM

In the comments on There’s Something About Jerry, Josh and I have been having a nice discussion of Rosenthal’s objection to my HOT implies PAM argument. The main challenge to my argument is a request to know what the difference is between conscious pains and conscious beliefs that results in there being something that it is like for the creature when it has the one but not when it has the other, according to the higher-order theory. Whatever is offered, it must be something that does not render the nature of qualitative properties mysterious and unexplainable. Josh’s suggestion is that the difference lies in the kind of property that one is attributing to oneself. As he says,

in the sensory [case], I attribute a state with such-and-such similarities and differences, arranged in such-and-such a mental space (space*) centered on me. The attribution of sensory qualities and mental spatial qualities, centered on the subject, makes it seem to me that I am in a sensory state. With intentional states, I do not attribute to myself this kind of thing; rather, I attribute to myself the property of believing that such-and-such is the case. No similarities and differences, no “egocentric” spatial stuff. So it will seem very different to me.

In my response I noted that I can agree that it will seem different to me, but it remains a mystery how we get from ‘it’s different’ to ‘there is nothing that it’s like for me to have a belief’.

But as I was thinking about this I suddenly realized that this line of response fails for another reason (or, at least, I can illustrate the reason in another way). Mental states fall into four rough kinds for Rosenthal. There are sensations, perceptions (sensations + thought), thoughts (i.e. propositional attitudes), and emotions. The emotions, according to Rosenthal, do have intentional content, and so are propositional,

But it is equally plain[, he continues,] that the emotions are not just special cases of propositional attitudes, but are a distinctive type of mental state. What is it, then, that distinguishes the emotions from the cognitive states? Part of the answer doubtless lies with the phenomenal feel that emotions exhibit. There is normally a particular way one feels when one is angry, joyful, jealous, afraid, or sad, whereas the propositional attitudes have no such phenomenal aspect. (C&M p. 306)

Now, when I am unconsciously angry there will be nothing that it is like for me to be angry, just as there will be nothing that it is like for me to have an unconscious pain. When I am consciously angry I have a higher-order thought which attributes a qualitative mental attitude towards some propositional content (or state of affairs, or whatever). So I am conscious of myself as being angry that the guy in front of me won’t go when the light is green, and then it is like being angry for me. So why is it the case that when I consciously believe that the guy in front of me won’t go when the light is green it isn’t like believing that for me? The difference between sensory qualities and beliefs isn’t present, and yet there is still something that it is like for the creature (according to the theory)…so the difference that Josh and Rosenthal point to can’t be the difference that matters.

There’s Something about Jerry

Here is how I described Jerry in the earlier post:

Given that we think that there could be unconscious beliefs, consider the following super-scientist Jerry. Imagine that Jerry has been raised in a special room, much like Mary and Gary, but instead of never seeing red (Mary) or never having a desire (Gary), Jerry has never had a conscious belief. He has had plenty of unconscious beliefs, but none of them have been conscious. Let us imagine that we have finally discovered the difference between conscious and unconscious beliefs and that we have fitted Jerry with a special implant that keeps all of his beliefs unconscious, no matter how much he introspects. Let us also imagine that this device is selective enough so that it wipes out only the beliefs and so Jerry has plenty of other conscious experiences. He consciously sees red, has pain, wants food, fears that he will be let out of his room one day, wonders what the molecular structure of Einsteinium is, etc.

Now imagine that one of Jerry’s occurrent, unconscious, beliefs suddenly becomes a conscious belief. For the first time in Jerry’s life he has a conscious belief.

Now, I can use Jerry as a way of motivating the intuition behind my HOT implies PAM argument. Let’s call ‘T1’ the time just before Jerry’s belief becomes conscious and ‘T2’ the moment when his belief becomes conscious. According to Rosenthal there is no difference in what it is like for Jerry. What it is like for Jerry at T1 is exactly the same as what it is like for him at T2 even though at T2 he has a conscious mental state he did not have before. 

Now, in the case of a pain we get a very different story. If the pain is unconscious at T1 then there is, according to Rosenthal, nothing that it is like for Jerry to have that pain but at T2 there is something that it is like for Jerry; it is painful for him.  Does this seem right to you?

It doesn’t to me, but this is just an intuition. Luckily, I have an argument which supports the intuition. Rosenthal claims that when we are conscious of ourselves as being in an intentional state (a mental state with intentional properties) there isn’t anything that it is like for us to have that intentional state, but when we are conscious of ourselves as being in a qualitative state (a state with qualitative properties) then there is something that it is like for us to have the qualitative state. But a qualitative property for Rosenthal is just a property that plays a certain functional role for the creature. It is the property in virtue of which the creature is conscious of the physical property that the mental property is homomorphic to. So, the mental qualitative property ‘red’ is the property in virtue of which the creature is conscious of physical red. When we are conscious of ourselves as being in a state with that kind of property it will be like seeing red for us.

So, what then is a belief for Rosenthal? It is a mental state that consists of two parts: a distinctive mental attitude (in this case, an ‘assertive’ one) that is held towards some propositional (a.k.a. intentional) content. So my (occurrent) belief that it is Sunday is composed of an assertive mental attitude towards the intentional content ‘today is Sunday’. Mental states are mental because they make us conscious of something, so what does this belief make me conscious of? It makes me conscious of the fact, proposition, state of affairs, or whatever you want to call it, that the intentional content of the belief represents. So what reason, one that flows from the theory as opposed to independent intuitions about what SHOULD be the case, dictates that there should be something that it is like for Jerry in one case (the qualitative one) and nothing that it is like for Jerry in the other (the cognitive one)?

Remember, what drew us to the higher-order theory in the first place was a desire to explain qualitative consciousness in a way that is compatible with physicalism and at the same time philosophically non-mysterious. The purported explanation, viz. that we are conscious of ourselves in a subjectively unmediated way as being in those states, now appears to be inadequate. So to retain the explanatory power of the theory we need to say that there being something that it is like for an organism to have a mental state just is that organism’s being conscious of itself in a subjectively unmediated way as being in that mental state. Why is there something that it is like? Because we are conscious of ourselves as being in that state. This is the only way that the theory can deliver on its promise of explaining consciousness.

Pain Asymbolia and Higher-Order Theories of Consciousness

I was reading this NDPR review of Feeling Pain and Being in Pain (reviewed by Murat Aydede, who wrote the pain entry at the SEP). It is a nice review and actually convinced me to buy the book. This is mostly because it reminded me of something I had heard about a long time ago but forgotten: a condition called pain asymbolia, where a patient is able to report that they are in pain, even saying what kind of pain it is (e.g. burning vs. piercing), and yet they do not find it to be unpleasant, nor are they strongly motivated to have it stop. In fact they are completely indifferent to it. Some even laugh or giggle when they are exposed to pain-inducing stimuli (like being pricked with a needle or shocked)!

I started to think about this condition from the point of view of the higher-order theory of consciousness, as I tend to do. On Rosenthal’s version of the higher-order theory a conscious pain is a pain that I am conscious of having. Pain states are first-order representational states that represent some property in the world in virtue of having a distinctive sensory quality, just like all sensory mental states. The words for the sensory qualities (e.g. ‘red’, ‘green’, ‘hot’, ‘cold’, ‘burning’, ‘shooting’, etc.) have their extensions fixed via the conscious occurrences of the states. We use this as a way to single out some set of states for further examination, so there is a role for introspection to play. But then we investigate these states using the third-person tools of science and we learn things about them that might be surprising to the ordinary person. One of these is that they can occur unconsciously. They occur even though the subject denies that they occur. So, in the priming studies I talked about earlier, people deny that they see anything, but nonetheless we can show that they did see it and that it has an effect that is predictable and noticeable. This confirms a prediction that higher-order theories make and so counts as empirical evidence in support of these kinds of theories.

But what then does it mean to have a sensory quality on Rosenthal’s account? As I have shown before, this is where Rosenthal invokes his homomorphism theory. A state represents red if it has a property which is homomorphic to the property that physical objects have in virtue of which they cause that kind of experience in us. The physical color properties form a family of properties that vary from each other in systematic ways. So physical red is more similar to physical pink than it is to physical green, etc. ‘Physical red’, etc., pick out some physical property (probably the wavelength of light reflected, or something). The mental color properties form a family that preserves the homomorphisms found between the physical color properties. So, the property that is the mental representation of red (the red sensory quality) is a physical property that is more like mental pink than it is like mental blue, etc. These mental properties have the function of making the organism conscious of the physical color properties.

But all of these states can occur unconsciously. When they do there is nothing that it is like for the organism to have those states. So, a creature who is in a mental state with a red sensory quality will be conscious of the physical color property. It will respond in all the normal ways it has in its repertoire vis-à-vis the physical property of red. When the creature is in addition conscious of itself as being in that state (i.e. the state with the red sensory quality, i.e. the state with the property that is more like mental pink than mental blue, etc.) it will then be like seeing red for the creature.

The same story is told in the case of pain. There is a family of physical properties for which we have homomorphic mental qualities that represent them. The physical properties are bodily conditions. So, the mental sensory quality ‘stabbing pain’ is homomorphic to the physical damage produced by stabbing injuries, ‘burning pain’ to the tissue damage produced by burns, etc. So a mental state has a painful sensory quality if it has a property that is more like sharp stabbing pain than it is like dull throbbing pain, etc. These states can occur unconsciously, and when they do they are still bad for the organism. They have all of their regular causes and effects. So, an unconscious pain will produce wincing and shrieks and cries and will interrupt concentration, etc. All the while, though, there will be nothing that it is like for the creature that has this pain. It will not feel painful to the creature, even though it is acting like it is in pain. When the creature becomes conscious of the mental state with the painful quality it will then become painful for the creature.

At first glance it might seem that pain asymbolia is a counter-example to an account like Rosenthal’s (in fact I think the author of the book and Aydede agree on this, though neither mentions higher-order theories explicitly, or at least Aydede doesn’t…I will have to wait to get the book to find out about the author, whose name escapes me right now). The reason is as follows. The subjects with pain asymbolia report that they are in pain and can identify the particular sensory quality that the pain has. This is good evidence that the pain is a conscious pain. This means, according to what we have been saying so far, that they must be conscious of themselves as being in a state that is more like pinching than it is like breaking, etc. They have the requisite higher-order thought (ex hypothesi) but lack the painful what-it’s-like of having the conscious pain.

But this is too quick. In the first place, it is not the case that the subjects report that there is nothing that it is like for them to have the conscious pain. It is, presumably, like something for them to perceive the bodily damage, no? It is presumably like being stuck with a needle, but not in a bad way, for these subjects. Now there is no mystery as to why this happens to these people. They have a specific type of brain damage and so are clearly lacking a certain kind of information. So Rosenthal can say that the subject is conscious of the first-order pain state as a state that is more like piercing than burning, etc., but not conscious of it in respect of its negative affect.

But now notice that he can no longer make his objection to my argument that beliefs must be qualitative as well…or so I’ll argue in the next post…I have to go and wash some dishes 😦

Rosenthal’s Objection

In the last post I laid out and responded to a couple of objections to my argument that higher-order theories of consciousness are all committed to there being a Phenomenal Aspect for all Mental states (HOT Implies PAM, get it? 🙂 ). I want now to address an objection raised by David Rosenthal. Let me set up the argument in a slightly different way. Consider (1) and (2), which are tenets of the higher-order theory.

(1) A conscious belief=(ex hypothesi) a belief that I am conscious of myself as having

(2) A conscious pain=(ex hypothesi) a pain that I am conscious of myself as having

All higher-order theories accept this much. What they will disagree on is the specific way in which I am conscious of the first-order state. The argument works at this very general level and so, I think, applies to all versions of the higher-order theory. In one case we are told that there is something that it is like for the creature to have the conscious mental state while in the other case there is nothing that it is like for the creature to have the conscious mental state. There is something that it is like to have (2) but nothing that it is like to have (1). I argue that if it works for (2), it had better work for (1) as well; or, if not, the theory must explain the difference between the two cases. Anything that is pointed out as a difference will render the attempted explanation of qualitative consciousness ineffectual and so undermine the very motivation for accepting the higher-order theory in the first place.

Rosenthal tries to explain the difference between the cases as follows. The difference is that in one case the higher-order state represents you as being in a painful state whereas in the other case it represents you as believing something. This objection draws on the specifics of the higher-order thought version of the higher-order strategy. Intentional representation is always representation AS. So, in (1) one is represented AS believing, and in (2) one is represented AS being in pain. Since in (2) one is conscious of oneself as being in a painful state, it will seem painful to one, and since in (1) one is conscious of oneself as believing (say) p, it will seem to one that one believes p.

This is a very natural kind of response for Rosenthal to make, as it is part and parcel of the higher-order thought theory that differences in representational content result in differences in conscious experience. The common sense example here is wine tasting. When one starts to learn about wine (or Scotch whisky, as I prefer 😉 ) one starts to learn a technical vocabulary to describe the experience that one has when tasting. Acquiring these new concepts allows one to become conscious of one’s experience in different ways, thus making the conscious tastes themselves richer and fuller. Another example that I like is the following. I once put some salad dressing on my salad which I thought was ranch. When I tasted it I was surprised to find that it was the worst tasting ranch dressing I had ever had. When I said as much to my girlfriend she responded ‘that’s not ranch, it’s blue cheese!’ At which point I realized that it was not a terrible tasting ranch but a nice tasting blue cheese. The way I was conscious of this one and the same taste made a huge difference to what it was like for me to consciously taste it. By hypothesis the first-order states do not change. What changes is our consciousness of those states. So differences in representational content matter and show up as differences in conscious experience.

It is also important what kind of state one is represented as being in. It is because the states are represented as my mental states that there is something that it is like for me. This is Rosenthal’s familiar response to the problem of the rock. Why is it that thoughts about my mental states make them conscious mental states while my thoughts about that rock over there do not make it conscious? It is because I do not represent the rock in the right way. I do not represent it as a mental state that I am in. I represent it, the rock, as having a certain shape, size, color, etc. That is what makes me conscious of the rock. But that state, the one that makes me conscious of the rock, only becomes conscious when I represent it as a state that I am in. So, then, there is nothing wrong with saying that the difference between (1) and (2) is similar. It is the difference between being represented as a qualitative state and being represented as an intentional state. Of course, the objection continues, IF beliefs were qualitative states the higher-order thought theory could handle that by positing that the higher-order thought represents beliefs as qualitative states. So the issue of whether beliefs are qualitative or not is a separate issue, and the higher-order theory itself does not force us one way or the other.

But this seems to me to beg the question against me. I wanted to know what the difference between (1) and (2) is such that in the one case there is something that it is like for me to have it and not in the other. The answer is that in one case I represent myself as being in pain (and we all know that there is something that it is like to have a conscious pain), while in the other case I represent myself as believing something (and we all know that there is nothing that it is like to believe something). No evidence is given as to why this difference in representation should make such a huge difference to our conscious life. Why should being represented as one kind of mental state rather than another result in this huge difference? I mean, I agree with Rosenthal that differences in representational content will result in changes in what it is like for us (for instance, I may represent one and the same first-order state as either ‘blue’ or ‘baby blue’ and what it is like for me will change). But this is a change in what it is like for me, not the cessation of what it is like.

The only model we have for that is the response to the rock. Being represented as a mental state or not results in very different kinds of experience. But in that case we have an independent motivation. A mental state is a state which makes me conscious of something, so rocks aren’t mental states and we don’t owe an explanation of why my thoughts about the rock fail to make it conscious. But in the case of the qualitative versus intentional states issue this response does not work. What we are trying to do is to give an explanation of the nature of qualitative consciousness in a way that is not naturalistically mysterious. We are not trying to explain what it means for something to be a mental state. We have a separate theory of what it is to be a mental state. This is part and parcel of the higher-order strategy. But now if we say that there is something special about qualitative properties such that, for some unknown reason, when we are conscious of them there is something that it is like for us to have the first-order state, we lose the ability to explain what qualitative consciousness is supposed to be.

There is more that I want to say about this, but I have to go and move my car for alternate side parking!!!!

HOT Fun in the Wintertime?

I am starting to think about my talk in April. The main difference between this talk and the one that I gave at the ASSC will be that I will consider a couple of objections to the argument and make some replies. In the earlier version I went on to try to reconcile and incorporate qualitative beliefs into the general framework of the higher-order theory that Rosenthal defends, especially his homomorphism theory of the sensory qualities.

So here is a summary of the original argument:

Given that the transitivity principle says that a conscious mental state is a mental state that I am conscious of myself as being in, the argument for the commitment to the qualitative nature of conscious beliefs is pretty simple and straightforward.

  1.   HOT Implies PAM
    1. The transitivity principle commits you to the claim that any mental state can occur unconsciously and so to the claim that pains can occur unconsciously
    2. An unconscious pain is a pain that is in no way painful for the creature that has it (the transitivity principle commits you to this as well, on pain of failing to be able to give an account, as promised, of the nature of conscious qualitative states)
    3. It is the higher-order state, and solely the higher-order state, that is responsible for there being something that it is like to have a conscious pain.
    4. So, when a higher-order state of the appropriate kind is directed at a belief it should make it the case that there is something that it is like for the creature that has the belief; otherwise there is more to conscious mental states than just higher-order representation.

So I got two objections that I owe to Rocco. The first is that (3) is too quick. There are some (like Rocco) who think that the pain is only painful when the higher-order state and the lower-order state that it targets occur together. So, it is not solely the higher-order state that does the work; it is the higher-order state occurring in conjunction with its target. This is attractive because it rules out the possibility of the higher-order state occurring in the absence of the lower-order state. This is thought by many to be an embarrassment for the higher-order theory, since one will be forced to say that one seems to be in a conscious state that one is in fact not in. This sounds strange indeed! But though unintuitive, there is nothing incoherent or even implausible about this once one becomes familiar with the theory. The transitivity principle says that a conscious state is a state that I am conscious of myself as being in, so if I am conscious of myself as being in some state, then I am in a conscious state (the one that I am conscious of myself as being in). It may turn out that the first-order state is not there, in which case I am conscious of myself as being in a state (and so have a conscious mental state) that I am not in fact in…it just seems to me that I am in it when I am not. Now, from the first-person point of view these two happenings will be indistinguishable. That is, whether or not the lower-order state does in fact occur, my conscious experience will be the same (given that I have the same higher-order state in each case). But there are third-person techniques that would allow us to tell whether the first-order state was really there or not and so allow us to differentiate the two cases. This turns the problem into an empirical one, and we will have to wait until the brain sciences are sufficiently sophisticated (on a side note, one of the sessions in Tucson will be on brain imaging as mind reading, something I am very interested in!).
But even if we grant the objector the premise that the higher- and lower-order content must both occur for there to be something that it is like for the creature to have the pain, the same question arises. So, this is more an objection to the way that I formulated the argument than to the argument itself.

Another thing that Rocco pointed out was that the argument loses its force if one thinks that all beliefs are dispositions, and I grant that. However, I don’t believe that all beliefs are dispositions (though most of them may well be, there have to be some occurrent beliefs!). The argument is directed at people who think that the propositional attitudes (belief, fear, desire, love, hate, joy, despair, etc.) are, or at least can be, occurrent mental states (whether or not these occurrent mental states are language-like is a separate question).

The next group of objections comes from Rosenthal, and I will address those in a separate post.

Back in the Swing of Things

So I am back in NYC and settling into the Winter session course I am teaching…I am also mastering Assassin’s Creed on the PlayStation 3 🙂

I hope that everyone had an exceptional New Year’s…I started the new year with some good news. I found out that I will be going to the Towards a Science of Consciousness meeting in Tucson to present HOT Implies PAM: Why Higher-Order Theories of Consciousness are Committed to a Phenomenal Aspect for all Mental States, even Beliefs (which is a re-worked version of the first half of my paper Consciousness, (Higher-Order) Thoughts, and What it’s Like…you can see the virtual presentation from this summer’s Association for the Scientific Study of Consciousness meeting in Vegas HERE). I am very excited to do this, as I have had lots of great feedback and discussions about my argument with David Rosenthal and Rocco Gennaro, and I think the argument is stronger than ever…

Before I left for vacation I was having a very interesting discussion about Christmas and whether or not it is a Christian holiday (and whether or not, even if it is, atheists and agnostics ought to celebrate it). Let me recap what I think my argument was supposed to be.

1. The argument from etymology– The word ‘Christmas’ means ‘the Christian holiday celebrating the birth of Jesus Christ’ in English. There is no definition of the word in any dictionary which lists it as a secular holiday.

This indicates that ‘Christmas’ designates a Christian holiday. Now, there have been two sorts of response to this argument.

R1. The actual holiday is a pagan holiday that the Christians took over and renamed, so whatever you call it, Christmas is not a Christian holiday at all, but just a disguised pagan holiday.

This doesn’t seem right to me. It is true that the rituals of Christmas are taken over from pagan religions, but this was a common strategy that the Church employed to boost its numbers. The locals are less reluctant to convert when the new religion has familiar attributes, but nonetheless the Church (in around 300 CE) created a new holiday to commemorate the birth of Jesus Christ and decided to call it Christmas (originally Christ’s Mass). The practices that we have today derive from that Christian tradition, not the earlier pagan one. The fact that the celebration occurs on a day that no one actually believes marks the actual anniversary of Jesus’ birth does not matter. We do not celebrate Presidents’ Day on Washington’s actual birthday, but it is a celebration of his birth even still…Nothing similar has happened that would make Christmas a non-religious holiday…This leads us to the second response that was made,

R2. That may be the meaning of the word, in some external sense, but what matters is what the person intends to be celebrating (the internal meaning of the holiday). So, if I celebrate Christmas in a completely secular way, not intending to be performing any religious rituals, or to be giving thanks for the incarnation of God in the flesh, then I am not celebrating a religious holiday.

But is this right? Suppose that I decided to celebrate Adolf Hitler’s birthday (April 20th, I *think*). Suppose that when challenged I replied that I was not intending to commemorate the mass-murdering individual that was the Führer of Germany, but rather the artistic vegetarian that Hitler was in his youth. It is important, I might continue, that we remember not to squander our talents. Hitler was a powerfully persuasive personality, and if only he had used his powers for good instead of evil the world might have been a very different place. So it is important to remember his birth.

Or again, suppose that I chose to celebrate Osama Bin Laden’s birthday, and gave the same sort of justification as above. It seems to me that whatever I intend to be doing, I am celebrating the birth of these hateful and wicked men.

Now, this response might be taken to mean that there is a separate holiday, a secular celebration of family and of helping the disadvantaged, that just so happens to be celebrated on the same day as the Christian holiday (sort of like 4/20, the ‘stoner’ holiday, which is celebrated (accidentally, I hope) on the same day as Hitler’s birthday). I don’t think that this is actually the case now (though maybe we are in a transition period and in the future ‘Christmas’ will be ambiguous in English between a Christian and a secular holiday). At any rate, I am sympathetic to this idea (this was the idea behind my ‘Family Day’ or, as I now prefer, ‘Giftmas’ 🙂 ), but I think we ought to emphasize, and help formalize, this process by coining a new name and specifically dedicating the day to secular celebration.

Doing some research about this I discovered that the issue has been taken to court by some atheists. They argued that the fact that we get Christmas day off amounts to state endorsement of Christianity and so violates the separation of church and state. Here is a nice little article on the case from About.com. The judge ruled against the claim and denied that there is a violation of the separation between church and state. The reason is not because the judge found that Christmas is not a religious holiday but because the day off serves a ‘valid secular purpose’. Having Christmas day off de facto serves the purpose of bringing families together, and that is a secular purpose of the holiday. I think this is right, but that doesn’t mean that the holiday is itself a secular one, unless someone declares it to be so…

Conceptual Atomism, Functionalism, and the Representational Theory of Mind

There was once optimism among philosophers that functionalism could give a complete account of the mind. Today philosophers are a lot less sure of this, due mostly to the arguments expounded by Block in his now classic “Troubles with Functionalism” (Block 1993), as well as his later “Inverted Earth” (Block 1997), where he argued that functionalism cannot account for qualitative states. There are at least two strategies that one could take in response to Block’s arguments. First there is what Block has called the ‘containment response’. One gives up on qualitative states but holds that beliefs, indeed thoughts in general, can still be given a purely functional account. This sometimes takes the form of ‘belief box’ talk. One says that p is in one’s belief box, and this is supposed to be shorthand for ‘p is playing the belief function,’ where this means that p has characteristic connections to characteristic inputs and outputs.

This is the strategy that Fodor has adopted for years. I think it is well known that he endorses a functional account of what beliefs are (though this is not to say that they have functional definitions) and that this is part and parcel of the representational theory of mind. He has recently gone on to argue that in order for the representational theory of mind to be successful it needs to be able to provide an account of what concepts are, where, at the common sense level, concepts are the components out of which beliefs are made. So, on his usage, the belief that grass is green is made up of the concepts GRASS, IS, and GREEN. The reason that it is the belief that grass is green (as opposed to the belief that water is wet) is because of the concepts which are in the belief box (are playing the belief role). It is also well known that he has argued that none of the theories of concepts out and about in cognitive science stands up to the various requirements imposed by things like compositionality and systematicity. This has led him to formulate conceptual atomism. Sadly, though, there is a problem. Conceptual atomism is not compatible with a functional account of what the attitude part of the propositional attitudes consists in. Since Fodor thinks that atomism is the only theory of concepts compatible with the representational theory of mind, this is a big problem indeed. First I will rehearse the inverted qualia argument and then argue that a version of this argument can be run on beliefs if atomism is true.

The inverted qualia argument, you will remember, goes as follows. We imagine two twins, let’s call them Pat and Tap. Now Tap has special lenses installed in his eyes at birth. These are the infamous ‘inverting lenses’ which cause the person in whom they are implanted to have inverted qualitative experiences. Thus Tap sees what Pat sees when looking at fire trucks (i.e. red) while he (Tap) is looking at grass (i.e. green) and vice versa. These children then grow up as usual. By the time they are in high school the two twins function identically. They use all the color words correctly, each calling red things red and green things green, but one of them sees what we call green when looking at red things. They have inverted qualitative states but identical functional states, and this suggests that qualitative character is not captured by the functional description of the twins. Once one has gone this far it is a short step to the absent qualia argument, which just supposes that we might have the functional state without any qualitative aspect to it at all. If one does not want to take the containment response then one can try to show that absent qualia are impossible, and that will help to save the theory. This is the strategy that Shoemaker famously takes. He argues that the qualitative states will have many connections to belief states such that we would not have the relevant kinds of belief states in the absence of the qualitative state.
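The structure of the twins case can be made vivid with a small toy model (my own sketch; the state names and mappings are illustrative, not from Block): each agent maps a stimulus to an internal state and an internal state to a verbal report, and Tap’s inverted internal states are masked by compensating learned reports.

```python
# Toy sketch of the inverted qualia setup. Pat and Tap map every stimulus
# to *different* internal states, yet their stimulus-to-report behavior is
# identical, so no functional description distinguishes them.

PAT_INTERNAL = {"fire_truck": "red_quale", "grass": "green_quale"}
TAP_INTERNAL = {"fire_truck": "green_quale", "grass": "red_quale"}  # inverted

PAT_REPORT = {"red_quale": "red", "green_quale": "green"}
TAP_REPORT = {"green_quale": "red", "red_quale": "green"}  # learned labels compensate

def report(internal, labels, stimulus):
    """What the agent says when shown the stimulus."""
    return labels[internal[stimulus]]

for stimulus in ("fire_truck", "grass"):
    # Identical input-output behavior...
    assert report(PAT_INTERNAL, PAT_REPORT, stimulus) == report(TAP_INTERNAL, TAP_REPORT, stimulus)

# ...despite inverted internal states for every stimulus.
assert all(PAT_INTERNAL[s] != TAP_INTERNAL[s] for s in PAT_INTERNAL)
```

The point of the sketch is just that the functional profile (the stimulus-to-report mapping) underdetermines the internal state, which is all the inverted qualia argument needs.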

It is generally taken for granted that the propositional attitudes are immune to this kind of argument, partly due to the alleged fact that these states do not have any qualitative character associated with them. Block sums up the common sense view in “Troubles with Functionalism” when he says

…it is very hard to see how to make sense of the analog of spectrum inversion with respect to nonqualitative states. Imagine a pair of persons one of whom believes that p is true and that q is false while the other believes that q is true and that p is false. Could these two persons be functionally equivalent? It is hard to see how they could. Indeed, it is hard to see how two persons could have only this difference in beliefs and yet there be no possible circumstance in which this belief difference would reveal itself in different behavior. (p. 247)

Suppose that P is ‘dogs are nice’ and Q is ‘cats are nice’ then Pat would have to believe that dogs are nice and that cats are not nice while Tap would believe that cats are nice and that dogs are not nice. It is hard to see how this difference in belief would not result in some difference in behavior regarding cats and dogs. If there are differences in their behavior then these two are not functionally identical.

But then in the footnote to this passage Block admits that there is a sense in which we can have inverted beliefs. He asks us to imagine two distinct afflictions. One is the lenses that we are familiar with from the inverted qualia argument; this he calls ‘Stimulus Switching.’ A person wearing these lenses will call red things ‘green’ because he (falsely) believes them to be green. The second ailment, called ‘Word Switching,’ is an ailment where the victim simply uses the incorrect (but opposite) words for the colors. This person, then, calls red things ‘green’ but has normal color beliefs; in other words, he will call something ‘green’ but only accidentally, since he really means red, and he believes that the object is red.

Now suppose that a victim of Stimulus Switching suddenly becomes a victim of Word Switching…He speaks normally, applying ‘green’ to green patches and ‘red’ to red patches. Indeed he is functionally normal. But his beliefs are just as abnormal as they were before he became a victim of Word switching…So two people can be functionally the same, yet have incompatible beliefs. Hence the inverted qualia problem infects belief as well as qualia (though presumably only qualitative belief).

To illustrate this again imagine our two twins: When Pat and Tap are both looking at a red apple, both will say that it looks red and both will behave in just the same ways towards the apple as would the other, except that Pat believes that the apple is red while Tap believes that the apple is green. Calling ‘the apple is red’ p and ‘the apple is green’ q, we can see that Pat believes that p is true and q is false while Tap believes that p is false and q is true. So this really is a case of belief inversion in the way that Block says is hard to imagine happening. This seems to me to be the same kind of thing that happened to Hume when he imagined his missing shade of blue but then went on to dismiss it as unimportant.
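Block’s composition of the two afflictions can also be sketched as a toy model (my own illustration; the function names are mine, not Block’s): Stimulus Switching inverts the perceived color, Word Switching inverts the word used for the perceived color, and composing them cancels out in speech while leaving belief inverted.

```python
# Sketch of Block's footnote case: a Stimulus-Switched subject who then
# becomes Word-Switched speaks normally but still has inverted beliefs.

SWAP = {"red": "green", "green": "red"}

def perceive(actual_color, stimulus_switched):
    """Belief formed on viewing the color (inverted by the lenses)."""
    return SWAP[actual_color] if stimulus_switched else actual_color

def speak(believed_color, word_switched):
    """Word uttered for the believed color (inverted by Word Switching)."""
    return SWAP[believed_color] if word_switched else believed_color

for actual in ("red", "green"):
    belief = perceive(actual, stimulus_switched=True)   # inverted belief
    utterance = speak(belief, word_switched=True)       # inverted word use
    assert utterance == actual   # speech is functionally normal...
    assert belief != actual      # ...but the belief is still inverted
```

The two inversions compose to the identity at the level of behavior, which is why the subject is functionally normal even though his beliefs are incompatible with a normal subject’s.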

What does Block mean when he says ‘presumably only qualitative belief’? He (presumably) means those beliefs that are connected to qualitative states, and this would seem to block Shoemaker’s defense of functionalism. This will include more than just beliefs about colors. It will include all of our perceptual beliefs as well as any beliefs that stem from them. So we cannot define qualitative similarity in functional terms in the way that Shoemaker needs. Shoemaker’s response depends on its being impossible to believe that we are in pain and yet not actually be in pain. But there is some reason to think that this may be possible. And the fact that we can have massive perceptual belief inversion means that the connection to other states cannot help us to pin down the pain state functionally.

As I mentioned earlier, Fodor argues that for the representational theory of mind to work it needs conceptual atomism, so let me briefly say what that is. He has argued that anyone who endorses a RTM has to endorse conceptual atomism. Concepts are primitive and acquire their content via some ‘locking relation’ to things in the world. There are two choices for the ‘locking relation’. One is the causal/historical kind taken by Kripke, Devitt, and Millikan. Fodor has argued that these kinds of accounts can’t provide sufficient conditions for concept acquisition. As he puts it, ‘causally interacting with doorknobs’ could not by itself be enough to acquire the concept; something must have happened, presumably in the head! Since he thinks that it can’t be learning, there is only one option left. Concepts must work like appearance properties. Red things are the things that produce in us a certain predetermined qualitative state. Nothing fishy here, standard Empiricism, really; just as red ‘triggers’ a preset state in a sensory space, so too with doorknobs. Being a doorknob is being the kind of thing that creatures with minds like ours ‘resonate’ to. This is his controversial claim that all concepts are innate.

Now we can see why atomism is subject to the inversion argument. This time take P to be ‘cats are nice’ and Q to be ‘dogs are nice’. If MOST concepts are appearance concepts then we can run Block’s argument on ‘cat’ and ‘dog’ instead of ‘red’ and ‘green’. We imagine a device that, when worn, inverts the perception of cats and dogs. Thus when Tap wears the device he will see a dog where Pat sees a cat and a cat where Pat sees a dog. Imagine Pat and Tap both looking at my wiener dog Frankie. This is exactly analogous to the case before. These two are functionally identical people, yet one believes that Frankie is a dog and not a cat while the other believes that Frankie is a cat and not a dog. So functionalism cannot account for intentional states if concepts are appearance properties.

So the situation is that if one thinks that the representational theory of mind is important and that it would be nice if something like that could work then one is committed to atomism. But atomism means that functionalism about the attitudes can’t be right.

Unconscious Change Detection, Priming, and the Function of Consciousness

So, if you have been around here lately you will have noticed that I have been talking a lot about priming, change blindness, and the function of conscious mental states in the higher-order theory. I have been arguing that some recent results on priming effects in change blindness suggest that there is some function for conscious mental states (even/especially for those who like higher-order accounts of whatever type). David’s response to this has been to admit that this shows that there is some functionality for conscious mental states but then to insist that it is not enough to justify calling it ‘the function of consciousness’ or anything like that. He then points out stuff like this article and argues that change detection is pretty big stuff, maybe even the stuff that you thought might turn out to be The Function of Consciousness, but even that can be done unconsciously.

But after thinking about this, I am not sure that the Fernandez-Duque et al stuff really shows as much as David thinks that it does. So, consider the experiment that Fernandez-Duque et al did, as summed up in the figure below (from their paper).

fig-1.jpg

The only difference between the two pictures is whether one sees George or Not-George. Subjects then see figure b and are forced to guess which of the two highlighted bars was the one that changed. The study reports that people pick the correct one even though they say that they did not see the change.

But notice that in figure b subjects are presented with Not-George. They did not check to see what would happen if they presented subjects with George and asked the same question. Now, though they didn’t do this, the Silverman-Mack experiments predict that George should have been just as good at allowing subjects to perform above chance. This would suggest (it seems to me) that, though the subjects are conscious that there is a difference, they are not conscious of what the difference consists in. When they are conscious of the difference as the difference (that is, when they consciously see the difference) the Silverman-Mack results predict that only Not-George would show any effect. The representation of George would be suppressed. So the kind of change detection that happens consciously serves a distinct function from the kind that happens unconsciously. Conscious change detection serves to bias the system, inhibiting some representations and thereby enhancing others; unconscious change detection doesn’t. This biasing is important for survival since it helps to determine which representations can be assessed for action (like button pushing), and so this is a function for perceptual consciousness that is pretty important.

Stay tuned…there’s bound to be more of this after the big talk tomorrow!

Priming, Change Blindness, and the Function of Consciousness

This Wednesday David Rosenthal will be giving a talk at the Graduate Center entitled ‘The Poverty of Consciousness’. If you happen to be in the New York area and you have a hankering for some hot and heavy philosophy of consciousness, come on down! (see the Cog Blog for some details).

I have been thinking about this issue in light of my last post on priming and change blindness, where I voiced my suspicion that the results posed a problem for Rosenthal’s claim about the function of consciousness. This led to some emailing between David and me, and so I figured I would take some time to sort this stuff out.

Rosenthal’s main contention is that there is no evolutionary (read: reproductive) advantage to an organism’s having conscious mental states. This is to be distinguished from the claim that there is no evolutionary advantage to the animal’s being conscious (creature consciousness), which quite obviously gives the creature a huge evolutionary advantage (e.g. being awake often helps one get away from predators…that is, unless one has taken Ambien!!!). The primary reason that he thinks this is that he endorses the higher-order theory of consciousness, which claims that a mental state is conscious when I am conscious of myself as being in that state (and of course there are some experimental results which support the claim 🙂 ). This view commits one to the claim that any mental state can in principle occur unconsciously, and this seems to suggest that most of a state’s causal powers will be had by the state whether it is conscious or not. If so, then what purpose could (state) consciousness add?

When people hear this they usually think that it means that consciousness is completely epiphenomenal (has no causal efficacy). But this isn’t right, as I discussed in this post on Uriah Kriegel’s version of this argument. As Rosenthal says,

Lack of function does not imply that the consciousness of these states has no causal impact on other psychological processes, but that causal impact is too small, varied, or neutral in respect of benefit to the organism to sustain any significant function. So my conclusion about function does not imply epiphenomenalism.

His claim is that whatever causal powers a state’s being conscious endows it with, they are too ‘small, varied, or neutral with respect to benefit’ to count as serving any function. O.K., so if this is your view then you have your work cut out for you, because you have to A.) examine and refute all of the proposed functions for consciousness out there (from ‘deliberate control of action and rational inference’ to ‘enhances creativity’) and B.) provide an alternate explanation for how in the world conscious mental states ever came about in the first place (tune in on Wed. to hear Rosenthal’s answers to these questions, though I gather that he will mostly be talking about intentional states and not qualitative states).

O.K., so now enter the priming results that I talked about previously (and which Rosenthal is aware of, has read, and cites in his forthcoming papers/book on this subject). What that paper showed is that when one is presented with two pictures, A and B, which have some difference, D, between them (like an extra tree or something), then: when one is not conscious of the difference, both A and B show priming effects (i.e. one will complete a degraded picture with what one unconsciously saw in A and B); but when one consciously notices that there is a difference between A and B, then only B (i.e. not A) shows priming effects.

Now, if this is evidence for anything it will be evidence for there being a function for perceptual states (qualitative states). It would still be an open question what, if any, function intentional states have (unless of course one, like me, thinks that intentional states are qualitative states). But is it evidence for a function of conscious states?

I suggested that it is evidence that a state’s being conscious inhibits previous ‘outdated’ representations and so serves to guide certain representations (i.e. the conscious ones) to greater causal efficacy and so to greater effect on behavior. If this were true, it seems to me that that would definitely give some evolutionary advantage to having conscious states. Suppose, for instance, that a bear is charging at you and that there is a spear that is just out of reach. The bear is running straight at you and you are casting frantically about for something to defend yourself with. As you look around, wildly, you first see the spear out of reach, and then in another pass you see the spear within reach (say it was knocked towards you in the chaos of the bear stampeding towards you). Now let us assume that in one case you do not consciously see this difference and in the other case you do. In both cases you will have representations of the scene with the spear out of reach and with the spear within reach. But only in the case where you consciously see the change (that is, consciously see that the spear is now within reach) is the previous representation inhibited, so that the representation of the spear within reach is more causally active and liable to cause you to reach for the spear and (maybe!) stave off the bear. This doesn’t seem like some minor or neutral thing. This sounds like an important function for perceptual consciousness!

During our email discussion he referred me to the following paper,

Fernandez-Duque, Diego, and Ian M. Thornton, “Change Detection without Awareness: Do Explicit Reports Underestimate the Representation of Change in the Visual System?”, Visual Cognition 7, 1-3 (January-March 2000): 324-344.

His argument seems to be that, while I am right that these results do suggest some ‘utility’ for conscious perceptual states, it is not as useful as change detection, and that can happen unconsciously! I am still thinking about that, and will come back to it…but right now I have to go and move my car for street cleaning!!!!

Some Cool Links

(via David Pereplyotchik)

Below are links to some examples of talks that fall well within the cognitive science arena. I’ve found, however, that many of the non-cogsci talks are more interesting, because they introduce one, often in a vivid way, to a subject matter that is less familiar. (For instance, Wade Davis’s talk on anthropological fieldwork was, for me, genuinely exciting.)

You can browse the talks by clicking on the topic links at the bottom right of each video’s page. Or just start here

Enjoy.

David Pereplyotchik