Where Am I?

I’m back!!

The plane ride there was long and super bumpy (and I hate flying!!) and then I got strep throat and the plane ride back was a red eye that got into JFK at six a.m. (and I REALLY hate flying!!!!)…but other than that California was fantastic! 🙂

The APA was fun, though I got there on the last day of the conference and, since I wasn’t feeling well (I was chaining Sucrets one after the other), I left after my talk. But I did see the session before mine, by Hanna Kim, on a proposed compositional semantics for metaphors, which was interesting. She sketched an account that borrowed Jason Stanley’s idea of a hidden, unarticulated variable that was context-sensitive to metaphorical meaning. This would allow one to get the meaning of the metaphor in a way that was completely determined by the meaning of the parts (including the hidden, context-sensitive variable). Marga Reimer responded with a couple of objections, one of which was the Gricean kind of objection one would expect. She invoked Grice’s modified Occam’s razor and asked why we need a semantic account of metaphors when we have a perfectly good account from Grice that appeals to speakers’ intentions and doesn’t posit all of these weird hidden variables. (Hear! Hear!) Kim’s answer, in part, was to point out that Grice’s account cannot handle ‘impossible metaphors’. The basic idea behind impossible metaphors is that there are semantic and syntactic constraints on what kinds of sentences we can make metaphors from. I don’t recall any of her examples and I can’t find the handout…but still, I wonder about this kind of strategy. Why is it an objection to Gricean theories to point out that sentence construction is constrained by syntax? A speaker is constrained by what she can reasonably assume will alert a hearer to her communicative intention and thereby fulfil that very intention. The syntax of a language is definitely one thing that would constrain which utterances a speaker can reasonably expect a hearer to interpret successfully. No problem.

My talk went well, I think. We had some interesting discussion. The commentator (Imogen Dickie) posed a dilemma for me. If we can have rigid designation in thought, then either the problem of necessary existence reoccurs at that level and we haven’t solved the problem, or we can have rigid designation without the problem of necessary existence (in thought) and so we shouldn’t be worried about it in language. This is especially pressing when we think that S5 is attractive because it is supposed to be a logic for thought. I responded that the problem of necessary existence is only a problem when we try to regiment our thoughts into a formal language. There is no problem with having a singular thought about Socrates; the problem is trying to formalize a sentence representing that thought. This is the evidence we have that we need a separate account of the semantics of language. But S5 is still a logic of modal thought, because we can formulate descriptions in it that ‘single out’ the object of thought without rigid designators. The absence of singular terms in our logic is nothing more than an inconvenience. She also mentioned, in passing, that Williamson thinks that necessary existence is not as terrible as one might think. One might argue that I exist in all possible worlds but that in some worlds I exist without any properties. This was quite shocking to me, as I can’t really fathom what that would mean. Really, what does that mean? Anyone know?

From the audience I was asked several good questions. One was from Tim Lewis, on how I felt about the fact that names on my account would fail the Church translation test. That is, we expect ‘Richard’ and ‘Ricardo’ to be synonyms, but if they really stand for ‘the bearer of “Richard”’ and ‘the bearer of “Ricardo”’ then pretty clearly they aren’t synonyms, since they each have a separate quoted name in them. I thought that was a pretty nice objection. At the time I said that I would argue that names are not part of a language. So, in a complete dictionary of English there would be no ‘Richard’ or ‘Doug’ (forget about the dictionaries around now, which are half encyclopedia; I am talking about just a list of the words of a language and their conventional meanings, pronunciation guide, and syntactical/grammatical categories). That seems right to me, but then on the plane home, in a half trance, I started to think that maybe we could use Sellars’ notion of ‘dot quotes’ to solve the problem if people don’t like the position on names. So instead of ‘the bearer of “Richard”’ we could have ‘the bearer of *Richard*’, where ‘*P*’ is ‘dot-quote P’ and basically serves to single out all of the functional types that play the role that ‘Richard’ does in English. This would allow one to preserve the intuition that other-language cognates of English names are synonyms. Or so it seemed on the plane…and besides, I like the bit about names not being part of the language…

The other question that I remember was from Adam Sennet (there were a couple of others that I am forgetting). He echoed Williamson’s point that since we know quite well what a rigid designator is and how one would introduce them into a formal language, it is quite odd to say that there aren’t any. I responded that we know what it would be like for there to be all kinds of things that don’t exist. I know what it would be like for there to be square circles (it would be for there to be one object that is both square and circular at the same time), but that doesn’t mean that there are any. This is exactly what one would expect. We know what it would be like for there to be phlogiston or tachyons or any other theoretical posit we come up with. It would be like finding the thing that we posited, but sometimes we find out that they don’t exist. Interpreting the syntactic category of proper noun as a rigid designator is a natural attempt at capturing what it is that we do when we think about some particular thing, but when we do model that category that way we get the problems with necessary existence, which means that it is a mistake to model it in that way. I compared it to what happens when we try to mix quantum theory with relativity theory. When we try to calculate the probabilities for things which we have well worked out answers for, we get crazy results (like the probability of some event occurring being infinite). This lets us know that there is a problem, and then you get all of the different answers to solve the problem. Our finding the proofs for necessary existence in S5 is like the infinite probabilities in physics; it is an indicator that something needs to be done.
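For anyone who hasn’t seen it, the proof pattern I have in mind runs roughly as follows (this is my reconstruction of the standard textbook derivation in quantified modal logic with a rigid term, not anything quoted from the session):

```latex
% Let s be a rigid designator (say, 'Socrates')
\begin{align*}
1.\quad & s = s                    && \text{reflexivity of identity} \\
2.\quad & \exists y\,(y = s)       && \text{existential generalization, from 1} \\
3.\quad & \Box\,\exists y\,(y = s) && \text{necessitation, from 2}
\end{align*}
```

Line 3 says that necessarily something is identical to Socrates, i.e. that Socrates exists necessarily. To block the derivation one has to give up necessitation, give up classical existential generalization (go free-logic), or give up the rigid term itself, and my view is that the last of these is the right move.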

This is, by the way, why I disagree with Chappell’s charge that logic is overrated and that, in particular, my

employed logical apparatus merely serves to build in misunderstandings. The formal steps of the argument may be flawless, but that’s all for naught if the entire argument is based on a mistake — due to failing to understand precisely what all those formalisms really mean.

I understand what the formalisms mean and I am using them to apply pressure to a person who holds a certain kind of view. The proofs count as evidence that some assumptions don’t work. This is exactly what formal logic is good for…though I do agree that one needs to also make the argument in prose as well as symbols.

OK, well that’s enough for now, I gotta get to work on my Tucson presentation and grade some exams!!!!!!!

Language, Thought, Logic, and Existence

Well, I’m off to go present my paper at the APA! I’ll be back on Monday. I guess I have Philosophy Sucks! to thank, since I was noticing that the paper grew out of some interesting discussion I had here last year. Thanks to everyone who participated!!

You can enjoy the virtual version here (and on the sidebar with the other virtual presentations), which is a recording of a rehearsal I did today (it may take a second to open, since I recorded it in stereo, which I haven’t done before).

Homomorphism Theory and the Mental Attitudes

OK, so I have been distracted the last few days with thoughts about Berkeley and the relationship of God to quantum mechanics, but today I have to get back to work on my consciousness stuff…April will be here before you know it, and I still have to turn this into a powerpoint presentation!

So, before my ADD kicked in I was addressing Josh and Rosenthal’s response to my question about the difference between conscious pains and conscious thoughts that results in one being qualitative while the other isn’t. Their response is that the difference between the two cases is the result of the difference between the kind of property that one attributes to oneself. I argued that they still haven’t told me why one isn’t like anything at all for the creature and that it is inconsistent with Rosenthal’s view about the emotions.

However, even if one is not moved by the above considerations, a closer look at Rosenthal’s account of thought and its relation to speech reveals something which closely resembles his homomorphism theory of the sensory qualities. He may be right that we cannot give a homomorphism theory for the content of beliefs, but we may be able to give one for the mental attitudes themselves.

On Rosenthal’s view there is a tight connection between thought and language. So for him thoughts consist in taking some mental attitude towards some propositional content. These thoughts are expressed in speech acts that (most often) have the same propositional content and an illocutionary force that matches the mental attitude of the thought. So, for example, if I think ‘it’s snowing’ (that is, if I believe that it is snowing) I can express that by saying ‘it’s snowing’ and my speech act has assertive illocutionary force that matches the mental attitude of the thought. This is in general true for him. As he says,

When a speech act expresses an intentional state, not only are the contents of both the state and the speech act the same; the speech act and the thought also have the same force. Both, that is, will involve suspecting, denying, wondering, affirming, doubting, and the like. Whenever a speech act expresses an intentional state, the illocutionary force of the speech act corresponds to the mental attitude of that intentional state. (p. 286)

So there are families of mental attitude among which similarities and differences will hold. So believing will be more like suspecting than it will be like wondering.

What are we to say about the actual homomorphism to perceptible properties? Is there any set of properties that the mental attitudes are homomorphic to? That is, is there a set of properties whose similarities and differences preserve the similarities and differences between the mental attitudes? This is important, since we need a way to specify the attitudes apart from their qualitative component. As I have suggested before, we can hypothesize that the homomorphic properties are the illocutionary forces of speech acts.
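To make the hypothesis a bit more explicit (this is my formulation, not Rosenthal’s): let A be the family of mental attitudes and F the family of illocutionary forces, each with its own similarity ordering. The claim is that there is a map h from attitudes to forces that preserves that ordering:

```latex
% a_1, a_2, a_3 range over mental attitudes; h maps attitudes to forces
\operatorname{sim}_A(a_1, a_2) \ge \operatorname{sim}_A(a_1, a_3)
\;\iff\;
\operatorname{sim}_F\!\big(h(a_1), h(a_2)\big) \ge \operatorname{sim}_F\!\big(h(a_1), h(a_3)\big)
```

So if, say, h(believing) = asserting, h(suspecting) = conjecturing, and h(wondering) = asking, then believing’s being more like suspecting than like wondering should show up as asserting’s being more like conjecturing than like asking.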

So the differences between beliefs that p and desires that p are homomorphic to the differences between the illocutionary force of the utterance of some linguistic item in the process of expressing the belief or desire. Rosenthal’s overall view even suggests this. For instance he says,

It is arguable that speech acts inherit their intentionality from mental states by being a part of an overall causal network that involves those mental states…If so, then not only is the intentionality of speech acts due to their causal connections with thoughts; the intentionality of mental states themselves consists, in part, in the causal relations those states bear to speech acts. (p. 97)

Thus there are no relevant differences between these kinds of states. We are left wanting an explanation of why it is that one kind of thought results in there being something that it is like for me to have the conscious experience, while in the case of the other kind of thought this is denied. Now perhaps there is another worked-out theory of the qualitative properties that could supply a satisfying answer to this question; but I have not seen it. I am doubtful that one can be given.

New Virtual Presentation!!

So, I just got a recording of my What is a Brain State? talk which I gave at the 2006 Towards a Science of Consciousness conference. I used that to record the narration to the powerpoint slides, and voila! A new virtual presentation. The conference uses a service called ‘conference recordings’ and it was easy to get the recording, but I think in the future I am going to try and record the narration as I am giving the talk…I think my mac laptop has a built in microphone…

This was by far the largest audience for a talk that I have had, and I was extremely nervous! So, I apologise in advance for all of the ‘um’s.

The Connectome

Researchers at Harvard have developed a device that allows them to slice brain tissue ultra-thin and then scan it with an electron microscope in order to create a complete mapping of the cell kinds and connections in a mouse brain (wired story here). The resulting map is called a connectome…very cool. This kind of research is exactly what we need in order to move forward in our quest to fill in the theoretical place-holder term ‘brain state’.

On a related note, it also brings us one step closer to being able to end our reliance on real animals for chemical manipulations/lesions. If these can be simulated, a lot of animal suffering could be stopped.

Rosenthal’s Objection

In the last post I laid out and responded to a couple of objections to my argument that higher-order theories of consciousness are all committed to there being a Phenomenal Aspect for all Mental states (HOT Implies PAM, get it? 🙂 ). I want to now address an objection raised by David Rosenthal. Let me set up the argument in a slightly different way. Consider (1) and (2); they are tenets of the higher-order theory.

(1) A conscious belief=(ex hypothesi) a belief that I am conscious of myself as having

(2) A conscious pain=(ex hypothesi) a pain that I am conscious of myself as having

All higher-order theories accept this much. What they will disagree on is the specific way that I am conscious of the first-order state. The argument works at this very general level and so, I think, applies to all versions of higher-order theory. In one case we are told that there is something that it is like for the creature to have the conscious mental state, while in the other case there is nothing that it is like for the creature to have the conscious mental state. There is something that it is like to have (2) but nothing it is like to have (1). I argue that if it works for (2), it had better work for (1) as well; or, if not, the theorist must explain the difference between the two cases. Anything that is pointed out as a difference will render the attempt at an explanation of qualitative consciousness ineffectual and so obviate the very motivation for accepting the higher-order theory in the first place.

Rosenthal tries to explain the difference between the cases as follows. The difference is that in one case the higher-order state represents you as being in a painful state whereas in the other case it represents you as believing something. This objection draws on the specifics of the higher-order thought version of the higher-order strategy. Intentional representation is always representation AS. So, in (1) one is represented as believing, and in (2) one is represented AS being in pain. Since in (2) one is conscious of oneself as being in a painful state it will seem painful to you and since in (1) you are conscious of yourself as believing (say) p it will seem to you that you believe p.

This is a very natural kind of response for Rosenthal to make, as it is part and parcel of the higher-order thought theory that differences in representational content result in differences in conscious experience. The common sense example here is wine tasting. When one starts to learn about wine (or Scotch whisky, as I prefer 😉 ) one starts to learn a technical vocabulary to describe the experience that one has when tasting. Acquiring these new concepts allows one to become conscious of one’s experience in different ways, thus making the conscious tastes themselves richer and fuller. Another example that I like is the following. I once put some salad dressing on my salad which I thought was Ranch. When I tasted it I was surprised to find that it was the worst tasting ranch dressing I had ever had. When I said as much to my girlfriend she responded ‘that’s not ranch, it’s blue cheese!’ At which point I realized that it was not a terrible tasting ranch but a nice tasting blue cheese. The way I was conscious of this one and the same taste made a huge difference to what it was like for me to consciously taste it. By hypothesis the first-order states do not change. What changes is our consciousness of those states. So differences in representational content matter and show up as differences in conscious experience.

It is also important what kind of state one is represented as being in. It is because the states are represented as my mental states that there is something that it is like for me. This is Rosenthal’s familiar response to the problem of the rock. Why is it that thoughts about my mental states make them conscious mental states while my thoughts about that rock over there do not make it conscious? It is because I do not represent the rock in the right way. I do not represent it as a mental state that I am in. I represent it, the rock, as a certain shape, size, color, etc. That is what makes me conscious of the rock. But that state, the one that makes me conscious of the rock, only becomes conscious when I represent it as a state that I am in. So, then, there is nothing wrong with saying that the difference between (1) and (2) is similar. It is the difference between being represented as a qualitative state and being represented as an intentional state. Of course, the objection continues, IF beliefs were qualitative states the higher-order thought theory could handle that by positing that the higher-order thought represented beliefs as qualitative states. So the issue of whether beliefs are qualitative or not is a separate issue, and the higher-order theory itself does not force us one way or the other.

But this seems to me to beg the question against me. I wanted to know what the difference between (1) and (2) was such that in one case there is something that it is like for me to have it and not in the other. The answer is that in one case I represent myself as being in pain (and we all know that there is something that it is like to have a conscious pain), while in the other case I represent myself as believing something (and we all know that there is nothing that it is like to believe something). No evidence is given as to why this difference in representation should make such a huge difference to our conscious life. Why should being represented as one kind of mental state rather than another result in this huge difference? I mean, I agree with Rosenthal that differences in representational content will result in changes in what it is like for us (for instance, I may represent one and the same first-order state as either ‘blue’ or ‘baby blue’ and what it is like for me will change). But this is a change in what it is like for me, not the cessation of what it is like.

The only model we have for that is the response to the rock. Being represented as a mental state or not results in very different kinds of experience. But in that case we have an independent motivation. A mental state is a state which makes me conscious of something, so rocks aren’t mental states, and so we don’t owe an explanation of why my thoughts about the rock fail to make it conscious. But in the case of the qualitative versus intentional states issue this response does not work. What we are trying to do is to give an explanation of the nature of qualitative consciousness in a way that is not naturalistically mysterious. We are not trying to explain what it means for something to be a mental state. We have a separate theory of what it is to be a mental state. This is part and parcel of the higher-order strategy. But now if we say that there is something special about qualitative properties such that for some unknown reason when we are conscious of them there is something that it is like for us to have the first-order state, we lose the ability to explain what qualitative consciousness is supposed to be.

There is more that I want to say about this, but I have to go and move my car for alternate side parking!!!!

Conceptual Atomism, Functionalism, and the Representational Theory of Mind

There was once optimism among philosophers that functionalism could give a complete account of the mind. Today philosophers are a lot less sure of this, due mostly to the arguments expounded by Block in his now classic “Troubles with Functionalism” (Block 1993), as well as his later “Inverted Earth” (Block 1997), where he argued that functionalism cannot account for qualitative states. There are at least two strategies that one could take in response to Block’s arguments. First there is what Block has called the ‘containment response’. One gives up on qualitative states but holds that beliefs, indeed thoughts in general, can still be given a purely functional account. This sometimes takes the form of ‘belief box’ talk. One says that p is in one’s belief box, and this is supposed to be shorthand for ‘p is playing the belief function’, where this means that p has characteristic connections to characteristic inputs and outputs.

This is the strategy that Fodor has adopted for years. I think it is well known that he endorses a functional account of what beliefs are (though this is not to say that they have functional definitions) and that this is part and parcel of the representational theory of mind. He has recently gone on to argue that in order for the representational theory of mind to be successful it needs to be able to provide an account of what concepts are, where at the common sense level concepts are the components out of which beliefs are made. So, on his usage, the belief that grass is green is made up of the concepts GRASS, IS and GREEN. The reason that it is the belief that grass is green (as opposed to the belief that water is wet) is because of the concepts which are in the belief box (are playing the belief role). It is also well known that he has argued that, of all the theories of concepts that are out and about in cognitive science, none of them stand up to the various requirements that things like compositionality and systematicity impose. This has led him to formulate conceptual atomism. Sadly, though, there is a problem. Conceptual atomism is not compatible with a functional account of what the attitude part of the propositional attitudes consists in. Since Fodor thinks that atomism is the only theory of concepts compatible with the representational theory of mind, this is a big problem indeed. First I will rehearse the inverted qualia argument and then argue that a version of this argument can be run on beliefs if atomism is true.

The inverted qualia argument, you will remember, goes as follows. We imagine two twins; let’s call them Pat and Tap. Now Tap has special lenses installed in his eyes at birth. These are the infamous ‘inverting lenses’ which cause the person in whom they are implanted to have inverted qualitative experiences. Thus Tap sees what Pat sees when looking at fire trucks (i.e. red) while he (Tap) is looking at grass (i.e. green), and vice versa. These children then grow up as usual. By the time they are in high school the two twins function identically. They use all the color words correctly, each calling red things red and green things green, but one of them sees what we call green when looking at red things. They have inverted qualitative states but identical functional states, and this suggests that qualitative character is not captured by the functional description of the twins. Once one has gone this far it is a short step to the absent qualia argument, which just supposes that we might have the functional state without any qualitative aspect to it at all. If one does not want to take the containment response then one can try to show that absent qualia are impossible, and that will help to save the theory. This is the strategy that Shoemaker famously takes. He argues that the qualitative states will have many connections to belief states, such that we would not have the relevant kinds of belief states in the absence of the qualitative state.

It is generally taken for granted that the propositional attitudes are immune to this kind of argument, partly due to the alleged fact that these states do not have any qualitative character associated with them. Block sums up the common sense view in Troubles with Functionalism when he says

…it is very hard to see how to make sense of the analog of spectrum inversion with respect to nonqualitative states. Imagine a pair of persons one of whom believes that p is true and that q is false while the other believes that q is true and that p is false. Could these two persons be functionally equivalent? It is hard to see how they could. Indeed, it is hard to see how two persons could have only this difference in beliefs and yet there be no possible circumstance in which this belief difference would reveal itself in different behavior. (p. 247)

Suppose that p is ‘dogs are nice’ and q is ‘cats are nice’; then Pat would have to believe that dogs are nice and that cats are not nice, while Tap would believe that cats are nice and that dogs are not nice. It is hard to see how this difference in belief would not result in some difference in behavior regarding cats and dogs. If there are differences in their behavior then these two are not functionally identical.

But then in the footnote to this passage Block admits that there is a sense in which we can have inverted beliefs. He asks us to imagine two distinct afflictions. One is the lenses that we are familiar with from the inverted qualia argument; this he calls ‘Stimulus Switching.’ A person wearing these lenses will call red things ‘green’ because he (falsely) believes them to be green. The second ailment, called ‘Word Switching’, is an ailment where the victim simply uses the incorrect (but opposite) words for the colors. This person, then, calls red things ‘green’ but has normal color beliefs; in other words he will call something ‘green’ but only accidentally, since he really means red, and he believes that the object is red.

Now suppose that a victim of Stimulus Switching suddenly becomes a victim of Word Switching…He speaks normally, applying ‘green’ to green patches and ‘red’ to red patches. Indeed he is functionally normal. But his beliefs are just as abnormal as they were before he became a victim of Word switching…So two people can be functionally the same, yet have incompatible beliefs. Hence the inverted qualia problem infects belief as well as qualia (though presumably only qualitative belief).

To illustrate this again, imagine our two twins: when Pat and Tap are both looking at a red apple, both will say that it looks red and both will behave in just the same ways towards the apple as the other would. Except that Pat believes that the apple is red while Tap believes that the apple is green. Calling ‘the apple is red’ p and ‘the apple is green’ q, we can see that Pat believes that p is true and q is false while Tap believes that p is false and q is true. So this really is a case of belief inversion of the kind that Block says is hard to imagine happening. This seems to me to be the same kind of thing that happened to Hume when he imagined his missing shade of blue but then went on to dismiss it as unimportant.

What does Block mean when he says ‘presumably only qualitative belief’? He (presumably) means those beliefs that are connected to qualitative states, and this would seem to block Shoemaker’s defense of functionalism. This will include more than just beliefs about colors. It will include all of our perceptual beliefs, as well as any beliefs that stem from them. So we cannot define qualitative similarity in functional terms in the way that Shoemaker needs. Shoemaker’s response depends on its being the case that believing that we are in pain while not actually being in pain cannot happen. But there is some reason to think that this may be possible. And the fact that we can have massive perceptual belief inversion means that the connections to other states cannot help us to pin down the pain state functionally.

As I mentioned earlier, Fodor argues that for the representational theory of mind to work it needs conceptual atomism, so let me briefly say what that is. He has argued that anyone who endorses a RTM has to endorse conceptual atomism. Concepts are primitive and acquire their content via some ‘locking relation’ to things in the world. There are two choices for the ‘locking relation’. One is the causal/historical kind that is taken by Kripke, Devitt, and Millikan. Fodor has argued that these kinds of accounts can’t provide sufficient conditions for concept acquisition. As he puts it, ‘causally interacting with doorknobs’ could not be enough to acquire the concept; something must have happened, presumably in the head! Since he thinks that it can’t be learning, there is only one option left. Concepts must work like appearance properties. Red things are the things that produce in us a certain predetermined qualitative state. Nothing fishy here, standard Empiricism, really; just as red ‘triggers’ a preset state in a sensory space, so too with doorknobs. Being a doorknob is being the kind of thing that creatures with minds like ours ‘resonate’ to. This is his controversial claim that all concepts are innate.

Now we can see why atomism is subject to the inversion argument. Let’s again take p to be ‘dogs are nice’ and q to be ‘cats are nice’. If MOST concepts are appearance concepts then we can run Block’s argument on ‘cat’ and ‘dog’ instead of ‘red’ and ‘green’. We imagine a device that, when worn, inverts the perception of cats and dogs. Thus when Tap wears the device he will see a dog where Pat sees a cat and a cat where Pat sees a dog. Imagine Pat and Tap both looking at my wiener dog Frankie. This is exactly analogous to the case before. These two are functionally identical people, yet one believes that Frankie is a dog and not a cat while the other believes that Frankie is a cat and not a dog. So functionalism cannot account for intentional states if concepts are appearance properties.

So the situation is that if one thinks that the representational theory of mind is important, and that it would be nice if something like that could work, then one is committed to atomism. But atomism means that functionalism about the attitudes can’t be right.

Unconscious Change Detection, Priming, and the Function of Consciousness

So, if you have been around here lately you will have noticed that I have been talking a lot about priming, change blindness, and the function of conscious mental states in the higher-order theory. I have been arguing that some recent results on priming effects in change blindness suggest that there is some function for conscious mental states (even/especially for those who like higher-order accounts of whatever type). David’s response to this has been to admit that this shows that there is some functionality for conscious mental states, but then to insist that it is not enough to justify calling it ‘the function of consciousness’ or anything like that. He then points out stuff like this article and argues that change detection is pretty big stuff, maybe even the stuff that you thought might turn out to be The Function of Consciousness, but even that can be done unconsciously.

But after thinking about this, I am not sure that the Fernandez-Duque et al. stuff really shows as much as David thinks it does. So, consider the experiment that Fernandez-Duque et al. did, as summed up in the figure below (from their paper).

[Figure 1 from Fernandez-Duque &amp; Thornton (2000)]

The only difference between the two pictures is whether one sees George or Not-George. Subjects then see figure b and are forced to guess which of the two highlighted bars was the one that changed. The study reports that people pick the correct one even though they say that they did not see the change.

But notice that in figure b subjects are presented with Not-George. They did not check to see what would happen if they presented subjects with George and asked the same question. Now, though they didn’t do this, the Silverman-Mack experiments predict that George should have been just as good at allowing subjects to perform above chance. This would suggest (it seems to me) that, though the subjects are conscious that there is a difference, they are not conscious of what the difference consists in. When they are conscious of the difference as the difference (that is, when they consciously see the difference), the Silverman-Mack results predict that only Not-George would show any effect; the representation of George would be suppressed. So the kind of change detection that happens consciously serves a distinct function from the kind that happens unconsciously. Conscious change detection serves to bias the system, inhibiting some representations and thereby enhancing others; unconscious change detection doesn’t. This biasing is important for survival since it helps to determine which representations can be assessed for action (like button pushing), and so this is a function for perceptual consciousness that is pretty important.

Stay tuned…there’s bound to be more of this after the big talk tomorrow!

Priming, Change Blindness, and the Function of Consciousness

This Wednesday David Rosenthal will be giving a talk at the Graduate Center entitled ‘The Poverty of Consciousness’. If you happen to be in the New York area and you have a hankering for some hot and heavy philosophy of consciousness, come on down! (See the Cog Blog for some details.)

I have been thinking about this issue in light of my last post on priming and change blindness, where I voiced my suspicion that the results pose a problem for Rosenthal’s claim about the function of consciousness. This led to some emailing between David and me, and so I figured I would take some time to sort this stuff out.

Rosenthal’s main contention is that there is no evolutionary (read: reproductive) advantage to an organism’s having conscious mental states. This is to be distinguished from the claim that there is no evolutionary advantage to the animal’s being conscious (creature consciousness), which quite obviously gives the creature a huge evolutionary advantage (e.g. being awake often helps one get away from predators…that is, unless one has taken Ambien!!!). The primary reason he thinks this is that he endorses the higher-order theory of consciousness, which claims that a mental state is conscious when I am conscious of myself as being in that state (and of course there are some experimental results which support the claim 🙂 ). This view commits one to the claim that any mental state can in principle occur unconsciously, and this seems to suggest that most of a state’s causal powers will be had by the state whether it is conscious or not. If so, then what purpose could (state) consciousness add?

When people hear this they usually think that it means that consciousness is completely epiphenomenal (has no causal efficacy). But this isn’t right, as I discussed in this post on Uriah Kriegel’s version of this argument. As Rosenthal says,

Lack of function does not imply that the consciousness of these states has no causal impact on other psychological processes, but that causal impact is too small, varied, or neutral in respect of benefit to the organism to sustain any significant function. So my conclusion about function does not imply epiphenomenalism.

His claim is that whatever causal powers a state’s being conscious endows it with, they are too ‘small, varied, or neutral with respect to benefit’ to count as serving any function. O.K., so if this is your view then you have your work cut out for you, because you have to (a) examine and refute all of the proposed functions for consciousness out there (from ‘deliberate control of action and rational inference’ to ‘enhances creativity’) and (b) provide an alternate explanation for how in the world conscious mental states ever came about in the first place (tune in on Wed. to hear Rosenthal’s answers to these questions, though I gather that he will mostly be talking about intentional states and not qualitative states).

O.K., so now enter the priming results that I talked about previously (and which Rosenthal is aware of, has read, and cites in his forthcoming papers/book on this subject). What that paper showed is this: suppose one is presented with two pictures, A and B, which have some difference, D, between them (like an extra tree or something). When one is not conscious of the difference, both A and B show priming effects (i.e. one will complete a degraded picture with what one unconsciously saw in A and B), but when one consciously notices that there is a difference between A and B, then only B (i.e. not A) shows priming effects.

Now, if this is evidence for anything, it will be evidence for there being a function for perceptual states (qualitative states). It would still be an open question what function, if any, intentional states have (unless of course one, like me, thinks that intentional states are qualitative states). But is it evidence for a function of conscious states?

I suggested that it is evidence that a state’s being conscious inhibits previous ‘outdated’ representations and so serves to guide certain representations (i.e. the conscious ones) to greater causal efficacy and so to greater effect on behavior. If this were true, it seems to me that it would definitely give some evolutionary advantage to having conscious states. Suppose, for instance, that a bear is charging at you and that there is a spear that is just out of reach. The bear is running straight at you and you are casting frantically about for something to defend yourself with. As you look around, wildly, you first see the spear out of reach, and then on another pass you see the spear within reach (say it was knocked towards you in the chaos of the bear stampeding towards you). Now let us assume that in one case you do not consciously see this difference and in the other case you do. In both cases you will have representations of the scene with the spear out of reach and with the spear within reach. But only in the case where you consciously see the change (that is, consciously see that the spear is now within reach) is the previous representation inhibited, and the representation of the spear within reach made more causally active and liable to cause you to reach for the spear and (maybe!) stave off the bear. This doesn’t seem like some minor or neutral thing. This sounds like an important function for perceptual consciousness!

During our email discussion he referred me to the following paper:

Fernandez-Duque, Diego, and Ian M. Thornton, “Change Detection without Awareness: Do Explicit Reports Underestimate the Representation of Change in the Visual System?”, Visual Cognition 7, 1-3 (January–March 2000): 324–344.

His argument seems to be that, while I am right that these results do suggest some ‘utility’ for conscious perceptual states, it is not as useful as change detection, and that can happen unconsciously! I am still thinking about that, and will come back to it…but right now I have to go and move my car for street cleaning!!!!

Some Cool Links

(via David Pereplyotchik)

Below are links to some examples of talks that fall well within the cognitive science arena. I’ve found, however, that many of the non-cogsci talks are more interesting, because they introduce one, often in a vivid way, to a subject matter that is less familiar. (For instance, Wade Davis’s talk on anthropological fieldwork was, for me, genuinely exciting.)

You can browse the talks by clicking on the topic links at the bottom right of each video’s page. Or just start here.

Enjoy.

David Pereplyotchik