Can We Think About Non-Existent Objects?

I am scheduled to record a conversation with Pete Mandik for Philosophy TV tomorrow on higher-order approaches to consciousness and in the course of preparing for it I was rereading Pete’s Unicorn paper where, among other things, Pete gives several arguments that we are in fact able to think about non-existent objects. I do not think that we can.

It may seem quite natural to think that the answer to the above question is ‘yes’. For instance, we think of Count Dracula, unicorns, Santa Claus, and many other examples of this kind. But if we take ‘thinking about’ to involve having some kind of relationship with the thing that is thought about, this can seem crazy. If I am thinking about Santa Claus, for instance, that would mean that there would have to be some object that I was related to, and since Santa doesn’t exist the object would seem to be a very strange one indeed! What should we conclude from this? Should we conclude that ‘thinking about’ doesn’t really involve a relationship between the thinker and the thing thought about?

Suppose that one accepted some kind of causal-historical account of the reference of (at least some of) our concepts and that thinking about x means tokening a thought containing a mental representation of x with the appropriate causal-historical connection to x. So, to rehearse a familiar picture: some child is born, his parents say “let’s call him ‘Saul Kripke’”, other people are told “this is Saul Kripke” and thereby acquire the ability to refer to this child. Over time this name propagates, like a chain, link by link, to us, so that when I think about Saul Kripke I employ a thought token that traces a causal-historical route back to the initial “baptism”. If this were the case, and one thought that natural kind terms worked like this as well, one would end up denying that we think about non-existent objects. The concept UNICORN has as its reference whatever it is that actually turns out to have been “baptized”. This may turn out to be a deformed goat, a hallucination, or maybe an imaginative act on the part of a person; whatever it actually turns out to be is what we are thinking about when we think about unicorns, and that thing exists. So too for Dracula, Santa Claus, Jackalopes, etc.

But what about when we think thoughts like ‘there are no square circles’? Aren’t we thinking about square circles? I don’t think we are. Rather, I think we are having an existentially quantified thought to the effect that nothing is both square and circular at the same time. Aha! Aren’t existentially quantified statements that are actually false examples of thinking about non-existent objects? If I think that the present King of France is bald, and there is no present King of France, am I not thinking about a non-existent object? Of course not! What I am thinking is that there is someone or other who is the present King of France and who is bald, and that is just plain, ordinary, boringly false. There is no non-existent object which is correctly described as the one I am thinking about.
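The Russellian analysis at work here can be made explicit (the predicate letters are my own shorthand, not from any particular text). Neither thought involves a term referring to a non-existent object; each is just a quantified claim evaluated against what exists:

```latex
% 'There are no square circles' -- a true negated existential:
\neg \exists x \, \big( \mathit{Square}(x) \wedge \mathit{Circle}(x) \big)

% 'The present King of France is bald', analyzed Russell-style --
% a plain false existential, with no constant denoting the King:
\exists x \, \big( \mathit{King}(x) \wedge \forall y\,(\mathit{King}(y) \rightarrow y = x) \wedge \mathit{Bald}(x) \big)
```

Since nothing satisfies King, the second formula is simply false; at no point does evaluating it require a non-existent object for the thought to be ‘about’.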

But isn’t denying that we can think about non-existent objects self-refuting? What have we been talking about this whole time if not whether or not there are any thoughts of this kind! So denying that there are any just shows that we have been thinking about non-existent objects all along: the very thoughts about non-existent objects that we have been discussing. But this is too quick. This is again just another example of an existentially quantified statement. ‘There are no thoughts about non-existent objects’ is really just saying that thoughts about non-existent objects don’t exist, but that does not thereby mean that I am thinking about some non-existent objects! And this is for just the same reason as above: there are no objects which can be correctly described as the ones that I am thinking about.

So I am inclined to deny that we can think about non-existent objects…I am not saying that everyone should but only that there is a reasonable view, one that we ought to accept for other reasons not gone into here, and which denies that we think about non-existent objects. What this has to do with consciousness and Pete’s unicorn argument I will save for tomorrow’s discussion.

Levine on the Phenomenology of Thought

On Wednesday I attended the inaugural session of the Graduate Center’s philosophy colloquium.  The speaker was Joe Levine and he wanted to examine two of the arguments for the phenomenology of thought as given by people like David Pitt and Charles Siewert and argue that they were not up to the task that supporters thought they were.

The two arguments were what he called the self-knowledge argument and the phenomenological argument. The self-knowledge argument claims that the only way we could have genuine acquaintance-like knowledge of our cognitive states was if they had a phenomenology. Levine rejects this argument as question-begging. The second argument he takes more seriously. The phenomenological argument points to several distinct kinds of phenomena. So, take an ambiguous sentence like ‘visiting relatives can be boring’. When one understands it to mean that the relatives who are visiting are boring and when one understands it to mean that going to visit relatives is boring there seems to be a difference, and this difference intuitively seems to be phenomenal. Or take listening to someone speaking a language you don’t understand versus one that you do. When people are speaking a language you do not understand it often sounds as though they are speaking really fast and that there are no spaces or pauses in their speaking, but this is very different from listening to a language you do understand. The idea is supposed to be that there is a distinctive cognitive phenomenology that goes beyond any associated internal monologue or mental imagery. Levine admitted that he felt there was something to these kinds of cases and argued that intuitively it is just as strong an intuition as that there is something that it is like to see red or feel pain. I agree. The question, then, is what does this force us to conclude about the phenomenology of thought?

As a contrast Levine introduced a null hypothesis, what he called the Non-Phenomenal Functional Representation thesis. NPFR, as he calls it, is basically a standard higher-order view about self-knowledge. When one knows what one is thinking one tokens a higher-order state the content of which is that one is in the first-order state. This is why the self-knowledge argument doesn’t really pull any weight. Both camps have an explanation of how we have self-knowledge. What about the phenomenological argument?

In order to respond to this Levine distinguishes two versions of the claim that there is a phenomenology that is distinctive to thought, which he calls a pure and an impure view. On the pure view there is a phenomenal character of an occurrent thought that is not tied to any sensory state, while the impure view “attributes phenomenal character only to sensory states, but allows that cognitive states can create phenomenal distinctions among otherwise identical sensory states” (from the handout). The pure view is just the usual idea that there is a distinctive phenomenology for thought. The impure view is a bit harder to get a hold of, but the basic idea seems to be based on an analogy with the way sensory states work. So, take the higher-order view about consciously seeing red. On the HOT view there is a first-order sensory state that has phenomenal character and then there is a higher-order state that represents oneself as being in a red sensory state. One can have a higher-order thought to the effect that one is in a generic red state or that one is in a specific red state and this will determine what it is like for you to have the experience, but the HOT itself has no phenomenal character. So by analogy, then, Levine’s impure view seems to be that we have a first-order state, say a hearing or seeing of ‘visiting relatives can be boring’, and one’s higher-order state can then represent it as either being about the relatives coming or your going to them, and this will result in a distinctive phenomenology. That is to say that on the impure view what it is like to hear the sentence will be different when one is aware of it one way or the other, but there is no cognitive phenomenology. All there is is auditory phenomenology of two different kinds.

I think my own view about cognitive phenomenology is similar except that I think that this can happen in the case of a propositional attitude and not just through some sensory state. For instance, when one has a conscious belief that p I claim that it will be like believing that p for you, and this is because one is aware of oneself as believing that p. This makes it a version of the pure theory. So, is there any reason to prefer the impure theory to the pure one? Levine argued that the phenomenological argument supported only the impure account and so it was no reason to think that the pure view was correct. His idea seemed to be that since the data was hearing a sentence one way versus hearing it another, we only had evidence that there were two different ways of hearing the sentence.

At the end of his talk he introduced another distinction, between transparent and opaque cognitive phenomenology. On the transparent view “what the cognitive state is about, what it is representing, constitutes the ‘look’ of the cognitive state”, while on the opaque view there is only a contingent relationship between what is represented and the cognitive state. The issue here seemed to be diagnosed by whether one thought that there was any possibility that one could find out that one was radically mistaken about what one thought. His example seemed to be the standard brain-in-a-vat scenario. If one came to be convinced that one was a brain in a vat, or that Quinean indeterminacy of reference was correct, one might come to find out that one was radically wrong about what one thought. On some theories of mental content one wouldn’t be mistaken, but let that slide. The point he was trying to make was that we could “wrap our heads” around the idea that our cognitive states are not transparent. He compared the opaque view to Block’s view about mental paint.

During discussion Levine discussed a comparison with people like David Pitt and Susanna Siegel. Siegel argues that the content of our perceptions is richer than we thought (e.g. it is part of our perception of a tree that it is a tree) and in so doing ends up making perception more like cognitive states, while people like Pitt argue that thoughts have a phenomenal feel and thereby make thoughts more like perceptions. This led some to wonder how we might distinguish between the two kinds of states. On my own view this is wrongheaded. What we should take this as is a trajectory towards a unified account of the mark of the mental.

Identifying the Identity Theory

While I was perusing the new entries over at PhilPapers yesterday I came across Tom Polger’s forthcoming paper in Philosophical Psychology, Are Sensations Still Brain Processes? The paper is very interesting (disclaimer: I have a special interest in this stuff; see for instance The Identity Theory in 2-D) and I thought I would summarize its main points and then say something about where we disagree towards the end.

In the first part of the paper Polger identifies eight theses that Smart defended in his celebrated paper. These are:

1. Sensation reports are genuine reports

2. Sensation reports do not refer to anything irreducibly psychical

3. Sensations are “nothing over and above” brain processes

4. Sensations are identical to brain processes

5. The identity theory is a metaphysical theory, not a semantic proposal or an empirical hypothesis

6. Metaphysical theories of the nature of the mind do not make competing empirical predictions; so they should be evaluated by their theoretical virtues, e.g., simplicity and parsimony

7. For any thing or kind x, there are “logically” necessary conditions for being a thing of that kind.

8. Sensation expressions are topic-neutral.

The first claim is simply an endorsement of realism about phenomenal consciousness. Claims 2-4 spell out commitments to physicalism and the identity theory. Claims 5 and 6 spell out Smart’s distinctive views about the identity theory. Claim 7 basically asserts that we can know a priori what pains are essentially. Claim 8 amounts to the idea that concepts like ‘pain’ etc. do not entail a commitment to any kind of ontology all by themselves.

Polger goes on to argue that every one of these but 7 should be accepted by contemporary identity theorists. Claim 6 should be accepted but not interpreted too narrowly. The identity theory should be accepted for broadly ‘inference to the best explanation’ reasons. Parsimony and simplicity play a role in that inference but there are other things that also play a role; as Polger says, “There are also what Jaegwon Kim has called explanatory and causal arguments for the identity theory”. The reason that 7 should be rejected, according to Polger, is the kind of resources that Kripkean arguments give to the identity theorist. In place of 7 above Polger suggests 7*:

7*. A Posteriori. The identity of sensations and brain processes is a posteriori.

7* is then an updated version of Smart’s claim that mind/brain identities are to be construed as ordinary scientific identities. We now have a post-Kripkean understanding of these kinds of identities and the contemporary identity theory should reflect that.

In the second part of the paper Polger goes on to formulate a master argument against the identity theory that he thinks subsumes all arguments against it and then responds to the various particular objections. The master argument goes as follows:

(P1) If the identity theory is true, then there is a necessary one-to-one relation between sensations and brain processes. (necessity of identity)

(P2) If VARIATION then there is not a necessary one-to-one relation between sensations and brain processes. (definition of VARIATION)

(P3) VARIATION.

(P4) There is not a necessary one-to-one relation between sensations and brain processes. (P2, P3)

(C2) The identity theory is false. (P1, P4)
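As a check on its validity, the master argument can be put schematically, with IT for the identity theory, N for ‘there is a necessary one-to-one relation between sensations and brain processes’, and V for VARIATION (the letters are my shorthand, not Polger’s):

```latex
\begin{align*}
&\text{(P1)} & IT &\rightarrow N \\
&\text{(P2)} & V &\rightarrow \neg N \\
&\text{(P3)} & V & \\
&\text{(P4)} & \neg N & \quad \text{(modus ponens, P2, P3)} \\
&\text{(C2)} & \neg IT & \quad \text{(modus tollens, P1, P4)}
\end{align*}
```

The form is plainly valid, so everything turns on the premises, above all on P3; this is why the particular objections below amount to different ways of cashing out VARIATION.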

The particular objections that we find spell out varieties of variation claims: actual, nomological, metaphysical, and logical. Polger identifies a major figure with each style of objection. So, Putnam’s worries about octopi and Fred’s pain at 6:00 vs. Fred’s pain at 6:15 count as actual variation, while Fodor’s worries count as nomological, Kripke’s modal argument is metaphysical, and Chalmers’s zombie argument is logical. All of these arguments are united by trying to show that there is or can be variation.

Polger has a lot of interesting things to say in response to each of these objections. Against actual variation he argues that even if we grant, as we might not, that we find the very same psychological properties across species on Earth (that is to say, even if an octopus can feel the very same kind of sensation that I do when I experience pain) there is still very little reason to think that psychological properties are multiply realizable in a way that is threatening to the identity theory. Sure, there may be differences between species, but that is no reason to rule out similarities a priori! Some people cite neural plasticity as a possible source of trouble. To this Polger replies, “evidence from plasticity is compatible with the neurobiological variations being variants within a more general kind that is also neurobiological.” Lacking any reason to believe in actual variation, we also have no reason to believe in nomological variation. What about metaphysical variation? Here Polger endorses type-b physicalism and argues that Kripke’s argument is question-begging. If the mind-brain identities are true then they are necessarily true. This leads Polger to the last kind of variation, which he calls logical variation. It is here that we find Polger’s discussion of Chalmers’s zombie argument. His main complaint is that the argument rests on an assumption about the nature of reduction that the type-b physicalist will reject.

In the final section of the paper Polger introduces two further claims which he thinks should be endorsed by contemporary identity theorists.

9. Variability. Sensation processes are multiply constituted.

10. Strong Physicalism. Physicalism is necessarily true; all worlds are physicalist worlds.

In defense of accepting 9 Polger argues as follows,

accepting that there is…variability in the world is a far cry from accepting that it is the kind of variability that would be problematic for identity theories. Identity theories claim that sensations are brain processes, but they do not take any stand on the nature of brain processes. In particular, the identity theorist need not suppose that the world is organized into homogeneous columns of organization so that there is a one-to-one relation between sensations and microphysical processes. The identity theorist identifies sensations with brain processes, not with molecular or subatomic processes that occur inside brains.

I have always been sympathetic to this kind of argument and have seen some of my own work as generally supporting it. But what about 10? Why ought we accept that? The basic reason is to avoid the following reductio of the identity theory:

C1. Sensations are identical to brain processes in all possible worlds. (identity theory)

C2. Physicalism is contingent; there are some non-physicalist worlds containing non-physical sensations. (contingent physicalism)

C3. There are some worlds in which sensations are not identical to brain processes. (from C2)

C4. The identity theory is false.
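In modal terms the reductio runs as follows, with S = B abbreviating ‘sensations are identical to brain processes’ and P for ‘physicalism is true’ (again, the letters are my shorthand):

```latex
\begin{align*}
&\text{(C1)} & \Box\,(S = B) & \quad \text{(identity theory, via the necessity of identity)} \\
&\text{(C2)} & \Diamond\,\neg P & \quad \text{(contingent physicalism)} \\
&\text{(C3)} & \Diamond\,(S \neq B) & \quad \text{(from C2: non-physicalist worlds contain non-physical sensations)} \\
&\text{(C4)} & \neg\Box\,(S = B) & \quad \text{(from C3; contradicts C1)}
\end{align*}
```

Polger’s strong physicalism (claim 10) denies C2, so C3 never gets off the ground.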

Polger’s answer to this argument is to give up C2 thereby blocking C3. This may seem dramatic and I take 10, together with 7*, to entail that there are strong necessities in Dave Chalmers’ sense, “but”, says Polger, “so it goes. Just as there are necessary a posteriori truths, there are necessary a posteriori falsehoods.”

But it is just at this point that the difference emerges between the kind of identity theory that Polger has and one that is cast in 2-D. Once we start thinking in two-dimensional semantics we can see an equivocation in the reductio. C1 should be modified as C1*:

C1*. The secondary intension of ‘sensations are brain processes’ is necessary; the primary intension of ‘sensations are brain processes’ is contingent. (identity theory in 2-D)

Once we do that, the reductio no longer worries us. Adopting C1* is tantamount to a compromise between 7 and 7*. In effect we agree that there is an a priori knowable description or reference fixer and an a posteriori identified physical state. Given that we know independently that identities like this are 2-necessary in Dave Chalmers’ sense we can conclude that those identities are necessary in spite of possible worlds where the a priori knowable description picks out a non-physical property.
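The equivocation can be displayed in 2-D notation. Writing the primary intension of a sentence as a function from worlds considered as actual, and the secondary intension as a function from worlds considered as counterfactual (the double-bracket notation is mine), C1* says:

```latex
% Secondary intension: necessary -- true at every world
% considered as counterfactual:
\forall w : \; [\![\, S = B \,]\!]^{2}(w) = \text{T}

% Primary intension: contingent -- false at some world considered
% as actual, e.g. one where the reference-fixing description for
% 'sensation' picks out a non-physical property:
\exists w : \; [\![\, S = B \,]\!]^{1}(w) = \text{F}
```

The reductio’s C2 is plausible only as a claim about the primary intension, while C1 concerns the secondary intension; once these are separated, C3 does not follow.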

Applied Mathematics and Scrutability

Also via Leiter’s blog I was perusing the Philosopher’s Annual list of the ten best papers of 2008. The paper on Mill is very interesting and I have heard a lot about belief and alief lately but what really caught my attention is Penelope Maddy’s How Applied Mathematics Became Pure.

The whole paper is really very interesting and I would highly recommend that you read the whole thing but I want to quickly discuss one of the morals that she draws from the story she tells. She says,

This story has morals, it seems to me, about how mathematics functions both in application and in its pure pursuit. One clear moral for our understanding of mathematics in application is that we are not in fact uncovering the underlying mathematical structures realized in the world; rather, we are constructing abstract mathematical models and trying our best to make true assertions about the ways in which they do and do not correspond to the physical facts. There are rare cases where this correspondence is something like isomorphism – we have touched on elementary arithmetic and the simple combinatorics of beginning statistical mechanics, and there are probably others, like the use of finite group theory to describe simple symmetries – but most of the time, the correspondence is something more complex, and all too often, it is something we simply do not yet understand: we do not know the small-scale structure of space-time or the physical structures that underlie quantum mechanics. And even this leaves out the additional approximations and accommodations required to move from the initial mathematical model to actual predictions.

I wonder whether, if this is right, it causes problems for the kinds of scrutability claims that David Chalmers wants to defend, and with which for the most part I am highly sympathetic (of course where we differ is over whether we need to include phenomenal truths in the base truths or not…I think probably not, since they can be derived just as easily as other ordinary macroscopic truths).

The problem, it seems to me, is that if this is right (i.e. if at the limit we do not end up with a unified mathematical model of the world but rather patchwork models that apply only in various respects) then which mathematical model we apply, or which assumption we make, will crucially depend on empirical knowledge (for instance, knowing that the equations for a harmonic oscillator are a good model of a molecule’s vibration only in the region of the minimum (see page 35)). Am I missing an easy response?

I’ll have to think about it later because now I’m off to Jared Blank’s cogsci talk.

Zombies and Impossible Worlds

Via Leiter’s blog I happened to be looking at the list of recent SEP entries. I read this entry on impossible worlds that got me thinking.

The response to the zombie argument that I have been developing over the last couple of years appeals to the distinction between prima facie and ideal conceivability. Something is prima facie conceivable, roughly, if there is no obvious contradiction in the imagined scenario. Something is ideally conceivable if, roughly, there is no contradiction in the imagined scenario even upon ideal reflection. I have tried to argue that zombies are merely prima facie conceivable and may not turn out to be ideally conceivable (another way of putting it that is roughly equivalent is that zombies are epistemically possible but not metaphysically possible) since there are equally plausible parity arguments (zoombies and shombies). As a corollary of this line of defense I have argued that what people like Dave actually succeed in imagining when they *think* they imagine the zombie world is really just a world that is very similar to the actual world. Just as a point of clarification, I have always meant this to be a different claim than the Russellian response that the zombie world may have different ‘inscrutable’ fundamental physical properties. What I mean is that since we do not yet know all of the facts about the brain, physics, or theories of consciousness, we may be inadvertently failing to include some crucial physical law, property, or theory of consciousness. So it is very easy, I claim, to imagine a world that is physical in roughly the same sense that ours is but where there is no consciousness. For instance, if the higher-order thought theory of consciousness is right then the ‘zombie’ world is really just a world like ours that lacks higher-order thoughts.

Now people like Dave often claim that they can conceive that we add this feature and yet still it is intuitive that those creatures could lack consciousness. If this is really the case and the higher-order theory is true then Dave has imagined an impossible world. But it seems to me that we can at this point admit that the traditional zombie world is conceivable and go on to argue for a restriction on the second premise of the zombie argument, which, to remind us, is the claim that if zombies are conceivable then they are possible. This premise becomes possibly false since it may be the case that zombies are conceivable but not metaphysically possible, where this means that they inhabit an impossible world.
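Put schematically, the restriction looks like this (C for conceivability, the diamond for metaphysical possibility; the subscripts mark the prima facie / ideal distinction and are my own labels):

```latex
% Original second premise of the zombie argument:
C(\text{zombies}) \rightarrow \Diamond\,\text{zombies}

% Restricted version: only ideal conceivability entails possibility;
% a merely prima facie conceivable scenario may be an impossible world:
C_{\mathit{ideal}}(\text{zombies}) \rightarrow \Diamond\,\text{zombies}
\qquad
C_{\mathit{pf}}(\text{zombies}) \nrightarrow \Diamond\,\text{zombies}
```

On this reading zombies can be granted to be prima facie conceivable while still inhabiting an impossible world.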

One response to this line of thought might be that the use of ‘conceivability’ here isn’t the same as that employed by the zombie argument. As used by Dave, ‘conceivable’ means roughly imaginable without contradiction, but in these impossible worlds we conceive of a world with a contradiction (by stipulation it contains a contradiction). But, of course, the point here is that one may not notice or be in a position to spot the contradiction, which is exactly one of the reasons for postulating impossible worlds (or, in this case, impossible scenarios in Dave’s sense). If one takes this line, as I am inclined to do myself, then the issue reduces to the original one of the difference between prima facie conceivability and ideal conceivability. But if one has a more generic version of conceivability one can argue that zombies are conceivable and impossible in a way that seems different from the usual type-b line…

Explaining Consciousness & Its Consequences

Yesterday I presented Explaining Consciousness and its Consequences at the CUNY Cognitive Science Speaker Series which was a lot of fun and a very fruitful discussion. I have a narrated powerpoint rehearsal of the talk and those that are interested can look at that at the end of this post but here I want to discuss some of the things that came up in the discussion yesterday.

The core of the puzzle that I am pressing lies in asking why it is that conscious thoughts are not like anything for the creature that enjoys them. My basic claim is that if one started with the theory of phenomenal consciousness and qualitative character and came to understand and accept it, but one hadn’t yet thought about conscious thoughts, one would expect that the theory would produce cognitive phenomenology. Granted, it wouldn’t be like the phenomenology of our sensations –seeing blue consciously is very different from consciously thinking that there is something blue in front of one– but why is it so different that in one case there is nothing that it is like whatsoever while in the other case there is something that it is like for the creature? The only difference between the contents of HOTs about qualitative states and HOTs about intentional states is that one employs concepts of mental qualities whereas the other employs concepts about thoughts and their intentional contents, yet in one case conscious phenomenology –which is to say that there is something that it is like for the creature to have those conscious mental states– in all its glory is produced while in the other case nothing happens. As far as the creature is concerned it is a zombie when it has conscious thoughts. But what could account for this very dramatic difference? It looks like we haven’t really explained what phenomenal consciousness is; all we have done is re-locate the problem to the content of the higher-order thought. This is because no answer can be given to my question except “that’s how phenomenal concepts work” and so we have admitted that they are special.

Now one thing that came up in the discussion, raised by David Pereplyotchik, was what I meant by ‘special’ in the above. David P. suggested that qualitative properties may be distinctive without being special. I agree that they are distinctive and that is the reason that thinking that p and seeing blue are different. We move from distinctive to special when we deny that conscious thoughts have a phenomenology because we can’t explain why they don’t.

One detail that came out was that the way I formulated the HOTs and their contents was misleading. Instead of “I think I see blue*” the HOT has the content “I am in a blue* state”.

At some point David said that when he had a conscious thought what it was like for him was like feeling one was about to say the sentence which would express the thought. So when one thinks that there is something blue in front of one, what it is like for that creature is like feeling that they were about to say “there is something blue in front of me”. When I said ‘aha, so there is something that it is like for you to have a conscious mental state’ he responded “what does that mean?” This challenge to my use of the phrase “what it’s like for one” was a main theme of the discussion. A lot of the time I ask whether or not there is something that it is like for one to have a conscious thought, and if not why not, but David objected that the phrase is multiply ambiguous and is used to confuse the issue more than anything else. One way this came out was in his challenging me to explain what was at stake. What difference is made if we say that there is something that it is like for one to have a conscious thought and what is lost if we deny it? I responded that it is obvious what the reference of the phrase ‘what it is like for one’ is. It is the thing that would be missing in the zombie world. David responded that the zombie world was impossible, which I agree with at the end of a long theoretical journey, but we can still intuitively make sense of the zombie world even if only seemingly. That is, even if it is the case that zombies are inconceivable we still know what it would mean for there to be zombies, and that still helps us zero in on what the explanatory problem is. I take it that the whole point of the ambitious higher-order theory is that it tries to explain how this property, the one we single out via the phrase ‘what it is like for one’ and the zombie and Mary cases, could be a perfectly respectable natural property.
So what is at stake is whether or not I really am like a zombie when I have a conscious thought and what that means for the higher-order thought theory. If we cannot account for the difference between intentional conscious states and qualitative conscious states then we have not explained anything.

David’s main response to my argument seemed to be to appeal to the different ways in which the concepts that figure in our HOTs are acquired. In the case of the qualitative states we acquire the concepts that figure in our HOTs roughly by noticing that our sensations misrepresent things in the world. So, if I mistakenly see some surface as red and then come to find out that it isn’t red but is, say, under a red light and is really white, this will cause me to have a thought to the effect that the sensation is inaccurate and this requires that I have the concept of the mental quality that the state has. In the case of intentional states the story is different. We are to imagine that there is a creature that has concepts for intentional states but only applies them on the basis of third person behavior. This creature will have higher-order thoughts but they will be mediated by inference and will not seem subjectively unmediated. Eventually this creature will get to the point where it can apply these concepts to itself automatically at which point it will have conscious thoughts. This difference is offered as a way of saying what is different about the concepts that figure in HOTs about qualitative states and those that figure in HOTs about intentional states. It amounts to an elaboration of David Pereplyotchik’s suggestion early on that the qualitative properties are distinctive without being mysterious. They are distinctive in the way that concepts are acquired. But as before how can this be an answer to the question I pose? I agree that there is this difference for the sake of argument. What seems to me to follow from this is what I said before; namely that the phenomenology of thought and the phenomenology of sensations is not the same…but this should be obvious already. So, the claim is not that having a conscious thought should be like seeing blue for me or feel like a conscious pain for me only that it should be like something for me. 
Basically, then, my response is that this difference in acquisition will make a difference in what it is like for the creature, but it doesn’t explain so drastic a difference as the complete absence of there being something that it is like for one in the one case.

Another way I like to put the argument is in terms of mental appearances. David Rosenthal often says that what it is like for one is a matter of mental appearances, at which point I argue that the HOT is what determines the mental appearances, and so in the case of thinking that p it should appear to me as though I am thinking that p. In response to this David said that while it is the case that phenomenology is a matter of mental appearances, it might not be the case that all mental appearances are phenomenological. At this point I have the same response as before…viz. what reason do we have to think that there are these two kinds of appearances? It looks like one is just inserting this into the theory by fiat to solve an unexpected problem. There is no theoretical machinery which explains why we have this disparity. When we ask why applying starred concepts results in the appearance of qualitative phenomenology while applying intentional concepts does not result in intentional phenomenology, we are simply told that this is the way phenomenology works. It is as mysterious as ever.

At the close of the talk I touched briefly on Ned Block’s recent paper “The Higher-Order Theory is Defunct,” which raises a new objection to the higher-order theory based on the consequences of explaining consciousness as outlined here. The problem that Ned sees is that when one has an empty HOT one has an episode of phenomenal consciousness that is real but that is not the result of a higher-order thought. David’s response seems to be to fall back on his denial that there are ever actually cases of empty higher-order thoughts. I brought up Anton’s syndrome, and David responded that in Anton’s syndrome we don’t have any evidence that the patients actually have visual phenomenology: they don’t want to admit that they are blind, but when we ask them to tell us what they see they can’t. If there are never empty higher-order thoughts then Block’s problem goes away.

My response to this problem is to identify the property of p-consciousness with the higher-order thought while still identifying the conscious mental state as the target of the HOT, but at that point we adjourned to Brendan’s for some beer and further discussion.

During the discussion at Brendan’s we talked a little bit about my suggestion that we develop a homomorphism theory of the mental attitudes. David and Myrto wanted to know how many similarities there were between sensory homomorphisms and the mental attitudes. In the sensory case we build up the quality space by presenting pairs of stimuli and noting what kinds of discriminations the creature can make; we end up constructing the quality space from these discriminatory abilities. So, what kind of discriminations would do the work in the mental attitude case? I suggested that maybe we could present pairs of sentences and ask subjects whether they expressed the same thought or different thoughts. Dan wanted to know what the dimensions of the quality space for mental attitudes would be. I suggested that one would be degree of conviction, so that whether one doubts something, believes it firmly, or just barely believes it will be one dimension of difference, but I have yet to think of any others. This has always been a project I hope to get to at some point…right now it’s just a pretty picture in my head…

Ok well I feel like I have been writing this all day so I am going to stop…


Dream a Little Dream

One of the other issues that came up at Miguel’s cogsci talk was that of the empirical testability of the HOT theory. Miguel suggested that we might have the following argument against HOT. Experimental evidence suggests very strongly that the dorsolateral prefrontal cortex (DLPFC) is likely to be the home of HOTs. David has said several times that if we did not find activity in the DLPFC when we had evidence that there were conscious mental states, this would be very bad for the HOT theory. So if we think that we have conscious mental states in our dreams, and we accept the evidence that shows that the DLPFC is deactivated during REM sleep, this would seem to count as evidence against the HOT theory. David seemed to think that there were basically two plausible responses to this argument. One could deny that there are conscious mental states during dreaming, or one could argue that the HOTs have a summer home that we haven’t found yet. A lot of the discussion centered on whether or not we have any evidence that dreams are conscious in the way we think they are. David argued that we don’t; Miguel that we do.

David’s argument seemed to me to be the following. The evidence we have that dreams are conscious consists in the reports that people make when they are awake and remembering the dream. But it is equally consistent with this evidence that the dreams were all unconscious and only seem to have been conscious when we reflect on them in the morning. Miguel seemed to think that it was obvious that dreams are conscious. I suggested that perhaps the kind of work that Eric does on dreams suggests that our naive views about dreams are wrong. Pete suggested that we had good experimental evidence that dreams are conscious from the kind of studies where subjects are given instructions of the sort that if they see a flashing object in the dream they should clap five times; during REM sleep subjects can then be seen to make clapping motions. But is it clear that this counts as a report in the relevant sense? This activity could be the result of unconscious dreams just as well as of conscious dreams. In David’s terminology we can ask whether the clapping is an expression of their mental states or whether it is a report. If it truly counts as a report and there is no activity in the DLPFC then David’s view would be in trouble. During the discussion the phenomenon of lucid dreaming also came up, and David reported that in lucid dreaming the DLPFC is active, so lucid dreams count as conscious mental states.

This got me to thinking: how could we devise an actual empirical test of these kinds of issues? Hakwan suggested an interesting conceptual approach earlier, which led me to think about binocular rivalry. If you could have subjects in a scanner looking at stimuli that are known to induce binocular rivalry, without having the subjects do any kind of reporting, we could then look at the DLPFC and see if the activity there reliably correlates with the conscious percept. A quick search on this led me to this article, which seems to get results that line up with the HOT theory very nicely, though with scalp EEG and with a button push, which is a confound…

The New New Dualism

Yesterday I attended Miguel Angel Sebastian’s cogsci talk entitled “The Subjective Character of Experience: Against HOR and SOR Theories,” which was very interesting. Miguel was primarily trying to show that higher-order and same-order representationalist theories of consciousness cannot account for the subjective character of an experience, by which he means whatever accounts for the experience being for the subject. His main complaint seemed to be that in order to account for this we need some notion of the self, and so he suggested a model on which representations of the self interact with representations of objects so that we end up with a representation of the form “x for-me”. There were several interesting themes in the discussion, and if I have time I will probably come back to some of them, but I thought I’d start with this one.

In response to the mismatch problem David has settled on the following view. The phenomenology goes with the HOT. The sensory qualities of the first-order state play no role –other than that of concept acquisition– in determining the phenomenal character of a conscious experience. So in the case of Dental Fear the subject has a first-order state with vibration sensory qualities and a HOT that they are in pain, so their conscious phenomenology is that of pain. The first-order sensory qualities play a perceptual role in the mental economy of the subject, so having them is important, but they play no role as far as consciousness is concerned. In fact even if there is no first-order state at all (as may perhaps be the case in Anton’s syndrome) the phenomenology goes with the HOT. Now, in cases where there is no first-order state one still counts as being in a conscious state. The mental state that is conscious is just the one that the HOT represents oneself as being in, and so in this case the conscious mental state is a notional state, which is to say that it doesn’t exist. It follows from this that there are conscious mental states that have no neural correlates. We thus end up with a dualism about consciousness of a new variety: some conscious mental states exist physically in the brain, while other conscious mental states exist only notionally as the content of a HOT.

What should our reaction to this be? When this first became clear at David’s Mind and Language seminar it prompted Steve Stich to shout ‘he’s worse than a dualist!’ Miguel seemed to think that at the very least this is a cost of the theory, and that if you can have a theory that explains all the data without it, that theory is preferable. David refused to say that this was even a cost for the theory; in fact he seemed to suggest that it wasn’t even counter-intuitive. His reasons seemed to be as follows. I can have a thought about things which are not present, and those notional objects can have properties. So, if I think about a squirrel I might think of it as brown and bushy even if there is no squirrel around; yet the squirrel has properties: it is brown and bushy. Thus it is simply a fact about intentional states like thoughts that their contents can be notional and that those notional objects can be said to have properties. If that is right then there is nothing fundamentally mysterious about notional mental states having properties. The second step in his defense seemed to involve an appeal to hallucinations. We hallucinate regularly enough for it to be a commonplace of folk psychology. Why doesn’t it make sense to say that we can hallucinate mental states? On this line the notional state is just like my hallucination of a pink elephant: it seems to be there from my point of view but it isn’t really there. This isn’t mysterious, since it simply means that I represent myself as being in a state that I am not in. Now, given various theoretical assumptions this will indeed turn out not to be counter-intuitive, and since those who do find it counter-intuitive will do so because of different theoretical assumptions, I suppose I can see why David thinks that this is not a cost to the theory.

But suppose that one had different theoretical assumptions? Suppose that one wanted to avoid this kind of existence dualism and so endorsed some principle like this: for every conscious mental state there is a corresponding brain state. But suppose one also wanted to remain a higher-order theorist…what are the options? The most obvious option is to identify the phenomenally conscious state with the HOT. The HOT is not introspectively conscious –for that it would need a third-order state targeting it– but it is phenomenally conscious. It is the state in virtue of which there is something that it is like for the subject, and so it seems natural to identify the property of phenomenal consciousness with having the HOT. Ned Block has argued that if one does this then one has falsified the higher-order theory. Why? The transitivity principle says that a conscious mental state is one which I am conscious of myself as being in, but on the previous analysis we have a phenomenally conscious mental state (the HOT itself) which we are not conscious of ourselves as being in (there is no third-order HOT); thus adopting this view falsifies the transitivity principle. But this may be too quick. This way of formulating the transitivity principle leads us to the view that the HOT transfers or confers the property of being conscious on the first-order state, but as we have seen what the transitivity principle really says is that a conscious mental state consists in my being conscious of myself as being in some first-order state. That is, the transitivity principle is a hypothesis about the nature of conscious mental states. It is a misreading of the transitivity principle to take it to postulate that consciousness results from a relation between the first-order state and the higher-order state. That this is the dominant way of interpreting the transitivity principle is not in doubt; it most certainly is. However, it is misleading and causes way too many problems.
I think higher-order theorists need to be more explicit about this misreading of the transitivity principle.

To me the second is the best option. However, lots of people seem to think that if one adopts a same-order theory one can avoid these kinds of issues. Since one takes the conscious mental state to be a complex of a first-order content and a second-order content that represents the first-order content, we don’t have to worry about notional states. But it is far from obvious that this theory has any advantages over the HOT theory. First, it is unclear why the higher-order content cannot occur without the first-order content. This seems like an empirical issue that can’t be settled by definitional fiat (I guess I think Anton’s syndrome might be a problem here). Second, even if it turns out that you can’t have one without the other, it is still not clear why there cannot be a content mismatch. Why can’t a red first-order state be coupled with a higher-order content that represents the first as green?