Explaining Consciousness & Its Consequences

Yesterday I presented "Explaining Consciousness and Its Consequences" at the CUNY Cognitive Science Speaker Series, which was a lot of fun and prompted a very fruitful discussion. I have a narrated PowerPoint rehearsal of the talk; those who are interested can find it at the end of this post. Here I want to discuss some of the things that came up in yesterday's discussion.

The core of the puzzle that I am pressing lies in asking why conscious thoughts are not like anything for the creature that enjoys them. My basic claim is that if one started with the theory of phenomenal consciousness and qualitative character, came to understand and accept it, but hadn't yet thought about conscious thoughts, one would expect the theory to produce cognitive phenomenology. Granted, it wouldn't be like the phenomenology of our sensations (seeing blue consciously is very different from consciously thinking that there is something blue in front of one), but why is it so different that in one case there is nothing that it is like whatsoever while in the other there is something that it is like for the creature? The only difference between the contents of HOTs about qualitative states and HOTs about intentional states is that one employs concepts of mental qualities whereas the other employs concepts of thoughts and their intentional contents. Yet in one case conscious phenomenology in all its glory is produced (which is to say that there is something that it is like for the creature to have those conscious mental states), while in the other case nothing happens. As far as the creature is concerned, it is a zombie when it has conscious thoughts. But what could account for this very dramatic difference? It looks like we haven't really explained what phenomenal consciousness is; all we have done is relocate the problem to the content of the higher-order thought. This is because no answer can be given to my question except "that's how phenomenal concepts work," and so we have admitted that they are special.

Now one thing that came up in the discussion, raised by David Pereplyotchik, was what I meant by 'special' in the above. David P. suggested that qualitative properties may be distinctive without being special. I agree that they are distinctive, and that is the reason that thinking that p and seeing blue are different. We move from distinctive to special when we deny that conscious thoughts have a phenomenology while being unable to explain why they lack one.

One detail that came out was that the way I formulated the HOTs and their contents was misleading. Instead of "I think I see blue*," the HOT has the content "I am in a blue* state."

At some point David said that when he has a conscious thought, what it is like for him is like feeling that he is about to say the sentence which would express the thought. So when one thinks that there is something blue in front of one, what it is like for that creature is like feeling that it is about to say "there is something blue in front of me." When I said "aha, so there is something that it is like for you to have a conscious mental state," he responded, "what does that mean?" This challenge to my use of the phrase "what it's like for one" was a main theme of the discussion. A lot of the time I ask whether or not there is something that it is like for one to have a conscious thought, and if not, why not, but David objected that the phrase is multiply ambiguous and is used to confuse the issue more than anything else. One way this came out was in his challenging me to explain what is at stake. What difference is made if we say that there is something that it is like for one to have a conscious thought, and what is lost if we deny it? I responded that it is obvious what the reference of the phrase 'what it is like for one' is: it is the thing that would be missing in the zombie world. David responded that the zombie world is impossible, which I agree with at the end of a long theoretical journey, but we can still intuitively make sense of the zombie world, even if only seemingly. That is, even if zombies are inconceivable, we still know what it would mean for there to be zombies, and that helps us home in on what the explanatory problem is. I take it that the whole point of the ambitious higher-order theory is that it tries to explain how this property, the one we single out via the phrase 'what it is like for one' and the zombie and Mary cases, could be a perfectly respectable natural property. So what is at stake is whether or not I really am like a zombie when I have a conscious thought and what that means for the higher-order thought theory. If we cannot account for the difference between intentional conscious states and qualitative conscious states then we have not explained anything.

David's main response to my argument seemed to be to appeal to the different ways in which the concepts that figure in our HOTs are acquired. In the case of the qualitative states we acquire the concepts that figure in our HOTs roughly by noticing that our sensations misrepresent things in the world. So, if I mistakenly see some surface as red and then come to find out that it isn't red but is, say, under a red light and is really white, this will cause me to have a thought to the effect that the sensation is inaccurate, and this requires that I have the concept of the mental quality that the state has. In the case of intentional states the story is different. We are to imagine a creature that has concepts for intentional states but applies them only on the basis of third-person behavior. This creature will have higher-order thoughts, but they will be mediated by inference and will not seem subjectively unmediated. Eventually the creature will get to the point where it can apply these concepts to itself automatically, at which point it will have conscious thoughts. This difference is offered as a way of saying what is different about the concepts that figure in HOTs about qualitative states and those that figure in HOTs about intentional states. It amounts to an elaboration of David Pereplyotchik's earlier suggestion that the qualitative properties are distinctive without being mysterious: they are distinctive in the way the concepts are acquired. But as before, how can this be an answer to the question I pose? For the sake of argument I agree that there is this difference. What seems to me to follow from it is what I said before, namely that the phenomenology of thought and the phenomenology of sensations are not the same…but this should be obvious already. So, the claim is not that having a conscious thought should be like seeing blue for me or feel like a conscious pain for me, only that it should be like something for me. Basically, then, my response is that this difference will make a difference in what it is like for the creature, but it does not explain so drastic a difference as the complete absence of anything that it is like for one in one case.

Another way I like to put the argument is in terms of mental appearances. David Rosenthal often says that what it is like for one is a matter of mental appearances, at which point I argue that the HOT is what determines the mental appearances, and so in the case of thinking that p it should appear to me as though I am thinking that p. In response to this David said that while it is the case that phenomenology is a matter of mental appearances, it might not be the case that all mental appearances are phenomenological. At this point I have the same response as before: what reason do we have to think that there are these two kinds of appearances? It looks like one is just inserting this into the theory by fiat to solve an unexpected problem. There is no theoretical machinery which explains why we have this disparity. When we ask why applying starred concepts results in the appearance of qualitative phenomenology while applying intentional concepts does not result in intentional phenomenology, we are simply told that this is the way phenomenology works. It is as mysterious as ever.

At the close of the talk I touched briefly on Ned Block's recent paper "The Higher-Order Theory is Defunct," which raises a new objection to the higher-order theory based on the consequences of explaining consciousness as outlined here. The problem that Ned sees is that when one has an empty HOT one has an episode of phenomenal consciousness that is real but that is not the result of a higher-order thought. David's response seems to be to fall back on his denial that there are ever actually cases of empty higher-order thoughts. I brought up Anton's syndrome, and David responded that in Anton's syndrome we don't have any evidence that the patients actually have visual phenomenology. They don't want to admit that they are blind, but when we ask them to tell us what they see, they can't. If there are never empty higher-order thoughts then Block's problem goes away.

My response to this problem is to identify the property of p-consciousness with the higher-order thought while still identifying the conscious mental state as the target of the HOT, but at that point we adjourned to Brendan's for some beer and further discussion.

During the discussion at Brendan's we talked a little bit about my suggestion that we develop a homomorphism theory of the mental attitudes. David and Myrto wanted to know how many similarities there were between sensory homomorphisms and the mental attitudes. In the sensory case we build up the quality space by presenting pairs of stimuli and noting what kinds of discriminations the creature can make; what we end up doing is constructing the quality space from these discriminatory abilities. So, what kinds of discriminations would happen in the mental attitude case? I suggested that maybe we could present pairs of sentences and ask subjects whether they expressed the same thought or different thoughts. Dan wanted to know what the dimensions of the quality space for mental attitudes would be. I suggested that one would be degree of conviction, so that whether one doubts something, believes it firmly, or just barely believes it will be one dimension of difference, but I have yet to think of any others. This has always been a project I hope to get to at some point…right now it's just a pretty picture in my head…
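To make that picture slightly more concrete, here is a minimal sketch of what constructing a quality space from pairwise discrimination data could look like. Everything in it is an illustrative assumption on my part: the items, the dissimilarity numbers, and the use of multidimensional scaling as the method for recovering the space are placeholders, not anything proposed in the talk or at Brendan's. It just shows how "the creature discriminates these pairs at these rates" can be turned into a low-dimensional space whose axes are candidate quality dimensions (degree of conviction might be one such axis in the mental-attitude case).

```python
# A minimal sketch (illustrative only): recovering a low-dimensional
# "quality space" from hypothetical pairwise discrimination data.
# The items and numbers below are made up, and multidimensional
# scaling (MDS) is just one standard way to embed dissimilarity data.
import numpy as np
from sklearn.manifold import MDS

# The items could be color chips in the sensory case, or sentences
# judged as expressing the same/different thoughts in the attitude case.
stimuli = ["item_A", "item_B", "item_C", "item_D"]

# Hypothetical dissimilarity matrix: entry (i, j) is how reliably the
# pair (i, j) gets discriminated (0 = never told apart, 1 = always).
# It must be symmetric with zeros on the diagonal.
dissimilarity = np.array([
    [0.0, 0.2, 0.7, 0.9],
    [0.2, 0.0, 0.6, 0.8],
    [0.7, 0.6, 0.0, 0.3],
    [0.9, 0.8, 0.3, 0.0],
])

# Embed the items in two dimensions so that distances in the embedding
# approximate the discrimination data; the recovered axes are candidate
# dimensions of the quality space.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)

for name, (x, y) in zip(stimuli, coords):
    print(f"{name}: ({x:+.2f}, {y:+.2f})")
```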

Ok well I feel like I have been writing this all day so I am going to stop…

[Embedded video: narrated PowerPoint rehearsal of the talk (no longer available).]

20 thoughts on “Explaining Consciousness & Its Consequences”

  1. I think the upshot of your puzzle is that there is indeed something distinctive it's like to think thoughts (as if we really needed an argument for this). To me this doesn't seem like much of a bullet to bite.

    Also, I don't think I understand what this "special" vs. "distinctive" stuff is all about (from Dave P.).

  2. Hi Brian, thanks for the comment!

    I agree that it is not much of a bullet to bite, since I do think that there is a phenomenology of thought, but we do need an argument for it as long as there are those, like David Rosenthal, who firmly and resolutely deny that there is!

    As to the 'special' vs. 'distinctive' issue: the way I understood the point was that qualitative properties may turn out to be interesting and unique, but that all by itself is not enough to merit the claim that they are not explainable in physical terms. And that is precisely what I mean by 'special'. Both Ned and Dave think that qualitative properties are special in this way (though they of course disagree over whether they are ultimately still physical or not). I agree that their distinctive properties need not entail that they are special…that is, I take it, the point of taking the higher-order theory seriously: viz., it proposes a nonmysterious, non-special explanation of how phenomenology arises in nature. My point is just that if it does the work for sensations it had better do the work for thoughts.

  3. Hi Jason, thanks for the comment.

    What I mean by this is that when we imagine the zombie world we may turn out to be imagining a world that is not microphysically a duplicate of ours. We may turn out to be imagining a world which only very closely approximates the microphysics of the actual world. But in any case we do succeed in imagining a world where there are physical creatures that are behaviorally just like us and which lack phenomenal consciousness. This makes perfect sense (the only thing at issue is whether it is a microphysical duplicate of our world or not). And if so, then we have an intuitive grasp on what we are talking about when we ask whether or not there is anything that it is like for one to have a conscious thought. When I have a conscious thought, am I like a zombie would be? Or is there something that it is like for me? This is a coherent question precisely because of our intuitive grasp on what the 'what it is like for one' locution picks out.

    • Thanks for the response, Richard.

      Do you mean that we can conceive of beings which function biologically just as we do, but which lack some physical properties required for phenomenal knowledge?

      I think this might be a trap. We have no evidence (apart from social behavior and biological function) to justify our attributions of phenomenal knowledge. By the same token, we have no other standard which could be used to reject such attributions. I am skeptical of the idea that any other sort of evidence could work here. I wonder if another sort of evidence is even conceivable.

  4. …what I meant was that we can imagine creatures that are physical in roughly the same way that we are (we imagine them with a brain and heart etc) but without any conscious experience. There is nothing that it is like for them, etc etc. Now Chalmers goes on to claim that these creatures are microphysical duplicates of us, whereas I deny that they are…perhaps they lack the area of the brain responsible for phenomenal consciousness, perhaps there are subtly different physical laws there, etc. But the point is just that we intuitively know what it would mean for there to be a meat robot that acted like us but did not have conscious experience in the way that we did.

  5. I’m afraid my initial concern is still pressing. (At least, it’s pressing me.)

    Let’s consider sight. I cannot imagine a person who was behaviorally indistinguishable from a person with sight, but who lacked vision. Even if their brain was significantly different from our brains–even if they had a man-made computer for a brain–I would still have to suppose that there was something it was like for them to see the world. Any claim to the contrary would seem irrational.

    I might agree that their conscious experience was significantly different from ours–just as I am willing to suppose that your conscious experience is different from my own. But that is a far cry from denying them any conscious experience. I just don’t see how we could ever rationally do that. What could possibly act as evidence here?

    I think you are asking us to conceive of applying an inconceivable standard of measurement.

    • Whether he is or isn't fictional, he is behaviorally just like an 11-year-old boy (or close enough for us to see the point). And keep in mind that Milo is not having a scripted dialogue with that guy…he is an AI agent who is responding to the human on its own…So, if you agree that Milo doesn't have experience, then we have an intuitive grasp on what it would mean for there to be a behavioral duplicate without consciousness…

  6. Milo can’t do anything that we can do, as far as I can tell. I don’t see any reason to even suggest a similarity.

    I’m off for vacation, so I’ll look forward to continuing this thread in a couple of weeks.

    • Why would you say that? Just because he is an AI agent in a virtual world you think he isn’t behaving? At the very least he is producing speech which is behavior…but anyway if you don’t like Milo just imagine that the AI program was put in a robot body…the point is that it is really quite easy to imagine something that behaves like we do but lacks consciousness…

      • Hi Richard,

        No, I am not rejecting the idea that an AI agent in a virtual world can behave. I just don't think Milo is an intelligent agent. I don't think he is producing intelligent speech, or at least not speech nearly as intelligent as that of a human boy. I think Milo was designed to give users the feeling of interacting with an artificial intelligence, but that there is actually very little going on in Milo to make him a good example for the purposes of our discussion. And, unfortunately, I don't see any reason to now accept your claim that we can imagine something that behaves like we do but which lacks consciousness.

        • Well, I don’t know what else to say at this point…I agree that Milo is not exactly like a living little boy but he is far enough along that we can ‘extrapolate’ to what a complete functional duplicate of us would be…maybe, as you say, you can’t do it, but I seem to be able to and plenty of other people think that they are able to as well…is there any reason to think that we are wrong about this?

  7. I think we disagree about how intelligent Milo is. I think his interactions are a lot more scripted than you suggest. What we can see in the video is a very brief and heavily controlled situation in which an actor works with the Milo program to create the effect of an intelligent interaction. But it’s just a trick. I predict that a lengthier observation of Milo would reveal just how utter is his lack of creativity and spontaneity, and how predictable and single-track his functionality really is. That’s why I say I don’t think he is even remotely similar to a real person, and why I don’t think he is a good example for our discussion.

    The reason why I think you are wrong is that I think you are asking us to apply an inconceivable criterion. Our criteria for attributing phenomenal knowledge are based on behavior and nothing else. You are suggesting that we could be justified in rejecting the attribution of phenomenal knowledge to beings that exhibit the same behavior. What possible criterion could justify that?

    Is there some way I could justify my claim that Zed was blind, even though Zed behaved just as a person with sight behaves? Could I justify my claim that Zed had sight, even though he never exhibited any behavior which indicated visual perception?

    I think you are asking us to ignore how we actually attribute certain kinds of knowledge and claiming that we can do it some other way. But you are not specifying any other way, so I have no conception of what it might be. It seems to me that some other criterion would arguably pick out something else, and not what we mean when we talk about p-consciousness–that is, unless that other criterion were intensionally identical with our behavioral criterion, which would make zombies inconceivable.

    As far as I can tell, any line we could draw between p-conscious and p-unconscious beings would have to be drawn according to observable behavior. I don’t see the logical space for any other way.
