Not too long ago Jake Berger and I presented a paper we are working on at the NYU philosophy of mind discussion session. There was a lot of very interesting discussion and there are a couple of themes I plan on writing about (if I ever get the chance; I am teaching four classes in our short six-week winter semester and it is a bit much).
One very interesting objection that came up, and was discussed in email afterwards, was whether HOT theory has the resources to say which first-order state is the conscious state. Ned Block raised this objection in the following way. Suppose I have two qualitative first-order states that are, say, slightly different shades of red. When these states are unconscious there is nothing that it is like for the subject to be in them (ex hypothesi). Now suppose I have an appropriate higher-order thought to the effect that I am seeing red (but not some particular shade of red). The content of the higher-order thought does not distinguish between the two first-order states, so there is no good reason to think that one of them is conscious and the other is not. Yet common sense seems to indicate that one of them could be conscious and the other non-conscious, so there is a problem for higher-order thought theory.
The basic idea behind the objection is that there could be two first-order states that are somewhat similar in some way, and there could be a fact of the matter about which of the two first-order states is conscious while there is a higher-order thought that does not distinguish between the two states. David's views about intentional content tend toward descriptivism, and so he thinks that the way in which a higher-order thought refers to its target first-order state is by describing it. I tend to have more sympathy with causal/historical accounts of intentional content (I even wrote about this back in 2007: Two Concepts of Transitive Consciousness) than David does, but I think that even in this kind of case he would say that these kinds of considerations answer Block's challenge.
But stepping back from the descriptivism vs. causal theories of reference for a second, I think this objection helps to bring out the differences between the way in which David thinks about higher-order thought theory and the way that I tend to think about it.
David has presented the higher-order thought theory as a theory of conscious states. It is presented as giving an answer to the following question:
- How can the very same first-order state occur consciously and also non-consciously?
The difference between these two cases is that when the state is conscious it is accompanied by a higher-order thought to the effect that one is currently in the state. Putting things this way makes Block’s challenge look pressing. We want to know which first-order state is conscious!
I tend to think of the higher-order thought theory as a theory of phenomenal consciousness. It makes the claim that phenomenal consciousness consists in having the appropriate higher-order thought. By phenomenal consciousness I mean that there is something that it is like for the organism in question. I want to distinguish phenomenal consciousness from state consciousness. A state is state-conscious when it is the target of an appropriate higher-order awareness. A state is phenomenally conscious when there is something that it is like for one to be in the state. A lot of confusion is caused because people use ‘conscious state’ for both of these notions. A state of which I am aware is naturally called a conscious state, but so too is a state which there is something that it is like to be in.
Block’s challenge thus has two different interpretations. On one he is asking how the higher-order awareness refers to its target state. That is, he wants to know which first-order state I am aware of in his case. On the other interpretation he is asking which first-order state is there something that it is like for the subject to be in. The way I understand Rosenthal’s view is that he wants to give the same answer to both questions. The target of the higher-order state is the one that is ‘picked out’ by the higher-order state. And what it is like for the subject to be in that target first-order state consists in there being the right kind of higher-order awareness. Having the appropriate higher-order state is all there is to there being something that it is like to be in the first-order state.
I tend to think that maybe we want to give different answers to these two challenges. Regardless of which first-order state is targeted by the higher-order awareness the state which there is something that it is like for the subject to be in is the higher-order state itself. This higher-order state makes one aware of being in a first-order state, and that is just what phenomenal consciousness is. Thus it will seem to you as though you are in a first-order state (it will seem to you as though you are seeing red when you consciously see red). For that reason I think it is natural to say that the higher-order state is itself phenomenally conscious (by which I mean it is the state which there is something that it is like to be in). I agree that we intuitively think it is the first-order states which are phenomenally conscious but I don’t think that carries much weight when we get sufficiently far into theorizing.
While I agree that it does sound strange to say that the first-order state is not phenomenally conscious, I think this is somewhat mitigated by the fact that we can nonetheless say that the first-order state is a conscious state when it is targeted by the appropriate higher-order awareness. This is because all there is to being a conscious state, as I use the term here, is that the state is targeted by an appropriate higher-order awareness. The advantage of putting things in this way is that it makes it clear what the higher-order theory is a theory of, and it makes it clear that Block’s objection assumes that first-order states must be phenomenally conscious.
10 thoughts on “…And the Conscious State is…”
You and I agree that one could have a conscious perception and an unconscious perception at the same time. The notion of a conscious perception here is that there is something it is like to perceive. The conscious perception could be of one maximally fine-grained shade of red and the unconscious perception could be of another maximally fine-grained shade of red. We can see a million shades but only have concepts of a small minority of those. Even if it were possible to put together descriptions of such shades on the order of “10,003 shades towards blue from reddish blue”, I think we could agree that a normal HOT would not contain such descriptions. So the problem for the HOT account is that it has no way to explain how one of the perceptions could be conscious and the other not.
You seem to agree with this line of reasoning, saying we have to just give up the idea that one perception could be conscious, the other not. But then you veer off. You say “While I agree that it does sound strange to say that the first-order state is not phenomenally conscious I think this is somewhat mitigated by the fact that we can none the less say that the first-order state is a conscious state when it is targeted by the appropriate higher-order awareness. This is because all there is to being a conscious state, as I use the term here, is that the state is targeted by an appropriate higher-order awareness.” But the problem is that the theory does not have the resources to say which of the two perceptions is targeted by the HOT, so that reply is inadequate.
You suggest a causal/historical account of what makes a HOT be about a perception, but that gives rise to the familiar problem of how a thought to the effect that I am smelling vomit could make a perception of crimson a conscious perception. You could build in a content restriction but then we would be back where we started with the fact that a descriptivist view based on content is inadequate.
If I were a higher order theorist, I would be moving in the direction of Hakwan’s pointer theory.
Hi Ned, thanks for the comment!
I like Hakwan’s pointer view and there is a version of it which I find somewhat plausible (I have even helped defend it here and there 🙂). Sometimes I think that David Rosenthal is right when he says we don’t really know what mental pointing is, but suppose we can address that. It just seems mysterious to me why a first-order state would become phenomenally conscious because it was pointed at. I grant that this could turn out to be true for some reason, and even that we could have reason to believe it, but it would not help us understand why that kind of higher-order state resulted in consciousness, let alone an experience of a certain sort (say red). For non-pointer views it is the conceptual content of the higher-order state that accounts for what it is like for one. One is aware of oneself as seeing red, and so attributes to oneself a seeing of red, which accounts for why it seems to you that you are seeing red. I don’t see how the pointer view can account for how we are aware of the first-order states. That is why I have always been drawn to a ‘mixed account’ where the pointer element secures the reference of the higher-order state while the conceptual non-pointer content describes the state which the pointer refers to (like ‘that philosopher defends the biological theory’).
And that brings us to your current argument. I think there is a pointer/causal element that determines which of the two very similar first-order states is the one that the higher-order state targets. That is the state which we would say I am conscious of myself as being in, so it is a conscious state in that sense. But the higher-order theory claims that one can be aware of the state in various ways; in particular, the way the higher-order state describes the first-order state determines what it is like for you. The phenomenal character of the state that is targeted by the higher-order state is determined by the content of the higher-order state.
So what do we say if there is a first-order representation of crimson that is targeted by a higher-order awareness to the effect that one is smelling vomit? Things will be a little complicated because one might be tempted to invoke a kind of teleological notion of content here, like Ruth Millikan, and say that the intentional content of ‘smelling vomit’ has the function of referring to certain olfactory experiences, even when accidentally caused by visual states. But let’s set that aside and say there is a pointer element in the higher-order state that picks out the first-order seeing of crimson. Then that is the state that I am aware of myself as being in, and that is the conscious state. But if the higher-order state that targets it describes it as smelling vomit then what it is like for me will be like smelling vomit. The theory says that I will have a conscious visual state (in the sense of being the state I am aware of) which I experience as an olfactory state (in the sense of being described in that way by the content of the higher-order state). This may sound strange but I think this is actually a prediction of the theory that we could, at least in principle, test. If we could somehow manipulate someone’s brain so that they had the first-order state for crimson but then also produce the right kind of inner awareness as of smelling vomit, then either the person would experience smelling vomit but behave as though seeing crimson, or the descriptive content version of the higher-order theory is false.
It’s nice to see a discussion of this important point. Like Ned, I’d probably prefer a “pointer” version of higher-order theory as well. About your worry that the theory makes it mysterious why the pointer makes the first-order content phenomenally conscious, one solution could be a dual-content view with consumer / inferential role semantics. Although it seems like even Peter Carruthers doesn’t believe in this view anymore. So, I’m not sure this is the way to go.
A quick thought on your discussion with Ned. First, about the example of the visual first-order state described as an olfactory state by the higher-order state. You write: “The theory says that I will have a conscious visual state (in the sense of being the state I am aware of) which I experience as an olfactory state (in the sense of being described in that way by the content of the higher-order state)”.
Given how your view accounts for this example, it’s unclear to me in what sense the subject is aware of being in a visual state if her higher-order thought describes herself as being in a completely different (olfactory) state. If we asked the subject, would she report having a visual state, or an olfactory state? If she reports having an olfactory state and no visual state at all, then it seems that she’s not aware of being in a visual state. If she reports having a visual state and no olfactory state, it seems that what she’s experiencing is somewhat detached from the way the higher-order thought describes herself.
Another way to put my worry is that, if I have a thought that describes *that* visual state of seeing something red as a state of smelling vomit, then it’s unclear in what sense that thought makes me aware of myself as having a visual state at all. After all, I will think that *that* state is a state of smelling vomit, I’ll feel like that state is a state of smelling vomit, and I’ll describe my experience as an experience of smelling vomit. Sure, the pointer still points to *that* visual state, but now I don’t understand in what sense pointing to that state makes me *aware* of being in that state. It seems that pointing to that state doesn’t make me aware of being in that state. It just makes me aware of being in a different, unrelated state. Evidence for this latter claim is that I wouldn’t report being in that state as a result of the pointer pointing to that state.
Another question. Keeping the higher-order descriptions constant, would you notice any change if the pointer were pointing to entirely different first-order states, or if it were pointing to nothing at all? If not, this leads to the counterintuitive conclusion that what it’s like for you to see, hear, smell, etc., could be entirely dissociated from the *conscious* mental states you’re having at any given moment (since which mental states are conscious depends on the pointing relation, not on the higher-order description). As I understood your point, it seems that you agree with this counter-intuitive claim, but then if that’s possible I’m not sure in what sense those first-order mental states still qualify as “conscious” mental states. It doesn’t even seem to you like you’re having those mental states (since it seems to you that you’re having, say, olfactory states instead).
Just another quick question: is it possible to have a higher-order thought that describes the first-order state as a state I’m not aware of being in? e.g. “I’m having *this* mental state I’m not aware of being in”. If it’s possible (if not, why not?), would your view entail that in that case I have a conscious visual state with the phenomenology of not being aware of being in that state? That sounds a bit weird to me, so maybe there’s something I don’t get.
Hi Matthias, thanks for this comment, I appreciate it!
The claim is that it makes you aware of the state in a certain way, as a state of a certain kind. The higher-order approach distinguishes the state which is targeted (the state of which you are aware) and the way in which you are aware of the state. I am arguing that we can map ‘conscious state’ and ‘phenomenal consciousness’ onto these two notions. The notion of state consciousness is just the one picked out by being the target of the higher-order state. It gives you an answer to the question ‘what mental state is this higher-order state about?’ The content of the higher-order state describes the state which is targeted and that maps onto phenomenal consciousness. What it is like for you is a matter of how you are aware of the target state.
Notice that I am not saying that the higher-order state makes you aware of yourself as having a visual state. In the example we are talking about, you are aware of yourself as having an olfactory state. You want the reference of the higher-order state to depend solely on its descriptive content but I don’t see why we should think that. Subjects will report that they smell vomit and they will behave as though they see crimson (for example pointing to a red object when asked what they saw, which might surprise them since they were smelling something; it would be a strange experience to have).
I take the appropriate higher-order state to be phenomenally conscious. It is the higher-order state which there is something that it is like for one to be in, and what that state does is represent oneself as being in a first-order state of some kind. So what it is like for you is like being in a first-order state. The phenomenal character of the experience you have is exhausted by the representational content of the higher-order state. Thus when one has a conscious experience it will seem to one as though one is in some first-order state. It will seem to one as though one is aware of some first-order state or other. In a great many cases the state one is aware of will be (for the most part) correctly described by the higher-order state. But it seems like they could come apart, and I think we have some empirical cases that can be interpreted in that way.
When the first-order target doesn’t exist there is no state-conscious state (there is no state you are aware of), but there is a phenomenally conscious state (the higher-order state), and so it will seem to you that there is a first-order state when there isn’t (and we can test for that). When the target state is not correctly described by the higher-order state then the state-conscious state diverges from the phenomenal content. There would be no way to notice a change if the pointer merely pointed to another state. The first-order state is conscious (on my view) only in the sense that it is the one you are aware of, which I take to be the one which the higher-order state refers to.
On your last question, it sounds like a contradiction to me to the effect that I am aware of this state that I am not aware of…I don’t think we have any reason to think we have those kinds of thoughts.
1. Picky point: Rather than describe my own goal as explaining how some token state can be conscious at one time and not at another, I would describe my goal more generally as explaining what the difference is between a state’s being conscious and its not being conscious–with no antecedent commitment about token states’ changing status.
2. It’s one thing to say that common sense says that only one of Ned’s two qualitatively very similar states *could* be conscious and another to say that only one is. Why mightn’t the HOT that doesn’t distinguish them result in their both being conscious? And even if one thinks that common sense thinks that only one actually is, why assume that we can tell which? Common sense assumes that at each moment (nanosecond?) there’s a discrete, finite number of electrons in my coffee cup, but I suspect neither common sense nor any aspect of physics graces us with a way of determining what that number is.
3. On the other hand, I guess I think that timing and perhaps neural connections might tell us in Ned’s imaginary case. Are those your causal-historical factors? If so terrific. I have never myself said–ever–that a HOT that occurs now (as I type this) and perfectly describes a first-order state I had 10 years ago results in that first-order state’s being conscious. Relying on descriptive properties of the HOT has never meant relying only on them. (Compare my remarks years ago about Dan Dennett’s Stalinesque vs. Orwellian scenarios. Timing matters.)
4. So I am reluctant to agree that Ned’s challenge provides leverage for resolving whatever differences separate your version of a higher-order theory from mine. (Though when we aren’t doing anything else, let’s coauthor an article about just what if anything really does.)
On Matthias’s proposal of somebody’s having a thought that one is in a mental state but not (subjectively, I suppose) aware of the state, why not, indeed. (Subjectively aware because one might think that having a thought about something results in one’s being aware of it. That is, after all, the point of the HOT theory.) But I’m not sure I get the point. One can invent all sorts of thoughts one might have. I can be aware of myself as being in some mental state inferentially, e.g., but that’s irrelevant.
Hi David, thanks for this comment! I really appreciate this discussion.
On your picky point, I am a little unsure what you mean by that. You do think that the very same perception of red can occur consciously and also unconsciously, right? I was taking ‘a mental state’ to mean ‘some particular mental state or other’ …that’s not how we should read it?
On the other points, do you agree that we answer my two questions differently? That is, one question is ‘which first-order state is the target of the HOT?’ and the other question is ‘which state is there something that it is like for one to be in?’…I think you want to say that the state which is targeted is the one that it is like something for me to be in, and I want to say that the state which is targeted is distinct from the one that there is something that it is like for me to be in.
Great idea to write something on this! Count me in!!
Yes, I agree that the very same token first-order state can be conscious at one time and not at another. I was talking about my goal in developing a theory of consciousness; that’s all.
I guess I think it’s not obvious how we should fit together terms that derive from common sense, like ‘what it’s like’, with terms that are part of a theory of consciousness, even when that theory is cast, as HOT theory is, in psychological terms. One could locate there being something it’s like in different places.
But if somebody is in a conscious state of seeing a red square in virtue of being in a first-order state of seeing a red square and being aware of being in that state in virtue of having a HOT to the effect that the person is in that state, and I ask the person what it’s like for you, I expect to get the answer that what it’s like for that person is seeing a red square, not being aware of doing so. And that seems to me to cut my way.
All the best,
Thanks David, I completely agree both that it is not obvious how to fit these terms together and that it does cut your way. However, I think the empty-HOT/mismatch considerations give us an argument that pushes back on this consideration. If someone is in a conscious state of seeing a red square just by being aware of being in that state, but without being in a first-order state of seeing a red square, then it strikes me as very odd to say that it is the first-order state that there is something it is like to be in (I think the same weirdness comes out in Ned’s cases we are discussing here as well). Once you get done explaining why it isn’t all that weird, I think you have basically said what I have been saying.
Here’s another way to put the argument. The neural correlates of phenomenal consciousness are the minimal activation/areas one needs for there to be something that it is like for one to see red. On the higher-order view the NCC is the NCC of the higher-order states not the seeing of red. Being in a state of seeing red is not part of the minimal neural(/psychological) states needed to generate the experience of consciously seeing red.
Would you object if I said that the phenomenal character of the target state consists in how the higher-order awareness describes the target state?
Again, thanks for the comments, I really appreciate this discussion!
Hi Richard and Matthias, I agree with Matthias’s comment. Here is an additional point. It is possible for there to be two perceptions that are simultaneous AND have the same content, yet one is conscious and the other isn’t. Humans and other primates have two visual systems, a dorsal system that is dedicated to fast and inflexible spatial computations, is used to guide action and is mostly or totally unconscious; and a conscious ventral system that is slower, more flexible and functions to produce a model of the world useful for planning. There are often—even usually—simultaneous representations of the spatiotemporal aspects of the same events in both systems. So there could be a ventral and dorsal representation with the content: round object moving up. There is no way that the content of a HOT could distinguish between simultaneous perceptions with the same content, but nonetheless one is conscious and the other not. There just can’t be anything in the descriptive content of the HOT that distinguishes them.
Hi Ned, it seems to me that you are assuming that the relevant first-order state must have some property in virtue of which it is conscious, but on the higher-order account to say that one of those two states is conscious is just to say that I am aware of one of the states and not the other. I may not be able to become aware of myself as being in states which are found in the dorsal stream. Then one needs to say how it is that we are aware of ourselves as being in ventral stream states. I think that you are right that the kind of argument you are giving puts pressure on the following claim:
But the higher-order theory does not entail this claim. And, as David says above, he doesn’t endorse it. I do think that he endorses the claim that usually or generally the descriptive content is what does the trick but in these kinds of cases we need to talk about timing (historical factors) and neural connections (causal factors). As I said I am personally more on the side of Kripke, Devitt, and Millikan when it comes to intentionality but either way there will be something beyond description that explains why the ventral states are the ones we are representing (thinking about) when we have the relevant higher-order representations. However, there will be nothing beyond the relevant descriptions that will explain what it is like for me.
Why do you think David’s higher-order theory is committed to the claim about descriptive content? Is it because he says that the conscious state is the one that you represent yourself as being in? He takes it that the one you represent yourself as being in is the one picked out by the description and/or other factors as needed in special cases. It seems like you are arguing that the second clause is ad hoc (notice that wouldn’t affect my causal/historical intentionality version) but I don’t really see why you think that…
thanks again for these comments, I am enjoying the discussion!