LeDoux and Brown on Higher-Order Theories and Emotional Consciousness

On Monday May 1st Joe LeDoux and I presented our paper at the NYU philosophy of mind discussion group. This was the second time that I have presented there (the first was with Hakwan, back in 2011!). It was a lot of fun and there was some really interesting discussion of our paper.

There were a lot of inter-related points/objections that came out of the discussion but here I will focus on just a few themes that stood out to Joe and me afterwards. I haven’t yet had the chance to talk with him extensively about this, so this is just my take on the discussion.

One of the issues centered on our postulation that there are three levels of content in emotional consciousness. The ‘traditional’ higher-order theory postulates two distinct states. One is ‘first-order’, where this means that the state represents something in the world (the animal’s body counts as being in the world in this sense). A higher-order mental state is one that has higher-order content, where this means that it represents a mental state as opposed to some worldly, non-mental thing. It is often assumed that the first-order state will have some basic, some might even say ‘non-representational’ or non-conceptual, kind of content. We do not deny that there are states like these, but we suggested that we needed to ‘go up a level’, so to speak.

Before delving into this I will say that I view this as an additional element in the theory. The basic idea of HOROR theory is just that the higher-order state is the phenomenally conscious state (because that is what phenomenal consciousness is). I am pretty sure that the idea of the lower-order state being itself a higher-order state is Joe’s idea, but to be fair I am not 100% sure. The idea was that the information coming in from the senses needed to be assembled in working memory in such a way as to allow the animal to connect memories, engage schemas, etc. We coined the term ‘lower-order’ to take the place of ‘first-order’. For us a lower-order state is just one that is the target of a higher-order representation. Thus, the traditional first-order states would count as lower-order on our view, but so would additional higher-order states that were re-represented at a higher level.

Thus on the view we defended the lower-order states are not first-order states. These states represent first-order states and thus are higher-order in nature. When you see an apple, for example, there must be a lot of first-order representations of the apple, but these must be put together in working memory and result in a higher-order state which is an awareness of those first-order states. That higher-order representation is the ‘ground floor’ representation for our view. It is itself not conscious but it results in the animal behaving in appropriate ways. At this lower-order level we would characterize the content as something like ‘(I am) seeing an apple’. That is, there is an awareness of the first-order states and a characterization of those states as seeings of red, but there is no explicit representation of the self. There is an implicit referring to the self, by which we mean that these states are attributed to the creature who has them, but not in any explicit way. This is why we think of this state as just an awareness of the first-order activity (plus a characterization of it). At the third level we have a representation of this lower-order state (which is itself a higher-order state in that it represents first-order states).

Now, again, I do not really view this three-layer approach as essential to the HOROR theory. I think HOROR theory is perfectly compatible with the claim that it is first-order states that count as the targets. But I do think there is an interesting issue at stake here, namely what role exactly the ‘I’ in ‘I am seeing a red apple’ is playing, and also whether first-order states can be enough to play the role of lower-order states. Doesn’t the visual activity related to the apple need to be connected to concepts of red and apple? If so then there needs to be higher-order activity that is itself not conscious.

Another issue focused on our methodological challenge to using animals in consciousness research. Speaking for myself, I certainly think that animals are conscious, but since they cannot verbally report, and as long as we truly believe that the cognitive unconscious is as robust as is widely held, we cannot rule out that animal behavior is produced by non-conscious processes. What this suggests is that we need to be cautious when we infer from an animal’s behavior that its cause is a phenomenally conscious mental state. Of course that could be what is going on, but how do we establish that? It cannot be the default assumption as long as we accept the claims about the cognitive unconscious. Thus we do not claim that animals do or do not have conscious experience, but rather that the science of consciousness is best pursued in humans (for now at least). For me this is related to what I think of as the biggest confound in all of consciousness science, and that is the confound of behavior. If an animal can perform a task then it is assumed this is because its mental states are conscious. But if this kind of task can be performed unconsciously then behavior by itself cannot guarantee consciousness.

One objection to this claim (sadly I forgot who made this…maybe they’ll remind me in the comments?) was that maybe verbal responses themselves are non-conscious. I asked whether the kind of view that Dennett has, where there is just some sub-personal mechanism which results in an utterance of “I am seeing red” and this is all there is to the conscious experience of seeing red, counts as the kind of view the objector had in mind. The response was that no, they had in mind that maybe the subjects are zombies with no conscious experience at all and yet are able to answer the question “what do you see” with “I see red,” just as zombies are thought to do. I responded to this with what I think is the usual way to respond to skeptical worries. That is, I acknowledge that there is a sense in which such skeptical scenarios are conceivable (though maybe not exactly as the conceiver supposes), but there are still reasons for not getting swept up in skepticism. For example, I agree with the “lessons” from fading, dancing, and absent qualia cases that we would be detached from our conscious experiences in an unreasonable way if this were happening. The laws of physics don’t give us any reason to suppose that there are radical differences between similar things (like you and me), though if we discovered an important brain area missing or damaged then I suppose we could be led to the conclusion that some member of the population lacked conscious experience. But why should we take this seriously now? I know I am conscious from my own first-person point of view, and unless we endorse a radical skepticism then science should start from the view that report is a reliable(ish) guide to what is going on in a subject’s mind.

Another issue focused on our claim that animal consciousness may be different from human conscious experience. If you really need the concept ‘fear’ in order to feel afraid, and if there is a good case to be made that animals don’t have our concept of fear, then their experience would be very different from ours. That by itself is not such a bad thing. I take it to be common sense that animal experience is not exactly like human experience. But it seems as though our view is committed to the idea that animals cannot have anything like the human experience of fear, or other emotions. Joe seemed to be ok with this but I objected. It is true that animals don’t have language like humans do and so are not able to form the rich and detailed kinds of concepts and schemas that humans do, but that does not mean that they lack the concept of fear altogether. I think it is plausible that animals have some limited concepts, and if they are able to form concepts as basic as danger (present) and harm then they may have something that approaches human fear (or a basic version of it). A lot of this depends on your specific views about concepts.

Related to this, and brought up by Kate Pendoley, was the issue of whether there can be emotional experiences that we only later learn to describe with a word. I suggested that I thought the answer may be yes, but that even so we will describe the emotion in terms of its relations to other known emotions: ‘It is more like being afraid than feeling nausea’ and the like. This is related to my background view about a kind of ‘quality space’ for the mental attitudes.

Afterwards, over drinks, I had a discussion with Ned Block about the higher-order theory and the empirical evidence for the role of the prefrontal cortex in conscious experience. Ned has been hailing the recent Brascamp et al paper (nice video available here) as evidence against prefrontal theories. In that paper they showed that if you take away report and attention (by making the two stimuli barely distinguishable) then you can show that the prefrontal fMRI activation disappears. I defended the response that fMRI is too crude a measure to take this null result too seriously. This is what I take to be the line argued in this recent paper by Brian Odgaard, Bob Knight, and Hakwan, Should a few null findings falsify prefrontal theories of consciousness? Null results are ambiguous between the falsifying interpretation and the signal just being missed by a crude tool. As Odgaard et al argue, if we use more invasive measures like single cell recording or ECoG then we would find prefrontal activity. In particular, the Mante et al paper referred to in Odgaard et al is a pretty convincing demonstration that there is information decodable from prefrontal areas that would be missed by fMRI. As they say in the linked paper,

There are numerous single- and multi- unit recording studies in non-human primates, clearly demonstrating that specific perceptual decisions are represented in PFC (Kim and Shadlen, 1999; Mante et al., 2013; Rigotti et al., 2013). Overall, these studies are compatible with the view that PFC plays a key role in forming perceptual decisions (Heekeren et al., 2004; Philiastides et al., 2011; Szczepanski and Knight, 2014) via ‘reading out’ perceptual information from sensory cortices. Importantly, such decisions are central parts of the perceptual process itself (Green and Swets, 1966; Ratcliff, 1978); they are not ‘post-perceptual’ cognitive decisions. These mechanisms contribute to the subjective percept itself (de Lafuente and Romo, 2006), and have been linked to specific perceptual illusions (Jazayeri and Movshon, 2007).

In addition to this, Ned accused us of begging the question in favor of the higher-order theory. In particular he thought that there really was no conscious experience in the Rare Charles Bonnet cases and that our appeal to Rahnev was just question begging.

Needless to say I disagree with this, and there is a lot to say about these particular points, but I will have to come back to these issues later. Before I run, and just for the record, I should make it clear that, while I have always been drawn to some kind of higher-order account, I have also felt the pull of first-order theories. I am in general reluctant to endorse any view completely, but I guess I would have to say that my strongest allegiance is to the type-type identity theory. Ultimately I would like it to be the case that consciousness and mind are identical to brain states and/or processes. I see the higher-order theory as compatible with the identity theory, but I am also sympathetic to other versions (for full disclosure, there is even a tiny (tiny) part of me that thinks functionalism isn’t as bad as dualism (which itself isn’t *that* bad)).

Why, then, do I spend so much time defending the higher-order theory? When I was still an undergraduate student I thought that the higher-order thought theory of consciousness was obviously false. After studying it for a while and thinking more carefully about it I revised my credence to ‘not obviously false’. That is, I defended it against objections because I thought they dismissed the theory unduly quickly.

Over time, and largely for empirical reasons, I have updated my credence from ‘not obviously false’ to ‘possibly true’, and this is where I am at now. I have become more confident that the theory is empirically and conceptually adequate, but I do not by any means think that there is a decisive case for the higher-order theory.
