Seager on the Empirical Case for Higher-Order Theories of Consciousness

In the recent second edition of his book Theories of Consciousness: An Introduction and Assessment, William Seager addresses some of my work on the higher-order theory. I haven’t yet read the entire book, but he seems generally very skeptical of higher-order theories, which is fine. Overall the argument he presents is interesting, and it allows me to clarify a few things.

It is clear from the beginning that he is interpreting the higher-order theory in the standard relational way. This is made especially clear when he says that the basic claim of higher-order theory can be put as follows:

A mental state is conscious if and only if it is the target of a suitable higher-order thought (page 94)

This is certainly the way that most people interpret the theory and is the main reason I adopted ‘HOROR’ theory as a name for the kind of view I thought was the natural interpretation of Rosenthal’s work. I seem to remember a time when I thought this was ‘the correct’ way to think about Rosenthal’s work, but I have since come to believe that it is not as cut and dried as that.

This is why I have given up on Rosenthal exegesis and instead simply point out that there are two differing ways to interpret the theory. One is the relational kind of view summed up above. The other is the non-relational view, which I have argued allows us to capture key insights of the first-order theories. On this alternative interpretation the first-order state is not ‘made’ phenomenally conscious by the higher-order state. Rather, the higher-order state just is phenomenal consciousness. Simply having the appropriate higher-order state is what being phenomenally conscious consists in; there is nothing more to it than that. This is the way I interpret the higher-order theory.

Seager comes close to recognizing this when he says (on page 94),

Denial of (CS) [the claim that “if S is conscious then S is in (or has) at least one conscious state”] offers a clear escape hatch for HOT theory. Contrast that clarity with this alternative characterization of the issue ‘[c]onscious states are states we are conscious of ourselves as being in, whether we are actually in them’ (Rosenthal 2002 p 415). Here Rosenthal appears to endorse the existence of a conscious state which is not the target of a higher-order thought, contrary to HOT theory itself. If so then HOT theory is not the full account of the nature of conscious states and it is time to move on to other theories. I submit that it is better for HOT theorists to reject (CS) and allow for creatures to be conscious in certain ways in the absence of an associated conscious mental state.

The quote from Rosenthal is an accurate one and it does summarize his views. If one interprets it my way, as basically saying that the higher-order state is the phenomenally conscious state, then we do have a conscious state that is not the target of a higher-order state (or at least one which need not be). The higher-order state is phenomenally conscious, but not because of a further higher-order state; it is because being phenomenally conscious consists in being aware of yourself in the way the higher-order theory requires. As I have argued in several places, this does not require that we give up the higher-order theory or adopt a ‘same-order theory’. HOROR theory is the higher-order thought theory correctly interpreted.

It thus turns out that phenomenal consciousness is not the same thing as ‘state consciousness’ as it is usually defined on the traditional higher-order theory. That property involves being the target of the higher-order state. This is something that, on my view, reduces to the causal connections between higher-order states, and their conceptual contents, and the first-order states. This will amount to a causal theory of reference for higher-order states. They refer to the first-order states which cause them in the right way. The states to which they refer are what I call the ‘targets’ of the higher-order states. So, for me the targeting relation is causal, but for Rosenthal and others more influenced by Quine it essentially amounts to describing. Thus for Rosenthal the target of the relevant higher-order state will be the first-order state which ‘fits the description’ in the higher-order content. I suppose I could live with either of these ultimately but I do think you need to say something about this on the higher-order account. At any rate on my view being the target of the higher-order state tells us which state we are aware of and the content of the higher-order state tells us the way in which we are aware of it. The two typically occur together but if I had to call one the phenomenally conscious state it would be the higher-order state.

Seager goes on to say in the next paragraph,

One might try to make a virtue of necessity here and seek for confirmation of the false HOT scenario. There have been some recent attempts to marshall empirical evidence for consciousness in the absence of lower-level states but with the presence of characteristic higher-order thoughts, thus showing that the latter are sufficient to generate consciousness (see Lau and Rosenthal 2011; Lau and Brown forthcoming; Brown 2015). The strategy of these efforts is clear: Find the neural correlates of higher-order thoughts posited by HOT theory, test subjects on tasks which sometimes elicit consciousness and sometimes do not (e.g. present them with an image for a very short time and ask them to report on what they saw), and, ideally, observe that no lower-order states occur even in the case where subjects report seeing something. Needless to say, it is a difficult strategy to follow. (page 95)

I would quibble with the way that things are put here but overall I agree with it. The quibbles come from the characterization of the strategy. What Lau and I were arguing was that we want to find cases where the first-order state is either absent, degraded, or otherwise less rich than the conscious experiences of subjects. So we would be happy just with a mismatch between the first-order and higher-order cases. The ideal of a total absence of first-order states may be too high a bar. This is why in the work that Lau does he aims to produce cases where task performance is matched but subjective reports differ. The primary goal is to show that conscious experience outstrips what is represented at the first-order level. It is a difficult strategy to follow, but all we can do is use the tools we have to try to test the various theories of consciousness.

Seager then goes on to focus on the rare form of Charles Bonnet syndrome. In these rare cases subjects report very vivid visual hallucinations even though there is extensive damage to the primary visual cortex. Seager briefly considers Miguel Sebastian’s objection based on dreaming but then objects that

…a deeper problem undercuts the empirical case, tentative though it is, for HOT theory and the empty HOT scenario. This is a confusion about the nature of the lower-order and higher-order cognitive states at issue. ‘Lower-order’ does not mean ‘early’ and ‘higher-order’ does not mean ‘later’ in the brain’s processing of information. Higher-order refers specifically to thoughts about mental states as such; lower-order states are not about thoughts as such but are about the world as presented to the subject (including the subject’s body).

There is little reason to think that lower-order states, properly conceived, should be implemented in low-level or entry-level sensory systems. It is not likely that an isolated occipital lobe would generate visually conscious states.

Nor is it unlikely that lower-order states, states, that is, which represent the world and the body occur in ‘higher’ brain regions such as the dorsolateral prefrontal cortex. It would be astounding if that brain region were devoted to higher-order thoughts about mental states as such. (page 96)

I largely agree with the points being made here but I do not think that Lau and I were confused about this. The first thing I would say is that we are pretty explicit that we adopt the usage that we think the typical first-order theorist does (and especially Ned Block) and that we include areas outside the occipital lobe “that are known to contain high number of neurons explicitly coding for visual objects (e.g. fusiform face area)”  as first-order areas (see footnote 7 in the paper).

In the second instance we talked about three empirical cases in the paper and each was used for a slightly different purpose. When people discuss this paper, though, they typically focus on one out of the three. Here is how we summed up the cases in the paper:

To sum up, there are three kinds of Empirical Cases – Rare Charles Bonnet Cases (i.e. Charles Bonnet cases that result specifically from damage to the primary visual cortex), Inattentional Inflation (i.e. the results of Rahnev et al, in press and in review) and Peripheral Vision (introspective evidence from everyday life). The three cases serve slightly different purposes. The Rare Charles Bonnet Cases highlight the possibility of vivid conscious experience in the absence of primary visual cortex. If we take the primary visual cortex as the neural structure necessary for first-order representations, this is a straightforward case of conscious experience without first-order representations. In Inattentional Inflation, the putative first-order representations are not missing under the lack of attention, but they are not strong enough to account for the “inflated” level of reported subjective perception, in that both behavioral estimates of the signal-to-noise ratio of processing and brain imaging data show that there was no difference in overall quality or capacity in the first-order perceptual signal, which does not concern only the primary visual cortex but also other relevant visual areas. Finally, Peripheral Vision gives introspective evidence that conscious experience may not faithfully reflect the level of details supported by first-order visual processing. Though this does not depend on precise laboratory measures, it gives an intuitive argument that is not constrained by specific experimental details.

So I don’t think Seager’s criticism of us as being confused about this is fair.

In addition, in recent work with Joe LeDoux we endorse the second claim made by Seager. We explicitly argue that the ‘lower-order’ states we are interested in will occur in working memory and likely even in the dorsolateral prefrontal cortex.

But even though I think Seager is wrong to accuse us of being insensitive or confused about this issue, I do think he goes on to present an interesting argument. He says,

The problem can be illustrated by the easy way HOT (or HOT-like) theorists pass over this crucial distinction. Consider these remarks from Richard Brown:

Anyone who has had experience with wine will know that acquiring a new word will sometimes allow one to make finer-grained distinctions in the experience that one has. One interpretation of what is going on here is that learning the new word results in one’s having a new concept and the application of this concept allows one to represent one’s mental life in a more fine-grained way. This results in more phenomenal properties in one’s experience…that amounts to the claim that one represents one’s mental life as instantiating different mental qualities.

Those unsympathetic to HOT theory will balk at this description. What is acquired is an enhanced ability to perceive or appreciate the wine in this case, not the experience of the wine (the experience itself does not seem to have any distinctive perceivable properties). After training the taster has new lower-order states which better characterize the wine, not new higher-order states aimed at and mentally characterizing the experience of tasting the wine.

Since there is no reason to restrict lower-order states to relatively peripheral sensory systems, it will be very hard to make out an empirical case for HOT theory and the empty HOT scenario in the way suggested. (pages 96-97)

The quote he offers here is from the HOROR paper and so it is interesting to see that the proposed solution, that the higher-order state is phenomenally conscious and that this is not giving up on the higher-order theory, is neglected.

Before going on I should say that I am pretty much sympathetic to the point being made here. I think there is a first-order account of what is going on. I also tend to think that this is ultimately an empirical issue. If there were a way to test this that would be great but I am not sure we have the capacity to do so yet. But my main point in the paper was not to offer this as a phenomenon that the first-order theorist couldn’t explain. What I was intending to do was to argue that the higher-order interpretation is one consistent interpretation of this phenomenon. It fits naturally with the theory and shows that there is nothing absurd in the basic tenet of the HOROR theory that phenomenal consciousness really is just a kind of higher-order thought, with conceptual content.

As I read Rosenthal he does not think the first-order account is plausible. For Rosenthal we are explicitly focusing on our experiences in these kinds of cases. One takes a drink of the wine and focuses on the taste of the wine. This may be done even after one has swallowed the wine. The same is true for the auditory cases. It does seem plausible that in these cases I am focused on my experience, not on the wine (it is the experience of the wine, of course). But if the general kind of theory he advocates is correct then one will still come to appreciate the wine itself. When I have the new fine-grained higher-order thoughts they will attribute to me finer-grained first-order states, and these will be described in terms of the properties I experience the wine as having. They will thus make me consciously aware of the wine and its qualities, but they do so by making me aware of the first-order states. The first-order alternative at least seems to be at a disadvantage here, because on their view learning the new word produces new first-order qualities, as opposed to making me aware of qualities which were already there (as on the higher-order view). I think there is some evidence that we can have ‘top-down’ activity producing or modifying lower-order states, so I ultimately think this is an empirical issue. At the very least I think we can say that this argument shows that the higher-order theory makes a clear, empirically testable prediction, and, like the empty higher-order state claim itself, the more implausible the prediction the more of a victory it is when it is not falsified.

At any rate, abstracting from all of this, Seager presents an interesting argument. If I am reading it correctly the claim seems to be that the empirical case for the higher-order theory is going to be undercut because first-order theories are not committed to the claim that first-order states are to be found in early sensory areas; they might even be found in places like the dlPFC. If so, then even if there were a difference in activation there, as opposed to in early sensory areas, this by itself would not be evidence for a higher-order theory, because those might be first-order states.

The way I tried to get around this kind of worry (in my Brain and its States paper) was by taking d-prime to be a measure of the first-order information which is being represented. This was justified, I thought, because we take the first-order (or lower-order) states to largely drive task performance. D-prime gives us a measure of how well the subjects perform the task (computed from the hit rate and the false-alarm rate), and so it seems natural to suppose it gives a measure of what the first-order states are representing. The bias in judgment can be measured by C (the criterion) in signal detection theory, and this can roughly be treated as a measure of the confidence of the subjects. So, instead of looking for direct anatomical correlates we can look for matched d-prime scores while there is a difference in subjective report. This is exactly what Lau and his lab have been able to show in many different cases. In addition, when there is fMRI data it shows no significant difference in any first-order areas while there is a difference in the prefrontal cortex. Is this due to residual first-order states in ‘higher-order’ areas? Maybe, but if so they would be accounted for in the measure of d-prime. And that would not explain why subjects report a difference in visibility, or confidence, or whatever. Because of this I do not think the empirical case has been much undermined by Seager.
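To make these measures concrete, here is a minimal sketch of the standard signal detection computations (the hit and false-alarm rates below are made-up illustrations, not data from any of Lau’s experiments):

```python
from statistics import NormalDist

def sdt_measures(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Standard signal detection theory: d-prime (sensitivity) and
    C (criterion, a measure of response bias), from the hit rate and
    the false-alarm rate."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Two hypothetical conditions with matched sensitivity but different bias --
# the pattern of matched performance with differing subjective report:
print(sdt_measures(0.69, 0.31))  # d' ~ 1.0, C ~ 0.0  (neutral reporting)
print(sdt_measures(0.84, 0.50))  # d' ~ 1.0, C ~ -0.5 (more liberal reporting)
```

The point of the sketch is just that two conditions can be equated on d-prime, and so (by the above reasoning) on first-order information, while differing on C, which tracks how readily subjects report seeing something.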

Gottlieb on Presentational Character and Higher-Order Thought Theories of Consciousness

In his paper, Presentational Character and Higher-Order Thoughts, which came out in 2015 in the Journal of Consciousness Studies, Gottlieb presents a general argument against the higher-order theory of consciousness which invokes some of my work as support. His basic idea is that conscious experience has what he calls presentational character, where this is something like the immediate directness with which we experience things in the world.

Nailing down this idea is a bit tricky but we don’t need to be too precise to get the puzzle he wants. He puts it this way in the paper,

Focus on the visual case. Then, fix the concept ‘presentational character’ in purely comparative terms, between visual experiences and occurrent thoughts: ‘presentational character’ picks out that phenomenological quality, whatever it is, that marks the difference between what it is like to be aware of an object O by having an occurrent thought about O and what it is like to be aware of an object O by having a visual experience of O. That is the phenomena I am claiming to be incompatible with the traditional HOT-theoretic explanation of consciousness. And so long as one concedes there is such a difference between thinking about O and visually experiencing O, we should have enough of a fix on our phenomenon of interest.

Whether or not you agree that presentational character, as Gottlieb defines it, is a separate, distinct, component of our overall phenomenology there is clearly a difference between consciously seeing red (a visual experience) and consciously thinking about red (a cognitive experience). If the higher-order theory of consciousness were not able to explain what this difference amounted to we would have to admit a serious deficit in the theory.

But why should we think that the higher-order theory has any problem with this? Gottlieb presents his official argument as follows:

S1 If HOT is true, m* (the HOT) entirely fixes the phenomenal character of experience.

S2  HOTs are thoughts.

S3  Presentational character is a type of phenomenal character.

S4  Thoughts as such do not have presentational character.

So:

S5 HOTs do not have presentational character.

Thus:

S6 If HOTs do not have presentational character, no experience (on HOT) has presentational character.

Therefore:

P1 If HOT is true, no experience has presentational character.
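Before looking at the defense, it may help to see the argument’s skeleton. Here is one way the final steps might be formalized; this is my own hypothetical propositional rendering, not anything in Gottlieb’s paper:

```lean
-- Propositional sketch (hypothetical names, my own rendering):
--   hot    : HOT theory is true
--   hotsPC : HOTs have presentational character
--   expPC  : some experience has presentational character
example (hot hotsPC expPC : Prop)
    (s5 : ¬hotsPC)                     -- S5, which rests on S2 and S4
    (s6 : ¬hotsPC → (hot → ¬expPC)) :  -- S6
    hot → ¬expPC :=                    -- P1
  s6 s5
```

Laid out this way, everything funnels through S5, and S5 rests on S4.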

The rest of the paper goes on to defend the argument from various moves a higher-order theorist may make but I would immediately object to premise S4. There are some thoughts, in particular a specific kind of higher-order thought, which will have presentational character. Or at least these thoughts will be able to explain the difference that Gottlieb claims can’t be explained.

Gottlieb is aware that this is the most contentious premise of his argument. This is where he appeals to the work that I have done trying to connect the cognitive phenomenology debate to the higher-order thought theory of consciousness (this is the topic of some of my earliest posts here at Philosophy Sucks!). In particular he says,

Richard Brown and Pete Mandik (2013) have argued that if HOT is true, we can have (first-order, non-introspected) thoughts with proprietary phenomenology. Suppose one first has a suitable HOT about one’s first-order pain sensation. Here, the pain will become conscious. Yet now suppose one has a suitable HOT about one’s thought that the Eiffel Tower is tall. As Brown and Mandik point out, if we deny cognitive phenomenology, one will then need to say that though the thought is conscious, there is nothing that it is like for this creature to consciously think the thought. But this would be—by the edicts of HOT itself—absurd; after all, the two higher-order states are in every relevant respect the same.

I agree that this is what we say about the traditional higher-order theory (where we take the first-order state to be made conscious by the higher-order state), though I would prefer to put it by saying that if we are talking about phenomenal consciousness (as opposed to mere state-consciousness) then it is the higher-order state that is conscious. But other than that, this is our basic point. How does it help Gottlieb’s case?

The argument is complicated but it seems to go like this. If we accept the conclusion of the argument from Brown and Mandik then conscious thoughts and visual experiences both have phenomenology, and they have different kinds of phenomenology (i.e. cognitive phenomenology is proprietary). In particular, cognitive phenomenology does not have presentational character. Whatever the phenomenology of thinking is, it is not like seeing the thing in front of you! But now consider the case where you are seeing something red and you introspect that conscious experience. When one introspects, on the traditional higher-order view, one comes to have a third-order thought about the second-order thought. So, in effect, the second-order thought becomes conscious. But we already said that cognitive phenomenology is not the kind of thing that results in presentational character, so when the second-order thought becomes conscious we should be aware of it *as a thought* and so *as the kind of thing which lacks presentational character*. But that would mean that introspection is incompatible with presentational character.

I have had similar issues with Rosenthal’s account of introspection, so I am glad that Gottlieb is drawing attention to this issue. I have also explored his recommended solution of having the first-order state contribute something to the content of the higher-order state (here, and in my work with Hakwan).

I also have a talk and a draft of a paper devoted to exploring alternative accounts of introspection from the higher-order perspective. I put it up on Academia.edu but that was before I fully realized that I am not much of a fan of the way they are developing it. In fact, I forgot my login info and was locked out of seeing the paper myself for about a week! Someday I aim to revisit it. But one thing that I point out in that paper is that Rosenthal seems to talk about introspection in a very different way. Here is what he says in one relevant passage,

We sometimes have thoughts about our experiences, thoughts that sometimes characterize the experiences as the sort that visually represent red physical objects. And to have a thought about an experience as visually representing a red object is to have a thought about the experience as representing that object qualitatively, that is, by way of its having some mental quality and it is the having of just such thoughts that make one introspectively conscious of one’s experience. (CM p. 119)

This paragraph has often been in my thoughts when I think about introspection on the higher-order theory. But it has become clear to me that a lot depends on what you mean by ‘thoughts about our experiences’.

Here is what I say in the earlier mentioned draft,

…In [Rosenthal’s Trends in Cognitive Science] paper with Lau where they respond to Rafi Malach, they characterize the introspective third-order thought as having the content ‘I am having this representation that I am seeing this red object’. I think it is interesting that they do not characterize it as having content like ‘I am having this thought that I am seeing red’. On their account we represent the second-order thought as being the kind of state that represents me as seeing physical red, and we do so in a way that does not characterize it as a thought. One reason for this may be that if, as we have seen, the highest-order thought determines what it is like for you, then if I am having a third-order thought with the content ‘I am having this thought that I am seeing red’ what it will be like for me is like having a thought. But this is arguably not what happens in canonical cases of introspection (Gottlieb forthcoming makes a similar objection). Rosenthal himself in his earlier paper argued that when we introspect we are having thoughts about our experiences and that we characterize them as being the kind that qualitatively represents blue things. This is a strange way to characterize a thought.

So I agree that there seems to be a problem here for the higher-order theory but I would not construe it as a problem with the theory’s ability to explain presentational character. I think it can do that just fine. Rather what it suggests is that we should look for a different account of introspection.

When Rosenthal talks specifically about introspection he is talking about the very rare case where one ‘quote-unquote’ brackets the external world and considers one’s experience as such. So, in looking at a table I may consciously perceive it but I am focused on the table (and this translates to the claim that the concepts I employ in the higher-order thought are about the worldly properties). When I introspect I ‘bracket’ the table in the world and take my experience itself as the object of my inner awareness. The intuitive idea that Rosenthal wants to capture is that when we have conscious experience we are aware of our first-order states (as describing properties in the world) and in deliberate attentive introspection we are aware of our awareness of the first-order state. The higher-order state is unconscious and when we become aware of our awareness we make that state conscious, but, on his view, we do so in a way so as not to notice that it is a thought.

But part of me wonders about this. Don’t some people take introspection to be a matter of having a belief about one’s own experience? If so, then a conscious higher-order thought would fit the bill. So there may be a notion of introspection that a third-order thought can account for. But we might also want a notion of introspection that is more directly related to focusing on what it is like for the subject. When I focus on the redness of my conscious experience it doesn’t seem as though I am having a conscious thought about the redness. It seems like I am focused on the particular nature of my conscious experience. We might describe that with something like ‘I am seeing red’, and that may sound like a conscious higher-order thought, but we are here talking about being aware of the conscious experience itself. So, to capture this, I would suggest that in both cases we are aware of our first-order states. In non-introspective consciousness we are aware of the first-order state as presenting something external to us. In introspective consciousness we are aware of the first-order state as a mental state, as being a visual experience, or a seeing, etc.

I am inclined to see these two kinds of thoughts as ‘being at the same level’ in the sense that they are both thoughts about the first-order states but which have very different contents. And this amounts to the claim that they employ different kinds of concepts. But these ideas are still very much in development. Any thoughts (of whatever order) appreciated!

Gottlieb on Brown

I have been interested in the relationship between the transitivity principle and transparency for quite a while now. This issue has come up again in a recent paper by Joseph Gottlieb fittingly called Transitivity and Transparency. This paper came out in Analytic Philosophy in 2016 but he actually sent me the paper beforehand. I read it and we had some email conversation about it (and this influenced my Introspective Consciousness paper (here is the Academia.edu session I had on it)) but I never got the chance to formulate any clear thoughts on it. So I figured I would give it a shot now.

There is a lot going on in the paper, so I will focus for the most part on his response to some of my early work on what would become HOROR theory. He argues that what he calls Non-State-Relational Transitivity is not an ‘acceptable consistency gloss’ on the transitivity principle. So what is a consistency gloss? The article is technical (it did come out in Analytic Philosophy, after all!). For Gottlieb this amounts to giving a precisification of the transitivity principle that renders it compatible with what he calls Weak Transparency. He defines these terms as follows,

TRANSITIVITY: Conscious mental states are mental states we are aware of in some way.

W-TRANSPARENCY: For at least one conscious state M, it is impossible to:

(a) TRANSPARENCY-DIRECT: Stand in a direct awareness relation to M; or
(b) TRANSPARENCY-DE RE: Stand in a de re awareness relation to M; or
(c) TRANSPARENCY-INT: Stand in an introspective awareness relation to M.

His basic claim, then, is that there is no way of making precise the statement of transitivity above in such a way as to render it consistent with the weak version of transparency that he thinks should count as a truism or platitude.

Of course my basic claim, one that I have made since the beginning of thinking about these issues, is that there is a way of doing this but it requires a proper understanding of what the transitivity principle says. If we do not interpret the theory as claiming that a first-order state is made conscious by the higher-order state (as Gottlieb does in TRANSITIVITY above) but instead think of transitivity as telling us that a conscious experience is one that makes me aware of myself as being in first-order states then we have a way to satisfy Weak Transparency.

So what is Gottlieb’s problem with this way of interpreting the transitivity principle? He has a section of the paper discussing this kind of move. He says,

4.3 Non-State-Relational Transitivity

As it stands, TRANSITIVITY posits a relation between a higher-order state and a first-order state. But not all Higher-Order theorists construe TRANSITIVITY this way. Instead, some advance:

  • NON-STATE-RELATIONAL TRANSITIVITY: A conscious mental state is a mental state whose subject is aware of itself as being in that state.

NON-STATE-RELATIONAL TRANSITIVITY is an Object-Side Precisification. And it appears promising. For it says that we are aware of ourselves as being in conscious states, not simply that we are aware of our conscious states. These are different claims.

I agree that this is an importantly different way of thinking about the transitivity principle. However, I do not think that I actually endorse this version of the transitivity principle. As it is stated here NON-STATE-RELATIONAL TRANSITIVITY is still cast in terms of the first-order state.

What I mean by that is that when we ask the question ‘which mental state is phenomenally conscious?’ the current proposal would answer ‘the mental state the subject is aware of itself as being in’. Now, I do think that this is most likely the way that Rosenthal and Weisberg think of non-state-relational transitivity, but this is not the way that I think about it.

I have not put this in print yet (though it is in a paper in draft stage) but the way I would reformulate the transitivity principle would be as follows (or at least along these general lines),

  • A mental state is phenomenally conscious only if it appropriately makes one aware of oneself as being in some first-order mental state

This way of putting things emphasizes the claim that the higher-order state itself is the phenomenally conscious state.

Part of what I think is going on here is that there is an ambiguity in terms like ‘awareness’. When we say that we are aware of a first-order state, or whatever, what we should mean, from the higher-order perspective, is that the higher-order state aims at or targets or represents or whatever the first-order state. I have toyed with the idea that the ‘targeting’ relation boils down to a kind of causal-reference relation. But then we can also ask ‘how does it appear to the subject?’ and there it is not the case that we should say that it appears to the subject that they are aware of the first-order state. The subject will seemingly be aware of the items in the environment and this is because of the higher-order content of the higher-order representation.

Gottlieb thinks that non-state-relational transitivity,

 …will do nothing with respect to W-TRANSPARENCY…For presumably there will be (many!) cases where I am in the conscious state I am aware of myself as being in, and so cases where we will still need to ask in what sense I am aware of those states, and whether that sense comports with W-TRANSPARENCY. NON-STATE-RELATIONAL TRANSITIVITY doesn’t obviously speak to this latter question, though; the awareness we have of ourselves is de re, and presumably direct, but whether that’s also true of the awareness we have of our conscious states is another issue. So as it stands, NON-STATE-RELATIONAL TRANSITIVITY is not a consistency gloss.

I think it should be clear by now that this may apply to the kind of view he discusses, and that this view may even be one you could attribute to Rosenthal or Weisberg, but it is not the kind of view that I have advocated.

According to my view the higher-order state is itself the phenomenally conscious state; it is the one that there is something that it is like for one to be in. What, specifically, it is like will depend on the content of the higher-order representation. That is to say, the way the state describes one’s own self determines what it is like for you. When the first-order state is there, it, the first-order state, will be accurately described, but that is beside the point. W-transparency is clearly met by the HOROR version of higher-order theory. And if what I said above holds water then it is still a higher-order theory which endorses a version of the transitivity principle but is able to simultaneously capture many of the intuitions touted as evidence for first-order theories.

Eliminativism and the Neuroscience of Consciousness

I am teaching Introduction to Neuroscience this spring semester and am using An Introduction to Brain and Behavior 5th edition by Kolb et al as the textbook (this is the book the biology program decided to adopt). I have not previously used this book, so I am still finding my way around it, but so far I am enjoying it. The book makes a point of trying to connect neuroscience, psychology, and philosophy, which is pretty unusual for these kinds of textbooks (or at least it used to be!).

In the first chapter they go through some of the basic issues in the metaphysics of the mind, starting with Aristotle and then comparing Descartes’ dualism to Darwin’s Materialism. This is a welcome sight in a neuroscience/biological psychology textbook, but there are some points at which I find myself disagreeing with the way they set things up. I was thinking of saying something in class but we have so little time as it is. I then thought maybe I would write something and post it on Blackboard but if I do that I may as well have it here in case anyone else wants to chime in.

They begin by discussing the Greek myth of Cupid and Psyche and then say,

The ancient Greek philosopher Aristotle was alluding to this story when he suggested that all human intellectual functions are produced by a person’s psyche. The psyche, Aristotle argued, is responsible for life, and its departure from the body results in death.

Thus, according to them, the ordinary conception of the way things work, i.e. that the mind is the cause of our behavior, is turned by Aristotle into a psychological theory about the source or cause of behavior. They call this position mentalism.

They also say that Aristotle’s view was that the mind was non-material and separate from the body, and this is technically true. I am by no means an expert on Aristotle’s philosophy in general, but his view seems to have been that the mind was the form of the body, in something like the way that the shape of a statue is the form of (say) some marble. This is what is generally referred to as ‘hylomorphism’, the view that ordinary objects are somehow composed of both matter and form. I’ll leave aside the technical philosophical details, but I think the example of a statue does an ok job of getting at the basics. The statue of Socrates and the marble that it is composed of are two distinct objects for Aristotle, but I am not sure that I would say that the statue is non-physical. It is physical; it is just not identical to the marble it is made out of (you can destroy the statue without destroying the marble, so they seem like different things). So while it is true that Aristotle claimed the mind and body were distinct, I don’t think it is fair to say that he thought the psyche was non-physical. It was not identical to the body but was something like ‘the body doing what it does’ or ‘the organizing principle of the body’. But ok, that is a subtle point!

They go on to say that

Descartes’s thesis that the [non-physical] mind directed the body was a serious attempt to give the brain an understandable role in controlling behavior. This idea that behavior is controlled by two entities, a [non-physical] mind and a body, is dualism (from Latin, meaning two). To Descartes, the [non-physical] mind received information from the body through the brain. The [non-physical] mind also directed the body through the brain. The rational [non-physical] mind, then, depended on the brain both for information and to control behavior.

I think this is an interesting way to frame Descartes’s view. On the kind of account they are developing, Aristotle could not allow any kind of physical causation by the non-physical mind, but I am not sure this is correct.

But either way they have an interesting way of putting things. The question is what produces behavior? If we start with a non-physical mind as the cause of behavior then that seems to leave no role for the brain, so then we would have to posit that the brain and the non-physical mind work together to produce behavior.

They then go on to give the standard criticisms of Descartes’ dualism. They argue that it violates the conservation of energy, though this is not entirely clear (see David Papineau’s The Rise of Physicalism for some history on this issue). They also argue that dualism is a bad theory because it has led to morally questionable results. In particular:

Cruel treatment of animals, children, and the mentally ill has for centuries been justified by Descartes’s theory.

I think this is interesting and probably true. It is a lot easier to dehumanize something if you think the part that matters can be detached. However, I am not sure this counts as a reason to reject dualism. Keep in mind I am not much of a dualist, but if something is true then it is true. I tend to find that students more readily posit a non-physical mind for animals than deny that animals feel pain, as Descartes did, but that is neither here nor there.

Having set everything up in this way they then introduce eliminativism about the mind as follows.

The contemporary philosophical school eliminative materialism takes the position that if behavior can be described adequately without recourse to the mind, then the mental explanation should be eliminated.

Thus they seem to be claiming that the non-physical aspect of the system should be eliminated, which I think a lot of people might agree with, but also that along with it the mental items that Descartes and others thought were non-physical should be eliminated as well. I fully agree that, in principle, all of the behaviors of animals can be fully explained in terms of the brain and its activity but does this mean that we should eliminate the mind? I don’t think so! In fact I would generally think that this is the best argument against dualisms like Descartes’. We have never needed to actually posit any non-physical features in the explanation of animal behavior.

In general the book tends to neglect the distinction between reduction and elimination. One can hold that we should eliminate the idea that pains and beliefs are non-physical mental items and instead think that they are physical and can be found in the activity or biology of the brain. That is to say we can think that certain states of the brain just are the having of a belief or feeling of a pain, etc. Eliminativism, as it is usually understood, is not a claim about the physicality of the mind. It is instead a claim about how neuroscience will proceed in the future. That is to say the emphasis is not on the *materialism* but on the *eliminative* part. The goal is to distinguish it from other kinds of materialism not to distinguish it from dualism. The claim is that when neuroscience gives us the ultimate explanation of behavior we will see that there really is no such thing as a belief. This is very different from the claim that we will find out that certain brain states are beliefs.

Thus it is a bit strange that the authors run together the claim that the mind is a non-physical substance with the claim that there are such things as beliefs, desires, pains, itches, and so on. This seems to be a confusion that was evident in early discussions of eliminativism (see the link above), but now we know we can eliminate one and reduce the other (though we need not).

They go on to say,

Daniel Dennett (1978) and other philosophers, who have considered such mental attributes as consciousness, pain, and attention, argue that an understanding of brain function can replace mental explanations of these attributes. Mentalism, by contrast, defines consciousness as an entity, attribute, or thing. Let us use the concept of consciousness to illustrate the argument for eliminative materialism.

I do not think this is quite the right way to think about Dennett’s views but it is hard to know if there is a right way to think about them! At any rate it is true that Dennett thinks that we will not find anything like beliefs in the completed neuroscience but it is wrong to think that Dennett thinks we should eliminate mentalistic talk. It is true, for Dennett, that there are no beliefs in the brain but it is still useful, on his view, to talk about beliefs and to explain behavior in terms of beliefs.

He has lately taken to comparing his views with the way that your desktop computer works. When you look at the desktop there are various icons there and folders, etc. Clicking on the folder will bring up a menu showing where your saved files are, etc. But it would be a mistake to think that this gave you any idea about how the computer was working. It is not storing little file folders away. Rather there is a bunch of machine code and those icons are a convenient way for you to interface with that code without having to know anything about it. So, too, Dennett argues our talk about the mind is like that. It is useful but wrong about the nature of the brain.

At any rate how does consciousness illustrate the argument for eliminative materialism?

The experimenters’ very practical measures of consciousness are formalized by the Glasgow Coma Scale (GCS), one indicator of the degree of unconsciousness and of recovery from unconsciousness. The GCS rates eye movement, body movement, and speech on a 15-point scale. A low score indicates coma and a high score indicates consciousness. Thus, the ability to follow commands, to eat, to speak, and even to watch TV provide quantifiable measures of consciousness contrasting sharply with the qualitative description that sees consciousness as a single entity. Eliminative materialists would argue, therefore, that the objective, measurably improved GCS score of behaviors in a brain-injured patient is more useful than a subjective mentalistic explanation that consciousness has “improved.”

I don’t think I see much of an argument for eliminativism in this approach. The basic idea seems to be that we should take ‘the patient is conscious’ as a description of a certain kind of behavior that is tied to brain activity and that this should be taken as evidence that we should not take ‘consciousness’ to refer to a non-physical mental entity. This is interesting and it illustrates a general view I think is in the background of their discussion. Mentalism, as they define it, is the claim that the non-physical mind is the cause of behavior. They propose eliminating that but keeping the mentalistic terms, like ‘consciousness’. But they argue that we should think of these terms not as naming some subjective mental state but as a description of objective behavior.
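The contrast between a graded behavioral measure and an all-or-nothing mental entity can be made concrete with a small sketch. The subscale ranges here (eye opening 1-4, verbal response 1-5, motor response 1-6, summing to 3-15) are the standard GCS ranges, though the example scores are invented:

```python
def gcs_total(eye: int, verbal: int, motor: int) -> int:
    """Sum the three Glasgow Coma Scale subscores: eye opening (1-4),
    verbal response (1-5), and motor response (1-6), giving 3-15."""
    if not (1 <= eye <= 4 and 1 <= verbal <= 5 and 1 <= motor <= 6):
        raise ValueError("GCS subscore out of range")
    return eye + verbal + motor

print(gcs_total(4, 5, 6))  # 15: fully responsive by behavioral criteria
print(gcs_total(1, 2, 4))  # 7: severely impaired, but not a binary 'off'
```

The scale yields a number, not a verdict about a single entity called ‘consciousness’, and that is exactly the feature the authors are leaning on.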

I do agree that our ordinary conception of ‘consciousness’ in the sense of being awake or asleep or in a coma will come to be refined by things like the Glasgow Coma Scale. I also agree that this may be some kind of evidence against the existence of a non-physical mind that is either fully conscious or not at one moment. As the authors themselves are at pains to point out we can take the behavior to be tied to brain activity and it is there that I would expect to find consciousness. So I would take this as evidence of reduction or maybe slight modification of our ordinary concept of waking consciousness. That is, on my view, we keep the mental items and identify them with brain activity thereby rejecting dualism (even though I think dualism could be true, I just don’t think we have a lot of reason to believe that it is in fact true).

They make this clear in their summary of their view:

Contemporary brain theory is materialistic. Although materialists, your authors included, continue to use subjective mentalistic words such as consciousness, pain, and attention to describe more complex behaviors, at the same time they recognize that these words do not describe mental entities.

I think it should be very clear by now that they mean this as a claim about the non-physical mind. The word ‘consciousness’, on their view, describes a kind of behavior which can be tied to the brain, not a non-physical part of nature. But even so it will still be true that the brain’s activity will cause pain, as long as we interpret ‘pain’ as ‘pain behavior’.

However, I think it is also clear by now that we need not put things this way. It seems to me that the better way to think of things is that pain causes pain behavior, that pain is typically and canonically a conscious experience, and that we can learn about the nature of pain by studying the brain (because certain states of the brain just are states of being in pain). We can thereby be eliminativists about the non-physical mind while being reductionists about pain.

But, whichever way one goes on this, is it even correct to say that modern neuroscience is materialistic? This seems to assume too much. Contemporary neuroscience does make the claim that an animal’s behavior can be fully understood in terms of brain activity (and it seems to me that this claim is empirically well justified), but is this the same thing as being materialistic? It depends on what one thinks about consciousness. It is certainly possible to take all of what neuroscience says and still think that conscious experience is not physical. That is the point that some people want to make by imagining zombies (or claiming that they can). It seems to them that we could have everything that neuroscience tells us about the brain and its relation to behavior and yet still lack conscious experience in the sense that there is something that it is like for the subject. I don’t think we can really do this, but it certainly seems like we can to (me and) a lot of other people. I also agree that eliminativism is a possibility in some sense of that word, but I don’t see that neuroscience commits you to it or that it is in any way an assumption of contemporary brain theory.

It wasn’t that long ago (back in the 1980s) that Jerry Fodor famously said, “if commonsense psychology were to collapse, that would be, beyond comparison, the greatest intellectual catastrophe in the history of our species” and I tend to agree (with a somewhat less hyperbolic way of putting the point). The authors of this textbook may advocate eliminating our subjective mental life, but that is not something that contemporary neuroscience commits you to!

Kozuch on Lau and Brown

Way back on November 20th 2009 Benji Kozuch came and gave a talk at the CUNY Cognitive Science series and became the first to be persuaded by me to attempt an epic marathon of cognitive science, drinking, and jamming! The mission: give a 3-hour talk followed by intense discussion over drinks (and preceded by intense discussion over lunch), followed by a late night jam session at a midtown rehearsal studio. This monstrous marathon typically begins at noon with lunch and then concludes sometime around 10 pm when the jamming is done (drinks after jamming optional). That’s 10-plus hours of philosophical and musical mayhem! We recorded the jam that night but the recording was subsequently ruined, and no one has ever heard what happened that night…which is probably for the best!

This was just before our first open jam session at the Parkside Lounge (held after the American Philosophical Association meeting in NYC in December 2009), which became the New York Consciousness Collective and gave rise to Qualia Fest. But this itself was the culmination of a lot of music playing going back to the summer of 2006. The last Qualia Fest was in 2012 but since then we have had two other brave members of Club Cogsci. One is myself (in 2015) and the other is Joe LeDoux (in 2016). That’s 10 years of jamming with cognitive scientists and philosophers! Having done it myself, I can say it is grueling, and special thanks go to Benji for being such a champion.

Putting all of that to one side, Kozuch has in some recent publications argued against the position that I tentatively support. In particular, in his 2014 Philosophical Studies paper he argued that evidence from lesions to prefrontal areas casts doubt on higher-order theories of consciousness (see Lau and Rosenthal for a defense of higher-order theories against this kind of charge). I have for some time meant to post something about this (at one point I thought I might have a conference presentation based on it)…but, as is becoming more common, it has taken a while to get to it! Teaching a 6/3-6/3 load has been stressful but I think I am beginning to get the hang of how to manage time and to find the time to have some thoughts that are not related to children or teaching 🙂

The first thing I would note is that Kozuch clearly has the relational version of the higher-order theory in mind. In the opening setup he says,

…[Higher-Order] theories claim that a mental state M cannot be phenomenally conscious unless M is targeted by some mental state M*. It is precisely this claim that is my target.

This is one way of characterizing the higher-order approach but I have spent a lot of time suggesting that this is not the best way to think of higher-order theories. This is why I coined the term ‘HOROR theory’. I used to think that the non-relational way of doing things was closer to the spirit of what Rosenthal intended but now I think that this is a pointless debate and that there are just (at least) two different ways of thinking about higher-order theories. On the one kind, as Kozuch says, the first-order state M is made phenomenally conscious by the targeting of M by some higher-order state.

I have argued that another way of thinking about all of this is that it is not the first-order state that gets turned into a phenomenally conscious state. This is because of things like Block’s argument, and the empirical evidence (as I interpret that evidence at least). Now this would not really matter if all Kozuch wanted to do was to argue against the relational view, I might even join him in that! But if he is going to cite my work and argue against the view that I endorse then the HOROR theory might make a difference. Let’s see.

The basic premise of the paper is that if a higher-order theory is true then we have good reason to think that damaging or impairing the brain areas associated with the higher-order awareness should impair conscious experience. From here Kozuch argues that the best candidate for the relevant brain area is the dorsolateral prefrontal cortex. I agree that we have enough evidence to take this area seriously as a possible candidate for an area important for higher-order awareness, but I also think we need to keep in mind other prefrontal areas, and even the possibility that different prefrontal areas may have different roles to play in the higher-order awareness.

At any rate I think I can agree with Kozuch’s basic premise that if we damaged the right parts of the prefrontal cortex we should expect loss or degradation of visual phenomenology. But what would count as evidence of this? If we call an area of the brain an integral area only if that area is necessary for conscious experience then what will the result of disabling that area be? Kozuch begins to answer this question as follows,

It is somewhat straightforward what would happen if each of a subject’s integral areas (or networks) were disabled. Since the subject could no longer produce those HO states necessary for visual consciousness, we may reasonably predict this results in something phenomenologically similar to blindness.

I think this is somewhat right. From the subject’s point of view there would be no visual phenomenology, but I am not sure this is similar to blindness, where a subject seems to be aware of their lack of visual phenomenology (or at least can be made aware of it). Kozuch is careful to note in a footnote that it is at least a possibility that subjects may lose conscious phenomenology but fail to notice it, but I do not think he takes this as seriously as he should.

This is because on the higher-order theory, especially the non-relational version I am most likely to defend, the first-order states largely account for the behavioral data and the higher-order states account for visual phenomenology. Thus in a perfect separation of the two, that is, in a case of just first-order states and no higher-order states at all, the theory predicts that the behavior of the animal will be largely undisturbed. The first-order states will produce their usual effects and the animal will be able to sort, push buttons, etc. They will not be able to report on their experience, or any changes therein, because they will not have the relevant higher-order states to be aware that they are having any first-order states at all. I am not sure this is what is happening in these cases (I have heard some severe skepticism over whether these second-hand reports should be given much weight), but it is not ruled out theoretically, and so we haven’t got any real evidence that pushes past one’s intuitive feel for these things. Kozuch comes close to recognizing this when he says, in a footnote,

In what particular manner should we expect the deficits to be detected? I do not precisely know, but one could guess that a subject with a disabled integral area would not perform normally on (at least some) tests of their visual abilities. Failing that, we could probably still expect the subject to volunteer information indicating that things ‘‘seemed’’ visually different to her.

But both of these claims are disputed by the higher-order theory!

Later in the paper, where Kozuch is addressing some of the evidence for the involvement of the prefrontal cortex, he introduces the idea of redundancy. If someone objects that taking away one integral area does not dramatically diminish visual phenomenology because some other area takes over or covers for it, then he claims we are committed to the view that there are redundant duplications of first-order contents at the higher-order level. But this does not seem right to me. An alternative view is that the prefrontal areas each contribute something different to the content of the higher-order representation, and taking one away may take away one component of the overall representation. We do not need to appeal to redundancy to explain why there may not be dramatic changes in the conscious experiences of subjects.

Finally, I wish Kozuch had addressed what I take to be the main argument in Lau and Brown (and elsewhere), which is that we have empirical cases which suggest that there is a difference in the conscious visual phenomenology of a subject but where the first-order representations do not seem like they would differ in the relevant way. In one case, the Rare Charles Bonnet case, we have reason to think that the first-order representations are too weak to account for the rich phenomenal experience. In another case, subjective inflation, we have reason to think that the first-order states are held roughly constant while the phenomenology changes.


Chalmers on Brown on Chalmers

I just found out that the double special issue of the Journal of Consciousness Studies devoted to David Chalmers’ paper The Singularity: A Philosophical Analysis recently came out as a book! I had a short paper in that collection that stemmed from some thoughts I had about zombies and simulated worlds (I posted about them here and here). Dave responded to all of the articles (here) and I just realized that I never wrote anything about that response!

I have always had a love/hate relationship with this paper. On the one hand I felt like there was an idea worth developing, one that started to take shape back in 2009. On the other hand there was a pretty tight deadline for the special issue and I did not feel like I had really got ahold of what the main idea was supposed to be, in my own thinking. I felt rushed and secretly wished I could wait a year or two to think about it. But this was before I had tenure and I thought it would be a bad move to miss this opportunity. The end result is that I think the paper is flawed but I still feel like there is an interesting idea lurking about that needs to be more fully developed. Besides, I thought, the response from Dave would give me an opportunity to think more deeply about these issues and would be something I could respond to…that was five years ago! Well, I guess better late than never so here goes.

My paper was divided into two parts. As Dave says,

First, [Brown] cites my 1990 discussion piece “How Cartesian dualism might have been true”, in which I argued that creatures who live in simulated environments with separated simulated cognitive processes would endorse Cartesian dualism. The cognitive processes that drive their behavior would be entirely distinct from the processes that govern their environment, and an investigation of the latter would reveal no sign of the former: they will not find brains inside their heads driving their behavior, for example. Brown notes that the same could apply even if the creatures are zombies, so this sort of dualism does not essentially involve consciousness. I think this is right: we might call it process dualism, because it is a dualism of two distinct sorts of processes. If the cognitive processes essentially involve consciousness, then we have something akin to traditional Cartesian dualism; if not, then we have a different sort of interactive dualism.

Looking back on this now, I think part of the idea I had was that what Dave here calls ‘process dualism’ is really what lies behind the conceivability of zombies. Instead of testing whether (one thinks that) dualism or physicalism is true about consciousness, the two-dimensional argument against materialism is really testing whether one thinks that consciousness is grounded in biological or in functional/computational properties. This debate is distinct from, and orthogonal to, the debate about physicalism versus dualism.
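To see where this diagnosis bites, it may help to have the bare bones of the zombie argument on the table. Here is a rough schematic (my own gloss, simplifying away the two-dimensional refinements), where P is the complete microphysical truth about our world and Q is some phenomenal truth:

$$
\begin{array}{ll}
1. & \Diamond_{c}(P \wedge \neg Q) \quad \text{(a zombie world is conceivable)}\\
2. & \Diamond_{c}(P \wedge \neg Q) \rightarrow \Diamond_{m}(P \wedge \neg Q) \quad \text{(conceivability entails metaphysical possibility)}\\
3. & \Diamond_{m}(P \wedge \neg Q) \rightarrow \text{materialism is false}\\
\therefore & \text{materialism is false}
\end{array}
$$

If I am right, intuitions about premise 1 are really tracking whether one takes consciousness to be biologically or functionally grounded, rather than tracking the truth of materialism itself.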

In the next part of the response Dave addresses my attempted extension of this point, which tried to reconcile the conceivability of zombies with what I called ‘biologism’. ‘Biologism’ was supposed to be a word distinguishing the debate between the physicalist and the dualist from the debate between biologically oriented views of the mind and computationally oriented views. At the time I thought I had coined the term, and it was supposed to be an umbrella term that would have biological materialism as one particular variant. I should note before going on that it was only after the paper was published that I became aware that the term has a history and is associated with certain views about ‘the use of biological explanations in the analysis of social situations‘. This is not what I intended, and had I known that beforehand I would have tried to coin a different term.

The point was to emphasize that this debate is distinct from the debate about physicalism, and that one could endorse this kind of view even if one rejected biological materialism. The family of views I was interested in defending can be summed up as holding that consciousness is ultimately grounded in, or caused by, some biological property of the brain and that a simulation of the brain would lack that property. This is compatible with materialism (=identity theory) but also with dualism. One could be a dualist and yet hold that only biological agents could bear the required relation to the non-physical mind. Indeed, in my experience this is the view of the vast majority of those who accept dualism (by which I mostly mean my students). Having said that, it is true that in my own thinking I lean towards physicalism (though as a side note I do not claim that physicalism is true, only that we have no good reason to reject it), and it is certainly true that in the paper I say that this can be used to make the relevant claim about biological materialism.

At any rate, here is what Dave says about my argument.

Brown goes on to argue that simulated worlds show how one can reconcile biological materialism with the conceivability and possibility of zombies. If biological materialism is true, a perfect simulation of a biological conscious being will not be conscious. But if it is a perfect simulation in a world that perfectly simulates our physics, it will be a physical duplicate of the original. So it will be a physical duplicate without consciousness: a zombie.

I think Brown’s argument goes wrong at the second step. A perfect simulation of a physical system is not a physical duplicate of that system. A perfect simulation of a brain on a computer is not made of neurons, for example; it is made of silicon. So the zombie in question is a merely functional duplicate of a conscious being, not a physical duplicate. And of course biological materialism is quite consistent with functional duplicates.

It is true that from the point of view of beings in the simulation, the simulated being will seem to have the same physical structure that the original being seems to us to have in our world. But this does not entail that it is a physical duplicate, any more than the watery stuff on Twin Earth that looks like water really is water. (See note 7 in “The Matrix as metaphysics” for more here.) To put matters technically (nonphilosophers can skip!), if P is a physical specification of the original being in our world, the simulated being may satisfy the primary intension of P (relative to an inhabitant of the simulated world), but it will not satisfy the secondary intension of P. For zombies to be possible in the sense relevant to materialism, a being satisfying the secondary intension of P is required. At best, we can say that zombies are (primarily) conceivable and (primarily) possible—but this possibility merely reflects the (secondary) possibility of a microfunctional duplicate of a conscious being without consciousness, and not a full physical duplicate. In effect, on a biological view the intrinsic basis of the microphysical functions will make a difference to consciousness. To that extent the view might be seen as a variant of what is sometimes known as Russellian monism, on which the intrinsic nature of physical processes is what is key to consciousness (though unlike other versions of Russellian monism, this version need not be committed to an a priori entailment from the underlying processes to consciousness).

I have to say that I am sympathetic with Dave’s diagnosis of the flaw in the paper’s argument. It is a mistake to think of the simulated world, with its simulated creatures, as being a physical duplicate of our world in the right way, especially if the simulation is taking place in the original non-simulated world. If the biological view is correct then it is just a functional duplicate (true, a microfunctional duplicate) but not a physical duplicate.
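For readers who want a quick gloss on the two-dimensional machinery Dave is invoking, the stock example of ‘water’ gives the rough idea (this is a simplification on my part, not his full apparatus):

$$
\begin{aligned}
\text{primary intension of ‘water’}:&\quad w \mapsto \text{the dominant watery stuff in } w \quad (w \text{ considered as actual})\\
\text{secondary intension of ‘water’}:&\quad w \mapsto \mathrm{H_2O} \quad (w \text{ considered as counterfactual})
\end{aligned}
$$

On Twin Earth the local watery stuff satisfies the primary intension but not the secondary one. Analogously, the simulated being satisfies the primary intension of P, since from inside the simulation things seem physically just as things here seem to us, but not the secondary intension, since it is not actually made of neurons; hence it is only a microfunctional duplicate.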

While I think this is right, I also think the issues are complicated. For example, take the typical Russellian pan(proto)psychism currently being explored by Chalmers and others. This view is touted as being compatible with the conceivability of zombies because we can conceive of a duplicate of our physics so long as we mean the structural, non-intrinsic properties. Since physics, on this view, describes only these structural features, we can count the zombie world as having our physics in this narrow sense. The issues here are complex, but this looks superficially just like the situation described in my paper. The simulated world captures all of the structural features of physics but leaves out whatever biological properties are necessary, and in this sense the reasoning of the paper holds up.

This is why I think the comparison with Russellian monism that Dave invokes is helpful. In fact, when I pitched my commentary to Dave I included this comparison, but it did not get developed in the paper. At any rate, I think what it helps us to see is the many ways in which we can *almost* conceive of zombies. This is a point I have made going back to some of my earliest writings about zombies. If the identity theory is true, or if some kind of biological view about consciousness is true, then there is some (as yet undiscovered) property or properties of biological neural states which necessitate/cause/just are the existence of phenomenal consciousness. Since we don’t know what this property is (yet), and since we don’t yet understand how it could necessitate/cause/etc. phenomenal consciousness, we may fail to include it in our conceptualization of a ‘zombie world’. Or we may include it and fail to recognize that this entails a contradiction. I am sympathetic to both of these claims.

On the one hand, we can certainly conceive of a world very nearly physically just like ours. This world may have all or most of the same physical properties, excepting certain necessary biological properties, and as a result the creatures there will behave in ways indistinguishable from us (given certain other assumptions). On the other hand, we may conceive of the zombie twin as a biologically exact duplicate, in which case we fail to see that this is not actually a conceivable situation. If we knew the full biological story we would be, or at least could be, in a position to see that we had misdescribed the situation, in just the same way as someone who did not know enough chemistry might think they could conceive of H2O failing to be water (in a world otherwise physically just like ours). This is what I take to be the essence of the Kripkean strategy. We allow that the thing in question is a metaphysical possibility but then argue that it is misdescribed in the original argument. In misdescribing it we think (mistakenly) that we have conceived of a certain situation obtaining, when really we have conceived of a slightly different situation, one which is compatible with physicalism.

Thus, while I think the issues are complex and that I did not get them right in the paper, I still think the paper is morally correct: to the extent that biological materialism resembles Russellian monism, the zombie argument is irrelevant to it.

A Higher-Order Theory of Emotional Consciousness

I am very happy to be able to say that the paper I have been writing with Joseph E. LeDoux is out in PNAS (Proceedings of the National Academy of Sciences). In this paper we develop a higher-order theory of conscious emotional experience.

I have been interested in the emotions for quite some time now. I wrote my dissertation trying to show that it is possible to take seriously the role that the emotions play in our moral psychology, a role seemingly revealed by contemporary cognitive neuroscience and one which I take to suggest that one of the basic premises of emotivism is true, while at the same time preserving space for one to take some kind of moral realism seriously. In the dissertation I was more concerned with the philosophy of language than with the nature of the emotions, but I have always been attracted to a rather simplistic view on which the differing conscious emotions differ with respect to the way they feel subjectively (I explore this as a general approach to the propositional attitudes in The Mark of the Mental). The idea that emotions are feelings is an old one in philosophy but has fallen out of favor in recent years. I also felt that the higher-order approach to consciousness would come in handy in fleshing out such an account. This became especially clear to me when I reviewed the book Feelings and Emotions: The Amsterdam Symposium, and I came away thinking it would be a good idea to approach the science of emotions with the higher-order theory of consciousness in mind.

That was back in 2008, and since then I have not really followed up on any of the ideas in my dissertation. I have always wanted to, but have always found something else to work on in the moment, which is why it is especially nice to have been working with Joseph LeDoux on something that explicitly combines the two. I am very happy with the result and look forward to any discussion.