Eliminative Non-Materialism

It struck me today that all of the eliminativists about the mind are physicalists (or materialists), and a quick Google search didn’t reveal any eliminativist dualists out there. But why is that?

I can see why a particular kind of dualist would reject eliminativism. If one held that the mind was transparent to itself in a strong way then the existence of beliefs and other mental states could be known directly via the first-person method of introspection. But does that exhaust the possibilities? Suppose one thought that there was a robust correlation (or even causation) between the brain and mind. Then one would expect a robust neural correlate of consciousness (NCC) for every conscious state (assuming a law-like connection or at least correlation between the brain and mental states).

To give us a model to work with let’s assume that there is a correlation between functional states of the brain and consciousness such that whenever certain functional states are realized that guarantees (given our laws of physics, etc.) that a certain (non-physical) state of consciousness is also instantiated. Now suppose that we have a pretty good functional definition of what the functional correlate of a given mental state should be. That is, suppose we have worked out in a fair amount of detail what kinds of functional states we expect would be correlated with the conscious mental states posited by folk-psychology. Now further suppose that when we advanced far enough into our neuroscience we saw that there were no such states realized in the brain, or that the states were somewhat like what we thought but varied in some dramatic way from what we had worked out folk-psychologically.

At that point it seems we would have two options. One thing we could do is to maintain that there is after all no law-like correlation between brain states and mental states. There is a belief or a red quale, say, but it is somehow instantiated in a way independent of the neural workings. This seems like a bad option. The second option would be to abandon folk-psychology and say that the non-physical states of mind are better captured by what the correlates are suggesting. The newly characterized non-physical states might be so different from the original folk-psychological postulates that we might be tempted to say that the originally postulated states don’t exist. Wouldn’t we then have arrived at an eliminative non-materialism?

As a corollary, doesn’t this possibility suggest that there aren’t any truly a priori truths knowable from introspection?

LeDoux and Brown on Higher-Order Theories and Emotional Consciousness

On Monday May 1st Joe LeDoux and I presented our paper at the NYU philosophy of mind discussion group. This was the second time that I have presented there (the first was with Hakwan (back in 2011!)). It was a lot of fun and there was some really interesting discussion of our paper.

There were a lot of inter-related points/objections that came out of the discussion, but here I will focus on just a few themes that stood out to Joe and me afterwards. I haven’t yet had the chance to talk with him extensively about this so this is just my take on the discussion.

One of the issues centered on our postulation that there are three levels of content in emotional consciousness. On the ‘traditional’ higher-order theory there is the postulation of two distinct states. One is ‘first-order’, where this means that the state represents something in the world (the animal’s body counts as being in the world in this sense). A higher-order mental state is one that has higher-order content, where this means that it represents a mental state as opposed to some worldly non-mental thing. It is often assumed that the first-order state will have some basic, some might even say ‘non-representational’ or non-conceptual, kind of content. We do not deny that there are states like these, but we suggested that we needed to ‘go up a level’, so to speak.

Before delving into this I will say that I view this as an additional element in the theory. The basic idea of HOROR theory is just that the higher-order state is the phenomenally conscious state (because that is what phenomenal consciousness is). I am pretty sure that the idea of the lower-order state being itself a higher-order state is Joe’s idea, but to be fair I am not 100% sure. The idea was that the information coming in from the senses needed to be assembled in working memory in such a way as to allow the animal to connect memories, engage schemas, etc. We coined the term ‘lower-order’ to take the place of ‘first-order’. For us a lower-order state is just one that is the target of a higher-order representation. Thus, the traditional first-order states would count as lower-order on our view, but so would additional higher-order states that were re-represented at a higher level.

Thus on the view we defended the lower-order states are not first-order states. These states represent first-order states and thus are higher-order in nature. When you see an apple, for example, there must be a lot of first-order representations of the apple, but these must be put together in working memory and result in a higher-order state which is an awareness of these first-order states. That higher-order representation is the ‘ground floor’ representation for our view. It is itself not conscious but it results in the animal behaving in appropriate ways. At this lower-order level we would characterize the content as something like ‘(I am) seeing an apple’. That is, there is an awareness of the first-order states and a characterization of those states as being a seeing of red, but there is no explicit representation of the self. There is an implicit reference to the self, by which we mean that these states are attributed to the creature who has them but not in any explicit way. This is why we think of this state as just an awareness of the first-order activity (plus a characterization of it). At the third level we have a representation of this lower-order state (which is itself a higher-order state in that it represents first-order states).

Now, again, I do not really view this three-layer approach as essential to the HOROR theory. I think HOROR theory is perfectly compatible with the claim that it is first-order states that count as the targets. But I do think there is an interesting issue at stake here, and that is what role exactly the ‘I’ in ‘I am seeing a red apple’ is playing, and also whether first-order states can be enough to play the role of lower-order states. Doesn’t the visual activity related to the apple need to be connected to concepts of red and apple? If so then there needs to be higher-order activity that is itself not conscious.

Another issue focused on our methodological challenge to using animals in consciousness research. Speaking for myself I certainly think that animals are conscious, but since they cannot verbally report, and as long as we truly believe that the cognitive unconscious is as robust as is widely held, then we cannot rule out that animal behavior is produced by non-conscious processes. What this suggests is that we need to be cautious when we infer from an animal’s behavior to the cause of it being a phenomenally conscious mental state. Of course that could be what is going on, but how do we establish that? It cannot be the default assumption as long as we accept the claims about the cognitive unconscious. Thus we do not claim that animals do or do not have conscious experience, but rather that the science of consciousness is best pursued in humans (for now at least). For me this is related to what I think of as the biggest confound in all of consciousness science, and that is the confound of behavior. If an animal can perform a task then it is assumed this is because its mental states are conscious. But if this kind of task can be performed unconsciously then behavior by itself cannot guarantee consciousness.

One objection to this claim (sadly I forgot who made this…maybe they’ll remind me in the comments?) was that maybe verbal responses themselves are non-conscious. I asked whether the kind of view that Dennett has, where there is just some sub-personal mechanism which results in an utterance of “I am seeing red” and this is all there is to the conscious experience of seeing red, counts as the kind of view the objector had in mind. The response was that no, they had in mind that maybe the subjects are zombies with no conscious experience at all and yet were able to answer the question “what do you see” with “I see red,” just like zombies are thought to do. I responded to this with what I think is the usual way to respond to skeptical worries. That is, I acknowledge that there is a sense in which such skeptical scenarios are conceivable (though maybe not exactly as the conceiver supposes), but there are still reasons for not getting swept up in skepticism. For example I agree with the “lessons” from fading, dancing, and absent qualia cases that we would be detached from our conscious experiences in an unreasonable way if this were happening. The laws of physics don’t give us any reason to suppose that there are radical differences between similar things (like you and me), though if we discovered an important brain area missing or damaged then I suppose we could be led to the conclusion that some member of the population lacked conscious experience. But why should we take this seriously now? I know I am conscious from my own first-person point of view, and unless we endorse a radical skepticism then science should start from the view that report is a reliable(ish) guide to what is going on in a subject’s mind.

Another issue focused on our claim that animal consciousness may be different from human conscious experience. If you really need the concept ‘fear’ in order to feel afraid, and if there is a good case to be made that animals don’t have our concept of fear, then their experience would be very different from ours. That by itself is not such a bad thing. I take it that it is common sense that animal experience is not exactly like human experience. But it seems as though our view is committed to the idea that animals cannot have anything like the human experience of fear, or other emotions. Joe seemed to be ok with this but I objected. It is true that animals don’t have language like humans do and so are not able to form the rich and detailed kinds of concepts and schemas that humans do, but that does not mean that they lack any concept of fear. I think it is plausible to think that animals have some limited concepts, and if they are able to form concepts as basic as danger (present) and harm then they may have something that approaches human fear (or a basic version of it). A lot of this depends on your specific views about concepts.

Related to this, and brought up by Kate Pendoley, was the issue of whether there can be emotional experiences that we only later learn to describe with a word. I suggested that I thought the answer may be yes, but that even so we will describe the emotion in terms of its relations to other known emotions: ‘It is more like being afraid than feeling nausea’ and the like. This is related to my background view about a kind of ‘quality space’ for the mental attitudes.

Afterwards, over drinks, I had a discussion with Ned Block about the higher-order theory and the empirical evidence for the role of the prefrontal cortex in conscious experience. Ned has been hailing the recent Brascamp et al paper (nice video available here) as evidence against prefrontal theories. In that paper they showed that if you take away report and attention (by making the two stimuli barely distinguishable) then you can show that there is a loss of the prefrontal fMRI activation. I defended the response that fMRI is too crude a measure to take this null result too seriously. This is what I take to be the line argued in a recent paper by Brian Odegaard, Bob Knight, and Hakwan, Should a few null findings falsify prefrontal theories of consciousness? Null results are ambiguous between the falsifying interpretation and the effect simply being missed by a crude tool. As Odegaard et al argue, if we use more invasive measures like single-cell recording or ECoG then we would find prefrontal activity. In particular the Mante et al paper referred to in Odegaard et al is a pretty convincing demonstration that there is information decodable from prefrontal areas that would be missed by an fMRI. As they say in the linked paper,

There are numerous single- and multi- unit recording studies in non-human primates, clearly demonstrating that specific perceptual decisions are represented in PFC (Kim and Shadlen, 1999; Mante et al., 2013; Rigotti et al., 2013). Overall, these studies are compatible with the view that PFC plays a key role in forming perceptual decisions (Heekeren et al., 2004; Philiastides et al., 2011; Szczepanski and Knight, 2014) via ‘reading out’ perceptual information from sensory cortices. Importantly, such decisions are central parts of the perceptual process itself (Green and Swets, 1966; Ratcliff, 1978); they are not ‘post-perceptual’ cognitive decisions. These mechanisms contribute to the subjective percept itself (de Lafuente and Romo, 2006), and have been linked to specific perceptual illusions (Jazayeri and Movshon, 2007).

In addition to this Ned accused us of begging the question in favor of the higher-order theory. In particular he thought that there really was no conscious experience in the Rare Charles Bonnet cases and that our appeal to Rahnev was just question begging.

Needless to say I disagree with this, and there is a lot to say about these particular points, but I will have to come back to these issues later. Before I have to run, and just for the record, I should make it clear that, while I have always been drawn to some kind of higher-order account, I have also felt the pull of first-order theories. I am in general reluctant to endorse any view completely, but I guess I would have to say that my strongest allegiance is to the type-type identity theory. Ultimately I would like it to be the case that consciousness and mind are identical to states of the brain. I see the higher-order theory as compatible with the identity theory but I am also sympathetic to other versions (for full-full disclosure, there is even a tiny (tiny) part of me that thinks functionalism isn’t as bad as dualism (which itself isn’t *that* bad)).

Why, then, do I spend so much time defending the higher-order theory? When I was still an undergraduate student I thought that the higher-order thought theory of consciousness was obviously false. After studying it for a while and thinking more carefully about it I revised my credence to ‘not obviously false’. That is, I defended it against objections because I thought they dismissed the theory unduly quickly.

Over time, and largely for empirical reasons, I have updated my credence from ‘not obviously false’ to ‘possibly true’, and this is where I am at now. I have become more confident that the theory is empirically and conceptually adequate but I do not by any means think that there is a decisive case for the higher-order theory.

Dispatches from the Ivory Tower

In celebration of my ten years in the blogosphere I have been compiling some of my past posts into thematic meta-posts. The first of these listed my posts on the higher-order thought theory of consciousness. Continuing in this theme, below are links to posts I have done over the past ten years reporting on talks/conferences/classes I have attended. I wrote these mostly so that I would not forget about these sessions, but they may be interesting to others as well. Sadly, there are several things I have been to in the last year or so that I have not had the time to sit down and write about…ah well, maybe some day!

  1. 09/05/07 Kripke
    • Notes on Kripke’s discussion of existence as a predicate and fiction
  2. 09/05/2007 Devitt
  3. 09/05 Devitt II
  4. 09/19/07 Devitt on Meaning
    • Notes on Devitt’s class on semantics
  5. Flamming LIPS!
  6. Back to the Grind & Meta-Metaethics
  7. Day Two of the Yale/UConn Conference
  8. Peter Singer on Climate Change and Ethics
    • Notes on Singer’s talk at LaGuardia
  9. Where Am I?
    • Reflections on my talk at the American Philosophical Association talk in 2008
  10. Fodor on Natural Selection
    • Reflections on the Society of Philosophy and Psychology meeting June 2008
  11. Kripke’s Argument Against 4-Dimensionalism
    • Based on a class given at the Graduate Center
  12. Reflections on Zoombies and Shombies Or: After the Showdown at the APA
    • Reflections on my session at the American Philosophical Association in 2009
  13. Kripke on the Structure of Possible Worlds
    • Notes on a talk given at the Graduate Center in September 2009
  14. Unconscious Trait Inferences
    • Notes on social psychologist James Uleman’s talk at the CUNY Cogsci Speaker Series September 2009
  15. Attributing Mental States
    • Notes on James Dow’s talk at the CUNY Cogsci Speaker Series September 2009
  16. Busy Bees Busily Buzzing ‘Bout
  17. Shombies & Illuminati
  18. A Couple More Thoughts on Shombies and Illuminati
    • Some reflections after Kati Balog’s presentation at the NYU philosophy of mind discussion group in November 2009
  19. Attention and Mental Paint
    • Notes on Ned Block’s session at the Mind and Language Seminar in January 2010
  20. HOT Damn it’s a HO Down-Showdown
    • Notes on David Rosenthal’s session at the NYU Mind and Language Seminar in March 2010
  21. The Identity Theory in 2-D
    • Some thoughts in response to the Online Consciousness Conference in February 2010
  22. Part-Time Zombies
    • Reflections on Michael Pauen’s Cogsci talk at CUNY in March of 2010
  23. The Singularity, Again
    • Reflections on David Chalmers’ talk at the NYU Mind and Language seminar in April of 2010
  24. The New New Dualism
  25. Dream a Little Dream
    • Reflections on Miguel Angel Sebastian’s cogsci talk in July of 2010
  26. Explaining Consciousness & Its Consequences
    • Reflections on my talk at the CUNY Cog Sci Speaker Series August 2010
  27. Levine on the Phenomenology of Thought
    • Reflections on Levine’s talk at the Graduate Center in September 2010
  28. Swamp Thing About Mary
    • Reflections on Pete Mandik’s Cogsci talk at CUNY in October 2010
  29. Burge on the Origins of Perception
    • Reflections on a workshop on the predicative structure of experience sponsored by the New York Consciousness Project in October of 2010
  30. Phenomenally HOT
    • Reflections on the first session of Ned Block and David Carmel’s seminar on Conceptual and Empirical Issues about Perception, Attention and Consciousness at NYU January 2011
  31. Some Thoughts About Color
  32. Stazicker on Attention and Mental Paint
  33. Sid Kouider on Partial Awareness
    • A few notes about Sid Kouider’s recent presentation at the CUNY CogSci Colloquium in October 2011
  34. The 2D Argument Against Non-Materialism
    • Reflections on my Tucson Talk in April 2012
  35. Peter Godfrey-Smith on Evolution And Memory
    • Notes from the CUNY Cog Sci Speaker Series in September 2012
  36. The Nature of Phenomenal Consciousness
    • Reflections on my talk at the Graduate Center in September 2012
  37. Giulio Tononi on Consciousness as Integrated Information
    • Notes from the inaugural lecture of the new NYU Center for Mind and Brain by Giulio Tononi
  38. Mental Qualities 02/07/13: Cognitive Phenomenology
  39. Mental Qualities 02/21/13: Phenomenal Concepts
    • Notes/Reflections from David Rosenthal’s class in 2013
  40. The Geometrical Structure of Space and Time
    • Reflections on a session of Tim Maudlin’s course I sat in on in February 2014
  41. Towards some Reflections on the Tucson Conferences
    • Reflections on my presentations at the Tucson conferences
  42. Existentialism is a Transhumanism
    • Reflections on the NEH Seminar in Transhumanism and Technohumanism at LaGuardia I co-directed in 2015-2016

Seager on the Empirical Case for Higher-Order Theories of Consciousness

In the recent second edition of William Seager’s book Theories of Consciousness: An Introduction and Assessment he addresses some of my work on the higher-order theory. I haven’t yet read the entire book but he seems generally very skeptical of higher-order theories, which is fine. Overall the argument he presents is interesting and it allows me to clarify a few things.

It is clear from the beginning that he is interpreting the higher-order theory in the standard relational way. This is made especially clear when he says that the basic claim of higher-order theory can be put as follows:

A mental state is conscious if and only if it is the target of a suitable higher-order thought (page 94)

This is certainly the way that most people interpret the theory, and it is the main reason I adopted ‘HOROR’ theory as a name for the kind of view I thought was the natural interpretation of Rosenthal’s work. I seem to remember a time when I thought this was ‘the correct’ way to think about Rosenthal’s work, but I have since come to believe that it is not as cut and dried as that.

This is why I have given up on Rosenthal exegesis and instead just point out that there are two differing ways to interpret the theory. One is the relational kind of view summed up above. The other is the non-relational view, which I have argued allows us to capture key insights of the first-order theories. On this alternative interpretation the first-order state is not ‘made’ phenomenally conscious by the higher-order state. Rather, the higher-order state just is phenomenal consciousness. Simply having the appropriate higher-order state is what being phenomenally conscious consists in; there is nothing more to it than that. This is the way I interpret the higher-order theory.

Seager comes close to recognizing this when he says (on page 94),

Denial of (CS) [the claim that “if S is conscious then S is in (or has) at least one conscious state”] offers a clear escape hatch for HOT theory. Contrast that clarity with this alternative characterization of the issue ‘[c]onscious states are states we are conscious of ourselves as being in, whether we are actually in them’ (Rosenthal 2002 p 415). Here Rosenthal appears to endorse the existence of a conscious state which is not the target of a higher-order thought, contrary to HOT theory itself. If so then HOT theory is not the full account of the nature of conscious states and it is time to move on to other theories. I submit that it is better for HOT theorists to reject (CS) and allow for creatures to be conscious in certain ways in the absence of an associated conscious mental state.

The quote from Rosenthal is an accurate one and it does summarize his views. If one interprets it my way, as basically saying that the higher-order state is the phenomenally conscious state, then we do have a conscious state that is not the target of a higher-order state (or at least which need not be). This is because the higher-order state is phenomenally conscious but not because of a further higher-order state. It is because being phenomenally conscious consists in being aware of yourself in the way the higher-order theory requires. As I have argued, in several places, this does not require that we give up the higher-order theory or adopt a ‘same-order theory’. HOROR theory is the higher-order thought theory correctly interpreted.

It thus turns out that phenomenal consciousness is not the same thing as ‘state consciousness’ as it is usually defined on the traditional higher-order theory. That property involves being the target of the higher-order state. This is something that, on my view, reduces to the causal connections between higher-order states, and their conceptual contents, and the first-order states. This will amount to a causal theory of reference for higher-order states. They refer to the first-order states which cause them in the right way. The states to which they refer are what I call the ‘targets’ of the higher-order states. So, for me the targeting relation is causal, but for Rosenthal and others more influenced by Quine it essentially amounts to describing. Thus for Rosenthal the target of the relevant higher-order state will be the first-order state which ‘fits the description’ in the higher-order content. I suppose I could live with either of these ultimately but I do think you need to say something about this on the higher-order account. At any rate on my view being the target of the higher-order state tells us which state we are aware of and the content of the higher-order state tells us the way in which we are aware of it. The two typically occur together but if I had to call one the phenomenally conscious state it would be the higher-order state.

Seager goes on to say in the next paragraph,

One might try to make a virtue of necessity here and seek for confirmation of the false HOT scenario. There have been some recent attempts to marshall empirical evidence for consciousness in the absence of lower-level states but with the presence of characteristic higher-order thoughts, thus showing that the latter are sufficient to generate consciousness (see Lau and Rosenthal 2011; Lau and Brown forthcoming; Brown 2015). The strategy of these efforts is clear: Find the neural correlates of higher-order thoughts posited by HOT theory, test subjects on tasks which sometimes elicit consciousness and sometimes do not (e.g. present them with an image for a very short time and ask them to report on what they saw), and, ideally, observe that no lower-order states occur even in the case where subjects report seeing something. Needless to say, it is a difficult strategy to follow. (page 95)

I would quibble with the way that things are put here but overall I agree with it. The quibbles come from the characterization of the strategy. What Lau and I were arguing was that we want to find cases where the first-order state is either absent, degraded, or otherwise less rich than the conscious experiences of subjects. So we would be happy just with a mismatch between the first-order and higher-order cases. Whether we ever get the ideal total absence of first-order states is maybe too high a bar. This is why in the work that Lau does he aims to produce cases where task performance is matched but subjective reports differ. The primary goal is to show that conscious experience outstrips what is represented at the first-order level. It is a difficult strategy to follow, but all we can do is use the tools we have to try to test the various theories of consciousness.

Seager then goes on to focus on the case of the rare form of Charles Bonnet syndrome. In these rare cases subjects report very vivid visual hallucinations even though there is extensive damage to the primary visual cortex. Seager briefly considers Miguel Sebastian’s objection based on dreaming but then objects that

…a deeper problem undercuts the empirical case, tentative though it is, for HOT theory and the empty HOT scenario. This is a confusion about the nature of the lower-order and higher-order cognitive states at issue. ‘Lower-order’ does not mean ‘early’ and ‘higher-order’ does not mean ‘later’ in the brain’s processing of information. Higher-order refers specifically to thoughts about mental states as such; lower-order states are not about thoughts as such but are about the world as presented to the subject (including the subject’s body).

There is little reason to think that lower-order states, properly conceived, should be implemented in low-level or entry-level sensory systems. It is not likely that an isolated occipital lobe would generate visually conscious states.

Nor is it unlikely that lower-order states, states, that is, which represent the world and the body occur in ‘higher’ brain regions such as the dorsolateral prefrontal cortex. It would be astounding if that brain region were devoted to higher-order thoughts about mental states as such. (page 96)

I largely agree with the points being made here but I do not think that Lau and I were confused about this. The first thing I would say is that we are pretty explicit that we adopt the usage that we think the typical first-order theorist does (and especially Ned Block) and that we include areas outside the occipital lobe “that are known to contain high number of neurons explicitly coding for visual objects (e.g. fusiform face area)”  as first-order areas (see footnote 7 in the paper).

In the second instance we talked about three empirical cases in the paper and each was used for a slightly different purpose. When people discuss this paper, though, they typically focus on one out of the three. Here is how we summed up the cases in the paper:

To sum up, there are three kinds of Empirical Cases – Rare Charles Bonnet Cases (i.e. Charles Bonnet cases that result specifically from damage to the primary visual cortex), Inattentional Inflation (i.e. the results of Rahnev et al, in press and in review) and Peripheral Vision (introspective evidence from everyday life). The three cases serve slightly different purposes. The Rare Charles Bonnet Cases highlight the possibility of vivid conscious experience in the absence of primary visual cortex. If we take the primary visual cortex as the neural structure necessary for first-order representations, this is a straightforward case of conscious experience without first-order representations. In Inattentional Inflation, the putative first-order representations are not missing under the lack of attention, but they are not strong enough to account for the “inflated” level of reported subjective perception, in that both behavioral estimates of the signal-to-noise ratio of processing and brain imaging data show that there was no difference in overall quality or capacity in the first-order perceptual signal, which does not concern only the primary visual cortex but also other relevant visual areas. Finally, Peripheral Vision gives introspective evidence that conscious experience may not faithfully reflect the level of details supported by first-order visual processing. Though this does not depend on precise laboratory measures, it gives an intuitive argument that is not constrained by specific experimental details.

So I don’t think Seager’s criticism of us as being confused about this is fair.

In addition, in recent work with Joe LeDoux we endorse the second claim made by Seager. We explicitly argue that the ‘lower-order’ states we are interested in will occur in working memory and likely even dorsolateral prefrontal cortex.

But even though I think Seager is wrong to accuse us of being insensitive to or confused about this issue, I do think he goes on to present an interesting argument. He goes on to say,

The problem can be illustrated by the easy way HOT (or HOT-like) theorists pass over this crucial distinction. Consider these remarks from Richard Brown:

Anyone who has had experience with wine will know that acquiring a new word will sometimes allow one to make finer-grained distinctions in the experience that one has. One interpretation of what is going on here is that learning the new word results in one’s having a new concept and the application of this concept allows one to represent one’s mental life in a more fine-grained way. This results in more phenomenal properties in one’s experience…that amounts to the claim that one represents one’s mental life as instantiating different mental qualities.

Those unsympathetic to HOT theory will balk at this description. What is acquired is an enhanced ability to perceive or appreciate the wine in this case, not the experience of the wine (the experience itself does not seem to have any distinctive perceivable properties). After training the taster has new lower-order states which better characterize the wine, not new higher-order states aimed at and mentally characterizing the experience of tasting the wine.

Since there is no reason to restrict lower-order states to relatively peripheral sensory systems, it will be very hard to make out an empirical case for HOT theory and the empty HOT scenario in the way suggested. (pages 96-97)

The quote he offers here is from the HOROR paper and so it is interesting to see that the proposed solution, that the higher-order state is phenomenally conscious and that this is not giving up on the higher-order theory, is neglected.

Before going on I should say that I am pretty much sympathetic to the point being made here. I think there is a first-order account of what is going on. I also tend to think that this is ultimately an empirical issue. If there were a way to test this that would be great but I am not sure we have the capacity to do so yet. But my main point in the paper was not to offer this as a phenomenon that the first-order theorist couldn’t explain. What I was intending to do was to argue that the higher-order interpretation is one consistent interpretation of this phenomenon. It fits naturally with the theory and shows that there is nothing absurd in the basic tenet of the HOROR theory that phenomenal consciousness really is just a kind of higher-order thought, with conceptual content.

As I read Rosenthal he does not think the first-order account is plausible. For Rosenthal we are explicitly focusing on our experiences in these kinds of cases. One takes a drink of the wine and focuses on the taste of the wine. This may be done even after one has swallowed the wine. The same is true for the auditory cases. It does seem plausible that in these cases I am focused on my experience, not on the wine (it is the experience of the wine, of course). But if the general kind of theory he advocates is correct then one will still come to appreciate the wine itself. When I have the new fine-grained higher-order thoughts they will attribute to me finer-grained first-order states, and these will be described in terms of the properties I experience the wine as having. They will thus make me consciously aware of the wine and its qualities, but they do so by making me aware of the first-order states. The first-order alternative at least seems to be at a disadvantage here because it seems that on their view learning the new word produces new first-order qualities as opposed to making me aware of the qualities which were already there (as on the higher-order view). I think there is some evidence that we can have ‘top down’ activity producing/modifying lower-order states, so I ultimately think this is an empirical issue. At the very least I think we can say that this argument shows that the higher-order theory makes a clear, empirically testable prediction, and like the empty higher-order state claim itself, the more implausible the prediction the more of a victory it is when it is not falsified.

At any rate, abstracting from all of this, Seager presents an interesting argument. If I am reading it correctly the claim seems to be that the empirical case for the higher-order theory is going to be undercut because first-order theories are not committed to the claim that first-order states are to be found in early sensory areas, and first-order states might even be found in places like the dlPFC. If so then even if there were a difference in activation there, as compared with early sensory areas, this by itself would not be evidence for a higher-order theory because those may be first-order states.

The way I tried to get around this kind of worry (in my Brain and its States paper) was by taking D prime to be a measure of the first-order information which is being represented. This was justified, I thought, because the first-order (or lower-order) states are thought by us to largely drive the task performance. D prime gives us a measure of how well the subjects perform the task (it is computed by comparing the hit and false-alarm rates) and so it seems natural to suppose it gives a measure of what the first-order states are representing. The bias in judgment can be measured by C (the criterion) in signal detection theory, and this can roughly be treated as a measure of the confidence of the subjects. So, instead of looking for direct anatomical correlates we can look for matched D prime scores while there is a difference in subjective report. This is exactly what Lau and his lab have been able to show in many different cases. In addition, when there is fMRI data it shows no significant difference in any first-order areas while there is a difference in the prefrontal cortex. Is this due to residual first-order states in ‘higher-order’ areas? Maybe, but if so they would be accounted for in the measure of D prime. And that would not explain why subjects report a difference in visibility, or confidence, or whatever. Because of this I do not think the empirical case has been much undermined by Seager.
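For concreteness, here is a minimal sketch of how these signal detection measures are standardly computed. This is not from my paper or from Lau’s work; it is just an illustration, in Python, with made-up rates (the function name is mine). D prime is the difference between the z-transformed hit and false-alarm rates, and the criterion C is minus half their sum.

```python
from statistics import NormalDist

def sdt_measures(hit_rate: float, fa_rate: float):
    """Compute d' (sensitivity) and c (criterion/bias) from hit and
    false-alarm rates. Rates must be strictly between 0 and 1; in
    practice extreme rates are corrected before z-transforming."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical numbers: two conditions with matched sensitivity
# (d' of about 1.0) but different bias (c of 0 vs. about -0.5),
# i.e. matched task performance with different subjective report.
print(sdt_measures(0.69, 0.31))  # roughly (0.99, 0.0)
print(sdt_measures(0.84, 0.50))  # roughly (0.99, -0.50)
```

The point of the matched-performance paradigm shows up in the numbers: the two conditions share the same D prime (the same first-order information, on our assumption) while differing in C, which tracks the bias in subjective report.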

Gottlieb on Presentational Character and Higher-Order Thought Theories of Consciousness

In his paper, Presentational Character and Higher-Order Thoughts, which came out in 2015 in the Journal of Consciousness Studies, Gottlieb presents a general argument against the higher-order theory of consciousness which invokes some of my work as support. His basic idea is that conscious experience has what he calls presentational character, where this is something like the immediate directness with which we experience things in the world.

Nailing down this idea is a bit tricky but we don’t need to be too precise to get the puzzle he wants. He puts it this way in the paper,

Focus on the visual case. Then, fix the concept ‘presentational character’ in purely comparative terms, between visual experiences and occurrent thoughts: ‘presentational character’ picks out that phenomenological quality, whatever it is, that marks the difference between what it is like to be aware of an object O by having an occurrent thought about O and what it is like to be aware of an object O by having a visual experience of O. That is the phenomena I am claiming to be incompatible with the traditional HOT-theoretic explanation of consciousness. And so long as one concedes there is such a difference between thinking about O and visually experiencing O, we should have enough of a fix on our phenomenon of interest.

Whether or not you agree that presentational character, as Gottlieb defines it, is a separate, distinct, component of our overall phenomenology there is clearly a difference between consciously seeing red (a visual experience) and consciously thinking about red (a cognitive experience). If the higher-order theory of consciousness were not able to explain what this difference amounted to we would have to admit a serious deficit in the theory.

But why should we think that the higher-order theory has any problem with this? Gottlieb presents his official argument as follows:

S1  If HOT is true, m*(the HOT) entirely fixes the phenomenal character of experience.

S2  HOTs are thoughts.

S3  Presentational character is a type of phenomenal character.

S4  Thoughts as such do not have presentational character.

So:

S5 HOTs do not have presentational character.

Thus:

S6 If HOTs do not have presentational character, no experience (on HOT) has presentational character.

Therefore:

P1 If HOT is true, no experience has presentational character.

The rest of the paper goes on to defend the argument from various moves a higher-order theorist may make but I would immediately object to premise S4. There are some thoughts, in particular a specific kind of higher-order thought, which will have presentational character. Or at least these thoughts will be able to explain the difference that Gottlieb claims can’t be explained.

Gottlieb is aware that this is the most contentious premise of his argument. This is where he appeals to the work that I have done trying to connect the cognitive phenomenology debate to the higher-order thought theory of consciousness (this is the topic of some of my earliest posts here at Philosophy Sucks!). In particular he says,

Richard Brown and Pete Mandik (2013) have argued that if HOT is true, we can have (first-order, non-introspected) thoughts with proprietary phenomenology. Suppose one first has a suitable HOT about one’s first-order pain sensation. Here, the pain will become conscious. Yet now suppose one has a suitable HOT about one’s thought that the Eiffel Tower is tall. As Brown and Mandik point out, if we deny cognitive phenomenology, one will then need to say that though the thought is conscious, there is nothing that it is like for this creature to consciously think the thought. But this would be—by the edicts of HOT itself—absurd; after all, the two higher-order states are in every relevant respect the same.

I agree that this is what we say about the traditional higher-order theory (where we take the first-order state to be made conscious by the higher-order state) but I would prefer to put this by saying that if we are talking about phenomenal consciousness (as opposed to mere-state-consciousness) then it would be the higher-order state that was conscious, but other than that this is our basic point. How does it help Gottlieb’s case?

The argument is complicated but it seems to go like this. If we accept the conclusion of the argument from Brown and Mandik then conscious thoughts and visual experiences both have phenomenology, and they have different kinds of phenomenology (i.e. cognitive phenomenology is proprietary). In particular cognitive phenomenology does not have presentational character. Whatever the phenomenology of thinking is, it is not like seeing the thing in front of you! But now consider the case where you are seeing something red and you introspect that conscious experience. When one introspects, on the traditional higher-order view, one comes to have a third-order thought about the second-order thought. So, in effect, the second-order thought becomes conscious. But we already said that cognitive phenomenology is not the kind of thing that results in presentational character, so when the second-order thought becomes conscious we should be aware of it *as a thought* and so *as the kind of thing which lacks presentational character*, but that would mean that introspection is incompatible with presentational character.

I have had similar issues with Rosenthal’s account of introspection, so I am glad that Gottlieb is drawing attention to this issue. I have also explored his recommended solution of having the first-order state contribute something to the content of the higher-order state (here, and in my work with Hakwan).

I also have a talk and a draft of a paper devoted to exploring alternative accounts of introspection from the higher-order perspective. I put it up on Academia.edu but that was before I fully realized that I am not much of a fan of the way they are developing it. In fact, I forgot my login info and was locked out of seeing the paper myself for about a week! Someday I aim to revisit it. But one thing that I point out in that paper is that Rosenthal seems to talk about introspection in a very different way. Here is what he says in one relevant passage,

We sometimes have thoughts about our experiences, thoughts that sometimes characterize the experiences as the sort that visually represent red physical objects.  And to have a thought about an experience as visually representing a red object is to have a thought about the experience as representing that object qualitatively, that is, by way of its having some mental quality and it is the having of just such thoughts that make one introspectively conscious of one’s experience, (CM p. 119)

This paragraph has often been in my thoughts when I think about introspection on the higher-order theory. But it has become clear to me that a lot depends on what you mean by ‘thoughts about our experiences’.

Here is what I say in the earlier mentioned draft,

…In [Rosenthal’s Trends in Cognitive Science] paper with Lau where they respond to Rafi Malach, they characterize the introspective third-order thought as having the content ‘I am having this representation that I am seeing this red object’. I think it is interesting that they do not characterize it as having content like ‘I am having this thought that I am seeing red’. On their account we represent the second-order thought as being the kind of state that represents me as seeing physical red, and we do so in a way that does not characterize it as a thought. One reason for this may be that if, as we have seen, the highest-order thought determines what it is like for you, then if I am having a third-order thought with the content ‘I am having this thought that I am seeing red’ then what it will be like for me is like having a thought. But this is arguably not what happens in canonical cases of introspection (Gottlieb forthcoming makes a similar objection). Rosenthal himself in his earlier paper argued that when we introspect we are having thoughts about our experiences and that we characterize them as being the kind that qualitatively represents blue things. This is a strange way to characterize a thought.

So I agree that there seems to be a problem here for the higher-order theory but I would not construe it as a problem with the theory’s ability to explain presentational character. I think it can do that just fine. Rather what it suggests is that we should look for a different account of introspection.

When Rosenthal talks specifically about introspection he is talking about the very rare case where one ‘quote-unquote’ brackets the external world and considers one’s experience as such. So, in looking at a table I may consciously perceive it but I am focused on the table (and this translates to the claim that the concepts I employ in the higher-order thought are about the worldly properties). When I introspect I ‘bracket’ the table in the world and take my experience itself as the object of my inner awareness. The intuitive idea that Rosenthal wants to capture is that when we have conscious experience we are aware of our first-order states (as describing properties in the world) and in deliberate attentive introspection we are aware of our awareness of the first-order state. The higher-order state is unconscious and when we become aware of our awareness we make that state conscious, but, on his view, we do so in a way so as not to notice that it is a thought.

But part of me wonders about this. Don’t some people take introspection to be a matter of having a belief about one’s own experience? If so then a conscious higher-order thought would fit this bill. So there may be a notion of introspection that a third-order thought may account for. But we might also want a notion of introspection that is more directly related to focusing on what it is like for the subject. When I focus on the redness of my conscious experience it doesn’t seem as though I am having a conscious thought about the redness. It seems like I am focused on the particular nature of my conscious experience. We might describe that with something like ‘I am seeing red’, and that may sound like a conscious higher-order thought, but we are here talking about being aware of the conscious experience itself. So, to capture this, I would suggest that in both cases we are aware of our first-order states. In non-introspective consciousness we are aware of the first-order state as presenting something external to us. In introspective consciousness we are aware of the first-order state as a mental state, as being a visual experience, or a seeing, etc.

I am inclined to see these two kinds of thoughts as ‘being at the same level’ in the sense that they are both thoughts about the first-order states but which have very different contents. And this amounts to the claim that they employ different kinds of concepts. But these ideas are still very much in development. Any thoughts (of whatever order) appreciated!

Gottlieb on Brown

I have been interested in the relationship between the transitivity principle and transparency for quite a while now. This issue has come up again in a recent paper by Joseph Gottlieb fittingly called Transitivity and Transparency. This paper came out in Analytic Philosophy in 2016, but he actually sent me the paper beforehand. I read it and we had some email conversation about it (and this influenced my Introspective Consciousness paper (here is the Academia.edu session I had on it)), but I never got the chance to formulate any clear thoughts on it. So I figured I would give it a shot now.

There is a lot going on in the paper and so I will focus for the most part on his response to some of my early work on what would become HOROR theory. He argues that what he calls Non-State-Relational Transitivity is not an ‘acceptable consistency gloss’ on the transitivity principle. So what is a consistency gloss? The article is technical (it did come out in Analytic Philosophy, after all!). For Gottlieb this amounts to giving a precisification of the transitivity principle that renders it compatible with what he calls Weak Transparency. He defines these terms as follows,

TRANSITIVITY: Conscious mental states are mental states we are aware of in some way.

W-TRANSPARENCY: For at least one conscious state M, it is impossible to:

(a) TRANSPARENCY-DIRECT: Stand in a direct awareness relation to M; or
(b) TRANSPARENCY-DE RE: Stand in a de re awareness relation to M; or
(c) TRANSPARENCY-INT: Stand in an introspective awareness relation to M.

His basic claim, then, is that there is no way of making precise the statement of transitivity above in such a way as to render it consistent with the weak version of transparency that he thinks should count as a truism or platitude.

Of course my basic claim, one that I have made since the beginning of thinking about these issues, is that there is a way of doing this but it requires a proper understanding of what the transitivity principle says. If we do not interpret the theory as claiming that a first-order state is made conscious by the higher-order state (as Gottlieb does in TRANSITIVITY above) but instead think of transitivity as telling us that a conscious experience is one that makes me aware of myself as being in first-order states then we have a way to satisfy Weak Transparency.

So what is Gottlieb’s problem with this way of interpreting the transitivity principle? He has a section of the paper discussing this kind of move. He says,

4.3 Non-State-Relational Transitivity

As it stands, TRANSITIVITY posits a relation between a higher-order state and a first-order state. But not all Higher-Order theorists construe TRANSITIVITY this way. Instead, some advance:

  • NON-STATE-RELATIONAL TRANSITIVITY: A conscious mental state is a mental state whose subject is aware of itself as being in that state.

NON-STATE-RELATIONAL TRANSITIVITY is an Object-Side Precisification. And it appears promising. For it says that we are aware of ourselves as being in conscious states, not simply that we are aware of our conscious states. These are different claims.

I agree that this is an importantly different way of thinking about the transitivity principle. However, I do not think that I actually endorse this version of the transitivity principle. As it is stated here NON-STATE-RELATIONAL TRANSITIVITY is still cast in terms of the first-order state.

What I mean by that is that when we ask the question ‘which mental state is phenomenally conscious?’ the current proposal would answer ‘the mental state the subject is aware of itself as being in’. Now, I do think that this is most likely the way that Rosenthal and Weisberg think of non-state-relational transitivity, but this is not the way that I think about it.

I have not put this in print yet (though it is in a paper in draft stage) but the way I would reformulate the transitivity principle would be as follows (or at least along these general lines),

  • A mental state is phenomenally conscious only if it appropriately makes one aware of oneself as being in some first-order mental state

This way of putting things emphasizes the claim that the higher-order state itself is the phenomenally conscious state.

Part of what I think is going on here is that there is an ambiguity in terms like ‘awareness’. When we say that we are aware of a first-order state, or whatever, what we should mean, from the higher-order perspective, is that the higher-order state aims at or targets or represents or whatever the first-order state. I have toyed with the idea that the ‘targeting’ relation boils down to a kind of causal-reference relation. But then we can also ask ‘how does it appear to the subject?’ and there it is not the case that we should say that it appears to the subject that they are aware of the first-order state. The subject will seemingly be aware of the items in the environment and this is because of the higher-order content of the higher-order representation.

Gottlieb thinks that non-state-relational transitivity,

 …will do nothing with respect to W-TRANSPARENCY…For presumably there will be (many!) cases where I am in the conscious state I am aware of myself as being in, and so cases where we will still need to ask in what sense I am aware of those states, and whether that sense comports with W-TRANSPARENCY. NON-STATE-RELATIONAL TRANSITIVITY doesn’t obviously speak to this latter question, though; the awareness we have of ourselves is de re, and presumably direct, but whether that’s also true of the awareness we have of our conscious states is another issue. So as it stands, NON-STATE-RELATIONAL TRANSITIVITY is not a consistency gloss.

I think it should be clear by now that this may apply to the kind of view he discusses, and that this view may even be one you could attribute to Rosenthal or Weisberg, but it is not the kind of view that I have advocated.

According to my view the higher-order state is itself the phenomenally conscious state; it is the one that there is something that it is like for one to be in. What, specifically, it is like will depend on the content of the higher-order representation. That is to say, the way the state describes one’s own self determines what it is like for you. When the first-order state is there, it, the first-order state, will be accurately described, but that is beside the point. W-transparency is clearly met by the HOROR version of higher-order theory. And if what I said above holds water then it is still a higher-order theory which endorses a version of the transitivity principle but it is able to simultaneously capture many of the intuitions touted as evidence for first-order theories.

Eliminativism and the Neuroscience of Consciousness

I am teaching Introduction to Neuroscience this spring semester and am using An Introduction to Brain and Behavior 5th edition by Kolb et al as the textbook (this is the book the biology program decided to adopt). I have not previously used this book and so I am still finding my way around it, but so far I am enjoying it. The book makes a point of trying to connect neuroscience, psychology, and philosophy, which is pretty unusual for these kinds of textbooks (or at least it used to be!).

In the first chapter they go through some of the basic issues in the metaphysics of the mind, starting with Aristotle and then comparing Descartes’ dualism to Darwin’s Materialism. This is a welcome sight in a neuroscience/biological psychology textbook, but there are some points at which I find myself disagreeing with the way they set things up. I was thinking of saying something in class but we have so little time as it is. I then thought maybe I would write something and post it on Blackboard but if I do that I may as well have it here in case anyone else wants to chime in.

They begin by discussing the Greek myth of Cupid and Psyche and then say,

The ancient Greek philosopher Aristotle was alluding to this story when he suggested that all human intellectual functions are produced by a person’s psyche. The psyche, Aristotle argued, is responsible for life, and its departure from the body results in death.

Thus, according to them, the ordinary conception of the way things work, i.e. that the mind is the cause of our behavior, is turned by Aristotle into a psychological theory about the source or cause of behavior. They call this position mentalism.

They also say that Aristotle’s view was that the mind was non-material and separate from the body, and this is technically true. I am by no means an expert on Aristotle’s philosophy in general, but his view seems to have been that the mind was the form of the body in something like the way that the shape of a statue is the form of (say) some marble. This is what is generally referred to as ‘hylomorphism’, which means that ordinary objects are somehow composed of both matter and form. I’ll leave aside the technical philosophical details, but I think the example of a statue does an ok job of getting at the basics. The statue of Socrates and the marble that it is composed out of are two distinct objects for Aristotle, but I am not sure that I would say that the statue was non-physical. It is physical but it is just not identical to the marble it is made out of (you can destroy the statue and not destroy the marble, so they seem like different things). So while it is true that Aristotle claimed the mind and body were distinct, I don’t think it is fair to say that Aristotle thought that the psyche was non-physical. It was not identical to the body but was something like ‘the body doing what it does’ or ‘the organizing principle of the body’. But ok, that is a subtle point!

They go on to say that

Descartes’s thesis that the [non-physical] mind directed the body was a serious attempt to give the brain an understandable role in controlling behavior. This idea that behavior is controlled by two entities, a [non-physical] mind and a body, is dualism (from Latin, meaning two). To Descartes, the [non-physical] mind received information from the body through the brain. The [non-physical] mind also directed the body through the brain. The rational [non-physical] mind, then, depended on the brain both for information and to control behavior.

I think this is an interesting way to frame Descartes’ view. On the kind of account they are developing, Aristotle could not allow any kind of physical causation by the non-physical mind, but I am not sure this reading of Aristotle is correct.

But either way, they have an interesting way of putting things. The question is: what produces behavior? If we start with a non-physical mind as the cause of behavior, that seems to leave no role for the brain, and so we would have to posit that the brain and the non-physical mind work together to produce behavior.

They then go on to give the standard criticisms of Descartes’ dualism. They argue that it violates the conservation of energy, though whether interactionist dualism really conflicts with conservation is not entirely clear (see David Papineau’s The Rise of Physicalism for some history on this issue). They also argue that dualism is a bad theory because it has led to morally questionable results. In particular:

Cruel treatment of animals, children, and the mentally ill has for centuries been justified by Descartes’s theory.

I think this is interesting and probably true. It is a lot easier to dehumanize something if you think the part that matters can be detached. However, I am not sure this counts as a reason to reject dualism. Keep in mind that I am not much of a dualist, but if a theory is true then it is true, whatever its moral track record. I tend to find that students more readily posit a non-physical mind for animals than deny, as Descartes did, that animals feel pain, but that is neither here nor there.

Having set everything up in this way they then introduce eliminativism about the mind as follows.

The contemporary philosophical school eliminative materialism takes the position that if behavior can be described adequately without recourse to the mind, then the mental explanation should be eliminated.

Thus they seem to be claiming not only that the non-physical aspect of the system should be eliminated, which I think a lot of people might agree with, but also that the mental items Descartes and others took to be non-physical should go along with it. I fully agree that, in principle, all of the behaviors of animals can be fully explained in terms of the brain and its activity, but does this mean that we should eliminate the mind? I don’t think so! In fact I would say that this is the best argument against dualisms like Descartes’: we have never needed to posit any non-physical features in the explanation of animal behavior.

In general the book tends to neglect the distinction between reduction and elimination. One can hold that we should eliminate the idea that pains and beliefs are non-physical mental items and instead think that they are physical and can be found in the activity or biology of the brain. That is to say we can think that certain states of the brain just are the having of a belief or feeling of a pain, etc. Eliminativism, as it is usually understood, is not a claim about the physicality of the mind. It is instead a claim about how neuroscience will proceed in the future. That is to say the emphasis is not on the *materialism* but on the *eliminative* part. The goal is to distinguish it from other kinds of materialism not to distinguish it from dualism. The claim is that when neuroscience gives us the ultimate explanation of behavior we will see that there really is no such thing as a belief. This is very different from the claim that we will find out that certain brain states are beliefs.

Thus it is a bit strange that the authors run together the claim that the mind is a non-physical substance with the claim that there are such things as beliefs, desires, pains, itches, and so on. This seems to be a confusion that was evident in early discussions of eliminativism (see the link above), but by now it is clear that one can eliminate the former while reducing the latter, though of course one need not.

They go on to say,

Daniel Dennett (1978) and other philosophers, who have considered such mental attributes as consciousness, pain, and attention, argue that an understanding of brain function can replace mental explanations of these attributes. Mentalism, by contrast, defines consciousness as an entity, attribute, or thing. Let us use the concept of consciousness to illustrate the argument for eliminative materialism.

I do not think this is quite the right way to think about Dennett’s views, but it is hard to know whether there is a right way to think about them! At any rate, it is true that Dennett thinks that we will not find anything like beliefs in the completed neuroscience, but it is wrong to think that he wants to eliminate mentalistic talk. It is true, for Dennett, that there are no beliefs in the brain, but it is still useful, on his view, to talk about beliefs and to explain behavior in terms of beliefs.

He has lately taken to comparing his views to the way your desktop computer works. When you look at the desktop there are various icons and folders there. Clicking on a folder will bring up a window showing where your saved files are, and so on. But it would be a mistake to think that this gives you any idea about how the computer actually works: it is not storing little file folders away. Rather, there is a bunch of machine code, and those icons are a convenient way for you to interface with that code without having to know anything about it. So too, Dennett argues, our talk about the mind is like that. It is useful but wrong about the nature of the brain.
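
We can make the analogy concrete with a toy sketch (my own illustration, not Dennett’s; the byte layout and names are invented for the example). At the ‘user level’ there appear to be named files; at the ‘machine level’ there is only one undifferentiated buffer:

```python
# Toy illustration of the desktop analogy: the "machine level" is a flat
# block of bytes; the "user level" presents named files that are nowhere
# literally stored as little folders.

# Machine level: one undifferentiated byte buffer (hypothetical layout:
# alternating name/content entries separated by null bytes).
disk = bytearray(b"report.txt\x00Hello!\x00notes.txt\x00Misc ideas\x00")

class Desktop:
    """User-level interface: parses the raw bytes into 'files' on demand."""

    def files(self):
        # Split name/content pairs out of the flat buffer.
        parts = disk.split(b"\x00")
        return {parts[i].decode(): parts[i + 1].decode()
                for i in range(0, len(parts) - 1, 2)}

    def open(self, name):
        return self.files()[name]

desktop = Desktop()
print(list(desktop.files()))      # looks like discrete files...
print(desktop.open("notes.txt"))  # ...but underneath it is all one buffer
```

The user-level talk of ‘files’ is perfectly useful, and even predictively accurate, while telling you nothing true about how the storage is organized. That, roughly, is the status Dennett gives belief talk.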

At any rate, how does consciousness illustrate the argument for eliminative materialism?

The experimenters’ very practical measures of consciousness are formalized by the Glasgow Coma Scale (GCS), one indicator of the degree of unconsciousness and of recovery from unconsciousness. The GCS rates eye movement, body movement, and speech on a 15-point scale. A low score indicates coma and a high score indicates consciousness. Thus, the ability to follow commands, to eat, to speak, and even to watch TV provide quantifiable measures of consciousness contrasting sharply with the qualitative description that sees consciousness as a single entity. Eliminative materialists would argue, therefore, that the objective, measurably improved GCS score of behaviors in a brain-injured patient is more useful than a subjective mentalistic explanation that consciousness has “improved.”
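
Just to make the quantitative flavor of this concrete, here is a minimal sketch of GCS-style scoring (my own illustration, not from the textbook; the component ranges are the standard clinical ones, and the severity cutoffs are a common simplification):

```python
# Minimal sketch of Glasgow Coma Scale scoring. Real clinical use involves
# trained assessment of each component, not just arithmetic.

def glasgow_coma_score(eye: int, verbal: int, motor: int) -> int:
    """Sum the three standard components: eye opening (1-4),
    verbal response (1-5), motor response (1-6); totals run 3-15."""
    assert 1 <= eye <= 4 and 1 <= verbal <= 5 and 1 <= motor <= 6
    return eye + verbal + motor

def severity(score: int) -> str:
    # Commonly used (simplified) cutoffs: <= 8 severe (coma),
    # 9-12 moderate, 13-15 mild.
    if score <= 8:
        return "severe"
    return "moderate" if score <= 12 else "mild"

print(glasgow_coma_score(4, 5, 6), severity(15))  # 15 mild: fully responsive
print(glasgow_coma_score(1, 1, 2), severity(4))   # 4 severe: deep coma
```

On the approach the authors favor, a report like ‘the patient’s consciousness has improved’ just records that this sum went up.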

I don’t think I see much of an argument for eliminativism in this approach. The basic idea seems to be that we should take ‘the patient is conscious’ as a description of a certain kind of behavior that is tied to brain activity and that this should be taken as evidence that we should not take ‘consciousness’ to refer to a non-physical mental entity. This is interesting and it illustrates a general view I think is in the background of their discussion. Mentalism, as they define it, is the claim that the non-physical mind is the cause of behavior. They propose eliminating that but keeping the mentalistic terms, like ‘consciousness’. But they argue that we should think of these terms not as naming some subjective mental state but as a description of objective behavior.

I do agree that our ordinary conception of ‘consciousness’, in the sense of being awake or asleep or in a coma, will come to be refined by things like the Glasgow Coma Scale. I also agree that this may be some kind of evidence against the existence of a non-physical mind that is either fully conscious or not at a given moment. As the authors themselves are at pains to point out, we can take the behavior to be tied to brain activity, and it is there that I would expect to find consciousness. So I would take this as evidence of reduction, or maybe slight modification, of our ordinary concept of waking consciousness. That is, on my view, we keep the mental items and identify them with brain activity, thereby rejecting dualism (even though I think dualism could be true, I just don’t think we have much reason to believe that it is in fact true).

They make this clear in their summary of their view:

Contemporary brain theory is materialistic. Although materialists, your authors included, continue to use subjective mentalistic words such as consciousness, pain, and attention to describe more complex behaviors, at the same time they recognize that these words do not describe mental entities.

I think it should be very clear by now that they mean this as a claim about the non-physical mind. The word ‘consciousness’, on their view, describes a kind of behavior which can be tied to the brain, not a non-physical part of nature. But even so it will still be true that the brain’s activity causes pain, as long as we interpret ‘pain’ as ‘pain behavior’.

However, I think it is also clear by now that we need not put things this way. It seems to me that the better way to think of things is that pain causes pain behavior, that pain is typically and canonically a conscious experience, and that we can learn about the nature of pain by studying the brain (because certain states of the brain just are states of being in pain). We can thereby be eliminativists about the non-physical mind while being reductionists about pain.

But, whichever way one goes on this, is it even correct to say that modern neuroscience is materialistic? This seems to assume too much. Contemporary neuroscience does make the claim that an animal’s behavior can be fully understood in terms of brain activity (and it seems to me that this claim is empirically well justified), but is this the same thing as being materialistic? It depends on what one thinks about consciousness. It is certainly possible to accept all of what neuroscience says and still think that conscious experience is not physical. That is the point that some people want to make by imagining zombies (or claiming that they can). It seems to them that we could have everything that neuroscience tells us about the brain and its relation to behavior and yet still lack conscious experience in the sense that there is something that it is like for the subject. I don’t think we can really do this, but it certainly seems like we can to (me and) a lot of other people. I also agree that eliminativism is a possibility in some sense of that word, but I don’t see that neuroscience commits you to it or that it is in any way an assumption of contemporary brain theory.

It wasn’t that long ago (back in the 1980s) that Jerry Fodor famously said, “if commonsense psychology were to collapse, that would be, beyond comparison, the greatest intellectual catastrophe in the history of our species” and I tend to agree (though with a less hyperbolic way of putting the point). The authors of this textbook may advocate eliminating our subjective mental life, but that is not something that contemporary neuroscience commits you to!