Papa don’t Teach (again!)

The Brown Boys

2018 is off to an eventful start in the Brown household. My wife and I have just welcomed our newborn son Caden (pictured with his older brother Ryland and me to the right) and I will soon be going on parental leave until the end of April. For various reasons I had to finish the last two weeks of the short winter semester after Caden was born (difficult!). That is all wrapped up now and there is just one thing left to do before officially clocking out.

Today I will be co-teaching a class with Joseph LeDoux at NYU. Joe is teaching a course on The Emotional Brain and he asked me to come in to discuss issues related to our recent paper. I initially recorded the presentation below to get a feel for how long it was (I went a bit overboard, I think) but I figured that once it was done I would post it. The animations didn’t work out (I used PowerPoint instead of Keynote), I lost some of the pictures, and I was heavily rushed and sleep-deprived (plus I seem to be talking very slowly when I listen back to it), but at any rate any feedback is appreciated. Since this was to be presented to a neuroscience class I tried to emphasize some of the points made recently by Hakwan Lau at his blog.

The Biological Chinese Room (?)

I am getting ready to head out to New York Medical College to give Grand Rounds in the Department of Psychiatry and Behavioral Sciences on the Neurobiology of Consciousness. I am leaving in just a bit but as I was getting ready I had a strange thought about Searle’s Chinese Room argument that I thought I would jot down very quickly. I assume we are all familiar with the traditional version of the argument. We have you (or Searle) locked in a room, receiving input in a foreign language and looking up the proper responses in a giant rule book in order to return the proper output. In effect the person in the room is performing the job that a computer would: taking syntactic representations and transforming them according to formally specified rules. The general idea is that since Searle doesn’t thereby understand Chinese, there must be more to understanding than formal computation.
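
(As an aside for the programmers: the rule book is, in effect, nothing but a giant lookup table. Here is a deliberately silly sketch of the idea; the entries are of course made up, and Searle’s point is precisely that scaling the table up, or complicating the rules, changes nothing about understanding.)

```python
# A toy 'rule book': map incoming symbol strings to canned replies.
# Nothing in this loop understands anything; it is all syntax.
RULE_BOOK = {
    "你好": "你好！",              # "Hello" -> "Hello!"
    "你会说中文吗？": "会一点。",    # "Do you speak Chinese?" -> "A little."
}

def room(symbols: str) -> str:
    """Return whatever the rule book dictates for the input string."""
    return RULE_BOOK.get(symbols, "请再说一遍。")  # default: "Please say that again."

print(room("你好"))  # prints 你好！ with no understanding anywhere
```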

Now, I don’t want to get bogged down in going over the myriad responses and counter-responses that have appeared since Searle first gave this argument, but it did occur to me that we could give a biological version of it that would target the biological nature of consciousness that Searle prefers. Indeed, I think it would also work against Block’s recent claim that some kind of analog computation suffices for phenomenal consciousness (see his talk at Google (and especially the questions at the end)). So the basic idea is this. Instead of having the person in the room implement formal computations, have them implement analog ones by playing the role of neurons. They would be sequestered in the room as usual and would receive input in the form of neurotransmitters. They would then respond with the appropriate neurotransmitters. We can imagine the entire room is hooked up in such a way that the Chinese speaker on the outside is speaking normally, or typing or whatever, and this gets translated into neurochemical activity, which is what the person in the room receives. They respond in kind and this gets translated into speech on the other end. Searle still wouldn’t understand Chinese.

So it seems that either this refutes the biological view of consciousness or it suggests what is wrong with the original Chinese Room argument…any thoughts?

Cognitive Prosthetics and Mind Uploading

I am on record (in this old episode of Spacetime Mind where we talk to Eric Schwitzgebel) as being somewhat of a skeptic about mind uploading and artificial consciousness generally (especially for a priori reasons) but I also think this is largely an empirical matter (see this old draft of a paper that I never developed). So even though I am willing to be convinced I still have some non-minimal credence in the biological nature of consciousness and the mind generally, though in all honesty it is not as non-minimal as it used to be.

Those who are optimistic about mind uploading have often appealed to partial uploading as a practical convincing case. This point is made especially clearly by David Chalmers in his paper The Singularity: A Philosophical Analysis (a selection of which is reprinted as ‘Mind Uploading: A Philosophical Analysis’),

At the very least, it seems very likely that partial uploading will convince most people that uploading preserves consciousness. Once people are confronted with friends and family who have undergone limited partial uploading and are behaving normally, few people will seriously think that they lack consciousness. And gradual extensions to full uploading will convince most people that these systems are conscious as well. Of course it remains at least a logical possibility that this process will gradually or suddenly turn everyone into zombies. But once we are confronted with partial uploads, that hypothesis will seem akin to the hypothesis that people of different ethnicities or genders are zombies.

What is partial uploading? Uploading in general is never very well defined (that I know of) but it is often taken to involve in some way producing a functional isomorph of the human brain. Thus partial uploading would be the partial production of a functional isomorph of the human brain. In particular we would have to reproduce the function of the relevant neuron(s).

At this point we are not really able to do any kind of uploading of the sort Chalmers and others describe, but there are people who seem to be doing things that look a bit like partial uploading. First one might think of cochlear implants. What we can do now is impressive but it doesn’t look like uploading in any significant way. We have computers analyze incoming sound waves and then stimulate the auditory nerves in (what we hope) are appropriate ways. Even leaving aside the fact that subjects seem to report a phenomenological difference, and leaving aside how useful this is for a certain kind of auditory deficit, it is not clear that the computational device plays any role in constituting the conscious experience, or in being part of the subject’s mind. It looks to me like these are akin to fancy glasses. They causally interact with the systems that produce consciousness but do not show that the mind can be replaced by a silicon computer.
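
(For the curious, the basic signal chain is easy to caricature in code. The sketch below splits a sound into frequency bands and turns each band’s energy into a stimulation level for one electrode; the band edges, the number of electrodes, and the log compression are all placeholders of mine, not the specifications of any real device.)

```python
# A cartoon of the cochlear-implant signal chain: analyze sound into
# frequency bands, then map each band's energy onto one electrode.
import numpy as np

def stimulation_levels(signal: np.ndarray, sample_rate: float, n_electrodes: int = 8) -> np.ndarray:
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    # Log-spaced band edges between ~100 Hz and ~8 kHz (placeholder values).
    edges = np.logspace(np.log10(100), np.log10(8000), n_electrodes + 1)
    levels = np.array([
        spectrum[(freqs >= lo) & (freqs < hi)].sum()
        for lo, hi in zip(edges[:-1], edges[1:])
    ])
    return np.log1p(levels)  # crude loudness compression

t = np.linspace(0, 0.05, 800, endpoint=False)  # 50 ms at 16 kHz
tone = np.sin(2 * np.pi * 440 * t)              # an A4 test tone
print(stimulation_levels(tone, sample_rate=16000))
```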

The case of the artificial hippocampus gives us another nice test case. While still in its early development, it certainly seems like a real possibility that the next generation of people with memory problems may have neural prosthetics as an option (there is even a startup trying to make it happen, and here is a nice video of Theodore Berger presenting the main experimental work).

What we can do now is fundamentally limited by our lack of understanding of what all of the neural activity ‘means’, but even so there is impressive and suggestive evidence that something like a prosthetic hippocampus is possible. They record from an intact hippocampus (in rats) while the animal performs some memory task and then have a computer analyze the recordings and predict what the output of the hippocampus would have been. When compared to the actual output of hippocampal cells the prediction is pretty good, and the hope is that they can then use this to stimulate post-hippocampal neurons as they would have been stimulated if the hippocampus were intact. This has been done as a proof of principle in rats (not in real time), and now in monkeys, in real time, and in the prefrontal cortex as well!
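
(The logic of the prediction step is simple enough to caricature. Below is a toy version: fit a linear map from ‘input’ spike counts to ‘output’ activity, then use the fitted model to decide where to stimulate. To be clear, this is my simplification for illustration; as I understand it the actual work uses a much more sophisticated nonlinear multi-input multi-output model, and the fake data here stand in for real recordings.)

```python
# Toy version of the predict-and-stimulate idea (not the real MIMO model).
import numpy as np

rng = np.random.default_rng(0)

# Fake 'recordings': 500 time bins, 20 input cells, 5 output cells.
inputs = rng.poisson(1.0, size=(500, 20)).astype(float)
true_map = rng.normal(0.0, 0.3, size=(20, 5))
outputs = (inputs @ true_map + rng.normal(0.0, 0.5, (500, 5)) > 1.0).astype(float)

# Fit: least-squares map from input spike counts to output activity,
# playing the role of the model trained on the intact hippocampus.
weights, *_ = np.linalg.lstsq(inputs, outputs, rcond=None)

# 'Prosthesis' step: given new input activity, predict what the damaged
# region would have done and stimulate the channels that cross threshold.
new_inputs = rng.poisson(1.0, size=(1, 20)).astype(float)
predicted = new_inputs @ weights
print("stimulate output channels:", np.flatnonzero(predicted[0] > 0.5))
```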

The monkey work was really interesting. They had the animal perform a task which involved viewing a picture and then waiting through a delay period. After the delay period the animal is shown many pictures and has to pick out the one it saw before (this is one version of a delayed match-to-sample task). While the animals were doing this the experimenters recorded the activity of cells in the prefrontal cortex (specifically layers 2/3 and 5). When they introduced a drug into the region which was known to impair performance on this kind of task, the animal’s performance was very poor (as expected). But if they stimulated the animal’s brain in the way that their computer program predicted the deactivated region would have responded (specifically, they stimulated the layer 5 neurons (via the same electrode they had previously used to record) in the way that the model predicted they would have been driven by layers 2/3), the animal’s performance returned to almost normal! Theodore Berger describes this as something like ‘putting the memory into memory for the animal’. He then shows that if you do this with an animal that has an intact brain it does better than it did before. This can be used to enhance the performance of a neurotypical brain!

They say they are doing human trials but I haven’t heard anything about that. Even so this is impressive in that they used it successfully in rats for long-term memory in the hippocampus and then also in monkeys for working memory in the prefrontal cortex. In both cases they seem to get the same result. It starts to look like it is hard to deny that the computer is ‘forming’ the memory and transmitting it for storage. So something cognitive has been uploaded. Those sympathetic to the biological view will have to say that this is more like the cochlear implant case, where we have a system causally interacting with the brain but it is the biological brain that stores the memory, recalls it, and is responsible for any phenomenology or conscious experiences. It seems to me that they have to predict that in humans there will be a difference in the phenomenology that stands out to the subject (due to the silicon not being a functional isomorph), but if we get the same pattern of results for working memory in humans are we heading towards Chalmers’ acceptance scenario?

Dispatches from the Ivory Tower

In celebration of my ten years in the blogosphere I have been compiling some of my past posts into thematic meta-posts. The first of these listed my posts on the higher-order thought theory of consciousness. Continuing in this theme, below are links to posts I have done over the past ten years reporting on talks/conferences/classes I have attended. I wrote these mostly so that I would not forget about these sessions but they may be interesting to others as well. Sadly, there are several things I have been to in the last year or so that I have not had the time to sit down and write about…ah well, maybe some day!

  1. 09/05/07 Kripke
    • Notes on Kripke’s discussion of existence as a predicate and fiction
  2. 09/05/07 Devitt
  3. 09/05/07 Devitt II
  4. 09/19/07 Devitt on Meaning
    • Notes on Devitt’s class on semantics
  5. Flamming LIPS!
  6. Back to the Grind & Meta-Metaethics
  7. Day Two of the Yale/UConn Conference
  8. Peter Singer on Climate Change and Ethics
    • Notes on Singer’s talk at LaGuardia
  9. Where Am I?
    • Reflections on my talk at the American Philosophical Association in 2008
  10. Fodor on Natural Selection
    • Reflections on the Society of Philosophy and Psychology meeting June 2008
  11. Kripke’s Argument Against 4-Dimensionalism
    • Based on a class given at the Graduate Center
  12. Reflections on Zoombies and Shombies Or: After the Showdown at the APA
    • Reflections on my session at the American Philosophical Association in 2009
  13. Kripke on the Structure of Possible Worlds
    • Notes on a talk given at the Graduate Center in September 2009
  14. Unconscious Trait Inferences
    • Notes on social psychologist James Uleman’s talk at the CUNY Cogsci Speaker Series September 2009
  15. Attributing Mental States
    • Notes on James Dow’s talk at the CUNY Cogsci Speaker Series September 2009
  16. Busy Bees Busily Buzzing ‘Bout
  17. Shombies & Illuminati
  18. A Couple More Thoughts on Shombies and Illuminati
    • Some reflections after Kati Balog’s presentation at the NYU philosophy of mind discussion group in November 2009
  19. Attention and Mental Paint
    • Notes on Ned Block’s session at the Mind and Language Seminar in January 2010
  20. HOT Damn it’s a HO Down-Showdown
    • Notes on David Rosenthal’s session at the NYU Mind and Language Seminar in March 2010
  21. The Identity Theory in 2-D
    • Some thoughts in response to the Online Consciousness Conference in February 2010
  22. Part-Time Zombies
    • Reflections on Michael Pauen’s Cogsci talk at CUNY in March of 2010
  23. The Singularity, Again
    • Reflections on David Chalmers’ session at the NYU Mind and Language seminar in April of 2010
  24. The New New Dualism
  25. Dream a Little Dream
    • Reflections on Miguel Angel Sebastian’s cogsci talk in July of 2010
  26. Explaining Consciousness & Its Consequences
    • Reflections on my talk at the CUNY Cog Sci Speaker Series August 2010
  27. Levine on the Phenomenology of Thought
    • Reflections on Levine’s talk at the Graduate Center in September 2010
  28. Swamp Thing About Mary
    • Reflections on Pete Mandik’s Cogsci talk at CUNY in October 2010
  29. Burge on the Origins of Perception
    • Reflections on a workshop on the predicative structure of experience sponsored by the New York Consciousness Project in October of 2010
  30. Phenomenally HOT
    • Reflections on the first session of Ned Block and David Carmel’s seminar on Conceptual and Empirical Issues about Perception, Attention and Consciousness at NYU January 2011
  31. Some Thoughts About Color
  32. Stazicker on Attention and Mental Paint
  33. Sid Kouider on Partial Awareness
    • A few notes about Sid Kouider’s recent presentation at the CUNY CogSci Colloquium in October 2011
  34. The 2D Argument Against Non-Materialism
    • Reflections on my Tucson Talk in April 2012
  35. Peter Godfrey-Smith on Evolution And Memory
    • Notes from the CUNY Cog Sci Speaker Series in September 2012
  36. The Nature of Phenomenal Consciousness
    • Reflections on my talk at the Graduate Center in September 2012
  37. Giulio Tononi on Consciousness as Integrated Information
    • Notes from the inaugural lecture of the new NYU Center for Mind and Brain by Giulio Tononi
  38. Mental Qualities 02/07/13: Cognitive Phenomenology
  39. Mental Qualities 02/21/13: Phenomenal Concepts
    • Notes/Reflections from David Rosenthal’s class in 2013
  40. The Geometrical Structure of Space and Time
    • Reflections on a session of Tim Maudlin’s course I sat in on in February 2014
  41. Towards some Reflections on the Tucson Conferences
    • Reflections on my presentations at the Tucson conferences
  42. Existentialism is a Transhumanism
    • Reflections on the NEH Seminar in Transhumanism and Technohumanism at LaGuardia I co-directed in 2015-2016

Eliminativism and the Neuroscience of Consciousness

I am teaching Introduction to Neuroscience this spring semester and am using An Introduction to Brain and Behavior, 5th edition, by Kolb et al. as the textbook (this is the book the biology program decided to adopt). I have not previously used this book and so I am still finding my way around it, but so far I am enjoying it. The book makes a point of trying to connect neuroscience, psychology, and philosophy, which is pretty unusual for these kinds of textbooks (or at least it used to be!).

In the first chapter they go through some of the basic issues in the metaphysics of the mind, starting with Aristotle and then comparing Descartes’ dualism to Darwin’s materialism. This is a welcome sight in a neuroscience/biological psychology textbook, but there are some points at which I find myself disagreeing with the way they set things up. I was thinking of saying something in class but we have so little time as it is. I then thought maybe I would write something and post it on Blackboard, but if I do that I may as well post it here in case anyone else wants to chime in.

They begin by discussing the Greek myth of Cupid and Psyche and then say,

The ancient Greek philosopher Aristotle was alluding to this story when he suggested that all human intellectual functions are produced by a person’s psyche. The psyche, Aristotle argued, is responsible for life, and its departure from the body results in death.

Thus, according to them, the ordinary conception of the way things work, i.e. that the mind is the cause of our behavior, is turned by Aristotle into a psychological theory about the source or cause of behavior. They call this position mentalism.

They also say that Aristotle’s view was that the mind was non-material and separate from the body, and this is technically true. I am by no means an expert on Aristotle’s philosophy in general, but his view seems to have been that the mind was the form of the body in something like the way that the shape of a statue is the form of (say) some marble. This is what is generally referred to as ‘hylomorphism’, the view that ordinary objects are somehow composed of both matter and form. I’ll leave aside the technical philosophical details but I think the example of a statue does an OK job of getting at the basics. The statue of Socrates and the marble that it is composed of are two distinct objects for Aristotle, but I am not sure that I would say that the statue is non-physical. It is physical; it is just not identical to the marble it is made out of (you can destroy the statue without destroying the marble, so they seem like different things). So while it is true that Aristotle claimed the mind and body were distinct, I don’t think it is fair to say that he thought the psyche was non-physical. It was not identical to the body but was something like ‘the body doing what it does’ or ‘the organizing principle of the body’. But OK, that is a subtle point!

They go on to say that

Descartes’s thesis that the [non-physical] mind directed the body was a serious attempt to give the brain an understandable role in controlling behavior. This idea that behavior is controlled by two entities, a [non-physical] mind and a body, is dualism (from Latin, meaning two). To Descartes, the [non-physical] mind received information from the body through the brain. The [non-physical] mind also directed the body through the brain. The rational [non-physical] mind, then, depended on the brain both for information and to control behavior.

I think this is an interesting way to frame Descartes’ view. On the kind of account they are developing, Aristotle could not allow any kind of physical causation by the non-physical mind, but I am not sure this is correct.

But either way they have an interesting way of putting things. The question is: what produces behavior? If we start with a non-physical mind as the cause of behavior then that seems to leave no role for the brain, and so we would have to posit that the brain and the non-physical mind work together to produce behavior.

They then go on to give the standard criticisms of Descartes’ dualism. They argue that it violates the conservation of energy, though this is not entirely clear (see David Papineau’s The Rise of Physicalism for some history on this issue). They also argue that dualism is a bad theory because it has led to morally questionable results. In particular:

Cruel treatment of animals, children, and the mentally ill has for centuries been justified by Descartes’s theory.

I think this is interesting and probably true. It is a lot easier to dehumanize something if you think the part that matters can be detached. However, I am not sure this counts as a reason to reject dualism. Keep in mind I am not much of a dualist, but if something is true then it is true. I tend to find that students more readily posit a non-physical mind for animals than deny that animals have pain, as Descartes did, but that is neither here nor there.

Having set everything up in this way they then introduce eliminativism about the mind as follows.

The contemporary philosophical school eliminative materialism takes the position that if behavior can be described adequately without recourse to the mind, then the mental explanation should be eliminated.

Thus they seem to be claiming that the non-physical aspect of the system should be eliminated, which I think a lot of people might agree with, but also that the mental items that Descartes and others thought were non-physical should be eliminated along with it. I fully agree that, in principle, all of the behaviors of animals can be fully explained in terms of the brain and its activity, but does this mean that we should eliminate the mind? I don’t think so! In fact I think this is the best argument against dualisms like Descartes’: we have never needed to posit any non-physical features in the explanation of animal behavior.

In general the book tends to neglect the distinction between reduction and elimination. One can hold that we should eliminate the idea that pains and beliefs are non-physical mental items and instead think that they are physical and can be found in the activity or biology of the brain. That is to say, we can think that certain states of the brain just are the having of a belief or the feeling of a pain, etc. Eliminativism, as it is usually understood, is not a claim about the physicality of the mind. It is instead a claim about how neuroscience will proceed in the future. That is to say, the emphasis is not on the *materialism* but on the *eliminative* part. The goal is to distinguish it from other kinds of materialism, not to distinguish it from dualism. The claim is that when neuroscience gives us the ultimate explanation of behavior we will see that there really is no such thing as a belief. This is very different from the claim that we will find out that certain brain states are beliefs.

Thus it is a bit strange that the authors run together the claim that the mind is a non-physical substance with the claim that there are such things as beliefs, desires, pains, itches, and so on. This seems to be a confusion that was evident in early discussions of eliminativism (see the link above), but now we know we can eliminate one and reduce the other, though we need not do either.

They go on to say,

Daniel Dennett (1978) and other philosophers, who have considered such mental attributes as consciousness, pain, and attention, argue that an understanding of brain function can replace mental explanations of these attributes. Mentalism, by contrast, defines consciousness as an entity, attribute, or thing. Let us use the concept of consciousness to illustrate the argument for eliminative materialism.

I do not think this is quite the right way to think about Dennett’s views but it is hard to know if there is a right way to think about them! At any rate it is true that Dennett thinks that we will not find anything like beliefs in the completed neuroscience but it is wrong to think that Dennett thinks we should eliminate mentalistic talk. It is true, for Dennett, that there are no beliefs in the brain but it is still useful, on his view, to talk about beliefs and to explain behavior in terms of beliefs.

He has lately taken to comparing his views with the way that your desktop computer works. When you look at the desktop there are various icons there and folders, etc. Clicking on the folder will bring up a menu showing where your saved files are, etc. But it would be a mistake to think that this gives you any idea of how the computer actually works. It is not storing little file folders away. Rather, there is a bunch of machine code, and those icons are a convenient way for you to interface with that code without having to know anything about it. So, too, Dennett argues, our talk about the mind is like that: useful but wrong about the nature of the brain.

At any rate how does consciousness illustrate the argument for eliminative materialism?

The experimenters’ very practical measures of consciousness are formalized by the Glasgow Coma Scale (GCS), one indicator of the degree of unconsciousness and of recovery from unconsciousness. The GCS rates eye movement, body movement, and speech on a 15-point scale. A low score indicates coma and a high score indicates consciousness. Thus, the ability to follow commands, to eat, to speak, and even to watch TV provide quantifiable measures of consciousness contrasting sharply with the qualitative description that sees consciousness as a single entity. Eliminative materialists would argue, therefore, that the objective, measurably improved GCS score of behaviors in a brain-injured patient is more useful than a subjective mentalistic explanation that consciousness has “improved.”
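
(Since the point here is that the GCS makes ‘consciousness’ quantifiable, it may help to see just how mechanical the scoring is. The standard component ranges are eye opening 1–4, verbal response 1–5, and motor response 1–6, summing to 3–15; the little function below is just my illustration of that arithmetic, with the clinical criteria for each level omitted.)

```python
# Toy illustration of GCS-style scoring: consciousness as a number.
def glasgow_coma_score(eye: int, verbal: int, motor: int) -> int:
    """Sum the three standard component ratings into a 3-15 total."""
    if not (1 <= eye <= 4 and 1 <= verbal <= 5 and 1 <= motor <= 6):
        raise ValueError("component rating out of range")
    return eye + verbal + motor

# A fully responsive patient scores 15; deep coma approaches the minimum of 3.
print(glasgow_coma_score(eye=4, verbal=5, motor=6))  # 15
```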

I don’t think I see much of an argument for eliminativism in this approach. The basic idea seems to be that we should take ‘the patient is conscious’ as a description of a certain kind of behavior that is tied to brain activity and that this should be taken as evidence that we should not take ‘consciousness’ to refer to a non-physical mental entity. This is interesting and it illustrates a general view I think is in the background of their discussion. Mentalism, as they define it, is the claim that the non-physical mind is the cause of behavior. They propose eliminating that but keeping the mentalistic terms, like ‘consciousness’. But they argue that we should think of these terms not as naming some subjective mental state but as a description of objective behavior.

I do agree that our ordinary conception of ‘consciousness’ in the sense of being awake or asleep or in a coma will come to be refined by things like the Glasgow Coma Scale. I also agree that this may be some kind of evidence against the existence of a non-physical mind that is either fully conscious or not at one moment. As the authors themselves are at pains to point out we can take the behavior to be tied to brain activity and it is there that I would expect to find consciousness. So I would take this as evidence of reduction or maybe slight modification of our ordinary concept of waking consciousness. That is, on my view, we keep the mental items and identify them with brain activity thereby rejecting dualism (even though I think dualism could be true, I just don’t think we have a lot of reason to believe that it is in fact true).

They make this clear in their summary of their view:

Contemporary brain theory is materialistic. Although materialists, your authors included, continue to use subjective mentalistic words such as consciousness, pain, and attention to describe more complex behaviors, at the same time they recognize that these words do not describe mental entities.

I think it should be very clear by now that they mean this as a claim about the non-physical mind. The word ‘consciousness’, on their view, describes a kind of behavior which can be tied to the brain, not a non-physical part of nature. But even so it will still be true that the brain’s activity causes pain, as long as we interpret ‘pain’ as ‘pain behavior’.

However, I think it is also clear by now that we need not put things this way. It seems to me that the better way to think of things is that pain causes pain behavior, and that pain is typically and canonically a conscious experience, and that we can learn about the nature of pain by studying the brain (because certain states of the brain just are states of being in pain).  We can thereby be eliminativists about the non-physical mind while being reductionists about pain.

But, whichever way one goes on this, is it even correct to say that modern neuroscience is materialistic? This seems to assume too much. Contemporary neuroscience does make the claim that an animal’s behavior can be fully understood in terms of brain activity (and it seems to me that this claim is empirically well justified) but is this the same thing as being materialistic? It depends on what one thinks about consciousness. It is certainly possible to take all of what neuroscience says and still think that conscious experience is not physical. That is the point that some people want to make by imagining zombies (or claiming that they can). It seems to them that we could have everything that neuroscience tells us about the brain and its relation to behavior and yet still lack conscious experience in the sense that there is something that it is like for the subject. I don’t think we can really do this but it certainly seems like we can to (me and) a lot of other people. I also agree that eliminativism is a possibility in some sense of that word, but I don’t see that neuroscience commits you to it or that it is in any way an assumption of contemporary brain theory.

It wasn’t that long ago (back in the 1980s) that Jerry Fodor famously said, “if commonsense psychology were to collapse, that would be, beyond comparison, the greatest intellectual catastrophe in the history of our species”, and I tend to agree (though I would put the point somewhat less hyperbolically). The authors of this textbook may advocate eliminating our subjective mental life but that is not something that contemporary neuroscience commits you to!

Chalmers on Brown on Chalmers

I just found out that the double special issue of the Journal of Consciousness Studies devoted to David Chalmers’ paper The Singularity: A Philosophical Analysis recently came out as a book! I had a short paper in that collection that stemmed from some thoughts I had about zombies and simulated worlds (I posted about them here and here). Dave responded to all of the articles (here) and I just realized that I never wrote anything about that response!

I have always had a love/hate relationship with this paper. On the one hand I felt like there was an idea worth developing, one that started to take shape back in 2009. On the other hand there was a pretty tight deadline for the special issue and I did not feel like I had really got ahold of what the main idea was supposed to be, in my own thinking. I felt rushed and secretly wished I could wait a year or two to think about it. But this was before I had tenure and I thought it would be a bad move to miss this opportunity. The end result is that I think the paper is flawed but I still feel like there is an interesting idea lurking about that needs to be more fully developed. Besides, I thought, the response from Dave would give me an opportunity to think more deeply about these issues and would be something I could respond to…that was five years ago! Well, I guess better late than never so here goes.

My paper was divided into two parts. As Dave says,

First, [Brown] cites my 1990 discussion piece “How Cartesian dualism might have been true”, in which I argued that creatures who live in simulated environments with separated simulated cognitive processes would endorse Cartesian dualism. The cognitive processes that drive their behavior would be entirely distinct from the processes that govern their environment, and an investigation of the latter would reveal no sign of the former: they will not find brains inside their heads driving their behavior, for example. Brown notes that the same could apply even if the creatures are zombies, so this sort of dualism does not essentially involve consciousness. I think this is right: we might call it process dualism, because it is a dualism of two distinct sorts of processes. If the cognitive processes essentially involve consciousness, then we have something akin to traditional Cartesian dualism; if not, then we have a different sort of interactive dualism.

Looking back on this now, I think I can say that part of the idea I had was that what Dave here calls ‘process dualism’ is really what lies behind the conceivability of zombies. Instead of testing whether (one thinks that) dualism or physicalism is true about consciousness, the two-dimensional argument against materialism is really testing whether one thinks that consciousness is grounded in biological or in functional/computational properties. This debate is distinct from, and orthogonal to, the debate about physicalism/dualism.

In the next part of the response Dave addresses my attempted extension of this point to try to reconcile the conceivability of zombies with what I called ‘biologism’. Biologism was supposed to be a word to distinguish the debate between the physicalist and the dualist from the debate between the biologically oriented views of the mind and the computationally oriented views. At the time I thought this term was coined by me and it was supposed to be an umbrella term that would have biological materialism as a particular variant. I should note before going on that it was only after the paper was published that I became aware that this term has a history and is associated with certain views about ‘the use of biological explanations in the analysis of social situations’. This is not what I intended and had I known that beforehand I would have tried to coin a different term.

The point was to try to emphasize that this debate was supposed to be distinct from the debate about physicalism and that one could endorse this kind of view even if one rejected biological materialism. The family of views I was interested in defending can be summed up as holding that consciousness is ultimately grounded in or caused by some biological property of the brain and that a simulation of the brain would lack that property. This is compatible with materialism (=identity theory) but also with dualism. One could be a dualist and yet hold that only biological agents could have the required relation to the non-physical mind. Indeed, I would say that in my experience this is the view of the vast majority of those who accept dualism (by which I mostly mean my students). Having said that, it is true that in my own thinking I lean towards physicalism (though as a side note I do not claim that physicalism is true, only that we have no good reason to reject it) and it is certainly true that in the paper I say that this can be used to make the relevant claim about biological materialism.

At any rate, here is what Dave says about my argument.

Brown goes on to argue that simulated worlds show how one can reconcile biological materialism with the conceivability and possibility of zombies. If biological materialism is true, a perfect simulation of a biological conscious being will not be conscious. But if it is a perfect simulation in a world that perfectly simulates our physics, it will be a physical duplicate of the original. So it will be a physical duplicate without consciousness: a zombie.

I think Brown’s argument goes wrong at the second step. A perfect simulation of a physical system is not a physical duplicate of that system. A perfect simulation of a brain on a computer is not made of neurons, for example; it is made of silicon. So the zombie in question is a merely functional duplicate of a conscious being, not a physical duplicate. And of course biological materialism is quite consistent with functional duplicates.

It is true that from the point of view of beings in the simulation, the simulated being will seem to have the same physical structure that the original being seems to us to have in our world. But this does not entail that it is a physical duplicate, any more than the watery stuff on Twin Earth that looks like water really is water. (See note 7 in “The Matrix as metaphysics” for more here.) To put matters technically (nonphilosophers can skip!), if P is a physical specification of the original being in our world, the simulated being may satisfy the primary intension of P (relative to an inhabitant of the simulated world), but it will not satisfy the secondary intension of P. For zombies to be possible in the sense relevant to materialism, a being satisfying the secondary intension of P is required. At best, we can say that zombies are (primarily) conceivable and (primarily) possible—but this possibility merely reflects the (secondary) possibility of a microfunctional duplicate of a conscious being without consciousness, and not a full physical duplicate. In effect, on a biological view the intrinsic basis of the microphysical functions will make a difference to consciousness. To that extent the view might be seen as a variant of what is sometimes known as Russellian monism, on which the intrinsic nature of physical processes is what is key to consciousness (though unlike other versions of Russellian monism, this version need not be committed to an a priori entailment from the underlying processes to consciousness).
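
(For those who like it in symbols, here is my rough gloss of Dave’s point in the standard two-dimensional notation: P is the complete physical specification of a conscious being, Q a phenomenal truth about it, and the subscripts mark primary versus secondary possibility.)

```latex
% What the simulation scenario plausibly gives us: a world verifying the
% primary intension of P in which Q fails,
\Diamond_1 (P \wedge \neg Q)
% versus what the argument against materialism requires: a world satisfying
% the secondary intension of P in which Q fails,
\Diamond_2 (P \wedge \neg Q)
% The merely microfunctional duplicate witnesses the first, not the second.
```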

I have to say that I am sympathetic with Dave in the way he diagnoses the flaw in the argument in the paper. It is a mistake to think of the simulated world, with its simulated creatures, as being a physical duplicate of our world in the right way, especially if this simulation is taking place in the original non-simulated world. If the biological view is correct then it is just a functional duplicate (a microfunctional duplicate, true) but not a physical duplicate.

While I think this is right I also think the issues are complicated. For example, take the typical Russellian pan(proto)psychism that is currently being explored by Chalmers and others. This view is touted as being compatible with the conceivability of zombies because we can conceive of a duplicate of our physics as long as we mean the structural, non-intrinsic properties. Since physics, on this view, describes only these structural features, we can count the zombie world as having our physics in the narrow sense. The issues here are complex but this looks superficially just like the situation described in my paper. The simulated world captures all of the structural features of physics but leaves out whatever biological properties are necessary, and in this sense the reasoning of the paper holds up.

This is why I think the comparison with Russellian monism invoked by Dave is helpful. In fact, when I pitched my commentary to Dave I included this comparison with Russellian monism but it did not get developed in the paper. At any rate, I think what it helps us to see is the many ways in which we can *almost* conceive of zombies. This is a point that I have made going back to some of my earliest writings about zombies. If the identity theory is true, or if some kind of biological view about consciousness is true, then there is some (as yet to be discovered) property or properties of biological neural states which necessitate/cause/just are the existence of phenomenal consciousness. Since we don’t know what this property is (yet) and since we don’t yet understand how it could necessitate/cause/etc. phenomenal consciousness, we may fail to include it in our conceptualization of a ‘zombie world’. Or we may include it and fail to recognize that this entails a contradiction. I am sympathetic to both of these claims.

On the one hand, we can certainly conceive of a world very nearly physically just like ours. This world may have all or most of the same physical properties, excepting certain necessary biological properties, and as a result the creatures will behave in ways indistinguishable from ours (given certain other assumptions). On the other hand, we may conceive of the zombie twin as a biologically exact duplicate, in which case we fail to see that this is not actually a conceivable situation. If we knew the full biological story we would be, or at least could be, in a position to see that we had misdescribed the situation, in just the same way as someone who did not know enough chemistry might think they could conceive of H2O failing to be water (in a world otherwise physically just like ours). This is what I take to be the essence of the Kripkean strategy. We allow that the thing in question is a metaphysical possibility but then argue that it is misdescribed in the original argument. While misdescribing it we think (mistakenly) that we have conceived of a certain situation, but really we have conceived of a slightly different situation, and that one is compatible with physicalism.

Thus, while I think the issues are complex and that I did not get them right in the paper, I still think the paper is morally correct. The extent to which biological materialism resembles Russellian monism is the extent to which the zombie argument is irrelevant to it.

A Higher-Order Theory of Emotional Consciousness

I am very happy to be able to say that the paper I have been writing with Joseph E. LeDoux is out in PNAS (Proceedings of the National Academy of Sciences). In this paper we develop a higher-order theory of conscious emotional experience.

I have been interested in the emotions for quite some time now. I wrote my dissertation trying to show that it is possible to take seriously the role that the emotions play in our moral psychology, which seems to be revealed by contemporary cognitive neuroscience and which I take to suggest that one of the basic premises of emotivism is true. But at the same time I wanted to preserve space for one to also take seriously some kind of moral realism. In the dissertation I was more concerned with the philosophy of language than with the nature of the emotions, but I have always been attracted to a rather simplistic view on which the differing conscious emotions differ with respect to the way they feel subjectively (I explore this as a general approach to the propositional attitudes in The Mark of the Mental). The idea that emotions are feelings is an old one in philosophy but has fallen out of favor in recent years. I also felt that in fleshing out such an account the higher-order approach to consciousness would come in handy. This idea really came into focus when I reviewed the book Feelings and Emotions: The Amsterdam Symposium. I felt that it would be a good idea to approach the science of emotions with the higher-order theory of consciousness in mind.

That was back in 2008 and since then I have not really followed up on any of the ideas in my dissertation. I have always wanted to but have always found something else to work on at the moment, and that is why it is especially nice to have been working with Joseph LeDoux, explicitly combining the two. I am very happy with the result and look forward to any discussion.