Eliminativism and the Neuroscience of Consciousness

I am teaching Introduction to Neuroscience this spring semester and am using An Introduction to Brain and Behavior, 5th edition, by Kolb et al. as the textbook (this is the book the biology program decided to adopt). I have not previously used this book and so I am still finding my way around it, but so far I am enjoying it. The book makes a point of trying to connect neuroscience, psychology, and philosophy, which is pretty unusual for these kinds of textbooks (or at least it used to be!).

In the first chapter they go through some of the basic issues in the metaphysics of the mind, starting with Aristotle and then comparing Descartes’ dualism to Darwin’s Materialism. This is a welcome sight in a neuroscience/biological psychology textbook, but there are some points at which I find myself disagreeing with the way they set things up. I was thinking of saying something in class but we have so little time as it is. I then thought maybe I would write something and post it on Blackboard but if I do that I may as well have it here in case anyone else wants to chime in.

They begin by discussing the Greek myth of Cupid and Psyche and then say,

The ancient Greek philosopher Aristotle was alluding to this story when he suggested that all human intellectual functions are produced by a person’s psyche. The psyche, Aristotle argued, is responsible for life, and its departure from the body results in death.

Thus, according to them, the ordinary conception of the way things work, i.e. that the mind is the cause of our behavior, is turned by Aristotle into a psychological theory about the source or cause of behavior. They call this position mentalism.

They also say that Aristotle’s view was that the mind was non-material and separate from the body, and this is technically true. I am by no means an expert on Aristotle’s philosophy in general but his view seems to have been that the mind was the form of the body in something like the way that the shape of a statue is the form of (say) some marble. This is what is generally referred to as ‘hylomorphism’, the view that ordinary objects are somehow composed of both matter and form. I’ll leave aside the technical philosophical details but I think the example of a statue does an ok job of getting at the basics. The statue of Socrates and the marble that it is composed out of are two distinct objects for Aristotle but I am not sure that I would say that the statue was non-physical. It is physical but it is just not identical to the marble it is made out of (you can destroy the statue and not destroy the marble, so they seem like different things). So while it is true that Aristotle claimed the mind and body were distinct, I don’t think it is fair to say that Aristotle thought that the psyche was non-physical. It was not identical to the body but was something like ‘the body doing what it does’ or ‘the organizing principle of the body’. But ok, that is a subtle point!

They go on to say that

Descartes’s thesis that the [non-physical] mind directed the body was a serious attempt to give the brain an understandable role in controlling behavior. This idea that behavior is controlled by two entities, a [non-physical] mind and a body, is dualism (from Latin, meaning two). To Descartes, the [non-physical] mind received information from the body through the brain. The [non-physical] mind also directed the body through the brain. The rational [non-physical] mind, then, depended on the brain both for information and to control behavior.

I think this is an interesting way to frame Descartes’ view. On the kind of account they are developing, Aristotle could not allow any kind of physical causation by the non-physical mind, but I am not sure this is correct.

But either way they have an interesting way of putting things. The question is: what produces behavior? If we start with a non-physical mind as the cause of behavior then that seems to leave no role for the brain, so we would have to posit that the brain and the non-physical mind work together to produce behavior.

They then go on to give the standard criticisms of Descartes’ dualism. They argue that it violates the conservation of energy, though this is not entirely clear (see David Papineau’s The Rise of Physicalism for some history on this issue). They also argue that dualism is a bad theory because it has led to morally questionable results. In particular:

Cruel treatment of animals, children, and the mentally ill has for centuries been justified by Descartes’s theory.

I think this is interesting and probably true. It is a lot easier to dehumanize something if you think the part that matters can be detached. However I am not sure this counts as a reason to reject dualism. Keep in mind I am not much of a dualist, but if something is true then it is true. I tend to find that students more readily posit a non-physical mind for animals than deny that animals feel pain, as Descartes did, but that is neither here nor there.

Having set everything up in this way they then introduce eliminativism about the mind as follows.

The contemporary philosophical school eliminative materialism takes the position that if behavior can be described adequately without recourse to the mind, then the mental explanation should be eliminated.

Thus they seem to be claiming that the non-physical aspect of the system should be eliminated, which I think a lot of people might agree with, but also that along with it the mental items that Descartes and others thought were non-physical should be eliminated as well. I fully agree that, in principle, all of the behaviors of animals can be fully explained in terms of the brain and its activity but does this mean that we should eliminate the mind? I don’t think so! In fact I would generally think that this is the best argument against dualisms like Descartes’. We have never needed to actually posit any non-physical features in the explanation of animal behavior.

In general the book tends to neglect the distinction between reduction and elimination. One can hold that we should eliminate the idea that pains and beliefs are non-physical mental items and instead think that they are physical and can be found in the activity or biology of the brain. That is to say we can think that certain states of the brain just are the having of a belief or feeling of a pain, etc. Eliminativism, as it is usually understood, is not a claim about the physicality of the mind. It is instead a claim about how neuroscience will proceed in the future. That is to say the emphasis is not on the *materialism* but on the *eliminative* part. The goal is to distinguish it from other kinds of materialism not to distinguish it from dualism. The claim is that when neuroscience gives us the ultimate explanation of behavior we will see that there really is no such thing as a belief. This is very different from the claim that we will find out that certain brain states are beliefs.

Thus it is a bit strange that the authors run together the claim that the mind is a non-physical substance with the claim that there are such things as beliefs, desires, pains, itches, and so on. This seems to be a confusion that was evident in early discussions of eliminativism (see the link above) but now we know we can eliminate one and reduce the other, though we need not.

They go on to say,

Daniel Dennett (1978) and other philosophers, who have considered such mental attributes as consciousness, pain, and attention, argue that an understanding of brain function can replace mental explanations of these attributes. Mentalism, by contrast, defines consciousness as an entity, attribute, or thing. Let us use the concept of consciousness to illustrate the argument for eliminative materialism.

I do not think this is quite the right way to think about Dennett’s views but it is hard to know if there is a right way to think about them! At any rate it is true that Dennett thinks that we will not find anything like beliefs in the completed neuroscience but it is wrong to think that Dennett thinks we should eliminate mentalistic talk. It is true, for Dennett, that there are no beliefs in the brain but it is still useful, on his view, to talk about beliefs and to explain behavior in terms of beliefs.

He has lately taken to comparing his views with the way that your desktop computer works. When you look at the desktop there are various icons and folders. Clicking on a folder will bring up a menu showing where your saved files are. But it would be a mistake to think that this gave you any idea about how the computer was working. It is not storing little file folders away. Rather there is a bunch of machine code, and those icons are a convenient way for you to interface with that code without having to know anything about it. So too, Dennett argues, our talk about the mind is like that. It is useful but wrong about the nature of the brain.

At any rate how does consciousness illustrate the argument for eliminative materialism?

The experimenters’ very practical measures of consciousness are formalized by the Glasgow Coma Scale (GCS), one indicator of the degree of unconsciousness and of recovery from unconsciousness. The GCS rates eye movement, body movement, and speech on a 15-point scale. A low score indicates coma and a high score indicates consciousness. Thus, the ability to follow commands, to eat, to speak, and even to watch TV provide quantifiable measures of consciousness contrasting sharply with the qualitative description that sees consciousness as a single entity. Eliminative materialists would argue, therefore, that the objective, measurably improved GCS score of behaviors in a brain-injured patient is more useful than a subjective mentalistic explanation that consciousness has “improved.”

I don’t think I see much of an argument for eliminativism in this approach. The basic idea seems to be that we should take ‘the patient is conscious’ as a description of a certain kind of behavior that is tied to brain activity and that this should be taken as evidence that we should not take ‘consciousness’ to refer to a non-physical mental entity. This is interesting and it illustrates a general view I think is in the background of their discussion. Mentalism, as they define it, is the claim that the non-physical mind is the cause of behavior. They propose eliminating that but keeping the mentalistic terms, like ‘consciousness’. But they argue that we should think of these terms not as naming some subjective mental state but as a description of objective behavior.
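To make concrete what a ‘quantifiable measure of consciousness’ looks like here: the GCS really is just a sum of behavioral component scores. Here is a minimal sketch in Python (the component ranges and severity cutoffs follow the standard clinical convention of eye 1–4, verbal 1–5, motor 1–6, totals 3–15; the function names are mine, for illustration only, and this is obviously not a clinical tool):

```python
def gcs_total(eye, verbal, motor):
    """Sum the three GCS components (eye 1-4, verbal 1-5, motor 1-6)."""
    assert 1 <= eye <= 4 and 1 <= verbal <= 5 and 1 <= motor <= 6
    return eye + verbal + motor

def severity(total):
    """Standard clinical convention: 3-8 severe, 9-12 moderate, 13-15 mild."""
    if total <= 8:
        return "severe (comatose)"
    elif total <= 12:
        return "moderate"
    return "mild"

# A patient who opens eyes to speech (3), speaks but is confused (4),
# and obeys motor commands (6):
score = gcs_total(eye=3, verbal=4, motor=6)
print(score, severity(score))  # 13 mild
```

The point of the illustration is just that nothing in the scale refers to a subjective state: each component is an observable behavior, and ‘consciousness’ comes out as a graded total rather than a single all-or-nothing entity.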

I do agree that our ordinary conception of ‘consciousness’ in the sense of being awake or asleep or in a coma will come to be refined by things like the Glasgow Coma Scale. I also agree that this may be some kind of evidence against the existence of a non-physical mind that is either fully conscious or not at one moment. As the authors themselves are at pains to point out we can take the behavior to be tied to brain activity and it is there that I would expect to find consciousness. So I would take this as evidence of reduction or maybe slight modification of our ordinary concept of waking consciousness. That is, on my view, we keep the mental items and identify them with brain activity thereby rejecting dualism (even though I think dualism could be true, I just don’t think we have a lot of reason to believe that it is in fact true).

They make this clear in their summary of their view:

Contemporary brain theory is materialistic. Although materialists, your authors included, continue to use subjective mentalistic words such as consciousness, pain, and attention to describe more complex behaviors, at the same time they recognize that these words do not describe mental entities.

I think it should be very clear by now that they mean this as a claim about the non-physical mind. The word ‘consciousness’ on their view describes a kind of behavior which can be tied to the brain but not a non-physical part of nature. But even so it will still be true that the brain’s activity will cause pain, as long as we interpret ‘pain’ as ‘pain behavior’.

However, I think it is also clear by now that we need not put things this way. It seems to me that the better way to think of things is that pain causes pain behavior, and that pain is typically and canonically a conscious experience, and that we can learn about the nature of pain by studying the brain (because certain states of the brain just are states of being in pain).  We can thereby be eliminativists about the non-physical mind while being reductionists about pain.

But, whichever way one goes on this, is it even correct to say that modern neuroscience is materialistic? This seems to assume too much. Contemporary neuroscience does make the claim that an animal’s behavior can be fully understood in terms of brain activity (and it seems to me that this claim is empirically well justified), but is this the same thing as being materialistic? It depends on what one thinks about consciousness. It is certainly possible to take all of what neuroscience says and still think that conscious experience is not physical. That is the point that some people want to make by imagining zombies (or claiming that they can). It seems to them that we could have everything that neuroscience tells us about the brain and its relation to behavior and yet still lack conscious experience in the sense that there is something that it is like for the subject. I don’t think we can really do this but it certainly seems like we can to (me and) a lot of other people. I also agree that eliminativism is a possibility in some sense of that word, but I don’t see that neuroscience commits you to it or that it is in any way an assumption of contemporary brain theory.

It wasn’t that long ago (back in the 1980s) that Jerry Fodor famously said, “if commonsense psychology were to collapse, that would be, beyond comparison, the greatest intellectual catastrophe in the history of our species,” and I tend to agree (though in a somewhat less hyperbolic way of putting the point). The authors of this textbook may advocate eliminating our subjective mental life but that is not something that contemporary neuroscience commits you to!

Kozuch on Lau and Brown

Way back on November 20th 2009 Benji Kozuch came and gave a talk at the CUNY Cognitive Science series and became the first to be persuaded by me to attempt an epic marathon of cognitive science, drinking, and jamming! The mission: give a 3 hour talk followed by intense discussion over drinks (and preceded by intense discussion over lunch) followed by a late night jam session at a midtown rehearsal studio. This monstrous marathon typically begins at noon with lunch and then concludes sometime around 10 pm when the jamming is done (drinks after jamming optional). That’s 10 hours-plus of philosophical and musical mayhem! We recorded the jam that night but it was subsequently ruined and no one has ever heard what happened that night…which is probably for the best!

This was just before our first open jam session at the Parkside Lounge (the first one was held after the American Philosophical Association meeting in NYC in December 2009), which became the New York Consciousness Collective and gave rise to Qualia Fest. But this itself was the culmination of a lot of music playing going back to the summer of 2006. The last Qualia Fest was in 2012 but since then we have had two other brave members of Club Cogsci. One is myself (in 2015) and the other is Joe LeDoux (in 2016). That’s 10 years of jamming with cognitive scientists and philosophers! Having done it myself, I can say it is grueling, and special thanks go to Benji for being such a champion.

Putting all of that to one side, Kozuch has in some recent publications argued against the position that I tentatively support. In particular, in his 2014 Philosophical Studies paper he argued that evidence from lesions to prefrontal areas casts doubt on higher-order theories of consciousness (see Lau and Rosenthal for a defense of higher-order theories against this kind of charge). I have for some time meant to post something about this (at one point I thought I might have a conference presentation based on this)…but, as is becoming more common, it has taken a while to get to it! Teaching a 6/3-6/3 load has been stressful but I think I am beginning to get the hang of how to manage time and to find the time to have some thoughts that are not related to children or teaching 🙂

The first thing I would note is that Kozuch clearly has the relational version of the higher-order theory in mind. In the opening setup he says,

…[Higher-Order] theories claim that a mental state M cannot be phenomenally conscious unless M is targeted by some mental state M*. It is precisely this claim that is my target.

This is one way of characterizing the higher-order approach but I have spent a lot of time suggesting that this is not the best way to think of higher-order theories. This is why I coined the term ‘HOROR theory’. I used to think that the non-relational way of doing things was closer to the spirit of what Rosenthal intended but now I think that this is a pointless debate and that there are just (at least) two different ways of thinking about higher-order theories. On the one kind, as Kozuch says, the first-order state M is made phenomenally conscious by the targeting of M by some higher-order state.

I have argued that another way of thinking about all of this is that it is not the first-order state that gets turned into a phenomenally conscious state. This is because of things like Block’s argument, and the empirical evidence (as I interpret that evidence at least). Now this would not really matter if all Kozuch wanted to do was to argue against the relational view, I might even join him in that! But if he is going to cite my work and argue against the view that I endorse then the HOROR theory might make a difference. Let’s see.

The basic premise of the paper is that if a higher-order theory is true then we have good reason to think that damaging or impairing the brain areas associated with the higher-order awareness should impair conscious experience. From here Kozuch argues that the best candidate for the relevant brain area is the dorsolateral prefrontal cortex. I agree that we have enough evidence to take this area seriously as a possible candidate for an area important for higher-order awareness, but I also think we need to keep in mind other prefrontal areas, and even the possibility that different prefrontal areas may have different roles to play in the higher-order awareness.

At any rate I think I can agree with Kozuch’s basic premise that if we damaged the right parts of the prefrontal cortex we should expect loss or degradation of visual phenomenology. But what would count as evidence of this? If we call an area of the brain an integral area only if that area is necessary for conscious experience then what will the result of disabling that area be? Kozuch begins to answer this question as follows,

It is somewhat straightforward what would happen if each of a subject’s integral areas (or networks) were disabled. Since the subject could no longer produce those HO states necessary for visual consciousness, we may reasonably predict this results in something phenomenologically similar to blindness.

I think this is somewhat right. From the subject’s point of view there would be no visual phenomenology, but I am not sure this is similar to blindness, where a subject seems to be aware of their lack of visual phenomenology (or at least can be made aware). Kozuch is careful to note in a footnote that it is at least a possibility that subjects may lose conscious phenomenology but fail to notice it, but I do not think he takes it as seriously as he should.

This is because on the higher-order theory, especially the non-relational version I am most likely to defend, the first-order states largely account for the behavioral data while the higher-order states account for visual phenomenology. Thus in a perfect separation of the two, that is, in a case of just first-order states and no higher-order states at all, the theory predicts that the behavior of the animal will largely be undisturbed. The first-order states will produce their usual effects and the animal will be able to sort, push buttons, etc. It will not be able to report on its experience, or any changes therein, because it will not have the relevant higher-order states to be aware that it is having any first-order states at all. I am not sure this is what is happening in these cases (I have heard some severe skepticism over whether these second-hand reports should be given much weight), but it is not ruled out theoretically, and so we haven’t got any real evidence that pushes past one’s intuitive feel for these things. Kozuch comes close to recognizing this when he says, in a footnote,

In what particular manner should we expect the deficits to be detected? I do not precisely know, but one could guess that a subject with a disabled integral area would not perform normally on (at least some) tests of their visual abilities. Failing that, we could probably still expect the subject to volunteer information indicating that things ‘‘seemed’’ visually different to her.

But both of these claims are disputed by the higher-order theory!

Later in the paper, where Kozuch is addressing some of the evidence for the involvement of the prefrontal cortex, he introduces the idea of redundancy. If someone objects that taking away one integral area does not dramatically diminish visual phenomenology because some other area takes over or covers for it, then, he claims, we are committed to the view that there are redundant duplications of first-order contents at the higher-order level. But this does not seem right to me. An alternative view is that the prefrontal areas each contribute something different to the content of the higher-order representation, so that taking one away may take away one component of the overall representation. We do not need to appeal to redundancy to explain why there may not be dramatic changes in the conscious experiences of subjects.

Finally, I would say that I wish Kozuch had addressed what I take to be the main argument in Lau and Brown (and elsewhere), which is that we have empirical cases which suggest that there is a difference in the conscious visual phenomenology of a subject but where the first-order representations do not seem like they would be different in the relevant way. In one case, the Rare Charles Bonnet case, we have reason to think that the first-order representations are too weak to capture the rich phenomenal experience. In another case, subjective inflation, we have reason to think that the first-order states are held roughly constant while the phenomenology changes.

-photo by Jared Blank

Chalmers on Brown on Chalmers

I just found out that the double special issue of the Journal of Consciousness Studies devoted to David Chalmers’ paper The Singularity: A Philosophical Analysis recently came out as a book! I had a short paper in that collection that stemmed from some thoughts I had about zombies and simulated worlds (I posted about them here and here). Dave responded to all of the articles (here) and I just realized that I never wrote anything about that response!

I have always had a love/hate relationship with this paper. On the one hand I felt like there was an idea worth developing, one that started to take shape back in 2009. On the other hand there was a pretty tight deadline for the special issue and I did not feel like I had really got ahold of what the main idea was supposed to be, in my own thinking. I felt rushed and secretly wished I could wait a year or two to think about it. But this was before I had tenure and I thought it would be a bad move to miss this opportunity. The end result is that I think the paper is flawed but I still feel like there is an interesting idea lurking about that needs to be more fully developed. Besides, I thought, the response from Dave would give me an opportunity to think more deeply about these issues and would be something I could respond to…that was five years ago! Well, I guess better late than never so here goes.

My paper was divided into two parts. As Dave says,

First, [Brown] cites my 1990 discussion piece “How Cartesian dualism might have been true”, in which I argued that creatures who live in simulated environments with separated simulated cognitive processes would endorse Cartesian dualism. The cognitive processes that drive their behavior would be entirely distinct from the processes that govern their environment, and an investigation of the latter would reveal no sign of the former: they will not find brains inside their heads driving their behavior, for example. Brown notes that the same could apply even if the creatures are zombies, so this sort of dualism does not essentially involve consciousness. I think this is right: we might call it process dualism, because it is a dualism of two distinct sorts of processes. If the cognitive processes essentially involve consciousness, then we have something akin to traditional Cartesian dualism; if not, then we have a different sort of interactive dualism.

Looking back on this now I think that I can say that part of the idea I had was that what Dave here calls ‘process dualism’ is really what lies behind the conceivability of zombies. Instead of testing whether (one thinks that) dualism or physicalism is true about consciousness, the two-dimensional argument against materialism is really testing whether one thinks that consciousness is grounded in biological or functional/computational properties. This debate is distinct from, and orthogonal to, the debate about physicalism/dualism.

In the next part of the response Dave addresses my attempted extension of this point to try to reconcile the conceivability of zombies with what I called ‘biologism’. Biologism was supposed to be a word to distinguish the debate between the physicalist and the dualist from the debate between the biologically-oriented views of the mind as against the computationally oriented views. At the time I thought this term was coined by me and it was supposed to be an umbrella term that would have biological materialism as a particular variant. I should note before going on that it was only after the paper was published that I became aware that this term has a history and is associated with certain views about ‘the use of biological explanations in the analysis of social situations‘. This is not what I intended and had I known that beforehand I would have tried to coin a different term.

The point was to try to emphasize that this debate was supposed to be distinct from the debate about physicalism and that one could endorse this kind of view even if one rejected biological materialism. The family of views I was interested in defending can be summed up as holding that consciousness is ultimately grounded in or caused by some biological property of the brain and that a simulation of the brain would lack that property. This is compatible with materialism (=identity theory) but also with dualism. One could be a dualist and yet hold that only biological agents could have the required relation to the non-physical mind. Indeed I would say that in my experience this is the view of the vast majority of those who accept dualism (by which I mostly mean my students). Having said that, it is true that in my own thinking I lean towards physicalism (though as a side note I do not think that physicalism is true, only that we have no good reason to reject it) and it is certainly true that in the paper I say that this can be used to make the relevant claim about biological materialism.

At any rate, here is what Dave says about my argument.

Brown goes on to argue that simulated worlds show how one can reconcile biological materialism with the conceivability and possibility of zombies. If biological materialism is true, a perfect simulation of a biological conscious being will not be conscious. But if it is a perfect simulation in a world that perfectly simulates our physics, it will be a physical duplicate of the original. So it will be a physical duplicate without consciousness: a zombie.

I think Brown’s argument goes wrong at the second step. A perfect simulation of a physical system is not a physical duplicate of that system. A perfect simulation of a brain on a computer is not made of neurons, for example; it is made of silicon. So the zombie in question is a merely functional duplicate of a conscious being, not a physical duplicate. And of course biological materialism is quite consistent with functional duplicates.

It is true that from the point of view of beings in the simulation, the simulated being will seem to have the same physical structure that the original being seems to us to have in our world. But this does not entail that it is a physical duplicate, any more than the watery stuff on Twin Earth that looks like water really is water. (See note 7 in “The Matrix as metaphysics” for more here.) To put matters technically (nonphilosophers can skip!), if P is a physical specification of the original being in our world, the simulated being may satisfy the primary intension of P (relative to an inhabitant of the simulated world), but it will not satisfy the secondary intension of P. For zombies to be possible in the sense relevant to materialism, a being satisfying the secondary intension of P is required. At best, we can say that zombies are (primarily) conceivable and (primarily) possible— but this possibility merely reflects the (secondary) possibility of a microfunctional duplicate of a conscious being without consciousness, and not a full physical duplicate. In effect, on a biological view the intrinsic basis of the microphysical functions will make a difference to consciousness. To that extent the view might be seen as a variant of what is sometimes known as Russellian monism, on which the intrinsic nature of physical processes is what is key to consciousness (though unlike other versions of Russellian monism, this version need not be committed to an a priori entailment from the underlying processes to consciousness).

I have to say that I am sympathetic with Dave in the way he diagnoses the flaw in the argument in the paper. It is a mistake to think of the simulated world, with its simulated creatures, as being a physical duplicate of our world in the right way, especially if this simulation is taking place in the original non-simulated world. If the biological view is correct then it is just a functional duplicate (true, a microfunctional duplicate), but not a physical duplicate.
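Dave’s diagnosis can be put schematically, for readers who like the two-dimensional apparatus (the notation here is mine): let P be the full physical specification of the original conscious being and Q the claim that it is conscious.

```latex
% What the zombie argument needs for the falsity of materialism:
\[
\Diamond_2\,(P \wedge \neg Q) \qquad \text{(secondary possibility of a full physical duplicate without consciousness)}
\]
% What the simulation scenario actually delivers:
\[
\Diamond_1\,(P \wedge \neg Q) \quad\text{which amounts to}\quad \Diamond_2\,(P^{f} \wedge \neg Q),
\]
% where $P^{f}$ specifies a microfunctional (but not physical) duplicate:
% the simulated being satisfies the primary intension of $P$ relative to
% its world, but not the secondary intension of $P$.
```

On this way of putting it, the gap between the two diamonds is exactly the gap between a microfunctional duplicate and a physical duplicate, which is why biological materialism survives the conceivability of the simulated ‘zombie’.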

While I think this is right I also think the issues are complicated. For example take the typical Russellian pan(proto)psychism that is currently being explored by Chalmers and others. This view is touted as being compatible with the conceivability of zombies because we can conceive of a duplicate of our physics as long as we mean the structural, non-intrinsic properties. Since physics, on this view, describes only these structural features we can count the zombie world as having our physics in the narrow sense. The issues here are complex but this looks superficially just like the situation described in my paper. The simulated world captures all of the structural features of physics but leaves out whatever biological properties are necessary and in this sense the reasoning of the paper holds up.

This is why I think the comparison with Russellian monism invoked by Dave is helpful. In fact when I pitched my commentary to Dave I included this comparison with Russellian monism but it did not get developed in the paper. At any rate, I think what it helps us to see is the many ways in which we can *almost* conceive of zombies. This is a point that I have made going back to some of my earliest writings about zombies. If the identity theory is true, or if some kind of biological view about consciousness is true, then there is some (as yet to be discovered) property or properties of biological neural states which necessitate/cause/just are the existence of phenomenal consciousness. Since we don’t know what this property is (yet) and since we don’t yet understand how it could necessitate/cause/etc phenomenal consciousness, we may fail to include it in our conceptualization of a ‘zombie world’. Or we may include it and fail to recognize that this entails a contradiction. I am sympathetic to both of these claims.

On the one hand, we can certainly conceive of a world very nearly physically just like ours. This world may have all or most of the same physical properties, excepting certain necessary biological properties, and as a result the creatures will behave in ways indistinguishable from us (given certain other assumptions). On the other hand, we may conceive of the zombie twin as a biologically exact duplicate, in which case we fail to see that this is not actually a conceivable situation. If we knew the full biological story we would be, or at least could be, in a position to see that we had misdescribed the situation, in just the same way as someone who did not know enough chemistry might think they could conceive of H2O failing to be water (in a world otherwise physically just like ours). This is what I take to be the essence of the Kripkean strategy. We allow that the thing in question is a metaphysical possibility but then argue that it is actually misdescribed in the original argument. In misdescribing it we think (mistakenly) that we have conceived of a certain situation being true, but really we have conceived of a slightly different situation being true, and this one is compatible with physicalism.

Thus, while I think the issues are complex and that I did not get them right in the paper, I still think the paper is morally correct. The extent to which biological materialism resembles Russellian monism is the extent to which the zombie argument is irrelevant.

A Higher-Order Theory of Emotional Consciousness

I am very happy to be able to say that the paper I have been writing with Joseph E. LeDoux is out in PNAS (Proceedings of the National Academy of Sciences). In this paper we develop a higher-order theory of conscious emotional experience.

I have been interested in the emotions for quite some time now. In my dissertation I tried to show that it is possible to take seriously the role that the emotions play in our moral psychology, a role seemingly revealed by contemporary cognitive neuroscience and one which I take to suggest that a basic premise of emotivism is true, while at the same time preserving the space for also taking some kind of moral realism seriously. In the dissertation I was more concerned with the philosophy of language than with the nature of the emotions, but I have always been attracted to a rather simple view on which differing conscious emotions differ with respect to the way they feel subjectively (I explore this as a general approach to the propositional attitudes in The Mark of the Mental). The idea that emotions are feelings is an old one in philosophy but has fallen out of favor in recent years. I also thought that in fleshing out such an account the higher-order approach to consciousness would come in handy. This idea really came into focus when I reviewed the book Feelings and Emotions: The Amsterdam Symposium; it seemed to me a good idea to approach the science of emotions with the higher-order theory of consciousness in mind.

That was back in 2008, and since then I have not really followed up on any of the ideas in my dissertation. I have always wanted to but have always found something else to work on in the meantime, which is why it is especially nice to have been working with Joseph LeDoux explicitly combining the two. I am very happy with the result and look forward to any discussion.

Existentialism is a Transhumanism

In the academic year 2015-2016 I was the co-director, with my colleague Naomi Stubbs, of a faculty seminar on Technology, Self, and Society. This was part of a larger three-year project funded by a grant from the NEH and supported by LaGuardia’s Center for Teaching and Learning. During my year as co-director the theme was Techno-Humanism and Transhumanism. You can see the full schedule for the seminar at the earlier link, but we read four books over the year (in addition to many articles). In the Fall 2015 semester we read The Techno-Human Condition by Braden Allenby, and Superintelligence by Nick Bostrom. In the Spring semester we read The Future of the Mind by Michio Kaku, and Neuroethics, an anthology edited by Martha Farah. In addition to the readings, Allenby and Kaku both gave talks at LaGuardia, and since we had room for one more talk we invited David Chalmers, who gave his paper on The Real and the Virtual (see short video for Aeon here).

All in all this was a fantastic seminar and I really enjoyed being a part of it. I was especially surprised to find out that some of the other faculty had used my Terminator and Philosophy book in their Science, Humanism and Technology course (I thought I was the only one who had used that book!). The faculty came from many different disciplines, ranging from English to Neuroscience, and I learned quite a bit throughout the process. Two things became especially clear to me over the course of the year. The first is that many of my views can be described as Transhumanist in nature. The second is that a lot of my views can be described as Existentialist in nature.

The former was unsurprising but the latter was a bit surprising. I briefly studied Sartre and Existentialism as an undergraduate at San Francisco State University from 1997-1998 and I was really interested in Sartre’s work after that (i.e. I searched every book store in SF for anything Sartre related, bought it, read it, and argued endlessly with anyone around about whether there was ‘momentum’ in consciousness). However, once I got to graduate school (in 2000) I began to focus even more on psychology, neuroscience, and the philosophy of mind, and I gradually lost contact with Sartre. I have never really kept up with the literature in this area (though I have recently read the Stanford Encyclopedia of Philosophy entries on Sartre and Existentialism), haven’t read Sartre in quite a while (though I did get out my copy of Being and Nothingness and Existentialism is a Humanism a couple of times during the seminar), and don’t work on any explicitly Sartrean themes in my published work (though there are connections between higher-order theories of consciousness and Sartre). But during this last year I found myself again and again appealing to distinctly Sartrean views, or at least Sartrean views as I remembered them from being an undergraduate! By the end of it all I came to the view that Existential Transhumanism is an interesting philosophical view and probably is a pretty good descriptor for what I think about these issues. So, all that having been said, please take what follows with a grain of salt.

The core idea of existentialism as I understand it is a claim about the nature of persons, and it is summed up in Sartre’s dictum that ‘existence precedes essence’. Whatever a person is, you aren’t born one. You become one by acting, or as Sartre might put it, we create ourselves through our choices. Many interpret that claim as somehow being at odds with physicalism (Sartre was certainly a dualist), while I do not. But what does this mean? It helps to invoke the distinction between Facticity and Transcendence. Facticity relates to all of the things that are knowable about me from a third-person point of view. It is what an exhaustive biographer could put together. But I am not merely the sum total of those facts. I am essentially a project, an aiming toward the future. This aiming towards something is the way in which Sartre interpreted the notion of intentionality. All consciousness, for him, was necessarily directed at something that was not itself part of consciousness. This is why Sartre says ‘I am not what I am and I am what I am not’. I am not what I am in the sense of not being merely my facticity. I am what I am not in the sense that I am continually creating myself and turning myself into something that I was not previously.

Turning now for the moment to Transhumanism, I interpret this in roughly the same way as the World Transhumanist Association does, that is, as an extension of Humanism. Reason represents the best chance that human beings have of realizing our most cherished ideals. These ideals are enshrined in many of the world’s great religions and espouse principles of universality (all are equal in some sense) and compassion. Transhumanists see technology, at least in part, as a way of enhancing human reason and so as a way of overcoming our natural limitations.

One objection to this kind of project is that we could modify ourselves to the point of no longer being human, or to the point of our original selves no longer existing. Here I think the existentialist idea that there are no essential properties required to be human can help. We are defined by the fact that we are ‘a being whose being is in question’. That is, we are essentially the kind of thing which creates itself, which aims towards something that is not yet what it is. Once one takes this kind of view, one sees there is no danger in modifying ourselves. This seems to me to be very much in line with the general idea that the kinds of modifications the transhumanist envisions are not different in kind from those we have always made (shoes, eyeglasses, etc.). Even if we are able to upload our minds to a virtual environment we may still be human by the existentialist definition.

Another objection, and the central objection in the Allenby book, is that the Transhumanist somehow assumes a notion of the individual, as an independent rational entity, which doesn’t really exist. This may be the case, but here I think that existentialism is very handy in helping us respond. The kind of individual envisioned by the Enlightenment thinkers may not exist, but one way of seeing the transhumanist project is as seeking to construct such a being.

Enlightenment, in Kant’s immortal words, is

…man’s release from his self-incurred tutelage. Tutelage is man’s inability to make use of his understanding without direction from another. Self-incurred is this tutelage when its cause lies not in lack of reason but in lack of resolution and courage to use it without direction from another. Sapere aude! ‘Have courage to use your own reason!’ – that is the motto of enlightenment.

To this the transhumanist adds that Kant may have been wrong in thinking that we have enough reason and simply need the courage to use it. We may need to make ourselves into the kinds of rational beings which could fulfill the ideals of the Enlightenment.

There is a lot more that I would like to say about these issues, but at this point I will briefly mention two other themes that don’t have much to do with existentialism. One is from Bostrom (see a recent talk of his at NYU’s Ethics of A.I. conference). One of Bostrom’s main claims is what he calls the orthogonality thesis. This is the claim that intelligence and values are orthogonal to each other: you can pair any level of intelligence with any goal at all. This may be true for intelligence, but I certainly don’t believe it is true for rationality.

Switching gears a bit, I wanted to mention David Chalmers’ talk. I found his basic premise very convincing. The basic idea seemed to be that virtual objects count as real in much the same way as concrete objects do. When one is in a virtual environment (I haven’t been in one yet, but I am hoping to try a Vive or a PlayStation VR set soon!) and one interacts with a virtual dragon, there really is a virtual object there that one is interacting with. The fundamental nature of this object is computational: there are data structures that interact in various ways, playing roughly the role for the virtual object that atomic structure plays for ordinary objects. Afterwards I asked whether he thought the same was true for dreams. It seemed to me that many of the same arguments could be given for the conclusion that in one’s dreams one interacts with dream objects which are real in the same way as virtual objects. He said perhaps, but that it depended on whether one was a functionalist about the mind. It seems to me that someone like Chalmers, who thinks that there is a computational/functional neural correlate for conscious states, is committed to this kind of view about dreams (even though he is a dualist). Dream objects should count as real on Chalmers’ view.

If Consciousness is an M-Property then it is Physical

Let us consider a possible world WM where consciousness is an M-property. At this world consciousness acts to collapse the wave function. Supposing that we live at WM, can you or I have a zombie twin? A zombie twin is one that is physically identical to me in the relevant ways and which lacks consciousness. Suppose that I am actually suffering from a headache while eating Jelly Belly jelly beans. Then my zombie twin is in exactly the same physical states but without the consciousness. This means that the zombie must have a brain and that this brain must be in the same physical states that my brain is in. But my brain is in a collapsed state, definitely being in the relevant neural correlates (due to the presence of conscious experience). In the world where there is no consciousness, and which is otherwise physically just like WM (call this world WM-C), there would be no collapsed state, because the M-property is missing. Since I am not in a superposition of states and my ‘zombie’ twin is, we are not in the same physical states.
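The core of this argument can be put in a toy two-state sketch (my notation; the real neural states would of course be enormously more complicated):

```latex
% At W_M, consciousness (the M-property) has collapsed my brain
% into a definite neural correlate:
|\psi_{\mathrm{me}}\rangle = |N_{\mathrm{headache}}\rangle

% At W_{M-C} there is no M-property, so the brain evolves unitarily
% and remains in a superposition of candidate neural states:
|\psi_{\mathrm{twin}}\rangle = \alpha\,|N_{\mathrm{headache}}\rangle
   + \beta\,|N_{\mathrm{other}}\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1

% For any \beta \neq 0 these are distinct physical states, so the
% 'zombie twin' is not a physical duplicate of me.
```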

So it seems that if consciousness is an M-property then zombies are inconceivable, and this in turn shows that if consciousness is an M-property then consciousness is a physical property.

But one might object that the right world to think about is WP. At this world the neural correlate of consciousness, construed here as distinct from consciousness itself (for the sake of argument), collapses the wave function. It is this world, continues the objection, rather than WM-C, that is the zombie world relative to WM. At WP there is a creature that has a brain, and which has a definite collapsed state identical to the neural correlates of the experience that I am actually having. This is the quantum zombie, not the one that is in the superposition of states.

I think it is plausible that the creature at WP is in the same physical state as I am in some sense, but is it the case that WP has the same physics as WM? I would argue that they have similar physics but they are not the same. In WM, when you lack consciousness you have a giant superposition that evolves deterministically according to the Schrödinger equation. There may be quasi-classical branches due to decoherence, but that is not the same thing as there being a collapsed world, which is what we have at WP.

You cannot just start with WM and subtract consciousness and end up with WP. Instead you end up with WM-C, and you then need to add some new physical law (or change the previous one) stating that it is the neural correlate that is responsible for collapsing the wave function. These worlds have different laws of physics and so are not the same. This is different from the zombie argument as normally construed, which leaves all the strictly physical laws intact and simply posits the removal of the super-physical laws connecting the neural correlates of consciousness to actual consciousness.

Of course, consciousness probably isn’t an M-property but even so, any thoughts on the argument?

Consciousness as an M-Property (?)

Perhaps the central argument for thinking that the mind, consciousness included, must be a part of the physical world comes from the causal efficacy of mental states. Epiphenomenalism may be logically possible but we would need very powerful reasons for accepting it and many find that there are more powerful reasons for thinking that consciousness must play a causal role in the physical world. This has led many people to think that physicalism has the upper hand. Recently this status quo has been challenged by some philosophers who think that consciousness must be a fundamental irreducible component of the world.

One prominent defender of this view is David Chalmers, who splits his credence between panpsychism and interactive dualism. On either of these views consciousness is a fundamental feature of the world that is posited in addition to the physical properties, and yet it allows, or at least aspires to allow, that consciousness has a causal role to play in the physical world. Though I am optimistic about the prospects for physicalism, the kind of dualism I am most sympathetic to is the kind of Quantum Interactive Dualism presented by Chalmers (and even nicer would be a physicalist version of that theory).

The basic idea is to define an m-property as one which acts as if it performs a measurement. M-properties will then have the effect of collapsing the wave function. Though there are many candidates for this kind of property, consciousness seems a natural one. On this view we postulate a fundamental law that says that consciousness cannot be in superpositions, and one that connects the physical correlates of consciousness to conscious experiences. This, argues Chalmers, gives us a way to make sense of a kind of interactive dualism. He does not endorse it, but it is worth exploring.

How does this give us interaction? He says,

what I think is going to actually happen here, if you think about it, is that consciousness most directly interacts with the neural correlates of consciousness, collapsing those out of superposition. So when you have an experience of red as opposed to green that may collapse a superposition of neural correlates of consciousness, say in inferotemporal cortex, into the neural correlates of seeing red as opposed to the neural correlates of seeing green. That will then have an effect downstream. (at minute 56:33 in above linked video)

I like this kind of view and have floated something like it in an episode of SpaceTimeMind (though, again, I would prefer it in a physicalist version). I figured I would jot down a few thoughts in hopes of eliciting some discussion to help me think through the various ideas.

First, one might wonder why it is that consciousness cannot be in a superposition. Why can’t there be a state that is a superposition of consciously seeing red and consciously seeing green? One thing we might say is that phenomenal consciousness essentially involves awareness, so if I am consciously experiencing red this is essentially bound up with an awareness of myself as seeing red. This may provide some grounds for arguing that conscious experiences cannot be in superpositions.

Another major issue with this approach is the quantum Zeno effect. The rough idea here is that if you have a particle that would typically decay at some rate, you can prevent it from decaying by measuring it frequently enough. This threatens to make it impossible for consciousness to show up in our world, or to change once it does. One possible solution is to use the kind of awareness noted above. Suppose we have an unconscious representation of red, and that to make that unconscious representation conscious (in the phenomenal sense) we need a (possibly special kind of) awareness of that state, which in effect is the measurement by the outside observer; this collapses it into the (full) neural correlate of consciously seeing red. That will keep the state from evolving, and so it will continue to be a conscious experience of phenomenal red. But since the relevant kind of awareness is external to the content (i.e. red), the content of the awareness can change, thereby allowing conscious experience to change. This is, in effect, to combine a realist representationalism with a higher-order view.
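For reference, here is the standard textbook statement of the quantum Zeno effect (nothing specific to the consciousness case). For very short times the survival probability of an unmeasured state falls off quadratically rather than exponentially, so dividing a total time t into n measurements gives

```latex
% Short-time survival: P(\delta t) \approx 1 - (\delta t / \tau_Z)^2
% With n projective measurements at intervals \delta t = t/n:
P_{\mathrm{survive}}(t) \approx
\left( 1 - \frac{(t/n)^2}{\tau_Z^{2}} \right)^{\!n}
\;\longrightarrow\; 1 \quad \text{as } n \to \infty
```

where τ_Z is the so-called Zeno time of the system. Frequent enough measurement freezes the state, and that is exactly the worry: a state that, once conscious, is being continually ‘measured’ could seemingly never change.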

One thing that seems to be in the background of Chalmers’ talk is the idea that when we get an interference pattern we have evidence that there was a superposition, and conversely, when we do not get an interference pattern we have evidence of wave function collapse (see minute 34-37 of his talk). But the delayed choice quantum eraser experiments (which I have talked about previously) put pressure on this kind of view.

There have been several recent experiments that build on this basic idea (see this recent paper in PNAS, or this recent paper in Science, or this one in Physical Review Letters). I take these experiments to suggest that the existence of which-path information is enough to destroy the interference pattern.
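The usual textbook way to see why which-path information alone suffices (a generic two-path sketch, not the specific setups of those papers): suppose the particle becomes entangled with a which-path marker,

```latex
|\Psi\rangle = \tfrac{1}{\sqrt{2}}
  \Big( |L\rangle\,|d_L\rangle + |R\rangle\,|d_R\rangle \Big)

% Probability at a point x on the screen, tracing out the marker:
P(x) = \tfrac{1}{2}\,|\psi_L(x)|^2 + \tfrac{1}{2}\,|\psi_R(x)|^2
  + \mathrm{Re}\!\left[ \psi_L^*(x)\,\psi_R(x)\,
      \langle d_L | d_R \rangle \right]
```

When the marker states are orthogonal (⟨d_L|d_R⟩ = 0) the cross term vanishes and the interference disappears, whether or not anyone consciously reads the marker; ‘erasing’ the which-path record makes ⟨d_L|d_R⟩ ≠ 0 again and restores the (conditional) interference.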

So in these kinds of cases we make a measurement, but since the measurement results in the loss of which-path information we still end up with an interference pattern. We thus seem to have an m-property (i.e. my conscious perception of the click produced by some detector) without collapse (as indicated by the presence of an interference pattern).

Thus, if we are to take the consciousness-as-m-property view to be compatible with delayed choice quantum erasure, we need to say that the system is in a superposition until there is a conscious experience, and that even in the cases where there is an interference pattern there is still collapse. The system has collapsed from the superposition of interference pattern + no interference pattern into one or the other.