The Biological Chinese Room (?)

I am getting ready to head out to New York Medical College to give Grand Rounds in the Department of Psychiatry and Behavioral Sciences on the Neurobiology of Consciousness. I am leaving in just a bit, but as I was getting ready I had a strange thought about Searle’s Chinese Room argument that I thought I would jot down very quickly. I assume we are all familiar with the traditional version of the argument. We have you (or Searle) locked in a room receiving input in a foreign language and looking up the proper responses in a giant rule book in order to return the proper output. In effect the person in the room is performing the job that a computer would: taking syntactic representations and transforming them according to formally specified rules. The general idea is that since Searle doesn’t thereby come to understand Chinese, there must be more to understanding than formal computation.
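Just to make the formal-computation picture vivid, here is a minimal sketch in Python of the kind of rule-book procedure at issue. It is purely illustrative: the rule-book entries and the `room` function are my own toy inventions, not anything from Searle.

```python
# A toy illustration of the purely formal symbol manipulation the person in
# the room carries out. The "rule book" is just a lookup table pairing input
# strings with output strings; nothing here involves understanding what the
# symbols mean. (These entries are made up for illustration only.)

RULE_BOOK = {
    "你好吗?": "我很好，谢谢。",      # hypothetical entry: "How are you?" -> "I'm fine, thanks."
    "你会说中文吗?": "会一点。",       # hypothetical entry: "Do you speak Chinese?" -> "A little."
}

def room(input_symbols: str) -> str:
    """Return whatever string the rule book pairs with the input.

    The procedure manipulates uninterpreted symbols according to a formally
    specified rule (table lookup); it 'converses' without understanding.
    """
    return RULE_BOOK.get(input_symbols, "对不起，我不明白。")  # default: "Sorry, I don't understand."

if __name__ == "__main__":
    print(room("你好吗?"))
```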

Now, I don’t want to get bogged down in going over the myriad responses and counter-responses that have appeared since Searle first gave this argument, but it did occur to me that we could give a biological version of it that would target the biological nature of consciousness that Searle prefers. Indeed, I think it would also work against Block’s recent claim that some kind of analog computation suffices for phenomenal consciousness (see his talk at Google (and especially the questions at the end)). So the basic idea is this. Instead of having the person in the room implement formal computations, have them implement analog ones by playing the role of neurons. They would be sequestered in the room as usual and would receive input in the form of neurotransmitters. They would then respond with the appropriate neurotransmitters. We can imagine that the entire room is hooked up in such a way that the Chinese speaker on the outside is speaking normally, or typing, or whatever, and this gets translated into neurochemical activity, which is what the person in the room receives. They respond in kind and this gets translated into speech on the other end. Searle still wouldn’t understand Chinese.

So it seems that either this refutes the biological view of consciousness or it suggests what is wrong with the original Chinese Room argument…any thoughts?

Kozuch on Lau and Brown

Way back on November 20th, 2009, Benji Kozuch came and gave a talk at the CUNY Cognitive Science series and became the first person I persuaded to attempt an epic marathon of cognitive science, drinking, and jamming!  The mission: give a 3-hour talk followed by intense discussion over drinks (and preceded by intense discussion over lunch), followed by a late-night jam session at a midtown rehearsal studio. This monstrous marathon typically begins at noon with lunch and concludes sometime around 10 pm when the jamming is done (drinks after jamming optional). That’s 10-plus hours of philosophical and musical mayhem! We recorded the jam that night but the recording was subsequently ruined, and no one has ever heard what happened that night…which is probably for the best!

This was just before our first open jam session at the Parkside Lounge (held after the American Philosophical Association meeting in NYC in December 2009), which became the New York Consciousness Collective and gave rise to Qualia Fest. But this itself was the culmination of a lot of music playing going back to the summer of 2006. The last Qualia Fest was in 2012, but since then we have had two other brave members of Club Cogsci. One is myself (in 2015) and the other is Joe LeDoux (in 2016). That’s 10 years of jamming with cognitive scientists and philosophers! Having done it myself, I can say it is grueling, and special thanks go to Benji for being such a champion.

Putting all of that to one side, Kozuch has in some recent publications argued against the position that I tentatively support. In particular, in his 2014 Philosophical Studies paper he argued that evidence from lesions to prefrontal areas casts doubt on higher-order theories of consciousness (see Lau and Rosenthal for a defense of higher-order theories against this kind of charge). I have for some time meant to post something about this (at one point I thought I might have a conference presentation based on it)…but, as is becoming more common, it has taken a while to get to it! Teaching a 6/3-6/3 load has been stressful, but I think I am beginning to get the hang of how to manage time and to find the time to have some thoughts that are not related to children or teaching 🙂

The first thing I would note is that Kozuch clearly has the relational version of the higher-order theory in mind. In the opening setup he says,

…[Higher-Order] theories claim that a mental state M cannot be phenomenally conscious unless M is targeted by some mental state M*. It is precisely this claim that is my target.

This is one way of characterizing the higher-order approach, but I have spent a lot of time suggesting that it is not the best way to think of higher-order theories. This is why I coined the term ‘HOROR theory’. I used to think that the non-relational way of doing things was closer to the spirit of what Rosenthal intended, but now I think that this is a pointless debate and that there are just (at least) two different ways of thinking about higher-order theories. On one kind, as Kozuch says, the first-order state M is made phenomenally conscious by the targeting of M by some higher-order state M*.

I have argued that another way of thinking about all of this is that it is not the first-order state that gets turned into a phenomenally conscious state; rather, the higher-order representation is itself the phenomenally conscious state. This is because of things like Block’s argument, and the empirical evidence (as I interpret that evidence, at least). Now, this would not really matter if all Kozuch wanted to do was to argue against the relational view; I might even join him in that! But if he is going to cite my work and argue against the view that I endorse, then the HOROR theory might make a difference. Let’s see.

The basic premise of the paper is that if a higher-order theory is true then we have good reason to think that damaging or impairing the brain areas associated with the higher-order awareness should impair conscious experience. From here Kozuch argues that the best candidate for the relevant brain area is the dorsolateral prefrontal cortex. I agree that we have enough evidence to take this area seriously as a possible candidate for an area important for higher-order awareness, but I also think we need to keep in mind other prefrontal areas, and even the possibility that different prefrontal areas may have different roles to play in higher-order awareness.

At any rate, I think I can agree with Kozuch’s basic premise that if we damaged the right parts of the prefrontal cortex we should expect loss or degradation of visual phenomenology. But what would count as evidence of this? If we call an area of the brain an integral area only if that area is necessary for conscious experience, then what will the result of disabling that area be? Kozuch begins to answer this question as follows,

It is somewhat straightforward what would happen if each of a subject’s integral areas (or networks) were disabled. Since the subject could no longer produce those HO states necessary for visual consciousness, we may reasonably predict this results in something phenomenologically similar to blindness.

I think this is somewhat right. From the subject’s point of view there would be no visual phenomenology, but I am not sure this is similar to blindness, where a subject seems to be aware of their lack of visual phenomenology (or at least can be made aware of it). Kozuch is careful to note in a footnote that it is at least a possibility that subjects may lose conscious phenomenology but fail to notice it, but I do not think he takes this as seriously as he should.

This is because on the higher-order theory, especially the non-relational version I am most likely to defend, the first-order states largely account for the behavioral data and the higher-order states account for visual phenomenology. Thus in a perfect separation of the two, that is, in a case of just first-order states and no higher-order states at all, the theory predicts that the behavior of the animal will largely be undisturbed. The first-order states will produce their usual effects and the animal will be able to sort, push buttons, etc. They will not be able to report on their experience, or any changes therein, because they will not have the relevant higher-order states needed to be aware that they are having any first-order states at all. I am not sure this is what is happening in these cases (I have heard some severe skepticism over whether these second-hand reports should be given much weight), but it is not ruled out theoretically, and so we haven’t got any real evidence that pushes past one’s intuitive feel for these things. Kozuch comes close to recognizing this when he says, in a footnote,

In what particular manner should we expect the deficits to be detected? I do not precisely know, but one could guess that a subject with a disabled integral area would not perform normally on (at least some) tests of their visual abilities. Failing that, we could probably still expect the subject to volunteer information indicating that things ‘‘seemed’’ visually different to her.

But both of these claims are disputed by the higher-order theory!

Later in the paper, where Kozuch is addressing some of the evidence for the involvement of the prefrontal cortex, he introduces the idea of redundancy. If someone objects that taking away one integral area does not dramatically diminish visual phenomenology because some other area takes over or covers for it, then he claims we are committed to the view that there are redundant duplications of first-order contents at the higher-order level. But this does not seem right to me. An alternative view is that the prefrontal areas each contribute something different to the content of the higher-order representation, and taking one away may take away one component of the overall representation. We do not need to appeal to redundancy to explain why there may not be dramatic changes in the conscious experiences of subjects.

Finally, I would say that I wish Kozuch had addressed what I take to be the main argument in Lau and Brown (and elsewhere), which is that we have empirical cases which suggest that there is a difference in the conscious visual phenomenology of a subject but where the first-order representations do not seem like they would be different in the relevant way. In one case, the Rare Charles Bonnet case, we have reason to think that the first-order representations are too weak to account for the rich phenomenal experience. In another case, subjective inflation, we have reason to think that the first-order states are held roughly constant while the phenomenology changes.

-photo by Jared Blank

Existentialism is a Transhumanism

In the academic year 2015-2016 I was the co-director, with my colleague Naomi Stubbs, of a faculty seminar on Technology, Self, and Society. This was part of a larger three-year project funded by a grant from the NEH and supported by LaGuardia’s Center for Teaching and Learning. During my year as co-director the theme was Techno-Humanism and Transhumanism. You can see the full schedule for the seminar at the earlier link, but we read four books over the year (in addition to many articles). In the Fall 2015 semester we read The Techno-Human Condition by Braden Allenby and Superintelligence by Nick Bostrom. In the Spring semester we read The Future of the Mind by Michio Kaku and Neuroethics, an anthology edited by Martha Farah. In addition to the readings, Allenby and Kaku both gave talks at LaGuardia, and since we had room for one more talk we invited David Chalmers, who gave his paper on The Real and the Virtual (see the short video for Aeon here).

All in all this was a fantastic seminar and I really enjoyed being a part of it. I was especially surprised to find out that some of the other faculty had used my Terminator and Philosophy book in their Science, Humanism and Technology course (I thought I was the only one who had used that book!).  The faculty came from many different disciplines, ranging from English to Neuroscience, and I learned quite a bit throughout the process. Two things became especially clear to me over the course of the year. The first is that many of my views can be described as Transhumanist in nature. The second is that a lot of my views can be described as Existentialist in nature.

The former was unsurprising but the latter was a bit surprising. I briefly studied Sartre and Existentialism as an undergraduate at San Francisco State University from 1997-1998 and I was really interested in Sartre’s work after that (i.e. I searched every book store in SF for anything Sartre-related, bought it, read it, and argued endlessly with anyone around about whether there was ‘momentum’ in consciousness). However, once I got to graduate school (in 2000) I began to focus even more on psychology, neuroscience, and the philosophy of mind, and I gradually lost contact with Sartre. I have never really kept up with the literature in this area (though I have recently read the Stanford Encyclopedia of Philosophy entries on Sartre and Existentialism), haven’t read Sartre in quite a while (though I did get out my copies of Being and Nothingness and Existentialism is a Humanism a couple of times during the seminar), and don’t work on any explicitly Sartrean themes in my published work (though there are connections between higher-order theories of consciousness and Sartre). Yet during this last year I found myself again and again appealing to distinctly Sartrean views, or at least Sartrean as I remembered them from being an undergraduate! By the end of it all I came to the view that Existential Transhumanism is an interesting philosophical view and is probably a pretty good descriptor for what I think about these issues. So, all that having been said, please take what follows with a grain of salt.

The core idea of existentialism as I understand it is a claim about the nature of persons, and it is summed up in Sartre’s dictum that ‘existence precedes essence’. Whatever a person is, you aren’t born one. You become one by acting, or, as Sartre might put it, we create ourselves through our choices. Many interpret that claim as somehow being at odds with physicalism (Sartre was certainly a dualist), but I do not. But what does this mean? It helps to invoke the distinction between Facticity and Transcendence. Facticity covers all of the things that are knowable about me from a third-person point of view. It is what an exhaustive biographer could put together. But I am not merely the sum total of those facts. I am essentially a project, an aiming toward the future. This aiming toward something is the way in which Sartre interpreted the notion of intentionality. All consciousness, for him, was necessarily directed at something that was not itself part of consciousness. This is why Sartre says ‘I am not what I am and I am what I am not’. I am not what I am in the sense of not being merely my facticity. I am what I am not in the sense that I am continually creating myself and turning myself into something that I was not previously.

Turning now for the moment to Transhumanism, I interpret it in roughly the same way as the World Transhumanist Association does, that is, as an extension of Humanism. Reason represents the best chance that human beings have of realizing our most cherished ideals. These ideals are enshrined in many of the world’s great religions and include principles of universality (all are equal in some sense) and compassion. Transhumanists see technology, at least in part, as a way of enhancing human reason and so as a way of overcoming our natural limitations.

One objection to this kind of project is that we could modify ourselves to the point of no longer being human, or to the point of our original selves no longer existing. Here I think the existentialist idea that there are no essential properties required to be human can help. We are defined by the fact that we are ‘a being whose being is in question’. That is, we are essentially the kind of thing which creates itself, which aims toward something that is not yet what it is. Once one takes this kind of view, one sees that there is no danger in modifying ourselves. This seems to me to be very much in line with the general idea that the kinds of modifications the transhumanist envisions are not different in kind from the kinds we have always made (shoes, eyeglasses, etc.). Even if we are able to upload our minds to a virtual environment, we may still be human by the existentialist definition.

In addition, another objection, which was the central objection in the Allenby book, is that the Transhumanist somehow assumes a notion of the individual, as an independent rational entity, which doesn’t really exist. This may be the case, but here I think that existentialism is very handy in helping us respond. The kind of individual envisioned by the Enlightenment thinkers may not exist, but one way of seeing the transhumanist project is as seeking to construct such a being.

Enlightenment, in Kant’s immortal words, is

….man’s release from his self-incurred tutelage. Tutelage is man’s inability to make use of his understanding without direction from another. Self-incurred is this tutelage when its cause lies not in lack of reason but in lack of resolution and courage to use it without direction from another. Sapere aude! ‘Have courage to use your own reason!’- that is the motto of enlightenment

To this the transhumanist adds that Kant may have been wrong in thinking that we have enough reason and simply need the courage to use it. We may need to make ourselves into the kinds of rational beings which could fulfill the ideals of the Enlightenment.

There is a lot more that I would like to say about these issues, but at this point I will briefly mention two other themes that don’t have much to do with existentialism. One is from Bostrom (see a recent talk of his at NYU’s Ethics of A.I. conference). One of Bostrom’s main claims is what he calls the orthogonality thesis. This is the claim that intelligence and values are orthogonal to each other: you can pair any level of intelligence with any goal at all. This may be true for intelligence, but I certainly don’t believe it is true for rationality.

Switching gears a bit, I wanted to mention David Chalmers’ talk. I found his basic premise to be very convincing. The basic idea seemed to be that virtual objects count as real in much the same way as concrete objects do. When one is in a virtual environment (I haven’t been in one yet but I am hoping to try a Vive or a PlayStation VR set soon!) and one interacts with a virtual dragon, there really is a virtual object that is there and that one is interacting with. The fundamental nature of this object is computational: there are data structures that interact in various ways, playing roughly the role that atomic structure plays for ordinary objects. Afterwards I asked if he thought the same was true for dreams. It seemed to me that many of the same arguments could be given for the conclusion that in one’s dreams one interacts with dream objects which are real in the same way as virtual objects. He said perhaps, but that it depended on whether one was a functionalist about the mind. It seems to me that someone like Chalmers, who thinks that there is a computational/functional neural correlate for conscious states, is committed to this kind of view about dreams (even though he is a dualist). Dream objects should count as real on Chalmers’ view.
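To put the point about data structures a little more concretely, here is a toy sketch in Python of a virtual object as a small data structure whose “interactions” are just updates to its fields. This is my own illustration of the general idea, not anything from Chalmers’ talk; the class, fields, and method names are invented.

```python
from dataclasses import dataclass

# A toy "virtual dragon": on the picture sketched above, what makes the
# dragon real is that some such data structure is actually being computed,
# playing roughly the role that atomic structure plays for ordinary objects.
# (Fields and methods are made up for illustration only.)
@dataclass
class VirtualDragon:
    x: float = 0.0
    y: float = 0.0
    hit_points: int = 100

    def take_hit(self, damage: int) -> None:
        """Interacting with the dragon is just an update to the underlying data."""
        self.hit_points = max(0, self.hit_points - damage)

dragon = VirtualDragon()
dragon.take_hit(30)        # the 'interaction' changes the data structure
print(dragon.hit_points)   # 70
```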

Zombies vs Shombies

Richard Marshall, a writer for 3:AM Magazine, has been interviewing philosophers. After interviewing a long list of distinguished philosophers, including Peter Carruthers, Josh Knobe, Brian Leiter, Alex Rosenberg, Eric Schwitzgebel, Jason Stanley, Alfred Mele, Graham Priest, Kit Fine, Patricia Churchland, Eric Olson, Michael Lynch, Pete Mandik, Eddy Nahmias, J.C. Beall, Sarah Sawyer, Gila Sher, Cecile Fabre, and Christine Korsgaard, among others, he seems to be scraping the bottom of the barrel, since he has just published my interview. I had a great time engaging in some Existential Psychoanalysis of myself!

The Brain and its States

Some time ago I was invited to contribute a paper to a forthcoming volume entitled Being in Time: Dynamical Models of Phenomenal Experience. I was pleasantly surprised to find out that I was invited because of my paper “What is a Brain State?” Looking back at that paper, which I was writing in 2004-2005, I was interested in questions about the Identity Theory and not so much about consciousness per se, and I wished I had said something relating the thesis there to various notions of consciousness. So I was happy to take this opportunity to put together a general statement of my current views on this stuff, as well as a chance to develop some of my recent views about higher-order theories. Overall I think it is a fairly decent statement of my considered opinion on the home of consciousness in the brain. Any comments or feedback are greatly appreciated!