Consciousness Studies in 1000 words (more) or less

The head of the philosophy program at LaGuardia, John Chaffee, is the author of an introductory textbook, The Philosopher's Way. The book is entering its fourth edition and John is updating the chapter on the self and consciousness. In particular he is updating the section on Paul Churchland's eliminative materialism to include a discussion of functionalism. I have been asked to write something that could possibly be included after this discussion and which sums up the current state of the field, provides a kind of "star map", and might intrigue an undergraduate to learn more. Tall order! Here is a first draft of what I came up with. Comments and suggestions welcome!

Contemporary Philosophy of Mind

The philosophical study of the mind is alive and well in the 21st century. Broadly speaking one might say that there are three overarching concerns in this debate. The first concerns whether consciousness ultimately depends on something computational/functional or whether it depends on something biological. The second concerns whether consciousness is ultimately physical or non-physical, and the third concerns what role empirical results play in philosophical theories of consciousness.

Consider the first question. Some philosophers, like John Searle at U.C. Berkeley and Ned Block at New York University, think that consciousness is distinctly biological. To see what is at issue here we can employ a commonly used thought experiment. Neurons no doubt perform functions. Ask any psychologist or neuroscientist and they will tell you about sodium and potassium ions, cell membranes, neurotransmitters, action potentials, and the rest. That is, we can think of a neuron as something that takes a certain kind of input (neurotransmitters from other neurons, ions) and delivers a certain kind of output (an action potential or a graded potential). In principle it seems possible that we could use a nano-machine to mimic a neuron's functional profile. This nano-machine would be able to take all of the same input and deliver all of the same output. One might think of it as an artificial neuron in the sense that we have artificial hearts. It is a bit of metal and plastic but it is designed to do the exact same job that the original was meant to do. Suppose now that this nano-machine zaps the neuron and quickly takes its place. Now you have all of your regular neurons and one artificial neuron. But it does everything the original neuron did, so we have no reason to think that this should change your conscious experience overly much. But now we do it with another neuron, and another, and another. The question, then, is what happens to consciousness when we replace all of the neurons?

David Chalmers, a philosopher at the Australian National University and New York University, has argued that as a person moves through this process of having their neurons replaced with artificial ones we have a few options. We might say that as their neurons are being replaced their conscious experience slowly fades like a light on a dimmer switch, or we might say that conscious experience is simply cut off at some point when some number of neurons have been replaced, maybe even the first one! But each of these options has a very strange consequence. Suppose that I am having a headache during the hour that my brain is being “fitted” with nanobots. Now suppose that my conscious experience is fading as the process progresses, with it being absent at the end. Well, ok, but the first thing to notice is that there can be no difference in my behavior as we go through the process. Each nanobot performs exactly the same function as the neuron it replaces, and we can think of the nanobot as instantaneously zapping and replacing the neuron, so that I could be driving a car or reading a book while this was happening. But then we end up with the very strange result that we cannot really know that we have conscious experience right now! How do I know that I have a conscious pain? Well, I feel it! But if we were right that it can fade out, or even pop into and out of existence, without my noticing, then how do I know it is there in the first place? Chalmers concludes that it is safer to think that the conscious experience would be the same at the end of the process. But if this is right then consciousness depends on functional organization and not on the biology, or non-biology, of the hardware. Those like Searle and Block hold that real neurons with their biological properties are needed in order to have consciousness, and that the neural net at the end of the process would no longer be you or have thoughts or pains, but would only simulate those things.
Whatever your intuitions are, this may not be science fiction for long. Neuroscience is already well along in its investigation of brain-machine interfaces (for instance, as a way of giving amputees prosthetic limbs that are controlled just like one's own limbs), and enhancement of the human mind by neural prosthetics is perhaps not far off.

Notice that in thinking about the question of whether the mind ultimately depends on biological or functional properties we appealed to a thought experiment. We did not go out and do an actual experiment. We consulted what we intuitively thought about a piece of science fiction. In contemporary philosophy of mind there are those who think that these kinds of intuitions carry great weight and those who think that they do not. Those who think that they carry weight hold that we can know some deep fact about the nature of consciousness on the basis of reason alone. For instance, take Frank Jackson's Mary thought experiment (Jackson is also a philosopher at the Australian National University). Imagine a brilliant scientist, Mary, who is locked in a black and white room but who is able to communicate with the outside world via a black and white television screen. Mary is able to learn all of the science that we will ever be able to know. So imagine that she knows the TRUTH about physics, whatever it is. Now suppose that she is released from her room and shown a ripe red tomato. It seems natural to think that she would learn something that she might express by 'oh, THAT's what it is like to see red! Everyone out here kept talking about red, but now that I have seen it I know what they mean'. But since she knew all of the physical facts, and yet did not know at least one fact, namely what it is like for her to see red, it seems that that fact must not be a physical fact. If this and related thought experiments are right then it seems that we do not need empirical evidence of any kind to know that consciousness cannot be physical. (Note that David Chalmers, discussed above, has advocated this line against physicalism as well. He has introduced philosophical zombies, creatures that are physical duplicates of us but which lack consciousness. If these are possible then consciousness is not a consequence of physics alone.)

These arguments, and Jackson's knowledge argument in particular, have spawned a huge number of responses. One very natural response is to question the inattention to scientific discoveries. Dan Dennett, a philosopher at Tufts University, argues that this whole strategy is thoroughly misguided. We seem to think that there are these magical conscious properties (the experience of having a pain, say) that just aren't there. What is there is the seeming that it is so. Dennett often makes a comparison to stage magic. Take a professional magician, say David Copperfield. What David Copperfield does is make it seem as though he has done something else. If you wanted to know how Copperfield performed some trick you would need to explain how he made it seem that the statue was gone, or how he made it seem that the person was levitating. You don't try to show that he really did it, but how he made it seem as though he really did. Now, is what he does real magic? There is some temptation to say no. Real magic is not just a trick. But sadly, the only magic that is in fact real is the kind that is fake. Dennett thinks the same is true of consciousness. When the functionalist explains what a pain is and someone objects that this is not magic enough (Mary wouldn't know it, or a zombie would lack it), the functionalist should respond that there is no such thing as that kind of magic. What is in fact true is that the brain makes it seem to us as though we have all of this magical stuff going on, but it only seems to be going on. Why think this? Dennett's main argument is that this has been shown to us by the empirical sciences. Take just one example, the case of so-called change blindness (go online and search for 'the amazing color changing card trick' to see a cool example). In these kinds of cases people are presented with a scene in which some very large central thing changes. People are usually very bad at spotting the change.
Yet when they see the difference they cannot believe that they did not notice it before. That is, from the first-person point of view it really seems as though one has access to a very rich and detailed scene, but actually one is mostly unaware of very large and salient changes in one’s environment. If this is right then our intuitions about science fiction cases may not be that reliable. And this is what Dennett and those like him think.

(cross posted at Brains)

The Myth of Phenomenological Overflow

Update 7/27/11
The paper is now available on Consciousness and Cognition’s website: The Myth Of Phenomenological Overflow

I have just finished my contribution to the special issue of Consciousness and Cognition that I am editing, which features descendants of papers from the second online consciousness conference, and have made the pre-print available at my PhilPapers profile. Discussion and comments are welcome.

The Myth of Phenomenological Overflow

In this paper I examine the dispute between Hakwan Lau, Ned Block, and David Rosenthal over the extent to which empirical results can help us decide between first-order and higher-order theories of consciousness. What emerges from this is an overall argument to the best explanation against the first-order view of consciousness and the dispelling of the mythological notion of phenomenological overflow that comes with it.