Consciousness Studies in 1000 words (more) or less

The head of the philosophy program at LaGuardia, John Chaffee, is the author of an introductory textbook, The Philosopher’s Way. The book is entering its fourth edition and John is updating the chapter on the self and consciousness. In particular, he is updating the section on Paul Churchland’s eliminative materialism to include a discussion of functionalism. I have been asked to write something that could possibly be included after this discussion and which sums up the current state of the field, provides a kind of “star map”, and might intrigue an undergraduate to learn more. Tall order! Here is a first draft of what I came up with. Comments and suggestions welcome!
—————————-

Contemporary Philosophy of Mind

The philosophical study of the mind is alive and well in the 21st century. Broadly speaking, one might say that there are three overarching concerns in the contemporary debate. The first concerns whether consciousness ultimately depends on something computational/functional or whether it depends on something biological. The second concerns whether consciousness is ultimately physical or non-physical, and the third concerns what role empirical results play in philosophical theories of consciousness.

Consider the first question. Some philosophers, like John Searle at U.C. Berkeley and Ned Block at New York University, think that consciousness is distinctly biological. To see what is at issue here we can employ a commonly used thought experiment. Neurons no doubt perform functions. Ask any psychologist or neuroscientist and they will tell you about sodium and potassium ions, cell membranes, neurotransmitters, action potentials, and the rest. That is, we can think of a neuron as something that takes a certain kind of input (neurotransmitters from other neurons, ions) and delivers a certain kind of output (an action potential or a graded potential). In principle it seems possible that we could use a nano-machine to mimic a neuron’s functional profile. This nano-machine would be able to take all of the same input and deliver all of the same output. One might think of it as an artificial neuron in the sense that we have artificial hearts: it is a bit of metal and plastic, but it is designed to do the exact same job that the original was meant to do. Suppose now that this nano-machine zaps the neuron and quickly takes its place. Now you have all of your regular neurons and one artificial neuron. But it does everything the original neuron did, so we have no reason to think that this should change your conscious experience overly much. But now we do it with another neuron, and another, and another. The question, then, is what happens to consciousness when we replace all of the neurons?
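
For readers who like to think in code, here is one way to picture a “functional profile.” This is only a toy sketch: the class names and the crude threshold rule are my own inventions for illustration, not a model of real neurons and not drawn from any of the philosophers discussed here. The point is simply that two very different implementations can share exactly the same input-output behavior.

```python
# A toy illustration of "same functional profile, different hardware".
# The threshold rule is a cartoon, not a model of how real neurons work.

class BiologicalNeuron:
    """Stands in for the original neuron: signals in, firing decision out."""

    def __init__(self, threshold=1.0):
        self.threshold = threshold

    def respond(self, incoming_signals):
        # Sum the incoming signals and "fire" if they reach the threshold.
        return sum(incoming_signals) >= self.threshold


class ArtificialNeuron:
    """The nano-machine replacement: different stuff, same inputs and outputs."""

    def __init__(self, threshold=1.0):
        self.threshold = threshold

    def respond(self, incoming_signals):
        # Implemented differently under the hood (here only trivially so),
        # but it returns the same output for every possible input.
        total = 0.0
        for signal in incoming_signals:
            total += signal
        return total >= self.threshold


# From the outside the two are indistinguishable.
signals = [0.4, 0.3, 0.5]
assert BiologicalNeuron().respond(signals) == ArtificialNeuron().respond(signals)
```

If consciousness depends only on this pattern of inputs and outputs, as the functionalist holds, then swapping one component for the other should make no difference. If it depends on what the components are made of, as Searle and Block hold, the swap might make all the difference, even though nothing in the behavior would reveal it.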

David Chalmers, a philosopher at the Australian National University and New York University, has argued that as a person moves through this process of having their neurons replaced with artificial ones, we have a few options. We might say that as their neurons are being replaced their conscious experience slowly fades like a light on a dimmer switch, or we might say that conscious experience is simply cut off at some point when some number of neurons have been replaced, maybe even the first one! But each of these has a very strange consequence. Suppose that I am having a headache during the hour that my brain is being “fitted” with nanobots. Now suppose that my conscious experience fades as the process progresses and is absent by the end. Well, ok, but the first thing to notice is that there can be no difference in your behavior as we go through the process. Each nanobot performs exactly the same function as the neuron, and we can think of the nanobot as instantaneously zapping and replacing the neuron so that you could be driving a car or reading a book while this was happening. But then we end up with the very strange result that we cannot really know that we have conscious experience right now! How do I know that I have a conscious pain? Well, I feel it! But if we were right that it can fade out, or even pop into and out of existence, without me noticing then how do I know it is there in the first place? Chalmers concludes that it is safer to think that the conscious experience would be the same at the end of the process. But if this is right then consciousness depends on functional organization and not on the biology, or non-biology, of the hardware. Those like Searle and Block hold that real neurons with their biological properties are needed in order to have consciousness and that the neural net at the end of the process would no longer be you or have thoughts or pains, but would only simulate those things. Whatever your intuitions are, this may not be science fiction for long. Neuroscience is already well along in its investigation of ways to design brain-machine interfaces (for instance, as a way of helping amputees with prosthetic limbs that are controlled just like one’s own limbs), and enhancement of the human mind by neural prosthetics is perhaps not far off.

Notice that in thinking about the question of whether the mind ultimately depends on biological or functional properties we appealed to a thought experiment. We did not go out and do an actual experiment; we consulted what we intuitively thought about a piece of science fiction. In contemporary philosophy of mind there are those who think that these kinds of intuitions carry great weight and those who think that they do not. Those who think that they carry weight hold that we can know some deep fact about the nature of consciousness on the basis of reason alone. For instance, take Frank Jackson’s Mary thought experiment (Jackson is also a philosopher at the Australian National University). Imagine Mary, a brilliant scientist who is locked in a black-and-white room but who is able to communicate with the outside world via a black-and-white television screen. Mary is able to learn all of the science that we will ever be able to know. So imagine that she knows the TRUTH about physics, whatever it is. Now suppose that she is released from her room and shown a ripe red tomato. It seems natural to think that she would learn something that she might express by ‘oh, THAT’s what it is like to see red! Everyone out here kept talking about red, but now that I have seen it I know what they mean’. But since she knew all of the physical facts, and yet did not know at least one fact, namely what it is like for her to see red, it seems that this fact must not be a physical fact. If this and related thought experiments are right then it seems that we do not need empirical evidence of any kind to know that consciousness cannot be physical. (Note that David Chalmers, discussed above, has advocated this line against physicalism as well. He has introduced philosophical zombies: creatures that are physical duplicates of us but which lack consciousness. If these are possible then consciousness is not a consequence of physics alone.)

These arguments, and Jackson’s knowledge argument in particular, have spawned a huge number of responses. One very natural response is to question the inattention to scientific discoveries. Dan Dennett, a philosopher at Tufts University, argues that this whole strategy is thoroughly misguided. We seem to think that there are these magical conscious properties (the experience of having a pain, for instance) that just aren’t there. What is there is the seeming that it is so. Dennett often makes a comparison to magic. Take some professional magician, say David Copperfield. What David Copperfield does is make it seem as though he has done something else. If you wanted to know how Copperfield performed some trick, you would need to explain how he made it seem that the statue was gone, or how he made it seem that the person was levitating. You don’t try to show that he really did it; you try to show how he made it seem as though he really did. Now, is what he does real magic? There is some temptation to say no. Real magic is not just a trick. But sadly, the only magic that is in fact real is the kind that is fake. Dennett thinks the same is true of consciousness. When the functionalist explains what a pain is and someone objects that this is not magic enough (Mary wouldn’t know it, or a zombie would lack it), the functionalist should respond that there is no such thing as that kind of magic. What is in fact true is that the brain makes it seem to us as though we have all of this magical stuff going on, but it only seems to be going on. Why think this? Dennett’s main argument is that this has been shown to us by the empirical sciences. Take just one example, the case of so-called change blindness (go online and search for ‘the amazing color changing card trick’ to see a cool example). In these kinds of cases people are presented with a scene in which some large and central feature changes. People are usually very bad at spotting the change, yet when they finally see the difference they cannot believe that they did not notice it before. That is, from the first-person point of view it really seems as though one has access to a very rich and detailed scene, but in fact one is mostly unaware of very large and salient changes in one’s environment. If this is right then our intuitions about science-fiction cases may not be that reliable, and this is exactly what Dennett and those like him think.

(cross-posted at Brains)

12 thoughts on “Consciousness Studies in 1000 words (more) or less”

  1. I’m an amateur philosopher, probably comparable to an inexperienced undergrad in some respects.

    It might be useful to footnote the university affiliations to remove a potential distraction. Alongside the footnote, you could include links to online versions of their most relevant articles. Dennett has a great TED Talk on YouTube that would go nicely with the “real magic” bit.

    “Well, ok, but the first thing to notice is that there can be no difference in your behavior as we go through the process. Each nanobot performs exactly the same function as the neuron, and we can think of the nanobot as instantaneously zapping and replacing the neuron so that you could be driving a car or reading a book while this was happening. But then we end up with the very strange result that we cannot really know that we have conscious experience right now! How do I know that I have a conscious pain? Well, I feel it! But if we were right that it can fade out, or even pop into and out of existence, without me noticing then how do I know it is there in the first place? Chalmers concludes that it is safer to think that the conscious experience would be the same at the end of the process. But if this is right then consciousness depends on functional organization and not on the biology, or non-biology, of the hardware. Those like Searle and Block hold that real neurons with their biological properties are needed in order to have consciousness and that the neural net at the end of the process would no longer be you or have thoughts or pains, but would only simulate those things.”

    Consider a similar argument about the past:

    “If the world were created 1 micro-second ago complete with memory traces and dinosaur bones and all the other evidence of the past, everything now would be just as it actually is. You would apparently be mid-way through a driving trip or reading a book, but you would never have started the trip or the book despite appearances to the contrary. But then we end up with the very strange result that we cannot really know that we started our trips or books. The past could be there or not without us being any the wiser now. But if this is right, then it is safer to think that the past just IS its traces in the present rather than some mysterious inaccessible metaphysical reality.”

    People once were convinced by this kind of specious argument, but I hope they have given it up. The trick of this argument, and of the one you gave, is the setting up of a skeptical scenario in which a counterpart of oneself would not know something that we in fact do know, and then concluding that knowledge of the things we know requires an implausible metaphysical thesis. Haven’t we learned that this is a fallacious form of argument?

    • hi richard — thanks for pointing me to ned’s comment. i more or less agree with ned. your bit about “But then we end up with the very strange result that we cannot really know that we have conscious experience right now” is no part of my argument. the mere possibility of other beings who are functionally like me but have false beliefs doesn’t show that my counterpart beliefs aren’t knowledge — especially if they have different evidence from me, as is plausible in these cases. even on my own view, i think that zombies, fading qualia, dancing qualia are all at least logically possible (although not nomologically possible), but i still think we can know that we are conscious when we are. of course there’s then a nontrivial question about just how to reconcile those claims — i try to do that in my paper on the content and epistemology of phenomenal belief.

  3. Hi Dave and Ned, thanks for these very helpful comments!

    I guess I thought that passages like this,

    The crucial point here is that Joe is systematically wrong about everything that he is experiencing. He certainly says that he is having bright red and yellow experiences, but he is merely experiencing tepid pink. If you ask him, he will claim to be experiencing all sorts of subtly different shades of red, but in fact many of these are quite homogeneous in his experience. He may even complain about the noise, when his auditory experience is really very mild. Worse, on a functional construal of judgment, Joe will even judge that he has all these complex experiences that he in fact lacks. In short, Joe is utterly out of touch with his conscious experience, and is incapable of getting in touch.

    There is a significant implausibility here. This is a being whose rational processes are functioning and who is in fact conscious, but who is completely wrong about his own conscious experiences. Perhaps in the extreme case, when all is dark inside, it is reasonable to suppose that a system could be so misguided in its claims and judgments – after all, in a sense there is nobody in there to be wrong. But in the intermediate case, this is much less plausible. In every case with which we are familiar, conscious beings are generally capable of forming accurate judgments about their experience, in the absence of distraction and irrationality. For a sentient, rational being that is suffering from no functional pathology to be so systematically out of touch with its experiences would imply a strong dissociation between consciousness and cognition. We have little reason to believe that consciousness is such an ill-behaved phenomenon, and good reason to believe otherwise.

    and,

    Indeed, if we are to suppose that Dancing Qualia are empirically possible, we are led to a worrying thought: they might be actual, and happening to us all the time. The physiological properties of our functional mechanisms are constantly changing. The functional properties of the mechanisms are reasonably robust; one would expect that this robustness would be ensured by evolution. But there is no adaptive reason for the non-functional properties to stay constant. From moment to moment there will certainly be changes in low-level molecular properties. Properties such as position, atomic makeup, and so on can change while functional role is preserved, and such change is almost certainly going on constantly.

    If we allow that qualia are dependent not just on functional organization but on implementational details, it may well be that our qualia are in fact dancing before our eyes all the time. There seems to be no principled reason why a change from neurons to silicon should make a difference while a change in neural realization should not; the only place to draw a principled line is at the functional level. The reason why we doubt that such dancing is taking place in our own cases is that we accept the following principle: when one’s experiences change significantly, one can notice the change. If we were to accept the possibility of Dancing Qualia in the original case, we would be discarding this principle, and it would no longer be available as a defense against skepticism even in the more usual cases.

    were making the claim that you quote. But now I guess the better way to put it (leaving aside the second argument about bizarre relationships between physical states and belief contents) might be that “we end up with the result that there could be a creature functionally and rationally just like us who did not know that it had conscious experience and given that we know that we have conscious experience this result is very implausible.” Is that better?

    • hi richard — neither of these passages involves the inference to the claim that we don’t know that we are conscious or even that the creature in question doesn’t know. i do say that joe is “out of touch” with his consciousness, but that wasn’t meant to be as strong as the claim that he doesn’t know that he is conscious. upon seeing the second paragraph in your second quote, though, it does look like i am invoking some sort of skeptical threat: not the threat that we don’t know we are conscious, but the threat that we don’t know that our qualia are not dancing over time (n.b. a somewhat different threat).

      i think matters are a bit more complex than this paragraph suggests. the inference from nomological possibility of dancing qualia to lack of knowledge is nontrivial. of course even the logical possibility of dancing qualia could be used to raise a similar threat, and various sorts of anti-skeptical replies are available in both cases. the rough thought, though, was that if dancing qualia are nomologically possible, then there will be relatively nearby worlds in which qualia dance (whereas if they’re nomologically impossible those worlds are much further away). furthermore, it then becomes harder to exclude the possibility that qualia depend on low-level molecular features so that they’re dancing all the time. so there’s at least a serious skeptical worry to be addressed.

      i think ned’s and my attitudes to the swampman hypothesis are somewhat different. i’m not so inclined to say that the swampman hypothesis is excluded by my evidence. rather, i’m inclined to say that it involves a much more complex explanation of my evidence than its denial, so that simplicity and inference to the best explanation favor its denial. it’s an interesting question how similar moves play out in the dancing qualia case. e.g. it’s open to someone like ned to argue that a neurobiological story about what determines qualia makes for a simpler and better explanation of our evidence than a low-level molecular story.

  4. A similar reductio applies: “There could be a creature functionally and rationally just like us who did not know that it had a past [because it has no past; it just came into existence] and given that we know that we have a past, this result is very implausible.” As Dave says, the logically possible creature that has no past (a “pombie”) would have different evidence from us, so its epistemic problem is not ours.

  5. Thanks again Ned.

    I take your point about the plight of the logically possible creature, but if these creatures are nomologically possible then their problems would be ours, right? As Dave says in the quote, if these things are nomological possibilities they may, for all we know, be actual. And if that were the case we would not know that we had conscious experience.

    Perhaps I should phrase it as ‘but if this were the way our world worked we would end up with the very strange result that we wouldn’t notice if our headaches were fading or popping in and out of existence right now!’

    I think that avoids the problem…

  6. A near molecular duplicate of you with no past (a “swampman”) IS nomologically possible, at least according to the quasi-ergodic hypothesis, which is (or at least used to be) widely accepted among physicists. (The idea is that even on Newtonian physics, any particular constellation of positions and velocities (minus a few odd cases) will be approached arbitrarily closely if you wait long enough.) Does that destroy YOUR knowledge of the past? I don’t think so. You have evidence that the swampman does not have, though the difference is not available “from the inside”. Skeptical scenarios just don’t destroy knowledge.

  7. Just to be clear, in this passage I am trying to phrase the arguments and positions for an intro-level philosophy textbook and not advocating the view. Overall I am probably in agreement with you since I accept a version of the identity theory…

    But I thought that Dave’s argument could rule out the skeptical scenario about the past as well. It is logically possible, but if it were the truth about our world we would have to reject a principle relating causes to their effects, or some such. So, though logically possible, it is unlikely to be true of our world.

    As I saw it, the same was meant to hold for functional isomorphs having consciousness. If we allow that dancing qualia are nomologically possible (as non-functionalists in this respect have to), then we have to reject the principle that we notice large changes in our conscious experience. So, though logically possible, it is unlikely to be true of our world.

    Does this affect my noticing of changes in my conscious experience? If Dave is right that the realizers of the functions in any given human brain actually vary, then it seems like it does. We are not talking about counterparts of me, but the actual me. I thought that Dave’s claim was that unless we accept that consciousness is a functional invariant we are forced to accept that we may in fact have dancing qualia, and that it is this claim that is very improbable. So it doesn’t seem like your response works, though I may be missing something…
