Some Cool Links

(via David Pereplyotchik)

Below are links to some examples of talks that fall well within the cognitive science arena. I’ve found, however, that many of the non-cogsci talks are more interesting, because they introduce one, often in a vivid way, to a subject matter that is less familiar. (For instance, Wade Davis’s talk on anthropological fieldwork was, for me, genuinely exciting.)

You can browse the talks by clicking on the topic links at the bottom right of each video’s page. Or just start here


David Pereplyotchik

On the Off Chance You Missed It

David Chalmers and one of his graduate students have launched MindPapers: A Bibliography in the Philosophy of Mind and the Science of Consciousness. This is a truly amazing resource as it includes all kinds of on-line papers! It is also searchable and has many other ‘capabilities’…I just hope it doesn’t one day take over the internet and steal my credit card info!!! 🙂

I think by far the best part is Part 7: Philosophy of Cognitive Science, section 3: Philosophy of Neuroscience, sub-section f: Philosophy of Neuroscience, Misc. ;^)

Implementing the Transitivity Principle

A conscious mental state, for Pete, is a complex state made up of two interacting states: a first-order sensory state that carries information about the world, and a higher-order representation that characterizes the first-order state in terms of the concepts available to the creature and that also has 'egocentric' content, which is content to the effect that the state in question belongs to the creature in question. Recently I have been arguing that theories of consciousness like Pete's, Prinz's, and Churchland's are really just implementations of the transitivity principle, in spite of the fact that they do not think they are implementing it (Is There Such a Thing as a Neurophilosophical Theory of Consciousness?).

In Ch. 5 of Pete’s book-in-progress The Subjective Brain he addresses this concern by saying the following.

Aren’t mental representations with conceptualized egocentric contents automatically implementations of the Transitivity Principle?

Nope. According to Transitivity, a state is conscious only if one is conscious of it. However, according to the theory to be further fleshed out in the next chapter, one set of mental representations that would suffice for consciousness would include the following. I have a sensational state that carries the information that, among other things, there is a coffee cup to my left which triggers the conceptualization that there is a coffee cup to my left which in turn (the conceptualization) exerts (yet to be specified) causal influences on the sensational state. What I would be conscious of, on this view, is a coffee cup as being to my left. I would not be conscious of either the sensational state or the conceptual state or their mutual causal interaction. I need not be conscious of any mental state of me. (There being a coffee cup to my left is arguably a state of me, but it is pretty clearly not one of my mental states.) Therefore, the conceptual egocentric representations that suffice for consciousness need not implement Transitivity.

Now one way of responding to this claim, and the way that is currently being debated over at Brain Hammer (Contents, Vehicles, and Transitive Consciousness and more here), is to argue, as Robert Lurz does, that I can be conscious of my mental states by being conscious of what those states represent. If this is true, then it is obvious that Pete and company are just offering an alternative way of implementing the transitivity principle. I do not want to talk about this issue here, as it is being debated at Brain Hammer and I am content to let it continue there.

What I do want to talk about is my claim that everything Pete says here is something Rosenthal can agree with, and so nothing he has said shows that there is anything wrong with transitivity or that his theory doesn’t implement it (A Tale of Two T’s). So, I was reading Ch. 4 of Consciousness and Mind, entitled ‘Introspection and Self-Interpretation’, while following up on my Introspective HOT Zombie of the previous post (more on that later) when I found this nice passage.

When one has a thought that one’s own experience visually represents a red physical object, that thought need not be in any way consciously inferential or based on theory; it might well be independent of any inference of which one is conscious. From a first person point of view, any such thought would seem unmediated and spontaneous. And it is the having of just such thoughts that makes one conscious of one’s experiences. Such a thought, moreover, by representing the experience as itself visually representing a red physical object, makes one conscious of the experience as being of the type that qualitatively represents red objects. And being an experience of that type simply is having the relevant mental quality. So, being conscious of oneself as having a sensation of that type is automatically being conscious of oneself as having a sensation with the quality of mental red, and thus of the mental quality itself. (p. 119)

This is interesting because Rosenthal seems to be arguing, in the reverse direction from Lurz, that being conscious of myself as being in a certain mental state just is being conscious of what the state represents.

So for Rosenthal it will be true that when we introspect we will be conscious of the tomato. That is, from the first-person point of view it will seem to us that we are conscious only of the properties of the tomato. How is this possible? He makes this a little clearer on the next page, where he says,

When one shifts one’s attention from the tomato to one’s visual experience of it, it does not seem, subjectively, that some new qualities arise in one’s stream of consciousness. This may well seem to underwrite Harman’s insistence that the only quality one is aware of in either case is that of the tomato. But that is too quick. As noted earlier, we can be conscious of a particular thing in particular ways. When one sees a red tomato consciously but unreflectively, one conceptualizes the quality one is aware of as a property of the tomato. So that is how one is conscious of that quality.

So again, we conceptualize the mental quality as a property of the tomato when the state is conscious, and so we are conscious of it as a property of the tomato; to us it will seem as though all we are conscious of is the property of the tomato. When we introspect, we conceptualize the quality as a property of the experience, not of the tomato. So Rosenthal can agree that what we are conscious of is the coffee cup or the tomato, and yet all the while this is just an implementation of the transitivity principle.

Is There Such a Thing as a Neurophilosophical Theory of Consciousness?

Pete has Ch. 4 of his book-in-progress up over at the Brain Hammer, entitled The Neurophilosophy of Consciousness. His stated goal is to discuss

philosophical accounts of state consciousness, transitive consciousness, and phenomenal character that make heavy use of contemporary neuroscientific research in the premises of their arguments.

This is because he defines ‘neurophilosophy’ as the bringing to bear of concepts from neuroscience to solve problems in philosophy, as he says

neurophilosophical work on consciousness proceeds largely by bringing neuroscientific theory and data to bear on philosophical questions such as the three questions of consciousness.

But it is unclear to me in what sense a theory of consciousness can be neurophilosophical at all.

For instance, here is how he characterizes Churchland’s account of what a conscious state is:

Paul Churchland articulates what he calls the “dynamical profile approach” to understanding consciousness (2002). According to the approach, a conscious state is any cognitive representation that is involved in (1) a moveable attention that can focus on different aspects of perceptual inputs, (2) the application of various conceptual interpretations of those inputs, (3) holding the results of attended and conceptually interpreted inputs in a short-term memory that (4) allows for the representation of temporal sequences.

How is this neurophilosophical? To be sure, Churchland goes on to talk about how this could be implemented in a connectionist neural architecture, but the actual theory of what a conscious state is isn’t much different from standard higher-order accounts. It involves being aware of myself as being in a certain state. Nothing neurophilosophical here! And his account of the what-it-is-like-ness just involves appeal to the representational content of sensory states; again, nothing specifically neurophilosophical about this.

The same can be said about Prinz’s AIR model, which Pete quotes a summary of,

When we see a visual stimulus, it is propagated unconsciously through the levels of our visual system. When signals arrive at the high level, interpretation is attempted. If the high level arrives at an interpretation, it sends an efferent signal back into the intermediate level with the aid of attention. Aspects of the intermediate-level representation that are most relevant to interpretation are neurally marked in some way, while others are either unmarked or suppressed. When no interpretation is achieved (as with fragmented images or cases of agnosia), attentional mechanisms might be deployed somewhat differently. They might ‘‘search’’ or ‘‘scan’’ the intermediate level, attempting to find groupings that will lead to an interpretation. Both the interpretation-driven enhancement process and the interpretation-seeking search process might bring the attended portions of the intermediate level into awareness. This proposal can be summarized by saying that visual awareness derives from Attended Intermediate-level Representations (AIRs). (p. 249)

Again, it is difficult to see how Prinz is doing anything more than discussing a possible implementation of the transitivity principle, which is not neurophilosophical. Pete does note that Prinz does not WANT his theory to be an implementation of the transitivity principle, but the challenge is to explain how it isn’t, not merely indicate that one wants it to be different.

Pete himself makes this clear in his summary of the three positions.

Churchland, Prinz, and Tye agree that conscious states are representational states. They also agree that what will differentiate a conscious representation from an unconscious representation will involve relations that the representation bears to representations higher in the processing hierarchy. For both Churchland and Prinz, this will involve actual interactions, and further these interactions will constitute relations that involve representations in processes of attention, conceptual interpretation and short term memory. Tye disagrees on the necessity of actually interacting with concepts or attention. His account is dispositional meaning that the representations need only be poised for uptake by higher levels of the hierarchy.

So, in so far as these are theories of consciousness, they are the standard ones. Now, I am not denying that these guys are neurophilosophers in the sense that Pete means; they do appeal to detailed neuroscience in the premises of their arguments. But I don’t see how the neuro stuff is supposed to be a theory of consciousness. As I have said, it looks like spelling out ways of implementing the two standard (first-order/higher-order) representational theories of consciousness.

The challenge, then, is to spell out a neurophilosophical theory of consciousness that is distinct from these standard theories, which are not themselves neurophilosophical.

On Hallucinating Pain

OK, so one more for the road…

I was recently re-reading one of Ned Block’s papers (‘Bodily Sensations as an Obstacle for Representationism’) where he denies that there is an appearance/reality distinction when it comes to pain. This is a common view to have about pain (held, for instance, by Kripke in his argument against the Identity Theory). Here is what he says:

 My color experience represents colors, or colorlike properties. (In speaking of colorlike properties, I am alluding to Sydney Shoemaker’s “phenomenal properties”  or “appearance properties” or Michael Thau’s nameless properties.) But, according to me, there is no obvious candidate for an objectively assessable property that bears to pain experience the same relation that color bears to color experience. But first, let us ask a prior question: what in the domain of pain corresponds to the tomato, namely, the thing that is red? Is it the chair leg on which I stub my toe (yet again), which could be said to have a painish or painy quality to it in virtue of its tendency to cause pain–experience in certain circumstances, just as the tomato causes the sensation of red in certain circumstances? Is it the stubbed toe itself, which we experience as aching, just as we experience the tomato as red? Or, given the fact of phantom-limb pain, is it the toeish part of the body image rather than the toe itself? None of these seems obviously better than the others.

Now if one has adopted a higher-order theory of consciousness one will think that there is indeed an appearance/reality distinction to be made here. Since it is the higher-order state, and only the higher-order state, that accounts for there being something that it is like to have a conscious pain, it follows that there is the real possibility that one may misrepresent oneself as being in pain when one is not, or as not being in pain when one is.

So it is no surprise to find David Rosenthal saying stuff like this:

Just as perceptual sensations make us aware of various physical objects and processes, so pains and other bodily sensations make us aware of certain conditions of our own bodies. In standard cases of feeling pain, we are aware of a bodily condition located where the pain seems phenomenologically to be located. It is, we say, the foot that hurts when we have the relevant pain. And in standard cases we describe the bodily condition using qualitative words, such as painful, burning, stabbing, and so forth. Descartes’s famous Sixth Meditation appeal to phantom pains reminds us that pains are purely mental states. But we need not, on that account, detach them from the bodily conditions they reveal in the standard, nonhallucinatory cases. (from Sensory Quality and the Relocation Story)

So Rosenthal seems to be saying that it is bodily conditions that play the role that the tomato does, and it is first-order states, which constitute an awareness of those conditions, that play the role that Block calls ‘representing color or colorlike properties’. If these are all distinct states, then we should expect them to come apart.

I have addressed the issue of unconscious pains in some previous posts. An unconscious pain, for Rosenthal and those like him, is a state that makes us conscious of some bodily condition and which will resemble and differ from other pain states in ways that are homomorphic to the resemblances and differences between these bodily states. But what about the other case mentioned? Is it even possible to think that one is in pain and be wrong?

Rosenthal cites what he calls ‘the dental fear phenomenon’ as evidence for this claim. Here is what he says (in the same article as before)

Dental patients occasionally report pain when physiological factors make it clear that no pain could occur. The usual explanation is that fear and the non-painful sensation of vibration cause the patient to confabulate pain. When the patient learns this explanation, what it’s like for the patient no longer involves anything painful. But the patient’s memory of what it was like before learning the explanation remains unchanged. Even when what it’s like results from confabulation, it may be no less vivid and convincing than a nonconfabulatory case.

Now, I have always felt that this dental fear stuff was a really convincing way of showing that there really is a reality/appearance distinction for pains. When I have tried to research this, though, I have not been able to find very much on it (and Rosenthal offers no citations), but it does seem to be a relatively common phenomenon. Here is an excerpt from a paper on dental fear in children that tells dentists how to deal with this:

Problems that a dentist is convinced are associated with misinterpretation of pain may be addressed by explaining the gate theory of pain. A very basic explanation which is suitable for children as young as five is as follows. ‘You have lots of different types of telephone wires called nerves going from your mouth to your brain (touch appropriate body parts). Some of them carry “ouch!” messages and the others carry messages about touch (demonstrate) and hot and cold. The sleeping potion stops the ouch messages being sent, but not the touch and the hot and cold messages. So you will still know that I am touching the tooth and you will still feel the cold of the water. Your brain looks out for messages all the time. If you are convinced that it will hurt, it will. This is because if I make the ouch nerves go off to sleep and I touch you, a touch message gets sent. But your brain is looking for ouch messages and it says to itself, ‘There’s a message coming. It must be an ouch message.’ So you go ‘ouch’ and it hurts, but all I did was to touch you. It’s just that your brain was confused.’ (The language may, of course, be adjusted for older children.) If this fails to work, then active treatment should be stopped. (from Dental Fear in Children)

This is clearly a pain hallucination, as evidenced by the fact that the way they treat it is not with more medication, but with an explanation, pitched at the kid’s level, of why what they are feeling is not pain.

Now this is very different from what is called neuropathic pain, which is pain caused by a misinterpretation of an innocuous stimulus, like touch, or pains like phantom limb pain. This is the result of one kind of stimulus, for one reason or another, causing the bodily state that gives rise to the perception of pain.

Peripheral nociceptive fibers located in tissues and possibly in the nervi nervorum can become hyperexcitable by at least 4 major mechanisms: a) nociceptor sensitization (“irritable nociceptors”); b) spontaneous ectopic activity; c) abnormal connections between peripheral fibers; and d) hypersensibility to catecholamines. This peripheral sensitization results in increased pain responses from noxious stimuli (primary hyperalgesia) and previously innocuous stimuli elicits pain (peripheral allodynia). Central nociceptive second order neurons in the spinal cord dorsal horn can also be sensitized when higher frequency inputs activate spinal interneurons. This results in the release of neuromodulators that activate glutamate receptors and voltage-gated calcium channels with a net effect of an increase of intracellular calcium that windup action potential discharges. Degeneration of peripheral nociceptive neurons may trigger changes in the properties of low-threshold sensitive neurons and axonal sprouting of the central processes of these fibers that connect with central nociceptive interneurons. (from Neuropathic Pain Treatment: The Challenge)

So it does look like we can distinguish the three states and that we do in fact find cases of one without the other.

Sheesh! That turned out to be longer than I expected… but what the hell? I’m outta here!

Swimming Vegetables? Fish, Pain, and Consciousness

There has been for some time now a debate between fishing enthusiasts and animal rights activists over whether or not fish feel pain. A recent study by scientists in Scotland has reopened this debate by claiming to have demonstrated that fish in fact do feel pain.

They claim that fish have nociceptors and a part of the brain that responds to them, which is to say that they have a pain pathway. Also, when trout had their lips stung by bees they exhibited a rocking motion that is similar to pain behavior seen in other animal species (see for a report on the study). It has already been known for some time that fish have endogenous opioids, and so it really looks like the preponderance of evidence suggests that fish do feel pain (see for a table comparing various vertebrates and invertebrates on what we take to be requirements for feeling pain). When you think about it this is what we should expect, seeing as how fish are vertebrates and all. Of course, not all fishes are vertebrates, and the study I was just talking about used trout, so when I talk about fishes I will be talking about fish like trout.

These findings are disputed by some. The standard claim made by people who want to deny that fish feel pain is that fish lack the cerebral cortex that would allow them to experience the psychological state of being in pain. Pain behavior is not enough, nor is nociception. Pain is a psychological state distinct from the awareness of tissue damage. The problem with this response is that it is not the case that trout have no cerebral cortex at all, but rather that they have a very primitive one. Their cortex is so simple, in fact, that it does not require a thalamus to relay information to it but rather is directly hooked up to the sensory neurons. Thus we cannot conclude that they do not have pains at all, but only that they have some primitive form of pain.

Also, notice that the question ‘do fish feel pain?’ is an empirical question, not a philosophical one, and both parties recognize that it depends on the particular brain structures that fish have. This supposes that we can tell, by looking at the brain of the fish, whether or not it experiences pain. Notice also, though, that this objection assumes that something is not a pain unless it is felt as painful by the organism that has it, that is, unless it is a conscious pain. So, for example, consider a fish like a trout except that its nociceptors are not connected to the brain. This fish will be in the very same states as the one who does have this connection. They will even behave in all the same ways, because the brain stem and spinal cord are where most of the action in fish occurs anyway. If the higher-order theory turns out to be right, then the way to characterize this situation is as one where the latter fish has an (in principle) unconscious pain.

This brings out three important points. 1. It is likely that some fish do have conscious pains, and therefore there is reason for thinking that sport fishing is immoral, and that eating fish is as immoral, or moral, as eating other kinds of animals. 2. Fish look like good candidates for helping us to empirically test the higher-order theory of consciousness. And 3. It raises an interesting question for utilitarians: do unconscious pains matter? Is it wrong to torture a zombie?

Brain Reading, Brain States, and Higher-order Thoughts

Recently there has been a lot of progress in brain reading; for instance, here is a nice piece done by CNN, here is a nice article on brain reading video games, and here is a link to Frank Tong’s lab, which may be familiar to those who regularly attend the ASSC or the Tucson conferences. This stuff is important to me because it will ultimately help to solve the empirical question of whether or not animals, or for that matter we, have the higher-order states necessary to implement the higher-order strategy for Explaining What It’s Like, so I am very encouraged by this kind of progress. The technology involved is mostly fMRI, though in the video game case it is scalp EEG. But though this stuff is encouraging, fMRI and scalp EEG are the wrong tools for decoding neural representation, or so I argued in my paper “What is a Brain State?” (2006) Philosophical Psychology 19(6) (which I introduced over at Brains a while ago in my post Brain States Vs. States of the Brain). Below is an excerpt from that paper where I introduce an argument from Tom Polger’s (2004) book Natural Minds and elaborate on it a bit.

Polger argues that thinking

that an fMRI shows how to individuate brain states would be like thinking that the identity conditions for cricket matches are to pick out only those features that, statistically, differentially occur during all the cricket games of the past year. (p 56)

The obvious difficulty with this is that it leaves out things that may be important for cricket matches but unique (injuries, unusual plays (p 57)), as well as including things that are irrelevant to them (number of fans, snack-purchasing behavior (ibid)). The same problems hold for fMRIs: they may include information that is irrelevant and exclude information that is important but unusual. Irrelevant information may be included because fMRIs show brain areas that are statistically active during a task, while they may exclude relevant information because researchers subtract out patterns of activation observed in control images.
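The worry about subtraction can be made concrete with a toy sketch. The numbers below are entirely made up for illustration; the point is just the logic of subtraction analysis: any area active in both the task and the control condition drops out of the difference map, even if it matters for the task.

```python
import numpy as np

# Toy "activation maps" over 4 voxels (hypothetical numbers, purely illustrative).
task    = np.array([5.0, 4.0, 1.0, 3.0])  # activity during the task
control = np.array([1.0, 4.0, 1.0, 0.5])  # activity during the control condition

# Standard subtraction analysis keeps only the task-minus-control differences.
difference = task - control  # [4.0, 0.0, 0.0, 2.5]

# Voxel 1 is strongly active in BOTH conditions, so subtraction zeroes it out,
# even though it may be doing work that matters for the task.
shared_but_discarded = (task > 2) & np.isclose(difference, 0)
print(difference.tolist())            # [4.0, 0.0, 0.0, 2.5]
print(shared_but_discarded.tolist())  # [False, True, False, False]
```

This is just Polger's cricket-match point in miniature: the subtraction keeps what statistically differentiates the conditions, not necessarily what is doing the explanatory work.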

I would add that at most what we should expect from fMRI images are pictures of where the brain states we are interested in can be found, not pictures of the brain states themselves. They tell us that there is something in THAT area of the brain that would figure in an explanation of the task, but they don’t offer us any insight into what that mechanism might be. Knowing that a particular area of the brain is (differentially) active does not allow us to explain how the brain performs the function we associate with that brain area. We need to know more about the activity. Consider an analogy: we have a simple water pump and want to know how it works. We know that pumping the handle up and down gets the water flowing, but ‘activity in the handle area’ does not explain how the pump works. Finding out that the handle is active every time water flows out of the pump would lead us to examine the handle with an eye towards trying to see how and why moving it pumps the water.

And, as I go on to argue, after examining those areas to find what the actual mechanisms are, neuroscience suggests that it is synchronized neural activity in a specific frequency that codes for the content, both perceptual and intentional, of brain states. So, multi-unit recording technology (recording from several different neurons in the brain at the same time) is the right kind of technology for looking at brain states. This is not to say, of course, that the fMRI and EEG technology is not valuable and useful. It is, and we can learn a lot about the brain from studying it, but it must be acknowledged that it is ultimately, explanatorily, useless. To find higher-order thoughts or perceptions we will need to use advanced multi-unit recordings.
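To give a rough sense of what the multi-unit approach looks for, here is a minimal sketch, with simulated spike trains rather than real recordings. One crude measure of synchrony between two simultaneously recorded neurons is the correlation between their binned spike trains at zero lag (real analyses are far more sophisticated, of course, using cross-correlograms and frequency-band measures):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binned spike trains for simultaneously recorded neurons
# (1 = a spike in that time bin).
driver = rng.integers(0, 2, size=200)

# A "synchronized" partner fires in the same bins as the driver about
# 90% of the time; an "independent" partner fires at random.
synchronized = np.where(rng.random(200) < 0.9, driver, 1 - driver)
independent = rng.integers(0, 2, size=200)

def zero_lag_corr(a, b):
    """Pearson correlation at zero lag -- a crude synchrony index."""
    return float(np.corrcoef(a, b)[0, 1])

print(zero_lag_corr(driver, synchronized))  # high (well above chance)
print(zero_lag_corr(driver, independent))   # near zero
```

The idea is that a synchrony measure like this is defined over the joint activity of specific neurons, which is exactly the kind of information that a region-level fMRI image averages away.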