Pete has Ch. 4 of his book-in-progress up over at the Brain Hammer, entitled The Neurophilosophy of Consciousness. His stated goal is to discuss
philosophical accounts of state consciousness, transitive consciousness, and phenomenal character that make heavy use of contemporary neuroscientific research in the premises of their arguments.
This is because he defines ‘neurophilosophy’ as bringing concepts from neuroscience to bear on problems in philosophy; as he says,
neurophilosophical work on consciousness proceeds largely by bringing neuroscientific theory and data to bear on philosophical questions such as the three questions of consciousness.
But it is unclear to me in what sense a theory of consciousness can be neurophilosophical at all.
For instance, here is how he characterizes Churchland’s account of what a conscious state is:
Paul Churchland articulates what he calls the “dynamical profile approach” to understanding consciousness (2002). According to the approach, a conscious state is any cognitive representation that is involved in (1) a moveable attention that can focus on different aspects of perceptual inputs, (2) the application of various conceptual interpretations of those inputs, (3) holding the results of attended and conceptually interpreted inputs in a short-term memory that (4) allows for the representation of temporal sequences.
How is this neurophilosophical? To be sure, Churchland goes on to talk about how this could be implemented in a connectionist neural architecture, but the actual theory of what a conscious state is isn’t much different from standard higher-order accounts. It involves being aware of myself as being in a certain state. Nothing neurophilosophical here! And his account of the what-it-is-like-ness just involves appeal to the representational content of sensory states; again, nothing specifically neurophilosophical about this.
The same can be said about Prinz’s AIR model, of which Pete quotes a summary:
When we see a visual stimulus, it is propagated unconsciously through the levels of our visual system. When signals arrive at the high level, interpretation is attempted. If the high level arrives at an interpretation, it sends an efferent signal back into the intermediate level with the aid of attention. Aspects of the intermediate-level representation that are most relevant to interpretation are neurally marked in some way, while others are either unmarked or suppressed. When no interpretation is achieved (as with fragmented images or cases of agnosia), attentional mechanisms might be deployed somewhat differently. They might ‘‘search’’ or ‘‘scan’’ the intermediate level, attempting to find groupings that will lead to an interpretation. Both the interpretation-driven enhancement process and the interpretation-seeking search process might bring the attended portions of the intermediate level into awareness. This proposal can be summarized by saying that visual awareness derives from Attended Intermediate-level Representations (AIRs). (p. 249)
Again, it is difficult to see how Prinz is doing anything more than discussing a possible implementation of the transitivity principle, which is not neurophilosophical. Pete does note that Prinz does not WANT his theory to be an implementation of the transitivity principle, but the challenge is to explain how it isn’t, not merely to indicate that one wants it to be different.
Pete himself makes this clear in his summary of the three positions.
Churchland, Prinz, and Tye agree that conscious states are representational states. They also agree that what will differentiate a conscious representation from an unconscious representation will involve relations that the representation bears to representations higher in the processing hierarchy. For both Churchland and Prinz, this will involve actual interactions, and further these interactions will constitute relations that involve representations in processes of attention, conceptual interpretation, and short-term memory. Tye disagrees on the necessity of actually interacting with concepts or attention. His account is dispositional, meaning that the representations need only be poised for uptake by higher levels of the hierarchy.
So, insofar as these are theories of consciousness, they are the standard ones. Now, I am not denying that these guys are neurophilosophers in the sense that Pete means; they do appeal to detailed neuroscience in the premises of their arguments. But I don’t see how the neuro stuff is supposed to be a theory of consciousness. As I have said, it looks like spelling out ways of implementing the two standard (first-order/higher-order) representational theories of consciousness.
The challenge, then, is to spell out a neurophilosophical theory of consciousness that is distinct from these standard theories, which are not themselves neurophilosophical.