Over at Brains…

In case anyone here doesn’t know, I am also a contributor to Brains, a group blog in the philosophy of mind, psychology, and cognitive science. If anyone is interested, here are some links to my posts there:

1. When Platitudes Collide

2. The Qualitative Character of Conscious Thoughts

3. Do Horses Sense Their Riders’ Desires?

4. Brain States Vs. States of the Brain

5. Multiple Realization Vs. Multiple Instantiation

6. Just Like Home

50th Philosophers’ Carnival

Welcome to the July 16, “Dog Days of Summer ’07” edition of the philosophers’ carnival.

The theme, as advertised, is: Mind, Meaning and Morals. I hope you find some interesting articles below and manage to avoid work for a little while longer 🙂

MIND

Ivana Simic addresses an issue in modal epistemology introduced by Crispin Wright in The Cautious Man Problem posted at Florida Student Philosophy Blog.

Gualtiero Piccinini asks Two Questions About the Origins of Connectionism posted at Brains.

Avery Archer examines a classic debate in 20th Century Analytic philosophy in Naturalised Epistemology: Quine vs. Stroud posted at The Space of Reasons.

Tanasije Gorgoski tries to figure out what in the hell philosophers are talking about when they talk about experience in The Meaning of ‘Experience’ posted at A brood comb.

Thad Guy gives us another classic philosophy cartoon: Witness My Power and Be Awed posted at Thad Guy.

MEANING

Jason Kuznicki presents Open Society IV: That Which Melts Into Air posted at Positive Liberty, saying, “I’m reading Karl Popper’s The Open Society and Its Enemies, as well as much of the supporting philosophy. Along the way, I’m blogging my observations.”

For some reason I recently had a discussion about what it meant to be an American and who the greatest American was. Well, after reading Charles Modiano’s History’s Hit Job on Thomas Paine, posted at CLEAN OUR HOUSE! – Killing the Bigotry in all of US, I say Thomas Paine is a strong candidate!

Richard Brown continues to pit the pragmatic thesis of frigidity against the semantic thesis of rigidity, and to argue for the superiority of frigidity both theoretically and in capturing the spirit of Kripke’s picture, in Logic, Language, and Existence posted at Philosophy Sucks!

MORALS

Brian Berkey argues that the demandingness of ethics is not an objection to an ethical theory in What is a Moral Demand? posted at Philosophy from the Left Coast.

Steve Gimbel asks When Is Good Enough, Good Enough? posted at Philosophers’ Playground, saying, “Most classical ethical theories include some sort of maximization notion in the definition of moral rightness. This post asks Susan Wolf’s question, “isn’t there some point where an act is morally good enough?””

Rebecca Roache reflects on the lessons that debates in ethics can take from Hempel in Hempel’s Dilemma and Human Nature posted at Ethics Etc.

David Hunter continues his examination of The Human Tissue Act: When should applications to not require consent be approved? posted at Philosophy and Bioethics.

Matt Brown suggests that the thought experiments employed in our introductory courses on ethics may be doing more harm than good in cooked up thought experiments and the viciousness of ethics posted at Weitermachen!

Enigman wonders who the moral experts are in Physics and Ethics posted at Enigmania.

And finally, Thom Brooks invites you to look at the introduction to his book The Global Justice Reader posted at The Brooks Blog.

That concludes this edition. Submit your blog article to the next edition of philosophers’ carnival using our carnival submission form. Past posts and future hosts can be found on our blog carnival index page.


That’s Not an Argument

Mandik seems to think that if he deletes my arguments and threatens to ban me from his blog then he doesn’t have to address the issues…that’s fine. But there is an issue to address that I think is worth spelling out, as I think it generalizes a bit.

So, then what exactly is the disagreement?

In some other posts (Implementing the Transitivity Principle, and Is There Such a Thing as a Neurophilosophical Theory of Consciousness?) I have been arguing that theories like Mandik’s (which include, I think, people like Churchland and Prinz as well) must really be implementing some actual theory about what conscious states are, rather than giving a distinctly new kind of theory. The notion of what a conscious state is seems to be somehow theoretically/conceptually prior to neural investigation.

So take Mandik’s version of the theory. On his view you have a sensory state which carries information about the outside world and which triggers an ‘egocentric conceptual’ state. An egocentric conceptual state is a state that has two kinds of content. It has objective, third-person concepts and concepts that single out the thinker as the one having the experience. When these two states start to causally interact a conscious state is born. So, as an example, take me having a conscious experience of seeing a red ball. Then I have a sensory state which carries information about the red ball (whatever that means…it probably means something like ‘there are properties that the objects in the world have and there are cells in the retina, LGN, and V1, etc., which are ‘tuned’ to those properties’, and then there is a causal story about how those physical objects cause those things that are tuned to them to operate) and that sensory state triggers an egocentric conceptual state, perhaps something like ‘there is a red ball in that direction from me’ or perhaps more simply, ‘red ball in front of me’. These two states start to causally interact (whatever that means) and then, for some unexplained reason, there is a conscious seeing of a red ball.

Now when one hears this, one naturally sees a lot of parallels with Rosenthal’s version of higher-order thought theory. On that theory there is a first-order state which has qualitative properties; these properties represent the perceptible properties of physical objects. So on both views we have some states, and those states are in the business of representing physical properties. Rosenthal wants to call the properties of the sensory states ‘sensory qualities’ or ‘qualitative properties’ and Mandik doesn’t, but other than that purely terminological point there is no difference at this point. Ok, so then the first-order sensory state ‘triggers’ (not really for Rosenthal, but let’s let that slide since Pete and I can both agree on it) a higher-order thought. The higher-order thought is something like ‘I am, myself, seeing a red ball’. The person is now conscious of themselves as seeing a red ball and so is consciously seeing a red ball.

So then one might be tempted to say ‘so, Mandik, your conceptualized egocentric states sound a lot like higher-order thoughts’, since they are basically states that conceptually characterize the sensory state. So, though it is not exactly Rosenthal’s higher-order theory, it is a theory that implements the transitivity principle, and that takes care of the mystery about why having these states causally interact results in a conscious state. To which Mandik responds, ‘no, if they were higher-order thoughts then I would be conscious of the first-order sensory state, but on my theory I am only conscious of the red ball’. Mandik seems to think that that constitutes an argument that his view doesn’t implement transitivity (well, to be fair, I guess you need the premise ‘and I am only conscious of the red ball’ to make it an argument). I claim that it is something that itself needs an argument; it is the thing that you need to establish via an argument, and so it is not an argument that his view doesn’t implement transitivity, for the following reasons.

First, according to Rosenthal’s version of the theory it will in fact seem to you as though all you are aware of is the red ball, because the higher-order thought conceptualizes the first-order state as having properties that belong to the tomato. But you become so conscious by being conscious of the first-order state. For it is the properties of the first-order state that you are conceptualizing! How could we conceptualize the object itself? We would not be conscious of the object if we did not have some first-order state that represented it. So to simply say ‘oh, on my view I am only conscious of the tomato, so it is not a higher-order view’ is to beg the question at hand. One needs to explain how it is that the egocentric conceptual state gets to be about the red ball without utilizing the sensory state. This is what I meant when I said that Mandik had not given an argument. He has not given an account of how it is that his ‘but I am only conscious of the tomato’ is true in a way that isn’t the above way.

And we do sometimes have the kinds of higher-order thoughts that Rosenthal affirms and Mandik denies. So for instance, I might think ‘I am having a really bad migraine right now’ (I am thinking about my experience), or ‘I felt really nauseous last night’ (I am thinking about when the experience occurred), or, looking at a stick in some water, ‘the way that stick looks is not the way that it is’ (I am thinking about the experience as different from reality)…We do think about vehicular properties of the experience (in Mandik’s terms). The claim that Rosenthal makes is that we are actually doing it all the time, though we are not conscious of doing it. So again Mandik needs to spell out how his view isn’t the above kind of view, which he hasn’t done.

When you point out that his evidence isn’t really evidence, and that without it he is simply insisting that his view is different without justification (again, how else could it work if not in the way talked about above?), and that it would therefore be nice if he could be explicit and give an argument as to why it is different (that is, answer the above question), he refuses to answer. Now, I didn’t think that was such a big deal…in fact I was hoping that he would just give me an argument that showed me how I was wrong…But he didn’t…instead he deleted the post where I made the case just as I did above and threatened to ban me. Surely this is Cyber Sophistry!

Implementing the Transitivity Principle

A conscious mental state, for Pete, is a complex state made up of two interacting states: one a first-order sensory state that carries information about the world, and the other a higher-order representation that characterizes the first-order state in terms of the concepts available to the creature and that also has ‘egocentric’ content, which is content to the effect that the state in question belongs to the creature in question. Recently I have been arguing that theories of consciousness like Pete’s, Prinz’s, and Churchland’s are really just implementations of the transitivity principle, even though they do not think that they are implementing it (Is There Such a Thing as a Neurophilosophical Theory of Consciousness?).

In Ch. 5 of Pete’s book-in-progress The Subjective Brain he addresses this concern by saying the following.

Aren’t mental representations with conceptualized egocentric contents automatically implementations of the Transitivity Principle?

Nope. According to Transitivity, a state is conscious only if one is conscious of it. However, according to the theory to be further fleshed out in the next chapter, one set of mental representations that would suffice for consciousness would include the following. I have a sensational state that carries the information that, among other things, there is a coffee cup to my left which triggers the conceptualization that there is a coffee cup to my left which in turn (the conceptualization) exerts (yet to be specified) causal influences on the sensational state. What I would be conscious of, on this view, is a coffee cup as being to my left. I would not be conscious of either the sensational state or the conceptual state or their mutual causal interaction. I need not be conscious of any mental state of me. (There being a coffee cup to my left is arguably a state of me, but it is pretty clearly not one of my mental states.) Therefore, the conceptual egocentric representations that suffice for consciousness need not implement Transitivity.

Now one way of responding to this claim, and the way that is currently being debated over at Brain Hammer (Contents, Vehicles, and Transitive Consciousness and more here), is to argue, as Robert Lurz does, that I can be conscious of my mental states by being conscious of what those states represent. If this is true then it is obvious that Pete and company are just offering an alternative way of implementing the transitivity principle. I do not want to talk about this issue here, as it is being debated at Brain Hammer and I am content to let it continue there.

What I do want to talk about is the claim that I have made that everything that Pete says is something that Rosenthal can agree with, and so nothing that he has said shows that there is anything wrong with transitivity or that his theory doesn’t implement it (A Tale of Two T’s). So, I was reading Ch. 4 of Consciousness and Mind, entitled ‘Introspection and Self-Interpretation’, while following up on my Introspective HOT Zombie of the previous post (more on that later) when I found this nice passage.

When one has a thought that one’s own experience visually represents a red physical object, that thought need not be in any way consciously inferential or based on theory; it might well be independent of any inference of which one is conscious. From a first person point of view, any such thought would seem unmediated and spontaneous. And it is the having of just such thoughts that makes one conscious of one’s experiences. Such a thought, moreover, by representing the experience as itself visually representing a red physical object, makes one conscious of the experience as being of the type that qualitatively represents red objects. And being an experience of that type simply is having the relevant mental quality. So, being conscious of oneself as having a sensation of that type is automatically being conscious of oneself as having a sensation with the quality of mental red, and thus of the mental quality itself. (p. 119)

This is interesting because Rosenthal seems to be arguing, in the reverse of Lurz, that being conscious of my self as being in a certain mental state just is being conscious of what the state represents.

So for Rosenthal it will be true that when we introspect we will be conscious of the tomato. That is, from the first person point of view it will seem to us that we are conscious only of the properties of the tomato. How is this possible? He makes this a little clearer on the next page where he says,

When one shifts one’s attention from the tomato to one’s visual experience of it, it does not seem, subjectively, that some new qualities arise in one’s stream of consciousness. This may well seem to underwrite Harman’s insistence that the only quality one is aware of in either case is that of the tomato. But that is too quick. As noted earlier, we can be conscious of a particular thing in particular ways. When one sees a red tomato consciously but unreflectively, one conceptualizes the quality one is aware of as a property of the tomato. So that is how one is conscious of that quality.

So again, we conceptualize the mental quality as a property of the tomato when the state is conscious, and so we are conscious of it as a property of the tomato; to us it will seem as though all we are conscious of is the property of the tomato. When we introspect we conceptualize the quality as a property of the experience, not of the tomato. So Rosenthal can agree that what we are conscious of is the coffee cup or the tomato, and yet all the while this is just an implementation of the transitivity principle.

Varieties of Higher-Order Zombie

A philosophical zombie is supposed to be a creature that is functionally/physically identical to you but which lacks qualitative consciousness. Although there is something that it is like for you to drink orange juice just after brushing your teeth, there is nothing that it is like for your zombie twin to do the same thing. Of course there is a huge debate over whether these things are really possible or not and if so what they show about consciousness. I don’t really want to get into this traditional problem (my own view is that the answers are ‘no’ and ‘nothing’), but rather want to discuss some kinds of higher-order zombies.

Disregarding the ‘functionally/physically identical’ bit, a zombie on the higher-order theory of consciousness is a creature that has all of my first-order states but none of my higher-order states. There will be nothing that it is like for this creature to have any of its mental states, even though he and I will be pretty much behaviorally indistinguishable (since conscious mental states have very little function on the higher-order theory (but not ‘no function’, as I argued in The Function of Consciousness in Higher-Order Theories)).

I was recently reading Rosenthal’s Metacognition and Higher-Order Thoughts, which is a response to several commentaries on his 2000 Consciousness & Cognition piece. In it Rosenthal addresses the possibility of a HOT zombie, which is a creature “whose inner life is subjectively indistinguishable from ours despite the lack of sensory states.” A HOT zombie is a creature who has all of my higher-order states but none of my first-order states. This is, of course, a radical version of the objection from the ’empty HOT’ and while it is wildly implausible, it is a theoretical possibility and so something must be said about it.

Now some may find the possibility of a HOT zombie to be paradoxical (in fact one of the commentators does). Rosenthal’s response to this is his usual one. He says,

[T]he intuitive paradox rests on an ambiguity in ‘sensory state.’ The sensory states the HOT zombie would lack are only nonconscious states. Since conscious states are states one is conscious of oneself as being in, notional states are all that matter for the purposes of consciousness.

So my HOT zombie twin and I will have indistinguishable conscious experience but, as Rosenthal notes, we will behave in very different ways. This is because the first-order states that the HOT zombie lacks are the states that have most of the causal efficacy.

Now this is all very interesting in its own right (but I don’t want to discuss it now…Pete and I have argued over this stuff before, like here), but last night, as I was introspecting while listening to some live jazz music, I started thinking about another kind of higher-order zombie: an introspective HOT zombie. Introspection, on the higher-order theory, is the occurrence of a suitable higher-order state that is about one’s higher-order states. A conscious experience occurs when one is conscious of oneself as being in a certain first-order state, and in introspection one becomes conscious of oneself as being in a certain higher-order state. Since introspection is simply the occurrence of some third-order state about my second-order states, all of the issues about misrepresentation come up again at this higher level.

So we could (theoretically) have a creature who lacked all of my first-order states and all of my second-order states but which had all of my third-order states. This is the introspective HOT zombie. This creature has no conscious states even though it seems to him as though he does. When I see red I will be conscious of the red and conscious of myself as seeing red, and were I to introspect I would be conscious of myself as being conscious of myself as seeing red; but the introspective HOT zombie is just conscious of itself as being conscious of itself as seeing red. What will it be like for this creature? It will be like consciously and introspectively seeing red.

As if this wasn’t bizarre enough, we could (again theoretically) have a case of a creature who had a first-order state that was a seeing of red and that had a HOT misrepresenting this first-order state as a seeing of green. What it is like for this creature to have the first-order state will be like seeing green, so it will be like seeing green for this creature. Now suppose that this creature introspects its conscious mental states and (for some reason) has a third-order state that represents the second-order state as a seeing of red (that is, it accidentally gets things right). What will it be like for this creature? Are we to say that this creature is conscious of itself as seeing red and not conscious of itself as seeing red? That what it is like for this creature is like seeing red and not seeing red?

I will have to think about this some more…

Is There Such a Thing as a Neurophilosophical Theory of Consciousness?

Pete has Ch. 4 of his book-in-progress up over at the Brain Hammer, entitled The Neurophilosophy of Consciousness. His stated goal is to discuss

philosophical accounts of state consciousness, transitive consciousness, and phenomenal character that make heavy use of contemporary neuroscientific research in the premises of their arguments.

This is because he defines ‘neurophilosophy’ as the bringing to bear of concepts from neuroscience to solve problems in philosophy, as he says

neurophilosophical work on consciousness proceeds largely by bringing neuroscientific theory and data to bear on philosophical questions such as the three questions of consciousness.

But it is unclear to me in what sense a theory of consciousness can be neurophilosophical at all.

For instance, here is how he characterizes Churchland’s account of what a conscious state is:

Paul Churchland articulates what he calls the “dynamical profile approach” to understanding consciousness (2002). According to the approach, a conscious state is any cognitive representation that is involved in (1) a moveable attention that can focus on different aspects of perceptual inputs, (2) the application of various conceptual interpretations of those inputs, (3) holding the results of attended and conceptually interpreted inputs in a short-term memory that (4) allows for the representation of temporal sequences.

How is this neurophilosophical? To be sure, Churchland goes on to talk about how this could be implemented in a connectionist neural architecture, but the actual theory of what a conscious state is isn’t much different from standard higher-order accounts. It involves being aware of myself as being in a certain state. Nothing neurophilosophical here! And his account of the what-it-is-like-ness just involves appeal to the representational content of sensory states; again, nothing specifically neurophilosophical about this.

The same can be said about Prinz’s AIR model, of which Pete quotes a summary:

When we see a visual stimulus, it is propagated unconsciously through the levels of our visual system. When signals arrive at the high level, interpretation is attempted. If the high level arrives at an interpretation, it sends an efferent signal back into the intermediate level with the aid of attention. Aspects of the intermediate-level representation that are most relevant to interpretation are neurally marked in some way, while others are either unmarked or suppressed. When no interpretation is achieved (as with fragmented images or cases of agnosia), attentional mechanisms might be deployed somewhat differently. They might ‘‘search’’ or ‘‘scan’’ the intermediate level, attempting to find groupings that will lead to an interpretation. Both the interpretation-driven enhancement process and the interpretation-seeking search process might bring the attended portions of the intermediate level into awareness. This proposal can be summarized by saying that visual awareness derives from Attended Intermediate-level Representations (AIRs). (p. 249)

Again, it is difficult to see how Prinz is doing anything more than discussing a possible implementation of the transitivity principle, which is not neurophilosophical. Pete does note that Prinz does not WANT his theory to be an implementation of the transitivity principle, but the challenge is to explain how it isn’t, not merely indicate that one wants it to be different.

Pete himself makes this clear in his summary of the three positions.

Churchland, Prinz, and Tye agree that conscious states are representational states. They also agree that what will differentiate a conscious representation from an unconscious representation will involve relations that the representation bears to representations higher in the processing hierarchy. For both Churchland and Prinz, this will involve actual interactions, and further these interactions will constitute relations that involve representations in processes of attention, conceptual interpretation and short term memory. Tye disagrees on the necessity of actually interacting with concepts or attention. His account is dispositional, meaning that the representations need only be poised for uptake by higher levels of the hierarchy.

So, insofar as these are theories of consciousness, they are the standard ones. Now, I am not denying that these guys are neurophilosophers in the sense that Pete means; they do appeal to detailed neuroscience in the premises of their arguments. But I don’t see how the neuro stuff is supposed to be a theory of consciousness. As I have said, it looks like spelling out ways of implementing the two standard (first-order/higher-order) representational theories of consciousness.

The challenge, then, is to spell out a neurophilosophical theory of consciousness that is distinct from these standard theories, which are not themselves neurophilosophical.

Consciousness is Not a Relational Property

I’m Back! At least for the next five days until I go to Vegas for the ASSC on Friday for some more HOT Fun in the Summertime!

Wow, what a trip!!! Toronto is much nicer than I thought it would be, and the East Coast is truly beautiful this time of year (the highlight for me was the saltwater pool in Kennebunkport…almost like being in the ocean in Hawaii, or Jamaica or something, nice!)…but it is good to be back in Brooklyn…

Anyways, here is the passage from p. 211 of Consciousness and Mind that I mentioned in the previous post (Consciousness, Relational Properties, and Higher-Order Theories):

Since there can be something it’s like for one to be in a state with particular mental qualities even if no such state occurs, a mental state’s being conscious is not strictly speaking a relational property of that state. A state’s being conscious consists in its being a state one is conscious of oneself as being in. Still, it is convenient to speak loosely of the property of a state’s being conscious as relational so as to stress that it is in any case not an intrinsic property of mental states.

’nuff said? This is the real reason that Rosenthal’s view is not targeted by objections like Pete Mandik’s Unicorn argument, or the common objection from the possibility of the HOT occurring in the absence of the first-order state, or, as I argued, from Uriah’s charge that higher-order theories, like Rosenthal’s, that claim that the first-order state does not acquire a new property (i.e. that of being a conscious state) are committed to the claim that consciousness is epiphenomenal.

I agree that the confusion is due mostly to Rosenthal’s ‘loose way of speaking’ and his reluctance to disabuse people of this intuitive picture of the higher-order thought theory. This is at least in part because this way of thinking of the theory agrees better with our common sense conception of how things like this should work. This, as I have already said, is yet another reason to prefer K-HOTs to Q-HOTs.