Gary and Jerry

I have been working on my paper ‘Consciousness, Higher-Order Thoughts, and What It’s Like’, which I will be presenting in a couple of weeks; parts of it have appeared here and over at Brains. I was reading through it today and something interesting occurred to me. It has been a project of mine for a while now to show that all and only mental states have qualitative properties, and so that the qualitative is the mark of the mental. To that end I have been developing a model of the propositional attitudes that treats the mental attitudes as a distinctive way of feeling about some represented proposition (I give an introduction to the account in my award-winning 😉 paper The Mark of the Mental).

In the current paper I am trying to show that one prominent theory of consciousness requires that thoughts be modeled as qualitative states, and that this view, which I have independently worked out, fits very nicely with the higher-order account. But I am also interested in ways of getting people to see that they already think that the attitude of belief has a distinctive qualitative feel. I point out what I think are good ways of seeing this in the paper, one of which is an intuition pump that Alvin Goldman came up with in his 1993 paper “The Psychology of Folk Psychology”. Here is what I say.

Goldman offers us a nice intuition pump. Imagine a Mary-like thought experiment with a super-scientist called Gary. Gary has never had a desire; now imagine that he suddenly does have one. Won’t he have learned something new? Namely, won’t he now know what it is like for him to have a desire? It seems to me that this suggests that there is a qualitative aspect to this mental attitude. But what about beliefs?

What occurred to me was a way to extend Goldman’s intuition pump to the case of beliefs. Given that we think that there could be unconscious beliefs, consider the following super-scientist, Jerry. Imagine that Jerry has been raised in a special room, much like Mary and Gary, but instead of never seeing red (Mary) or never having a desire (Gary), Jerry has never had a conscious belief. He has had plenty of unconscious beliefs, but none of them have been conscious. Let us imagine that we have finally discovered the difference between conscious and unconscious beliefs and that we have fitted Jerry with a special implant that keeps all of his beliefs unconscious, no matter how much he introspects. Let us also imagine that this device is selective enough that it affects only his beliefs, so Jerry has plenty of other conscious experiences. He consciously sees red, has pain, wants food, fears that he will be let out of his room one day, wonders what the molecular structure of Einsteinium is, etc.

Now imagine that one of Jerry’s occurrent, unconscious beliefs suddenly becomes a conscious belief. For the first time in Jerry’s life he has a conscious belief. Won’t he learn something new? Won’t he learn what it is like for him to have the belief that he has always had? Doesn’t this suggest that it is part of what we ordinarily think about beliefs that they are qualitative states? Consider a Jerry-like Mary experiment. Let us suppose that Mary has never had a conscious experience of red, though she has had all kinds of unconscious red experiences and all kinds of other conscious experiences (perhaps, though, no conscious color experiences?). Now imagine that an unconscious, occurrent experience of red suddenly becomes conscious… it seems to me that these two cases are identical.

HOT Fun in the Summertime 2

Given that higher-order theories of consciousness are committed to the claim that there are unconscious sensory states (like pains, seeings of red, etc.) and that such unconscious states are not like anything for the creature that has them, they need a way to identify the sensory qualitative properties independently of our access to those properties (i.e. independently of their being conscious). This is where homomorphism theory comes in.

Rosenthal begins by noting that we characterize our sensory qualities in terms of their resemblances and differences within families of properties. These families of properties are in turn specified by reference to the perceptible properties of things in the world. For example, we can characterize red as more similar to pink than to brown, and so on, and these resemblances and differences are homomorphic to the resemblances and differences among the family of perceptible properties (presumably wavelength-reflectance properties) that give rise to the mental qualities. What we get from doing this systematically is a ‘quality space’ which is homomorphic to the quality space of the perceptible properties. Our being aware of the qualitative properties of sensory states explains how it is that we have mental access to the perceptible properties. An unconscious pain state, then, will be one that resembles and differs from other pain states in ways that are homomorphic to a family of perceptible properties, and via which we gain mental access to those properties. Though there may be other ways to independently specify the qualitative properties, all higher-order theories need some way to do it, and homomorphism theory looks promising. It is, at the very least, an illustration that it can be done. How can we extend this to cover the requirement that there is something that it is like for a creature to have a conscious thought?
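To make the structure-preserving idea a bit more concrete, here is a minimal formal sketch of the homomorphism claim; the notation is mine, not Rosenthal’s, and it idealizes resemblance as a three-place ‘more similar to’ relation:

```latex
% A minimal sketch of the homomorphism claim (notation mine, not Rosenthal's).
% Q = the family of mental qualities (mental red, mental pink, mental brown, ...)
% P = the family of perceptible properties (red surfaces, pink surfaces, ...)
% S_Q(a,b,c): within Q, a resembles b more than it resembles c; S_P likewise for P.
\exists\, h : Q \to P \quad \text{such that} \quad
  \forall a, b, c \in Q : \; S_Q(a,b,c) \leftrightarrow S_P\big(h(a), h(b), h(c)\big)
% E.g., mental red resembles mental pink more than it resembles mental brown
% iff red surfaces resemble pink surfaces more than they resemble brown surfaces.
```

So the mental qualities are specified not one by one, but by their position in a space whose structure mirrors the structure of the perceptible properties.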

I have elsewhere argued (The Qualitative Character of Conscious Thoughts) that the propositional attitudes can be modeled as taking some specific mental attitude towards some represented proposition, and that the mental attitude just is some particular way of feeling about the represented proposition. So, for instance, having a belief consists in feeling convinced; that is, it is the subjective feeling of certainty that one has with respect to the truth of the represented proposition. This model of the propositional attitudes actually fits very nicely with homomorphism theory. In the sensory case we become aware of the sensory qualities, which are the properties that mental states have in virtue of which they resemble and differ from each other, and whose resemblances and differences are homomorphic to the resemblances and differences that hold between the family of perceptible worldly properties. Our being conscious of these properties explains how it is that we have mental access to colors. So too in thought: we become conscious of the cognitive qualities and this gives us access to our thoughts. To have a conscious belief is to be conscious of oneself as having a certain cognitive quality with respect to some content. And these cognitive qualities (that is, the mental attitudes themselves) will stand in various patterns of resemblances and differences from each other in just the same way that the sensory qualities do.

What are we to say about the actual homomorphism to perceptible properties? Is there any set of properties that the mental attitudes are homomorphic to? That is, is there a set of properties whose similarities and differences preserve the similarities and differences between the mental attitudes? This is important, since we need a way to specify the attitudes apart from their qualitative component. Yes: we can hypothesize that the homomorphic properties are the illocutionary forces of utterances. So the differences between beliefs that p and desires that p are homomorphic to the differences between the illocutionary forces of the utterances one would produce in expressing the belief or the desire.
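On this hypothesis the same schema carries over, with the attitudes in place of the sensory qualities and the illocutionary forces in place of the perceptible properties; again, this is my formalization, offered as a sketch rather than a worked-out account:

```latex
% The parallel sketch for the mental attitudes (notation mine; a hypothesis).
% A = the family of mental attitudes (belief, suspicion, desire, doubt, ...)
% F = the family of illocutionary forces (assertion, conjecture, request, ...)
\exists\, h : A \to F \quad \text{such that} \quad
  \forall a, b, c \in A : \; S_A(a,b,c) \leftrightarrow S_F\big(h(a), h(b), h(c)\big)
% E.g., believing that p resembles suspecting that p more than it resembles
% desiring that p iff asserting that p resembles conjecturing that p more than
% it resembles requesting that p.
```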

This may even turn out to be an explanation of why having language allows us to have more fine-grained thoughts, if we could defend the claim that being conscious of our thoughts in respect of their qualitative attitude towards some represented content gives us mental access to the properties of the language that we would use to express that thought. If this were the case then the cognitive qualities would be exactly like the sensory qualities, and our theory of one could be used to explain the other. Obviously more work needs to be done to flesh this out completely, but this line of thought seems to be a promising way of extending homomorphism theory to cover the propositional attitudes, and so this account of the propositional attitudes should be very attractive to anyone who accepts a higher-order theory of consciousness.

HOT Fun in the Summertime 1

I have been working on my paper ‘Consciousness, Higher-Order Thoughts, and What It’s Like’, which I will present as a poster at the SPP and as a talk at the ASSC in June. This paper is basically the first half of a longer paper of mine, Consciousness on my Mind: Implementing the Higher-Order Strategy for Explaining What It’s Like, which I wrote in my spare time while trying to avoid working on my dissertation 🙂 Parts of this paper are adapted in various posts around here, e.g. Explaining What It’s Like, Two Concepts of Transitive Consciousness, Kripke, Consciousness, and the ‘Corn, and As ‘Corny as I Want to Be. At any rate, I thought it might be helpful/interesting to post the basics of the paper.

The paper has two parts. In the first part I give the argument that all higher-order theories of consciousness are committed to the claim that there is something that it is like for an organism to have conscious propositional attitudes (beliefs, desires, etc.). In the second part of the paper I suggest a model of the propositional attitudes that treats them as qualitative states and show that it actually fits nicely with Rosenthal’s homomorphism theory of sensory qualities.

Given that the transitivity principle says that a conscious mental state is a mental state that I am conscious of myself as being in, the argument for the commitment to the qualitative nature of conscious beliefs is pretty simple and straightforward.

  1. The transitivity principle commits you to the claim that any mental state can occur unconsciously, and so to the claim that pains can occur unconsciously.
  2. An unconscious pain is a pain that is in no way painful for the creature that has it (the transitivity principle commits you to this as well, on pain of failing to be able to give an account, as promised, of the nature of conscious qualitative states).
  3. It is the higher-order state, and solely the higher-order state, that is responsible for there being something that it is like to have a conscious pain.
  4. So, when a higher-order state of the appropriate kind is directed at a belief, it should make it the case that there is something that it is like for the creature that has the belief; otherwise there is more to conscious mental states than just higher-order representation.

I will post on the second part of the paper a little later.

Truth, Justification, and the Quasi-realist Way

In an earlier post (The Meaning and Use of ‘is True’) I argued that when discussing minimalism about truth we need to distinguish between redundancy theories (claims about the meaning of ‘is true’) and deflationism (claims about the nature of the property that the predicate is supposed to pick out). Once we see that redundancy theories conflate meaning and use, we need an independent reason to accept deflationism about truth. In this post I will argue against deflationism by looking at the way that Simon Blackburn has appealed to minimalism in formulating his quasi-realist form of expressivism, and arguing that it cannot account for our common-sense feelings about justification in moral judgments.

The basic problem for the deflationist is that whatever account of moral contradiction they give will also be the correct account of contradiction in matters of taste. So ‘broccoli is disgusting’ will be true if and only if broccoli is disgusting, and someone who said that it was not would really be contradicting me. From within the ‘taste framework’ broccoli is disgusting, and I can just see that the Broccoli-ban and their feelings about the taste of broccoli are objectively wrong. Of course all that any of this means is that I accept or agree with the sentiment that I expressed when I said that broccoli was disgusting. The story we tell here exactly parallels the story that is told in the case of moral judgments about cruelty, the Taliban, or whatever.

But clearly there could not be more of a difference between these two kinds of judgments. In particular, it seems obvious that this story about broccoli is just wrong. Common sense tells us that our feelings about broccoli may depend on two things. On the one hand, we may think that broccoli has a certain specific kind of taste, and some people like that taste while others dislike it; which one it is may depend on what the person can taste, or on how they were raised, or simply on whether they are disposed to like it or not, and all of these vary from person to person. So there is nothing wrong with a person who thinks that broccoli tastes good; they simply have different tastes than ours, and which tastes you have doesn’t really matter. On the other hand, we might say that broccoli has no determinate taste; it all depends on the person who does the tasting and the way that their taste buds are constituted. Taste is a secondary property whose reality is totally mind-dependent, so whether broccoli is disgusting or not is relative to a person’s make-up. Either of these common-sense explanations of what is going on in the broccoli case differs dramatically from the common-sense view of moral discourse. Only a madman would claim that our feelings about Saddam Hussein, the slaughter of children, truth telling, or promise keeping depended on us in either of the two ways mentioned above. Even Blackburn is not that reckless! He explicitly denies that anything like this is the right way to characterize moral disagreement. But the problem is that there is no way to distinguish these kinds of claims from the theoretical standpoint of quasi-realism.

Since the theory is unable to distinguish these obviously distinguishable kinds of judgments, there must be something seriously wrong with deflationism about truth as it relates to a theory of justification. In fact, it seems obvious what is wrong with it: it very obviously and flagrantly turns moral matters into matters of personal taste. It does this by invoking redundancy and claiming that all there is to truth is its function in natural language of voicing agreement. To say that something is true is simply to repeat what we have said; whether we happen to have said something about rape or about the taste of broccoli makes no difference. Once we take the deflationary account of truth seriously we are no longer able to take moral discourse seriously.

Blackburn cannot respond that we can distinguish talk about broccoli from talk about genocide by the level of emotional commitment that we have to claims in one area as opposed to the other, because it is not inconsistent, on his view, that there be people who take broccoli as seriously as we take suffering. Thus the Broccoli-ban are every bit as serious about people who disagree with their feelings about the taste of broccoli, even to the point of putting dissenters to death. It may be the case that Simon Blackburn does not take talk about broccoli that seriously, but so what? If this is to be anything more than a mere autobiographical report, what we need is a way to say that someone who takes talk about broccoli as seriously as the Broccoli-ban do is mistaken, and further that their being mistaken is not simply an opinion of mine. Something, in short, that allows us to distinguish talk about what depends solely on us from talk about what does not. The deflationary theory of truth fares very badly here. It will only seem plausible if one thinks that that is all there is to truth, but this belief is not forced on us.

Not only does quasi-realism have no way to distinguish between the Taliban and the Broccoli-ban that is not mere autobiography; the very same problem arises for other moral claims. Suppose someone from the Taliban were to respond to Blackburn that their views on women were the correct ones to have and that Blackburn was wrong when he says that they (the Taliban) are objectively wrong. Let us suppose that they laugh at the idea that women are equal to men in any serious way. Then, according to the analysis that is on offer, we are to conclude that what they have said is true just in case they really hold the attitudes that they say they do. Blackburn then points out that they are ‘blind to the nature of women and the possibilities open to them’ and so on, but the important question of WHY it is that the Taliban have to agree with him on this point is left begging to be addressed. By this I do not merely mean that the Taliban may irrationally refuse to admit that the evidence against them is compelling, but rather the stronger claim that in some deep sense there is no way to say which side is right: each is saying something true when they express their moral sentiments about women. This is, of course, nothing more than relativism.

Do Thoughts Make Us Conscious of Things?

The transitivity principle says that a conscious state is a mental state that we are conscious of ourselves as being in; thus an account of transitive consciousness is key for implementing a higher-order theory. Rosenthal is clear that he thinks that thoughts can sometimes make us conscious of things. Here is what he says in the introduction to Consciousness and Mind:

We are conscious of things when we are in mental states that represent those things in some suitable way. We might be conscious of something by seeing it or sensing it in some other way, or by having an appropriate thought about it (p 4)

In particular, Rosenthal argues that when we think of some object as present we become conscious of that object. This claim is crucial for anyone who wants to hold a higher-order thought version of higher-order theory.

In some recent arguing with Pete over at the Brain Hammer, he has denied that thoughts can make us conscious of things. Here is the example that I gave:

You get up in the middle of the night to take a leak; it is pitch dark in your room and you can’t see a thing. You think to yourself, “there’s a table in this room by the door, I better be careful not to stub my toe”.

I claim that I am conscious of the table. Or consider another case. Suppose that for some reason I think ‘John is here, in this room’ with my eyes closed, and John is in fact in the room. I claim that I would be conscious of John.

Now Pete seems to think that it is obvious that I am NOT conscious of the table or of John in these cases, whereas it seems to me equally obvious that I am. Does anyone have an argument/intuitions either way?

UPDATE: I think I have actually found an argument for the claim that thoughts make us conscious of things, other than the claim that it is intuitive in the above examples that I am. Rosenthal argues that we can be conscious of one and the same experience in various ways, and these ways can be more or less exact. So, I could be conscious of an experience of red as a particular shade of red or just as a generic shade of red, but presumably the first-order state is in fact a determinate shade. This means that there is more to my conscious experience than the first-order experiences that I have. We need a higher-order state that is able to capture these kinds of differences, and the intentional/conceptual content of thought is arguably the only way to do this. I rather like this argument…

Swimming Vegetables? Fish, Pain, and Consciousness

There has been for some time now a debate between fishing enthusiasts and animal rights activists over whether or not fish feel pain. A recent study by scientists in Scotland has reopened this debate by claiming to have demonstrated that fish in fact do feel pain.

They claim that fish have nociceptors and a part of the brain that responds to them, which is to say that they have a pain pathway. Also, when trout had bee venom injected into their lips they exhibited a rocking motion that is similar to pain behavior seen in other animal species (see http://news.bbc.co.uk/2/hi/science/nature/2983045.stm for a report on the study). It has already been known for some time that fish have endogenous opioids, and so it really looks like the preponderance of evidence suggests that fish do feel pain (see http://www-phil.tamu.edu/~gary/awvar/lecture/pain.html for a table comparing various vertebrates and invertebrates on what we take to be requirements for feeling pain). When you think about it this is what we should expect, seeing as how fish are vertebrates and all. Of course not all fishes are vertebrates, and the study I was just talking about used trout, so when I talk about fishes I will be talking about fish like trout.

These findings are disputed by some. The standard claim made by people who want to deny that fish feel pain is that fish lack the cerebral cortex that would allow them to experience the psychological state of being in pain. Pain behavior is not enough, nor is nociception; pain is a psychological state distinct from the awareness of tissue damage. The problem with this response is that it is not the case that trout have no cerebral cortex at all, but rather that they have a very primitive one. Their cortex is so simple, in fact, that it does not require a thalamus to relay information to it but rather is directly hooked up to the sensory neurons. Thus we cannot conclude that they do not have pains at all, but only that they have some primitive form of pain.

Also, notice that the question ‘do fish feel pain?’ is an empirical question, not a philosophical one, and both parties recognize that it depends on the particular brain structures that fish have. This supposes that we can tell, by looking at the brain of the fish, whether or not it experiences pain. Notice also, though, that this objection assumes that something is not a pain unless it is felt as painful by the organism that has it, that is, unless it is a conscious pain. So, for example, consider a fish like a trout except that its nociceptors are not connected to the brain. This fish will be in the very same states as the one that does have this connection. It will even behave in all the same ways, because the brain stem and spinal cord are where most of the action in fish occurs anyway. If the higher-order theory turns out to be right, then the way to characterize this situation is as one where the latter fish has an (in principle) unconscious pain.

This brings out three important points. 1. It is likely that some fish do have conscious pains, and therefore there is reason for thinking that sport fishing is immoral, and that eating fish is as immoral, or moral, as eating other kinds of animals. 2. Fish look like good candidates for helping us to empirically test the higher-order theory of consciousness. And 3. It raises an interesting question for Utilitarians: do unconscious pains matter? Is it wrong to torture a zombie?

Brain Reading, Brain States, and Higher-order Thoughts

Recently there has been a lot of progress in brain reading; for instance, here is a nice piece done by CNN, here is a nice article on brain-reading video games, and here is a link to Frank Tong’s lab, whose work may be familiar to those who regularly attend the ASSC or the Tucson conferences. This stuff is important to me because it will ultimately help to solve the empirical question of whether animals, or for that matter we ourselves, have the higher-order states necessary to implement the higher-order strategy for Explaining What It’s Like, so I am very encouraged by this kind of progress. The technology involved is mostly fMRI, though in the video game case it is scalp EEG. But though this stuff is encouraging, fMRI and scalp EEG are the wrong tools for decoding neural representation, or so I argued in my paper “What is a Brain State?” (2006) Philosophical Psychology 19(6) (which I introduced over at Brains a while ago in my post Brain States vs. States of the Brain). Below is an excerpt from that paper where I introduce an argument from Tom Polger’s (2004) book Natural Minds and elaborate on it a bit.

Polger argues that thinking

that an fMRI shows how to individuate brain states would be like thinking that the identity conditions for cricket matches are to pick out only those features that, statistically, differentially occur during all the cricket games of the past year. (p 56)

The obvious difficulty with this is that it leaves out things that may be important for cricket matches but unique (injuries, unusual plays (p 57)), and includes things that are irrelevant to them (number of fans, snack-purchasing behavior (ibid)). The same problems hold for fMRIs: they may include information that is irrelevant and exclude information that is important but unusual. Irrelevant information may be included because fMRIs show brain areas that are statistically active during a task, while they may exclude relevant information because researchers subtract out patterns of activation observed in control images.

I would add that at most what we should expect from fMRI images are pictures of where the brain states we are interested in can be found, not pictures of the brain states themselves. They tell us that there is something in THAT area of the brain that would figure in an explanation of the task, but they don’t offer us any insight into what that mechanism might be. Knowing that a particular area of the brain is (differentially) active does not allow us to explain how the brain performs the function we associate with that brain area. We need to know more about the activity. Consider an analogy: we have a simple water pump and want to know how it works. We know that pumping the handle up and down gets the water flowing, but ‘activity in the handle area’ does not explain how the pump works. Finding out that the handle is active every time water flows out of the pump would lead us to examine the handle with an eye towards seeing how and why moving it pumps the water.

And, as I go on to argue, after examining those areas to find what the actual mechanisms are, neuroscience suggests that it is synchronized neural activity at a specific frequency that codes for the content, both perceptual and intentional, of brain states. So multi-unit recording technology (recording from several different neurons in the brain at the same time) is the right kind of technology for looking at brain states. This is not to say, of course, that fMRI and EEG technology is not valuable and useful. It is, and we can learn a lot about the brain from studying it, but it must be acknowledged that it is ultimately, explanatorily, useless. To find higher-order thoughts or perceptions we will need to use advanced multi-unit recordings.

Applying Frigidity

As commonly understood, Kripke’s notion of rigidity is a property that some terms have and that others lack. I argue that there is no such property that is had by some terms and lacked by others; hence there is no rigidity as commonly construed. Recent discussions of rigidity have, I claim, forgotten the importance that stipulation plays in Kripke’s original account. In short, the argument is that the truth-conditions of sentences with supposed rigid designators in them can vary depending on the stipulative act of the speaker. But if rigidity were a property of the terms themselves the truth-conditions should not vary! I introduce the notion of frigidity, which is not a property that terms have but something that we do: a tool that we use to evaluate counterfactuals (Introducing Frigidity). We decide to ‘freeze’ the referent of a term and then try to evaluate counterfactual statements in terms of the constant referent. The ‘freezing’ is accomplished by a stipulative act on the part of the speaker.

Thus it follows that there are two ways to perform the thought experiment of frigid stipulation, corresponding to taking one or the other of the terms flanking the identity sign as frigid and asking ‘what about that in another possible world?’ We decide that we are going to stipulate, trivially as Kripke says, that we want to find out about X in a possible world. So for water = H2O we can ask ‘what if H2O, this very chemical substance, were in a world that was different from ours?’ If it turns out that H2O is not ‘watery’ there, that is OK. We can then also ask ‘what about water? Stuff that acts like this, fills our lakes, and so on? What if we found a world that had watery stuff that was not H2O?’ And that is OK as well. This has the advantage of explaining why people’s intuitions vary about whether twater is water.
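One way to put the two stipulations semi-formally, on my own notation (an illustrative sketch, not anything in Kripke): write ⟦t⟧^w for the referent of a term t when evaluating a counterfactual at world w, and @ for the actual world.

```latex
% A sketch of the two stipulative readings (notation mine, for illustration).
% Taking t frigid: the actual referent is held fixed across worlds.
\llbracket t \rrbracket^{w}_{\mathrm{frigid}} \;=\; \llbracket t \rrbracket^{@}
  \quad \text{for every world } w
% Not taking t frigid: the referent is whatever plays the associated role at w.
\llbracket t \rrbracket^{w}_{\mathrm{non\text{-}frigid}} \;=\;
  \text{the thing that plays the } t\text{-role at } w
% Freezing 'H2O' asks whether that very substance is watery at w; freezing
% 'water' asks whether the watery stuff at w is H2O. The two questions can
% get different answers, which is why intuitions about twater diverge.
```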

However, Kripke (1980) claims that when it comes to mental kinds we cannot do this, because in the case of pains and whatnot their properties are not separable in this way. But once we switch from rigidity to frigidity this is less obvious. We can hold the brain state frigid and ask ‘what is it like to have this brain state in a world that is different from ours?’ It may turn out that that very brain state is not like anything to have at all. On the other hand we can hold the sensation of pain frigid and ask questions about worlds with that sensation. It certainly seems logically possible that some of those worlds will have that sensation and yet not have any brain states at all!

This is just what Kripke’s objection to the identity theory is. He says, “this notion seems to me self-evidently absurd. It amounts to the view that the very pain I now have could have existed without being a mental state at all” (p. 147). Well, yes, this is true if what he means is that the very brain state he is in, and which is his pain, might have existed but was not painful for the creature that had it. This is to do no more than admit that there might exist an unfelt pain. He is wrong if he means that a pained creature, one that felt pain, would not be in pain.