Priming, Change Blindness, and the Function of Consciousness

This Wednesday David Rosenthal will be giving a talk at the Graduate Center entitled ‘The Poverty of Consciousness’. If you happen to be in the New York area and you have a hankering for some hot and heavy philosophy of consciousness, come on down! (see the Cog Blog for some details).

I have been thinking about this issue in light of my last post on priming and change blindness, where I voiced my suspicion that the results posed a problem for Rosenthal’s claim about the function of consciousness. That post led to some emailing between David and me, so I figured I would take some time to sort this stuff out.

Rosenthal’s main contention is that there is no evolutionary (read: reproductive) advantage to an organism’s having conscious mental states. This is to be distinguished from the claim that there is no evolutionary advantage to the animal’s being conscious (creature consciousness), which quite obviously gives the creature a huge evolutionary advantage (e.g. being awake often helps one get away from predators…that is, unless one has taken Ambien!!!). The primary reason he thinks this is that he endorses the higher-order theory of consciousness, which claims that a mental state is conscious when I am conscious of myself as being in that state (and of course there are some experimental results which support the claim 🙂 ). This view commits one to the claim that any mental state can in principle occur unconsciously, and this seems to suggest that most of a state’s causal powers will be had by the state whether it is conscious or not. If so, then what purpose could (state) consciousness add?

When people hear this they usually think that it means that consciousness is completely epiphenomenal (has no causal efficacy). But this isn’t right, as I discussed in this post on Uriah Kriegel’s version of this argument. As Rosenthal says,

Lack of function does not imply that the consciousness of these states has no causal impact on other psychological processes, but that causal impact is too small, varied, or neutral in respect of benefit to the organism to sustain any significant function. So my conclusion about function does not imply epiphenomenalism.

His claim is that whatever causal powers a state’s being conscious endows it with, they are too ‘small, varied, or neutral with respect to benefit’ to count as serving any function. O.K., so if this is your view then you have your work cut out for you, because you have to (A) examine and refute all of the proposed functions for consciousness out there (from ‘deliberate control of action and rational inference’ to ‘enhances creativity’) and (B) provide an alternate explanation for how in the world conscious mental states ever came about in the first place (tune in on Wed. to hear Rosenthal’s answers to these questions, though I gather that he will mostly be talking about intentional states and not qualitative states).

O.K., so now enter the priming results that I talked about previously (and which Rosenthal is aware of, has read, and cites in his forthcoming papers/book on this subject). What that paper showed is that when one is presented with two pictures, A and B, which have some difference, D, between them (like an extra tree or something), then if one is not conscious of the difference both A and B show priming effects (i.e. one will complete a degraded picture with what one unconsciously saw in A and in B), but when one consciously notices that there is a difference between A and B then only B (i.e. not A) shows priming effects.

Now, if this is evidence for anything, it will be evidence for there being a function for perceptual states (qualitative states). It would still be an open question what function, if any, intentional states have (unless of course one, like me, thinks that intentional states are qualitative states). But is it evidence for a function of conscious states?

I suggested that it is evidence that a state’s being conscious inhibits previous ‘outdated’ representations and so serves to guide certain representations (i.e. the conscious ones) to greater causal efficacy and so to greater effect on behavior. If this were true, it seems to me that it would definitely give some evolutionary advantage to having conscious states. Suppose, for instance, that a bear is charging at you and that there is a spear that is just out of reach. The bear is running straight at you and you are casting about frantically for something to defend yourself with. As you look around wildly, you first see the spear out of reach, and then on another pass you see the spear within reach (say it was knocked towards you in the chaos of the bear stampeding towards you). Now let us assume that in one case you do not consciously see this difference and in the other case you do. In both cases you will have representations of the scene with the spear out of reach and with the spear within reach. But only in the case where you consciously see the change (that is, consciously see that the spear is now in reach) is the previous representation inhibited, so that the representation of the spear within reach is more causally active and liable to cause you to reach for the spear and (maybe!) stave off the bear. This doesn’t seem like some minor or neutral thing. This sounds like an important function for perceptual consciousness!
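Just to make the suggested mechanism vivid, here is a minimal toy sketch (in Python) of the idea: both representations get tokened either way, but consciously detecting the change inhibits the outdated one, so the updated one wins the competition to drive behavior. Everything in it (the names, the numbers, the single ‘inhibit’ step) is an illustrative assumption on my part, not a model of the actual experiments.

# Toy sketch of the proposed mechanism: consciously detecting a change inhibits
# the outdated representation, so the updated one wins the competition to drive
# behavior. Names and numbers are illustrative assumptions, not data from the papers.

def select_action(representations):
    """Pick the most active representation to drive behavior."""
    return max(representations, key=representations.get)

def notice_change(representations, consciously_detected, outdated):
    """If the change is consciously detected, inhibit the outdated representation."""
    reps = dict(representations)
    if consciously_detected:
        reps[outdated] *= 0.1  # inhibition: the old representation loses causal clout
    return reps

# Both representations are tokened either way (as in the priming results).
scene = {"spear_out_of_reach": 1.0, "spear_within_reach": 1.0}

# Unconscious case: both stay equally active, so the change does not bias behavior.
unconscious = notice_change(scene, consciously_detected=False, outdated="spear_out_of_reach")
print(select_action(unconscious))  # a tie; nothing favors reaching for the spear

# Conscious case: the outdated representation is inhibited; the updated one drives action.
conscious = notice_change(scene, consciously_detected=True, outdated="spear_out_of_reach")
print(select_action(conscious))    # -> spear_within_reach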

During our email discussion he referred me to the following paper:

Fernandez-Duque, Diego, and Ian M. Thornton, “Change Detection without Awareness: Do Explicit Reports Underestimate the Representation of Change in the Visual System?”, Visual Cognition 7, 1-3 (January-March 2000): 324-344.

His argument seems to be that, while I am right that these results do suggest some ‘utility’ for conscious perceptual states, it is not as useful as change detection, and that can happen unconsciously! I am still thinking about that, and will come back to it…but right now I have to go and move my car for street cleaning!!!!

Some Cool Links

(via David Pereplyotchik)

Below are links to some examples of talks that fall well within the cognitive science arena. I’ve found, however, that many of the non-cogsci talks are more interesting, because they introduce one, often in a vivid way, to a subject matter that is less familiar. (For instance, Wade Davis’s talk on anthropological fieldwork was, for me, genuinely exciting.)

You can browse the talks by clicking on the topic links at the bottom right of each video’s page. Or just start here.

Enjoy.

David Pereplyotchik

Who is Morally Responsible for Actions Conducted by Military Robots in Wartime Operations?

Is the very interesting question addressed in this article, written by a Major in the Armed Forces, which I learned about via David Rosenthal. Here is an excerpt from the conclusion of the article.

The potential for a new ethical dilemma to emerge comes from the approaching capability to create completely autonomous robots. As we advance across the field of possibilities from advanced weapons to semiautonomous weapons to completely autonomous weapons, we need to understand the ethical implications involved in building robots that can make independent decisions. We must develop a distinction between weapons that augment our soldiers and those that can become soldiers. Determining where to place responsibility can begin only with a clear definition of who is making the decisions.

It is unethical to create a fully autonomous military robot endowed with the ability to make independent decisions unless it is designed to screen its decisions through a sound moral framework. Without the moral framework, its creator and operator will always be the focus of responsibility for the robot’s actions. With or without a moral framework, a fully autonomous decision-maker will be responsible for its actions. For it to make the best moral decisions, it must be equipped with guilt parameters that guide its decision-making cycle while inhibiting its ability to make wrong decisions. Robots must represent our best philosophy or remain in the category of our greatest tools.

I think that this is right. In the first instance the moral responsibility is on the creator of the autonomous robot. It would indeed be unethical for us to create a creature capable of deciding to kill without the ability to determine whether or not a particular killing is moral or immoral. Of course, as also noted, the robot has to have some motivation to do what is right (i.e. something that inhibits ‘its ability to make wrong decisions’).

But what would we have to add to a machine that was capable of making rational decisions on its own so that it would have a ‘moral framework’? The author seems to suggest that it would amount to adding ‘guilt parameters’ to a basically utilitarian reasoning capacity (earlier in the article he talks about the robot reliably ‘maximizing the greatest good’). But what about the twin pillars of Kantianism, universalizability and the inherent value of rational autonomous agents? Would this kind of robot be capable of using the categorical imperative? It seems to me, as a first reaction to this question, that it may be able to use it to get the perfect duties but that it wouldn’t work for the imperfect duties. That is, it would be able to see that some maxim, when universalized, results in a possible world that embodies a contradiction. So one can see that one has an obligation to keep one’s promises by seeing that a world where no one did would be a world where the very action one intends to perform is not possible to perform. But what about the maxims that do not strictly embody a contradictory possible world but rather simply can’t be consistently willed? These all seem to contradict some desire that the agent has. So, the duty to help others when they need it depends on my wanting to be helped at some time. This is reasonable because it is reasonable to assume that every human will need help at some time. But why should this autonomous robot care about being helped in the future? Or, for that matter, about causing unnecessary pain when it itself doesn’t feel pain?

UPDATE: In the comments CHRISSYSNOW links to this very interesting article by Nick Bostrom (Director of the Oxford Future of Humanity Institute). Thanks!!!

Free Will and Omniscience, again

A while ago I was obsessed with trying to show that God’s foreknowledge of our actions is incompatible with human free will. I have had some time to reflect on the issue and I want to take another stab at it.

So, let ‘K’ be ‘knows that’, let ‘G’ stand for God, ‘R’ for Richard Brown (me), and ‘D’ for ‘does’ (performs). Then (1) says that if God knows that I will do some action then it is necessary that I do that action.

(1) (x)(K(G,R,x) –> []D(R,x))

(1) captures the intuition that God’s knowledge necessitates our actions. I think that this is true, so to prove it I tried to show that denying it leads to a contradiction; since it can’t be false, it must be true. Here is the proof.

1. ~(x)(K(G,R,x) –> []D(R,x)) assume

2. (Ex)~(K(G,R,x) –> []D(R,x)) 1, by definition

3. (Ex)~~(K(G,R,x) & ~[]D(R,x)) 2, by def

4. (Ex) (K(G,R,x) & ~[]D(R,x)) 3, by def

5. K(G,R,a) & ~[]D(R,a) 4, EI

6. K(G,R,a) 5, CE

7. []K(G,R,a) 6, necessitation

8. ~[]D(R,a) 5, CE

9. (x)[] (K(G,R,x) –> D(R,x)) assumption (2′)

10. [](K(G,R,a) –> D(R,a)) 9, UI.

11. []K(G,R,a) –> []D(R,a) 10, distribution

12. ~[]D(R,a) –> ~[]K(G,R,a) 11, contraposition

13. ~[]K(G,R,a) 8,12 MP

14. []K(G,R,a) & ~[]K(G,R,a) 7,13 CI

15. (x)(K(G,R,x) –> []D(R,x)) 1-14, reductio
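For reference, the proof above leans on two standard modal principles: the rule of necessitation at step 7 and distribution (the K axiom) at step 11. Here they are in their textbook LaTeX form; this is just the general statement of the rules, not anything specific to the foreknowledge case.

% Standard textbook formulations; nothing here is specific to this argument.
% Necessitation: from phi's being a theorem, infer that Box phi is a theorem.
\[
\frac{\vdash \varphi}{\vdash \Box \varphi} \quad (\text{Necessitation})
\qquad\qquad
\Box(\varphi \rightarrow \psi) \rightarrow (\Box \varphi \rightarrow \Box \psi) \quad (\text{Distribution / K axiom})
\]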

The main objection centered on step (lucky number) 7 and my use of the rule of necessitation. 7 says that it is necessary that God knows that I perform action a. That means that it would have to be true in every possible world that God (in that world) knows that I perform action a. This may seem unreasonable if one thinks that there is a possible world where I do not perform action a. But if actions are events that can be named then it is easy to show that they must necessarily exist, in which case I would have to perform that action in every world where I exist; and since it is just as easy to show that I must necessarily exist, it follows that God would indeed know that I perform action a in every possible world, and so 7 comes out true. So if one accepts S5 then one should not have a problem with 7.

But suppose that one rejects, or modifies, S5 to avoid the embarrassment of necessary existence? Then 7 starts to look fishy again. But is it? Say that there is some world where I do in fact perform a and some other world where I do not. Call them ‘A’ and ‘~A’. Then in A God knows that I perform a, but in ~A He doesn’t know that I perform a, because it is false that I perform a and God does not know falsehoods. But is it really true that in ~A God does not know that I perform a? He knows everything, so He knows what is possible, and so He knows that there is a possible world where I do perform a. Yes, but that just means that He knows ‘possibly Richard performs a’, not ‘Richard performs a’; or, in symbols, He knows <>D(R,a), not D(R,a). This I admit, and so it seems that there is a conception of God’s foreknowledge that is compatible with human free will. But there does seem to be a sense in which He still knows that I do a: He knows in which possible worlds I do it and in which I don’t. But maybe that isn’t enough to justify 7, and so is enough to avoid the issue.

But notice that this is a conception of God as confined to a particular possible world, where He knows all and only the truths of the world that is actual. The possible worlds are not real worlds but formal descriptions or specifications of how the actual world could have been, and God has maximal knowledge of that. If one were a modal realist and thought that the possible worlds were real, existing worlds, then there would be a problem here. In each world God would know either that you perform action a in that world or that you perform it in world-x. In either case He knows that you perform action a, and so it will be true in all worlds that He knows that you do a. So 7 would be true again.

So I conclude that there are some interpretations on which 7 comes out true, in which case there are some metaphysical systems in which God’s omniscience is incompatible with human free will. Or He’s a dialetheist…