Who is Morally Responsible for Actions Conducted by Military Robots in Wartime Operations?

Is the very interesting question addressed in this article, written by a Major in the Armed Forces, which I learned about via David Rosenthal. Here is an excerpt from the conclusion of the article.

The potential for a new ethical dilemma to emerge comes from the approaching capability to create completely autonomous robots. As we advance across the field of possibilities from advanced weapons to semiautonomous weapons to completely autonomous weapons, we need to understand the ethical implications involved in building robots that can make independent decisions. We must develop a distinction between weapons that augment our soldiers and those that can become soldiers. Determining where to place responsibility can begin only with a clear definition of who is making the decisions.

It is unethical to create a fully autonomous military robot endowed with the ability to make independent decisions unless it is designed to screen its decisions through a sound moral framework. Without the moral framework, its creator and operator will always be the focus of responsibility for the robot’s actions. With or without a moral framework, a fully autonomous decision-maker will be responsible for its actions. For it to make the best moral decisions, it must be equipped with guilt parameters that guide its decision-making cycle while inhibiting its ability to make wrong decisions. Robots must represent our best philosophy or remain in the category of our greatest tools.

I think that this is right. In the first instance the moral responsibility falls on the creator of the autonomous robot. It would indeed be unethical for us to create a creature capable of deciding to kill without the ability to determine whether a particular killing is moral or immoral. Of course, as also noted, the robot has to have some motivation to do what is right (i.e., something that inhibits 'its ability to make wrong decisions').
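The article's picture of 'screening decisions through a sound moral framework' with guilt parameters can be made a bit more concrete. Here is a minimal toy sketch, my own illustration and not anything from the article; all action names, scores, and weights are hypothetical. The idea is a utilitarian scorer whose output is filtered by a deontological constraint, with violations penalized so heavily ('guilt') that they are effectively inhibited:

```python
# Toy sketch (not from the article): one way to picture "screening decisions
# through a moral framework" with a guilt parameter that inhibits wrong choices.
# All names, scores, and weights below are hypothetical illustrations.

def expected_good(action):
    """Hypothetical utilitarian score: higher means more expected good."""
    scores = {"hold_fire": 5, "warn": 8, "engage": 10}
    return scores[action]

def violates_constraint(action):
    """Hypothetical deontological screen: some actions are ruled out outright,
    e.g. engaging when the target cannot be positively identified."""
    forbidden = {"engage"}
    return action in forbidden

def choose(actions, guilt_weight=100):
    """Pick the highest-scoring action, with constraint violations penalized
    so heavily ('guilt') that they are effectively inhibited."""
    def value(action):
        penalty = guilt_weight if violates_constraint(action) else 0
        return expected_good(action) - penalty
    return max(actions, key=value)

print(choose(["hold_fire", "warn", "engage"]))  # prints: warn
```

Notice that on this sketch the 'guilt' is just a large penalty inside an otherwise utilitarian calculation, which is roughly what the author seems to suggest; the Kantian worries below are precisely about whether this kind of architecture could do more than that.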

But what would we have to add to a machine that was capable of making rational decisions on its own so that it would have a 'moral framework'? The author seems to suggest that it would amount to adding 'guilt parameters' to a basically utilitarian reasoning capacity (earlier in the article he talks about the robot reliably 'maximizing the greatest good'). But what about the twin pillars of Kantianism, universalizability and the inherent value of rational autonomous agents? Would this kind of robot be capable of using the categorical imperative? It seems to me, as a first reaction to this question, that it may be able to use it to get the perfect duties but that it wouldn't work for the imperfect duties. That is, it would be able to see that some maxim, when universalized, resulted in a possible world that embodied a contradiction. So one can see that one has an obligation to keep one's promises by seeing that a world where no one did would be a world where the very action I intend to perform is not possible to perform. But what about the maxims that do not strictly embody a contradictory possible world but rather simply can't be consistently willed? These seem to all contradict some desire that the agent has. So, the duty to help others when they need it depends on my wanting to be helped at some time. This is reasonable because it is reasonable to assume that every human will need help at some time. But why should this autonomous robot care about being helped in the future? Or, for that matter, about causing unnecessary pain when it itself doesn't feel pain?

UPDATE: In the comments CHRISSYSNOW links to this very interesting article by Nick Bostrom (Director of the Oxford Future of Humanity Institute). Thanks!!!

Free Will and Omniscience, again

A while ago I was obsessed with trying to show that God's foreknowledge of our actions is incompatible with human free will. I have had some time to reflect on the issue and I want to take another stab at it.

So, let ‘K’ be ‘knows that’ and ‘G’ stand for God, and ‘R’ for Richard Brown (me). Then (1) says that if God knows that I will do some action then it is necessary that I do that action.

(1) (x)(K(G,R,x) –> []D(R,x))

(1) captures the intuition that God's knowledge necessitates our actions. I think that this is true, so to prove it I tried to show that denying it leads to a contradiction and, since it can't be false, it must be true. Here is the proof.

1. ~(x)(K(G,R,x) –> []D(R,x)) assume

2. (Ex)~(K(G,R,x) –> []D(R,x)) 1, quantifier negation

3. (Ex)~~(K(G,R,x) & ~[]D(R,x)) 2, definition of '–>'

4. (Ex)(K(G,R,x) & ~[]D(R,x)) 3, double negation

5. K(G,R,a) & ~[]D(R,a) 4, EI

6. K(G,R,a) 5, CE

7. []K(G,R,a) 6, necessitation

8. ~[]D(R,a) 5, CE

9. (x)[] (K(G,R,x) –> D(R,x)) assumption (2′)

10. [](K(G,R,a) –> D(R,a)) 9, UI.

11. []K(G,R,a) –> []D(R,a) 10, distribution

12. ~[]D(R,a) –> ~[]K(G,R,a) 11, contraposition

13. ~[]K(G,R,a) 8,12 MP

14. []K(G,R,a) & ~[]K(G,R,a) 7,13 CI

15. (x)(K(G,R,x) –> []D(R,x)) 1–14 reductio

The main objection centered on step (lucky number) 7 and my use of the rule of necessitation. 7 says that it is necessary that God knows that I perform action a. That means that it would have to be true in every possible world that God (in that world) knows that I perform action a. This may seem unreasonable if one thinks that there is a possible world where I do not perform action a. But if actions are events that can be named then it is easy to show that they must necessarily exist, in which case I would have to perform that action in every world where I exist, and since it is just as easy to show that I must necessarily exist, it follows that God would indeed know that I perform action a in every possible world, and so 7 comes out true. So if one accepts S5 then one should not have a problem with 7.

But suppose that one rejects, or modifies, S5 to avoid the embarrassment of necessary existence? Then 7 starts to look fishy again. But is it? Say that there is some world where I do in fact perform a and some other world where I do not. Call them 'A' and '~A'. Then in A God knows that I perform a, but in ~A He doesn't know that I perform a, because it is false that I perform a and God does not know falsehoods. But is it really true that in ~A God does not know that I perform a? He knows everything, so He knows what is possible, and so He knows that there is a possible world where I do perform a. Yes, but that just means that He knows 'possibly Richard performs a', not 'Richard performs a'; in symbols, He knows <>D(R,a), not D(R,a). This I admit, and so it seems that there is a conception of God's foreknowledge that is compatible with human free will. But there does seem to be a sense in which He still knows that I do a: He knows in which possible worlds I do it and in which I don't. But maybe that isn't enough to justify 7, and so it is enough to avoid the issue.
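The worry about step 7 can be made concrete with a tiny possible-worlds model. This is my own illustrative sketch, not part of the original argument: two worlds, A and ~A ('notA' below), where God's knowledge in each world tracks all and only that world's truths. 'God knows that I do a' then holds at A, but its necessitation fails, since necessitation only licenses []P for theorems, not for contingently true P:

```python
# Toy Kripke-model check (my illustration, not from the post): in a two-world
# model where Richard does a in world A but not in world notA, 'God knows that
# Richard does a' is true at A, yet the necessitation of that claim is false.

WORLDS = {"A", "notA"}
DOES_A = {"A"}  # the worlds in which Richard performs action a

def does(world):
    """D(R,a) evaluated at a world."""
    return world in DOES_A

def knows_does(world):
    """K(G,R,a) at a world: God's knowledge there tracks all and only
    the truths of that world, so He knows falsehoods nowhere."""
    return does(world)

def box(pred, world):
    """S5-style box: true at a world iff true at every world
    (universal accessibility)."""
    return all(pred(w) for w in WORLDS)

print(knows_does("A"))       # prints: True  (in A, God knows I do a)
print(box(knows_does, "A"))  # prints: False (step 7's necessitation fails)
```

This is just the two-world objection in executable form: if God is confined to each world and knows only its truths, step 7 fails; the modal-realist reading discussed next is what restores it.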

But notice that this is a conception of God as confined to particular possible worlds, where He knows all and only the truths of whichever world is the actual world. The possible worlds are not real worlds but formal descriptions, or specifications, of how the actual world could have been, and God has maximal knowledge of that. If one were a modal realist and thought that the possible worlds were real worlds that exist, then there would be a problem here. In each world God would know either that I perform action a in that world or that I perform it in world-x. In both cases He knows that I perform action a, and so it will be true in all worlds that He knows that I do a. So 7 would be true again.

So I conclude that there are some interpretations on which 7 comes out true; in which case there are some metaphysical systems in which God's omniscience is incompatible with human free will. Or He's a dialetheist…

58th Philosophers’ Carnival

Welcome to the 58th edition of the Philosophers' Carnival!

I am happy to be hosting the carnival again and glad to see that it seems to be doing well. I always liked the way that Avery did the 46th (international) Carnival, and so I modeled this edition on his 'pseudo-conference' format. What follows is, indeed, a 'narrow cross-section of philosophy from across the web'.

Special Session on the Employability of Philosophers

  1. Presenter: Tom Brooks, The Brooks Blog
    The truth is out there: employers want philosophers
  2. Respondent: Rich Cochrane, Big Ideas
    The Value of a Philosophical Education

Symposium on Philosophy of Science

  1. Sharon Crasnow, Knowledge and Experience
    Is Science Based on Faith?
  2. Matt Brown, Weitermachen!
    Common Sense, Science, and “Evidence for Use”

Symposium on Race and Liberty 

  1. Richard Chapell, Philosophy, et cetera
    Implicit Interference
  2. Joseph Orosco, Engage: Conversations in Philosophy
    It’s Only Racism When I Say It Is

Invited Session

 Symposium on Philosophy of Consciousness

  1. Tanasije Gjorgoski, A brood comb
    The Myth of ‘Phenomenal/Conscious Experience’
  2. Richard Brown, Philosophy Sucks!
    Priming and Change Blindness
  3. Gabriel Gottlieb, Self and World
    Pre-reflective Consciousness: A Fichtean Intervention

Symposium on Metaphysics and Epistemology

  1. Marco, El Blog de Marcos
    Truthmaking and Explanation
  2. Kenny Pearce, blog.kennypearce.net
    What Does Bayesian Epistemology Have To Do With Probabilities?

Symposium on Philosophy of Religion

  1. Dave Maier, DuckRabbit
    D’Souza vs. Dawkins
  2. Enigman, Enigmania
    Is the Free-will Defence Defensible?
  3. Chris Hallquist, The Uncredible Hallq
    What’s the deal with philosophy of religion?

I hope you enjoyed! Be sure to check out future editions of the Philosophers’ Carnival.

    Submit your blog article to the next edition of the philosophers' carnival using our carnival submission form. Past posts and future hosts can be found on our blog carnival index page.

How to Tell if you are Lying to your Kid about Santa

A while back I argued that it is immoral to lie to children about Santa; Richard Chapell over at Philosophy, et cetera responded that pretending with the child that there is a Santa is a morally acceptable and praiseworthy action. I tend to agree with him on this point. But I do not think that most people are pretending with their children.

Evidence for this comes from the strong social pressure not to tell children that there is no Santa. If this is all pretense and everyone knows it, then what is wrong with pointing out that we are pretending? If you tell a child pretending that a banana is a phone that the object they are using is really a banana and not a phone, they will tell you that they know that and keep pretending. But if you tell a child that there is no Santa, this is not the reaction that one typically gets.

So how can you tell whether you are pretending or not? One surefire way is how you deal with the question 'Mommy/Daddy, is Santa real?', but it seems to me that if your child has to ask this question then you haven't been pretending.

Priming and Change Blindness

Change blindness is one of those surprising things that cognitive science has revealed about the nature of conscious experience. It turns out that there can be rather large changes in the visual scene a person is looking at and that most people will completely miss them! I am not talking about small changes but rather very large changes right in front of the faces of people who are actively looking for changes (for some nice examples see this link). Once one sees the difference it is so obvious that one's attention is drawn to it every time, but for a while it really looks as though there is no difference between the two pictures.

Any theory of consciousness should be able to account for this phenomenon. Fred Dretske, in his well-known paper "Change Blindness" (requires a password), gives the following account of what is going on in instances of change blindness. He distinguishes between thing-awareness and fact-awareness. Thing-awareness is our being conscious of some physical thing in the world. Examples include seeing blue, hearing music, etc. Fact-awareness is our being conscious of some fact. From the way that Dretske talks about fact-awareness it sounds like it consists in having the appropriate belief, but to be honest I am not sure exactly what his view is on this (especially given that he seems doubtful as to whether or not having a belief makes one conscious of anything in the first place).

Given these distinctions he then gives his account of change blindness as follows. When one is looking at the two pictures one is thing-aware of them, where this means that one sees the two pictures. But one is not fact-aware that there is a difference between them. This view is contrasted with what he calls the 'object view', which claims that one sees both of the pictures and the difference between them but does not notice that one is seeing the difference.

The object view is pretty much what the higher-order thought theory of consciousness predicts. On that kind of theory one is in a first-order visual state that represents both pictures, but one is also in a higher-order state that (mis)represents the first-order states as not differing. That is, one is conscious of the difference between the two pictures but one is not conscious of it AS the difference. Dretske takes change blindness to be a counter-example to the transitivity principle, as he thinks that what we have is a case of a conscious experience (the experience of the thing that differs between the two pictures) that we are not conscious of having.

So which of these two accounts is right? This recent article on change blindness and priming seems to me to offer evidence against Dretske's account (not to mention evidence against 'naive realist' and anti-representationalist views generally). In the experiments subjects were presented with two alternating pictures of numbers arranged in rows and columns. In the second picture one of the numbers was changed, and subjects failed to notice this change. Nonetheless both the unchanged number and the changed number showed a priming effect. What this suggests is that both pictures are represented by the visual system even though both are not consciously experienced. When one looks at the two pictures they look the same! One can spend minutes examining those pictures, convinced that there really isn't any difference between them and that the whole thing must be a joke. But it isn't. There is a difference, and it is a strikingly large difference. So even though there is nothing that it is like for you to be conscious of the thing that makes the difference between the two pictures, you are conscious of it; just not as the difference. How is this going to be explained on a first-order view like Dretske's?

The other interesting thing about this study was the finding that when the change is detected, that is, when one sees the two pictures and notices that the second one is different, then it is only the second picture's information that does any priming. They suggest that the first representation is still there but is inhibited… this might pose a problem for Rosenthal's argument that conscious states do not have any function, but I will leave that for another time…