Who is Morally Responsible for Actions Conducted by Military Robots in Wartime Operations?

Is the very interesting question addressed in this article, written by a Major in the Armed Forces, which I learned about via David Rosenthal. Here is an excerpt from the article’s conclusion.

The potential for a new ethical dilemma to emerge comes from the approaching capability to create completely autonomous robots. As we advance across the field of possibilities from advanced weapons to semiautonomous weapons to completely autonomous weapons, we need to understand the ethical implications involved in building robots that can make independent decisions. We must develop a distinction between weapons that augment our soldiers and those that can become soldiers. Determining where to place responsibility can begin only with a clear definition of who is making the decisions.

It is unethical to create a fully autonomous military robot endowed with the ability to make independent decisions unless it is designed to screen its decisions through a sound moral framework. Without the moral framework, its creator and operator will always be the focus of responsibility for the robot’s actions. With or without a moral framework, a fully autonomous decision-maker will be responsible for its actions. For it to make the best moral decisions, it must be equipped with guilt parameters that guide its decision-making cycle while inhibiting its ability to make wrong decisions. Robots must represent our best philosophy or remain in the category of our greatest tools.

I think that this is right. In the first instance the moral responsibility is on the creator of the autonomous robot. It would indeed be unethical for us to create a creature capable of deciding to kill without the ability to determine whether a particular killing is moral or immoral. Of course, as also noted, the robot has to have some motivation to do what is right (i.e., something ‘inhibiting its ability to make wrong decisions’, as the author puts it).
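To fix ideas, here is a minimal sketch of what ‘screening a decision through a moral framework’ might look like. Everything in it (the class names, the guilt threshold, the toy scores) is my own invention for illustration, not anything from the article:

    # A purely illustrative sketch of a decision cycle "screened" through
    # a moral framework. All names and numbers here are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        expected_good: float   # crude utilitarian score
        guilt: float           # how strongly the framework condemns it (0..1)

    class MoralFramework:
        """Screens candidate actions before the robot may act."""

        def __init__(self, guilt_threshold: float = 0.5):
            # Actions whose guilt exceeds this threshold are inhibited outright.
            self.guilt_threshold = guilt_threshold

        def screen(self, candidates: list[Action]) -> Action | None:
            # First inhibit the impermissible actions...
            permissible = [a for a in candidates if a.guilt <= self.guilt_threshold]
            if not permissible:
                return None  # refuse to act rather than act wrongly
            # ...then choose among what remains (here, by expected good).
            return max(permissible, key=lambda a: a.expected_good)

    framework = MoralFramework(guilt_threshold=0.3)
    candidates = [
        Action("fire on vehicle", expected_good=0.6, guilt=0.9),
        Action("issue warning", expected_good=0.4, guilt=0.1),
    ]
    print(framework.screen(candidates))  # issues the warning, never fires

The design choice worth noticing is that the guilt parameter acts as a hard filter before any maximizing happens, which matches the author’s talk of ‘inhibiting’ wrong decisions rather than merely penalizing them.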

But what would we have to add to a machine that was capable of making rational decisions on its own so that it would have a ‘moral framework’? The author seems to suggest that it would amount to adding ‘guilt parameters’ to a basically utilitarian reasoning capacity (earlier in the article he talks about the robot reliably ‘maximizing the greatest good’). But what about the twin pillars of Kantianism, universalizability and the inherent value of rational autonomous agents? Would this kind of robot be capable of using the categorical imperative? It seems to me, as a first reaction to this question, that it may be able to use it to get the perfect duties but that it wouldn’t work for the imperfect duties. That is, it would be able to see that some maxim, when universalized, resulted in a possible world that embodied a contradiction. So it can see that it has an obligation to keep its promises by seeing that a world where no one kept promises would be a world where the very action it intends to perform is impossible to perform. But what about the maxims that do not strictly embody a contradictory possible world but rather simply can’t be consistently willed? These all seem to contradict some desire that the agent has. So, the duty to help others when they need it depends on my wanting to be helped at some time. This is reasonable because it is reasonable to assume that every human will need help at some time. But why should this autonomous robot care about being helped in the future? Or, for that matter, about causing unnecessary pain when it itself doesn’t feel pain?
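To see exactly where the agent’s own desires enter the two tests, here is a toy sketch (all of the predicates and examples are invented by me; this is the logical shape of the worry, not Kant exegesis):

    # Toy model of the two tests of the categorical imperative.

    def contradiction_in_conception(practice_survives_universalization: bool) -> bool:
        # Perfect duties: a maxim fails if universalizing it destroys the very
        # practice the action relies on (promise-breaking destroys promising).
        return not practice_survives_universalization

    def contradiction_in_will(agent_desires: list[str],
                              desires_thwarted_by_maxim: set[str]) -> bool:
        # Imperfect duties: a maxim fails only if the universalized world
        # thwarts something the agent itself wills.
        return any(d in desires_thwarted_by_maxim for d in agent_desires)

    # Perfect duty: "break promises when convenient" fails for ANY agent.
    print(contradiction_in_conception(practice_survives_universalization=False))  # True

    # Imperfect duty: "never help anyone" fails for a human who will someday
    # want help...
    human = ["being helped when in need"]
    print(contradiction_in_will(human, {"being helped when in need"}))  # True

    # ...but passes vacuously for a robot with no such desire.
    robot = []
    print(contradiction_in_will(robot, {"being helped when in need"}))  # False

The empty desire set is the whole problem: the second test quantifies over the agent’s own desires, so it has no grip on an agent that wants nothing we want.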

UPDATE: In the comments CHRISSYSNOW links to this very interesting article by Nick Bostrom (Director of the Oxford Future of Humanity Institute). Thanks!!!

11 thoughts on “Who is Morally Responsible for Actions Conducted by Military Robots in Wartime Operations?”

  1. hello professor. So it seems that maybe a whole new code of military ethics would need to be implemented via a new and improved Geneva Conventions II. While in Israel earlier this year, one of their National Geographic programs showcased a border patrol robot. While it was not autonomous and merely an augmentation of their manpower, I and others are aware that the Israelis are highly motivated to come up with new military security defences. Worth considering are Asimov’s Three Laws of Robotics. In this same vein, how is it possible to absolve the actions of the robot without having to hold the creator and programmer of the robot responsible? Isn’t this the same argument as God being responsible for human wrongdoing? And where would this leave the possibility of race- and ethnic-specific germ warfare? I know I always have these off-the-track ideas, but I think that if it can be thought of it can eventually be done.

  2. If I understand this right, those machines are supposed to kill or help in killing other people. So, I don’t think we can speak of moral responsibility. Also, I don’t think their creators have moral responsibility any more than the creators of guns do. That is, they have only the responsibility (moral or not) to create a machine which functions as it should.

    Anyway, I wouldn’t trust programmers with some ‘complex moral framework’. What I think we want here is a predictable machine: a machine whose behavior the people in the leading military positions can predict to some extent. It seems to me that, in that case, the moral responsibility for where those machines are deployed lies with those people.

  3. What I meant to say is that maybe the way to solve this problem is by creating certain maxims, and if we make certain maxims for the robot to adhere to then we can exclude the killing of humans if possible, period… Maybe, let’s say, aim to seriously wound only as a last resort and not to kill an enemy in combat. The predictions that Tanasije Gjorgoski would like to anticipate would be configured into the mainframe of the robot’s brain, allowing it to calculate all the possible scenarios that the robot would be confronted with. Think of the officers who “massacred” Iraqi civilians, possibly because they were afraid of the threat the civilians posed when they stopped their car on the road. Ideally the robots would be able to decipher the real threat, if any, and spare civilian and military personnel’s lives. Question: once we have reached a consensus on the meaning of the word “consciousness”, do you, or Tanasije, or anyone else think that the robots can be programmed with these distinctive attributes? It seems that this would solve the problem overall. No??? Then we would have a baseline of what it means to be conscious, and we could then be aware of when we have reached an ethical dilemma, allowing us to move forward with the ethical problems that seem to plague us in society. Of course we might always have to deal with always, sometimes, never questions. In reality there would be no absolute autonomy for the robots, but a programmed rational intelligence embedded in their mainframe enabling them to come up with the most ideally humane and rational moral action called for in a specific situation. Ok this is all I have to say on this subject at this time. 🙂 Promise

  4. Hey, thanks for the link CHRISSYSNOW! That was a very interesting article.

    Worth considering are Asimov’s Three Laws of Robotics.

    That’s a good suggestion for commercial robots, but not for military robots, since one of their functions will have to be killing (at least at first).

    In this same vein, how is it possible to absolve the actions of the robot without having to hold the creator and programmer of the robot responsible? Isn’t this the same argument as God being responsible for human wrongdoing?

    That’s a good point. The author of the article that I linked to argues that it would be immoral of us to create a robot that was capable of deciding to kill a human being on its own (an autonomous agent) but did not run that decision through a ‘sound moral framework’. The same argument does apply to God (if there is one). That is why it seems to me that God should have made us so that we have free will and yet freely choose not to do evil every single time (see the post on Freedom and Evil).

    Will the robots have free will?

    That depends on what we mean by ‘free will’. If one is a compatibilist and thinks that to have free will is to have one’s actions caused by one’s own beliefs and desires, then these autonomous agents will indeed have free will. One can now easily see how we could make an agent with free will that never did evil: make it so that it can’t have the desires and beliefs that would lead to it performing evil (the article you link to actually has a couple of examples of ‘superintelligences’ like this). They may even have free will if one is not a compatibilist… it will depend on what one’s general feeling is about whether or not the robots could have done something other than what they actually did do (or so I think… there are people who deny this)… But whatever one thinks about whether they have free will or not, if one believes the author of the article, they will definitely be capable of making decisions on their own…
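    A toy sketch of that compatibilist point (the class, the desire list, and the ‘evil desires’ set are all made up by me):

        # Compatibilist toy: the agent acts from its OWN desires (so, freely,
        # on the compatibilist reading), but evil desires can never get in.
        EVIL_DESIRES = {"harm the innocent", "kill for convenience"}

        class Agent:
            def __init__(self, desires: set[str]):
                # The constructor is where the creator's design does its work:
                # evil-leading desires are simply never installed.
                self.desires = desires - EVIL_DESIRES

            def act(self) -> str:
                # Act on some desire of its own (the selection rule here is
                # arbitrary; the point is only whose desires cause the action).
                return max(self.desires, default="do nothing")

        agent = Agent({"protect civilians", "harm the innocent"})
        print(agent.desires)  # {'protect civilians'} -- the evil desire never existed
        print(agent.act())

    The agent acts from its own desires, which is all the compatibilist asks for, and yet it can never do evil, because the evil-leading desires were never installed in the first place.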

    Tanasije, you say

    If I understand this right those machines are supposed to kill or help in killing other people. So, I don’t think we can speak of moral responsibility. Also, I don’t think creators have moral responsibility any more than creators of guns. That is, they have only responsibility (moral or not) to create the machine which functions as it should.

    Why can’t we speak of moral responsibility? The machines are made for war, but not every killing in wartime is a morally justified killing (indeed, some think that there are no morally justified killings in wartime at all!!). So, if these machines are to decide on their own who gets killed and who doesn’t, then we can talk about who is morally responsible… As for your second point, about guns, I think I agree when we are talking about an instrument/tool like a gun. It is the user’s responsibility to use the gun appropriately, so the inventor of the gun bears no moral responsibility for the way that people use guns (contrary to what Winchester thought, apparently…). But there is a big difference between that and the scenario we are talking about. We are talking about creating an autonomous agent, not merely a tool like a gun, but something that can make decisions on its own. It is because we are creating something like that that the moral responsibility lies with the creator (notice: NOT the manufacturer, but the inventor)… You are right about the last part… but the author says as much in the article. If the thing we make is just a tool then the user is morally responsible, but if the machine is making the decisions, then it is the machine that is responsible for the decisions, and we who are responsible for making sure that its decisions are bound by morality…

  5. So this is an interesting topic to me and I was thinking: if there was a way to biologically interface nanotechnology with the cells of a human or a lab animal, or all of the above, to create a chimera type of organism, then what would the implications be if this was the “robot” or “superintelligence” that was now being used in the military? Would the entire chimera be responsible as a whole for its actions, or would the initiator of this creation be responsible? Providing, of course, that the chimera was an autonomous creation.

  6. Hi CHRISSYSNOW,

    Sorry I missed this comment. Yeah, that is an interesting question. I suppose that if the chimera is autonomous (that is, makes decisions for itself) then it is responsible for its actions… This seems to me to be one of the points that the author wants to make. If it is just a weapon, then the user is responsible; if it is making decisions for itself, then it is responsible… would you agree?

  7. OK, what my real question is, is: since it would be a chimera, which organism would be presumed to be responsible for the decision making?
