Is the very interesting question addressed in this article, written by a Major in the Armed Forces, which I learned about via David Rosenthal. Here is an excerpt from the article's conclusion.
The potential for a new ethical dilemma to emerge comes from the approaching capability to create completely autonomous robots. As we advance across the field of possibilities from advanced weapons to semiautonomous weapons to completely autonomous weapons, we need to understand the ethical implications involved in building robots that can make independent decisions. We must develop a distinction between weapons that augment our soldiers and those that can become soldiers. Determining where to place responsibility can begin only with a clear definition of who is making the decisions.
It is unethical to create a fully autonomous military robot endowed with the ability to make independent decisions unless it is designed to screen its decisions through a sound moral framework. Without the moral framework, its creator and operator will always be the focus of responsibility for the robot’s actions. With or without a moral framework, a fully autonomous decision-maker will be responsible for its actions. For it to make the best moral decisions, it must be equipped with guilt parameters that guide its decision-making cycle while inhibiting its ability to make wrong decisions. Robots must represent our best philosophy or remain in the category of our greatest tools.
I think that this is right. In the first instance the moral responsibility is on the creator of the autonomous robot. It would indeed be unethical for us to create a creature capable of deciding to kill without the ability to determine whether a particular killing is moral or immoral. Of course, as the author also notes, the robot has to have some motivation to do what is right (i.e. something that inhibits 'its ability to make wrong decisions').
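Just to make the proposed architecture vivid (this is my own toy gloss, not anything spelled out in the article), here is a minimal Python sketch of what 'screening decisions through a sound moral framework' plus 'guilt parameters' might amount to: a broadly utilitarian chooser whose candidate actions are filtered by a moral screen and penalized by a guilt term. Every name in it (Action, utility, is_permissible, guilt) is invented for illustration.

```python
# A toy sketch (not the article's proposal): a utilitarian chooser whose
# candidate actions are screened by a "moral framework" before execution.
# All names (Action, utility, is_permissible, guilt) are hypothetical.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Action:
    name: str
    utility: float             # expected "greatest good" of the action
    harms_noncombatants: bool   # one stand-in feature the screen can check

def is_permissible(action: Action) -> bool:
    """The moral screen: vetoes actions the framework classifies as wrong."""
    return not action.harms_noncombatants

def guilt(action: Action, guilt_weight: float) -> float:
    """A 'guilt parameter': a penalty that inhibits borderline-wrong choices."""
    return guilt_weight if action.harms_noncombatants else 0.0

def choose(actions: List[Action], guilt_weight: float = 1000.0) -> Optional[Action]:
    # Screen first, then maximize utility net of the guilt penalty.
    permitted = [a for a in actions if is_permissible(a)]
    if not permitted:
        return None  # refuse to act rather than act wrongly
    return max(permitted, key=lambda a: a.utility - guilt(a, guilt_weight))

if __name__ == "__main__":
    options = [
        Action("strike convoy", utility=9.0, harms_noncombatants=True),
        Action("hold position", utility=2.0, harms_noncombatants=False),
    ]
    print(choose(options))  # -> hold position: the screen vetoes the higher-utility strike
```

Notice that in a sketch like this all the real moral work is done by whoever writes the screen and sets the guilt weight, which is exactly why, without a genuine moral framework, responsibility falls back on the creator and operator.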
But what would we have to add to a machine capable of making rational decisions on its own so that it would have a 'moral framework'? The author seems to suggest that it would amount to adding 'guilt parameters' to a basically utilitarian reasoning capacity (earlier in the article he talks about the robot reliably 'maximizing the greatest good'). But what about the twin pillars of Kantianism: universalizability and the inherent value of rational autonomous agents? Would this kind of robot be capable of using the categorical imperative? My first reaction to this question is that it may be able to use it to get the perfect duties but that it wouldn't work for the imperfect duties. That is, it would be able to see that some maxim, when universalized, results in a possible world that embodies a contradiction. So I can see that I have an obligation to keep my promises by seeing that a world where no one did would be a world where the very action I intend to perform could not be performed. But what about the maxims that do not strictly embody a contradictory possible world but simply cannot be consistently willed? These seem instead to contradict some desire that the agent has. So, the duty to help others when they need it depends on my wanting to be helped at some time, which is reasonable to assume because every human will need help at some time. But why should this autonomous robot care about being helped in the future? Or, for that matter, about causing unnecessary pain when it itself doesn't feel pain?
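To make the worry concrete, here is a rough sketch of how the two halves of the universalizability test differ computationally. Again, this is entirely my own illustration and not anything from the article, and all the names (Maxim, Agent, contradiction_in_conception, etc.) are made up. The point it tries to capture is that the perfect-duty check only needs a model of the universalized world, while the imperfect-duty check also needs the agent's own ends or desires, which is precisely what the robot may lack.

```python
# A toy gloss on the categorical imperative test (my own illustration,
# not anything from the article). Perfect duties fall out of a
# "contradiction in conception": the universalized maxim undermines the
# very practice it relies on. Imperfect duties need a "contradiction in
# will": the universalized maxim conflicts with ends the agent itself has.

from dataclasses import dataclass, field
from typing import Set

@dataclass
class Maxim:
    name: str
    practice_relied_on: str                                   # e.g. promising
    practices_destroyed_if_universal: Set[str] = field(default_factory=set)
    ends_frustrated_if_universal: Set[str] = field(default_factory=set)

@dataclass
class Agent:
    desires: Set[str]  # the agent's own ends, e.g. "being helped when in need"

def contradiction_in_conception(m: Maxim) -> bool:
    # Perfect duty: the universalized world makes the intended act impossible.
    return m.practice_relied_on in m.practices_destroyed_if_universal

def contradiction_in_will(m: Maxim, agent: Agent) -> bool:
    # Imperfect duty: the universalized world frustrates ends the agent has.
    # If the agent has no such ends (no desire for help, no aversion to pain),
    # this test never fires -- which is exactly the worry about the robot.
    return bool(m.ends_frustrated_if_universal & agent.desires)

false_promise = Maxim(
    "make a false promise when convenient",
    practice_relied_on="promising",
    practices_destroyed_if_universal={"promising"},
)
never_help = Maxim(
    "never help others",
    practice_relied_on="independence",
    ends_frustrated_if_universal={"being helped when in need"},
)

human = Agent(desires={"being helped when in need"})
robot = Agent(desires=set())  # no relevant desires of its own

print(contradiction_in_conception(false_promise))  # True for any agent
print(contradiction_in_will(never_help, human))    # True: the human wills inconsistently
print(contradiction_in_will(never_help, robot))    # False: the robot sees no problem
```

On this way of carving things up, the perfect duties are mechanically checkable, but the imperfect duties only bind an agent that already wants something for itself, which is the gap the 'guilt parameters' proposal doesn't obviously fill.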
UPDATE: In the comments CHRISSYSNOW links to this very interesting article by Nick Bostrom (Director of the Oxford Future of Humanity Institute). Thanks!!!