Should we ban fully autonomous weapons?

Is it justified at all that a machine kills people, or makes decisions about life and death? In his guest article for uni:view, philosopher Mark Coeckelbergh argues that the use of fully autonomous weapons is ethically highly problematic.

Weapon systems are becoming more and more autonomous. This means that they can make decisions without human intervention. Sometimes human operators can still override the decision or are still "in the loop": they remain involved in the decision. But as the technology advances, we get systems that in principle do not need the human at all. Do we want this? Do we want to allow this?

Able to kill humans

A good example of such systems is unmanned aerial vehicles, sometimes called "drones," that have lethal capacities. That is, they are able to kill humans. Today the decision is usually taken by humans at a distance; the drones are remote-controlled. But soon it will be possible to leave out the human decision altogether.

For military organizations the advantage of using such systems is that they save the lives of military personnel: an unmanned drone means that a human being does not have to risk his or her life. Another argument in favour of fully autonomous systems is that in specific situations it might be necessary to react quickly – faster than humans can react – for instance in the case of missile defense systems.

However, there are many ethical issues with such technologies, which make it hard to support their use.

A global problem

Some problems are similar to those of other digital technologies. For example, many digital technologies that are used in daily life can also be used for military purposes. This is called the "dual use" problem. It makes it difficult to ban the development of the technology, since one would then have to ban technologies that could be useful in other domains, for instance in a household context or in the medical domain. Like other digital technologies, autonomous weapon systems are also a global problem. The technology can be used everywhere, and the context of military action is a global one. And like all digital technologies, drones can be hacked. What if the enemy uses them against you?

Isn’t it easier to kill?

Even when they are not autonomous, the use of "killer" drones is ethically problematic. An ethical and legal issue with drones is that they seem to blur the distinction between warfare and executions. If an individual can be targeted, is it still warfare or is it an execution? Moreover, employing drones seems to be cheaper and easier than using, say, an expensive fighter jet with a human in it, and hence may make it easier to start a war. And since drones enable killing at a distance, isn’t it easier to kill? There seem to be fewer psychological barriers to pulling the trigger. Killing then becomes like a videogame. The relation is also very asymmetrical: the drone operator sees the person on the ground but remains invulnerable, whereas the person on the ground (the target) does not necessarily know that he or she is being watched and is vulnerable to whatever the drone operator decides to do.

The system is unpredictable

However, there are difficult ethical issues that are specific to autonomous systems, including autonomous weapon systems. One is that current artificially intelligent systems are increasingly complex, and machine learning often means that the behavior of the system is unpredictable and not explicable to a human: humans do not know how the machine came to its decision. This is problematic when questions arise about the actions of the machine and the human. The machine, however intelligent and complex, cannot explain why it did something or why it did not do something. It does not reason like humans.

This takes us to the next problem: responsibility. When a machine takes over and the human is not involved, who is responsible? The owner of the machine (the military – but who exactly)? The user of the machine (e.g. the operator/pilot)? The company that developed it? The designer? Compare this with a self-driving car: who is responsible when it causes an accident?

Machines lack emotions

Another important ethical problem with fully autonomous weapon systems is whether it is justified at all that a machine kills people, or makes decisions about life and death. In an article on this issue I distinguish between two kinds of arguments:

One type of philosophical argument focuses on the capacities of the machines: it is argued that machines do not have the capacities necessary for making a proper moral judgment. For example, the machine has difficulty taking into account the specifics of the situation and cannot, for instance, reliably discriminate between combatants (military personnel, the enemy) and non-combatants (civilians). Machines also lack emotions, whereas humans use emotions in their judgment. Therefore, machines should not decide about killing; if killing is allowed at all, humans should do that job.

The other type of philosophical argument is based more on moral patiency. Here the argument against autonomous weapons is that they cannot suffer and cannot experience existential risk. For them nothing is at stake. They do not realize what it means to be in danger, what it means to have their life threatened. Therefore, we may conclude, machines should not be allowed to decide about the lives of humans. At most, they could defend us against other machines. But they should not be given a degree of autonomy that allows them to take human lives. Humans should always be in the loop, not outside it.

Ethically highly problematic

There are more problems with fully autonomous weapons, but the conclusion is clear to me: while they have some advantages, their use is ethically highly problematic in many ways. We should therefore not use them, and we should perhaps ban them. Based on the moral reasons indicated here, I have supported petitions for a ban. In any case, it is important that we regulate their use at the national and international level.

This does not mean that we should not use artificial intelligence, in the military and in other domains. But we should try to use these systems in a way that keeps the human in the loop. The solution to at least some of the problems indicated here is to ensure that humans make decisions about humans, perhaps aided by smart machines and working together with them, but not replaced by them.

Keep humans in the loop

This solution is also relevant to other, non-military domains where intelligent autonomous systems are used – in the medical sector, for instance. It would be very problematic if expert systems and other intelligent systems decided about our medical diagnosis and treatment. It is much less problematic if we make sure that human doctors are involved and use their human capacities of interpretation and judgment. Let machines do what they are good at, such as recognizing patterns in large data sets; let humans do what they can do best – and what they should do: make ethical decisions in complex situations.

Thus, there is no need to stop the development of artificial intelligence and robotics. But there is a need for us humans to intervene in their development and use, and to make sure that this development and use are ethical and responsible.

Mark Coeckelbergh is Professor of Philosophy at the Department of Philosophy of the University of Vienna. His research focuses on the philosophy of technology and media, in particular on understanding and evaluating new developments in robotics, artificial intelligence and (other) information and communication technologies.