Drones, Morality, and Vulnerability: Two Arguments Against Automated Killing
Abstract
This chapter articulates and discusses several arguments against the lethal use of unmanned aerial vehicles, often called drones. A distinction is made between targeted killing, killing at a distance, and automated killing, and this distinction is then used to map the arguments against lethal drones. After considering issues concerning the justification of war, the argument that targeted killing makes it easier to start a war, and the argument that killing at a distance is problematic, the chapter focuses on two arguments against automated killing, which are relevant to all kinds of “machine killing”. The first argument (from moral agency) questions whether machines can ever be moral agents and is based on differences between humans and machines in their capacities for moral decision-making. The second argument (from moral patiency), which has received far less attention in the literature on machine ethics and the ethics of drones, focuses on the question of whether machines can ever be “moral patients”. It is argued that there is a morally significant qualitative difference in vulnerability and way of being between drones and humans, and that because of this asymmetry, fully automated killing with little or no human involvement is not justified.