Autonomous Weapons Systems & Meaningful Human Control PDF
2020
Daniele Amoroso & Guglielmo Tamburrini
Summary
This 2020 academic review article discusses ongoing debates about the morality and legality of allowing robotic systems to engage in warfare, focusing on the concept of meaningful human control (MHC). The article presents various perspectives and highlights emerging concerns in these debates, such as adherence to the laws of war, accountability, and the dignity of individuals affected by the use of such weapons.
Full Transcript
Current Robotics Reports (2020) 1:187–194
https://doi.org/10.1007/s43154-020-00024-3
ROBOETHICS (G VERUGGIO, SECTION EDITOR)

Autonomous Weapons Systems and Meaningful Human Control: Ethical and Legal Issues

Daniele Amoroso (Department of Law, University of Cagliari, Cagliari, Italy) & Guglielmo Tamburrini (Department of Electrical Engineering and Information Technology, University of Naples Federico II, Naples, Italy)

Published online: 24 August 2020. © The Author(s) 2020. This article is part of the Topical Collection on Roboethics.

Abstract

Purpose of Review: To provide readers with a compact account of ongoing academic and diplomatic debates about autonomy in weapons systems, that is, about the moral and legal acceptability of letting a robotic system unleash destructive force in warfare and take attendant life-or-death decisions without any human intervention.

Recent Findings: A précis of current debates is provided, which focuses on the requirement that all weapons systems, including autonomous ones, should remain under meaningful human control (MHC) in order to be ethically acceptable and lawfully employed. Main approaches to MHC are described and briefly analyzed, distinguishing between uniform, differentiated, and prudential policies for human control on weapons systems.

Summary: The review highlights the crucial role played by the robotics research community in starting ethical and legal debates about autonomy in weapons systems. A concise overview is provided of the main concerns emerging in those early debates: respect for the laws of war, responsibility ascription issues, violation of the human dignity of potential victims of autonomous weapons systems, and increased risks for global stability. These various concerns have been jointly taken to support the idea that all weapons systems, including autonomous ones, should remain under meaningful human control (MHC). Main approaches to MHC are described and briefly analyzed. Finally, it is emphasized that the MHC idea looms large on shared control policies to adopt in other ethically and legally sensitive application domains for robotics and artificial intelligence.

Keywords: Autonomous weapons systems · Roboethics · International humanitarian law · Human-robot shared control · Meaningful human control

Introduction

Robotics has extensively contributed to modifying defense systems. Significant examples from the recent past include teleoperated robots detecting and defusing explosive devices (e.g., PackBot), in addition to unmanned vehicles for reconnaissance and combat missions, operating on the ground (e.g., Guardium or TALON) or in the air (e.g., the MQ-1 Predator). The deployment of these military robots has seldom been objected to on ethical or legal grounds, with the notable exception of extraterritorial targeted killings accomplished by means of unmanned aerial vehicles. These targeted killings have raised concerns about the infringement of other States' sovereignty and the overly permissive application of lethal force in counter-terrorism operations [5–7].

One should carefully note that the release of destructive force by any weaponized robot in the above list is firmly in the hands of human operators. Accordingly, ethical and legal controversies about these systems were confined to a handful of their specific uses, and their overall acceptability as weapons systems was never questioned. However, the entrance on the scene of autonomous weapons systems (AWS from now on) has profoundly altered this ethical and legal landscape. To count as autonomous, a weapons system must be able to select and engage targets without any human intervention after its activation [8, 9, 10].
Starting from this basic and quite inclusive condition, the Stockholm International Peace Research Institute (SIPRI) introduced additional distinctions between types of existing AWS: (i) air defense systems (e.g., Phalanx, MANTIS, Iron Dome, Goalkeeper); (ii) active protection systems, which shield armored vehicles by identifying and intercepting anti-tank missiles and rockets (e.g., LEDS-150 and Trophy); (iii) robotic sentries, like the Super aEgis II stationary robotic platform tasked with the surveillance of the demilitarized zone between North and South Korea; (iv) guided munitions, which autonomously identify and engage targets that are not in sight of the attacking aircraft (e.g., the Dual-Mode Brimstone); and (v) loitering munitions, such as the Harpy NG, which overfly an assigned area in search of targets to dive-bomb and destroy.

This classification stands in need of continual expansion on account of ongoing military research projects on unmanned ground, aerial, and marine vehicles that are capable of autonomously performing targeting decisions. Notably, research work based on swarm intelligence technologies is paving the way to swarms of small-size and low-cost unmanned weapons systems. These are expected to overwhelm enemy defenses by their numbers and may additionally perform targeting functions autonomously [21–24].

The technological realities and prospects of AWS raise a major ethical and legal issue: Is it permissible to let a robotic system unleash destructive force and take attendant life-or-death decisions without any human intervention? This issue prompted intense and ongoing debates, at both academic and diplomatic levels, on the legality of AWS under international law. An idea that has rapidly gained ground across the opinion spectrum in this debate is that all weapons systems, including autonomous ones, should remain under meaningful human control (MHC) in order to be ethically acceptable and lawfully employed (see the reports by the UK-based NGO Article 36 [26, 27]). Nevertheless, the precise normative content of this requirement is still far from being spelled out and agreed upon.

This review provides a general survey of the AWS debate, focusing on the MHC turning point and its ethical and legal underpinnings. After recalling the initial stages of the debate, a schematic account is provided of the chief ethical and legal concerns about autonomy in weapons systems. Then, the main proposals regarding the MHC content are introduced and analyzed, including our own proposal of a "differentiated and prudential" human control policy on AWS. Finally, it is pointed out how our proposal may help overcome the hurdles that are currently preventing the international community from adopting a legal regulation on the matter.

Highlights from the AWS Ethical and Legal Debate

Members of the robotics community, notably Ronald C. Arkin and Noel Sharkey, were chief protagonists of early discussions about the ethical and legal acceptability of AWS. Arkin emphasized some ethical pros of autonomy in weapons systems. He was concerned about the poor record of human compliance with international norms governing the conduct of belligerent parties in warfare (Laws of War or international humanitarian law (IHL)). In his view, this state of affairs ultimately depends on human self-preservation needs and emotional reactions in the battlefield (fear, anger, frustration, and so on) that a robot is immune to. Arkin's own research on military applications of robotics was inspired by a vision of "ethically restrained" autonomous weapons systems that are capable of abiding "by the internationally agreed upon Laws of War" better than human warfighters. He presented this vision and its ethical motivations in an invited talk at the First International Symposium on Roboethics, organized by Scuola di Robotica, chaired by Gianmarco Veruggio, and held in 2004 at Villa Alfred Nobel in Sanremo, Italy. Arkin later described this meeting as "a watershed event in robot ethics" [28, 29, 30].

In contrast with Arkin's views, Sharkey emphasized various ethical cons of autonomy in weapons systems. He argued that foreseeable technological developments of robotics and artificial intelligence (AI) offer no support for the idea of autonomous robots ensuring a better-than-human application of the IHL principles. He emphasized that interactions among AWS in unstructured warfare scenarios would be hardly predictable and fast enough to bring the pace of war beyond human control. And he additionally warned that AWS threaten peace at both regional and global levels by making wars easier to wage [31–34]. Sharkey co-founded the International Committee for Robot Arms Control (ICRAC) in 2009 and played a central role in creating the conditions for launching the Campaign to Stop Killer Robots. This initiative is driven by an international coalition of non-governmental organizations (NGOs), formed in 2012 with the goal of "preemptively ban[ning] lethal robot weapons that would be able to select and attack targets without any human intervention."

A similar call against "offensive autonomous weapons beyond meaningful human control" was made in the "Open Letter from AI & Robotics Researchers," released in 2015 by the Future of Life Institute and signed by about 4500 AI/robotics researchers and more than 26,000 other persons, including many prominent scientists and entrepreneurs. Quite remarkably, the Open Letter urges AI and robotics researchers to follow in the footsteps of those scientists working in biology and chemistry, who actively contributed to the initiatives that eventually led to international treaties prohibiting biological and chemical weapons.

Worldwide pressures from civil society prompted States to initiate discussion of normative frameworks to govern the design, development, deployment, and use of AWS. Diplomatic dialogs on this topic have been conducted since 2014 at the United Nations in Geneva, within the institutional framework of the Convention on Certain Conventional Weapons (CCW). The CCW's main purpose is to restrict and possibly ban the use of weapons that are deemed to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately. Informal Meetings of Experts on lethal autonomous weapons systems were held on an annual basis at the CCW in Geneva from 2014 to 2016. Subsequently, the CCW created a Group of Governmental Experts (GGE) on lethal autonomous weapons systems (LAWS), which remains (as of 2020) the main institutional forum where the issue of autonomy in weapons systems is annually debated at an international level. Various members of the robotics research community take part in the GGE's meetings. So far, the main outcome of the GGE's work is the adoption by consensus of a non-binding instrument, that is, the 11 Guiding Principles on LAWS, which include broad recommendations on human responsibility (Principles (b) and (d)) and human-machine interaction (Principle (c)).

A clear outline of the main ethical and legal concerns raised by AWS is already found in a 2013 report, significantly devoted to "lethal autonomous robotics and the protection of life," by the UN Special Rapporteur on extrajudicial, summary, or arbitrary executions, Christof Heyns [38]. These concerns are profitably grouped under four headings: (i) compliance with IHL, (ii) responsibility ascription problems, (iii) violations of human dignity, and (iv) increased risk for peace and international stability. Let us briefly expand on each of them, by reference to the relevant sections of Heyns' report.

(i) Compliance with IHL would require capabilities that are presently possessed by humans only and that no robot is likely to possess in the near future, i.e., the capability to achieve situational awareness in unstructured warfare scenarios and to formulate appropriate judgments there (paras. 63–74) (in the literature, see [39–41] for a critique of this argument and [42–44] for a convincing rejoinder).

(ii) Autonomy in weapons systems would hinder responsibility ascriptions in case of wrongdoings, by removing human operators from the decision-making process (paras. 75–81) (for further discussion, see [45–47]).

(iii) The deployment of lethal AWS would be an affront to human dignity, which dictates that decisions entailing human life deprivation should be reserved to humans (paras. 89–97) (see [48–50] for more in-depth analysis, as well as for a critical perspective).

(iv) Autonomy in weapons systems would threaten international peace and stability in special ways, by making wars easier to wage on account of the reduced numbers of soldiers involved, by laying the conditions for unpredictable interactions between AWS and their harmful outcomes, and by accelerating the pace of war beyond human reactive abilities (paras. 57–62) (this point has been further elaborated in ).

These sources of concern jointly make the case for claiming that meaningful human control (MHC) over weapons systems should be retained exactly in the way of their critical target selection and engagement functions. Accordingly, the notion of MHC enters the debate on AWS as an ethically and legally motivated constraint on the use of any weapons system, including autonomous ones. The issue of human-robot shared control in warfare is thereby addressed from a distinctive humanitarian perspective, insofar as autonomous targeting may impinge, and deeply so, upon the interests of persons and groups of persons that are worthy of protection from ethical or legal standpoints.

But what does MHC more precisely entail? What is normatively demanded to make human control over weapons systems truly "meaningful"? The current debate about AWS, which we now turn to consider, is chiefly aimed at providing an answer to these questions.

Uniform Policies for Meaningful Human Control

The foregoing ethical and legal reasons go a long way towards shaping the content of MHC, by pinpointing general functions that should be prescriptively assigned to humans in shared control regimes and by providing general criteria to distinguish perfunctory from truly meaningful human control. More specifically, the ethical and legal reasons for MHC suggest a threefold role for human control on weapons systems to be "meaningful." First, the obligation to comply with IHL entails that human control must play the role of a fail-safe actor, contributing to prevent a malfunctioning of the weapon from resulting in a direct attack against the civilian population or in excessive collateral damage [53]. Second, in order to avoid accountability gaps, human control is required to function as an accountability attractor, i.e., to secure the legal conditions for responsibility ascription in case a weapon follows a course of action that is in breach of international law. Third and finally, from the principle of respect for human dignity, it follows that human control should operate as a moral agency enactor, by ensuring that decisions affecting the life, physical integrity, and property of people (including combatants) involved in armed conflicts are not taken by non-moral artificial agents.

But how are human-weapon partnerships to be shaped more precisely on the basis of these broad constraints? Several attempts to answer this question have been made by parties involved in the AWS ethical and legal debate. The answers that we now turn to examine outline uniform human control policies, whereby one size of human control is claimed to fit all AWS and each one of their possible uses. These are the "boxed autonomy," "denied autonomy," and "supervised autonomy" control policies.

The boxed autonomy policy assigns to humans the role of constraining the autonomy of a weapons system within an operational box, constituted by "predefined [target] parameters, a fixed time period and geographical borders". Accordingly, the weapons system would be enabled to autonomously perform the critical functions of selecting and engaging targets, but only within the boundaries set forth by the human operator or the commander at the planning and activation stages [56–58].

The boxed autonomy policy seems to befit a variety of deliberate targeting situations, which involve military objectives that human operators know in advance and can map with high confidence within a defined operational theater. It seems, however, unsuitable to govern a variety of dynamic targeting situations. These require one to make changes on the fly to planned objectives and to pursue targets of opportunity. The latter are unknown to exist in advance (unanticipated targets) or else are not localizable in advance with sufficient precision in the operational area (unplanned targets). Under these conditions, boxed autonomy appears to be problematic from a normative perspective, insofar as issues of distinction and proportionality that one cannot foresee at the activation stage may arise during mission execution.

By the same token, a boxed autonomy policy may not even suffice to govern the deliberate targeting of military objectives placed in unstructured warfare scenarios. To illustrate, consider the loitering munition Harpy NG, endowed with the capability of patrolling for several hours a predefined box in search of enemy targets satisfying given parameters. The conditions licensing the activation of this loitering munition may become superseded if civilians enter the boxed area, erratic changes occur, or surprise-seeking intentional behaviors are enacted. Under these various circumstances, there is "fail-safe" work for human control to do at the mission execution stage too.

In sharp contrast with the boxed autonomy policy, the denied autonomy policy rules out any autonomy whatsoever for weapons systems in the critical targeting functions and therefore embodies a most restrictive interpretation of MHC. Denied autonomy undoubtedly fulfills the threefold normative role for human control as fail-safe actor, accountability attractor, and moral agency enactor. However, this policy has been sensibly criticized for setting too high a threshold for machine autonomy, in ways that are divorced from "the reality of warfare and the weapons that have long been considered acceptable in conducting it". To illustrate this criticism, consider air defensive systems, which autonomously detect, track, and target incoming projectiles. These systems have been aptly classified as SARMO weapons, where SARMO stands for "Sense and React to Military Objects." SARMO systems are hardly problematic from ethical and legal perspectives, in that "they are programmed to automatically perform a small set of defined actions repeatedly. They are used in highly structured and predictable environments that are relatively uncluttered with a very low risk of civilian harm. They are fixed base, even on Naval vessels, and have constant vigilant human evaluation and monitoring for rapid shutdown".

SARMO systems expose the overly restrictive character of a denied autonomy policy. Thus, one wonders whether milder forms of human control might be equally able to strip the autonomy of weapons systems of its ethically and legally troubling implications. This is indeed the aim of the supervised autonomy policy, which occupies a middle ground between boxed and denied autonomy, insofar as it requires humans to be on the loop of AWS missions.

As defined in the US DoD Directive 3000.09 on "Autonomy in Weapons Systems," human-supervised AWS are designed "to provide human operators with the ability to intervene and terminate engagements, including in the event of a weapon system failure, before unacceptable levels of damage occur" (p. 13). Notably, human-supervised AWS may be used for defending manned installations and platforms from "attempted time-critical or saturation attacks," provided that they do not select "humans as targets" (p. 3, para. 4(c)(2); see, e.g., the Phalanx Close-In Weapons System in use on US surface combat ships). While undoubtedly effective for these and other warfare scenarios, supervised autonomy is not the silver bullet for every ethical and legal concern raised by AWS. To begin with, by keeping humans on the loop, one would not prevent faster and faster offensive AWS from being developed, eventually reducing the role of human operators to a perfunctory supervision of decisions taken at superhuman speed while leaving the illusion that the human control requirement is still complied with. Moreover, automation bias (the human propensity to overtrust machine decision-making processes and outcomes) is demonstrably exacerbated by a distribution of control privileges that entrusts humans solely with the power of overriding decisions autonomously taken by the machines.

To sum up, each of the boxed, denied, and supervised autonomy policies provides useful hints towards a normatively adequate human-machine shared control policy for military target selection and engagement. However, the complementary defects of these uniform control policies suggest the implausibility of solving the MHC problem with one formula, to be applied to all kinds of weapons systems and to each one of their possible uses. This point was consistently made by the US delegation at GGE meetings in Geneva: "there is not a fixed, one-size-fits-all level of human judgment that should be applied to every context".

Differentiated Policies for Meaningful Human Control

Other approaches to MHC aim to reconcile the need for differentiated policies with the above ethical and legal constraints on human control. Differentiated policies modulate human control along various autonomy levels for weapons systems. Autonomy levels have been introduced in connection with, say, automated driving, surgical robots, and unmanned commercial ships to discuss technological roadmaps or ethical and legal issues [66–68]. A taxonomy of increasing autonomy levels concerning the AWS critical target selection and engagement functions was proposed by Noel Sharkey (and is only slightly modified here, with regard to levels 4 and 5) [69].

L1. A human engages with and selects targets and initiates any attack.
L2. A program suggests alternative targets, and a human chooses which to attack.
L3. A program selects targets, and a human must approve before the attack.
L4. A program selects and engages targets but is supervised by a human who retains the power to override its choices and abort the attack.
L5. A program selects targets and initiates attack on the basis of the mission goals as defined at the planning/activation stage, without further human involvement.

The main uniform control policies, including those examined in the previous section, are readily mapped onto one of these levels.

L5 basically corresponds to the boxed autonomy policy, whereby MHC is exerted by human commanders at the planning stage of the targeting process only. As noted above, boxed autonomy does not constitute a sufficiently comprehensive and normatively acceptable form of human-machine shared control policy.

L4 basically corresponds to the supervised autonomy policy. The uniform adoption of this level of human control must also be advised against in the light of automation bias risks and the increasing marginalization of human oversight. In certain operational conditions, however, it may constitute a normatively acceptable level of human control.

L3 has been seldom discussed in the MHC debate. At this level, control privileges on critical targeting functions are equally distributed between the weapons system (target selection) and the human operator (target engagement). To the extent that the human deliberative role is limited to approving or rejecting targeting decisions suggested by the machine, this level of human control does not provide adequate bulwarks against the risk of automation bias. In the same way as L4, therefore, it should not be adopted as a general policy.

L1 and L2 correspond to shared control policies where the weapons system's autonomy is either totally absent (L1) or limited to the role of adviser and decision support system for human deliberation (L2). The adoption of these pervasive forms of human control must also be advised against, insofar as some weapons (notably SARMO systems) have long been considered acceptable in warfare operations.

In the light of these difficulties, one might be tempted to conclude that the search for a comprehensive and normatively binding MHC policy should be given up and that the best one can hope for is the exchange of good practices between States about AWS control, in addition to the proper application of national mechanisms to review the legality of weapons [71–73]. But alternatives are possible, which salvage the idea of a comprehensive MHC policy without neglecting the need for differentiated levels of AWS autonomy in special cases. Indeed, the authors of this review have advanced the proposal of a comprehensive MHC policy that is jointly differentiated and prudential [74, 75].

The prudential character of this policy is embodied in the following default rule: low levels of autonomy L1–L2 should be exerted on all weapons systems and uses thereof, unless the latter are included in a list of exceptions agreed on by the international community of States. The prudential imposition by default of L1 and L2 is aimed at minimizing the risk of breaches of IHL, accountability gaps, or affronts to human dignity, should international consensus be lacking on whether, in relation to certain classes of weapons systems or uses thereof, higher levels of machine autonomy are equally able to grant the fulfillment of genuinely meaningful human control. The differentiated character of this policy is embodied in the possibility of introducing internationally agreed exceptions to the default rule. However, these exceptions should come with an indication of what level is required to ensure that the threefold role of MHC (fail-safe actor, accountability attractor, moral agency enactor) is adequately performed.

In the light of the above analysis, this should be done by taking into account at least the following observations:

1. The L4 human supervision and veto level might be deemed an acceptable level of control only in the case of anti-materiel AWS with exclusively defensive functions (e.g., Phalanx or Iron Dome). In this case, ensuring that human operators have full control over every single targeting decision would pose a serious security risk, which makes the application of L1, L2, and L3 problematic from both military and humanitarian perspectives. The same applies to active protection systems, like Trophy, provided that their use in supervised-autonomy mode is excluded in operational environments involving a high concentration of civilians.

2. L1 and L2 could also be impracticable in relation to certain missions because communication constraints would allow only limited bandwidth. In this case, military considerations should be balanced against humanitarian ones. One might allow for less bandwidth-heavy (L3) control in two cases: deliberate targeting and dynamic targeting in fully structured scenarios, e.g., on the high seas. In both hypotheses, indeed, the core targeting decisions have actually been taken by humans at the planning/activation stage. Unlike L4, however, L3 ensures that there is a human on the attacking end who can verify, in order to deny or grant approval, whether there have been changes in the battlespace which may affect the lawfulness of the operation. Looking at existing technologies, L3 might be applied to sentry robots deployed in a fully structured environment, like the South Korean Super aEgis II.

3. The L5 boxed autonomy level should be considered incompatible with the MHC requirement, unless operational space and time frames are so strictly circumscribed as to make targeting decisions entirely and reliably traceable to human operators.
Concluding Remarks

Recent advances in autonomous military robotics have raised unprecedented ethical and legal issues. Regrettably, diplomatic discussions at the GGE in Geneva have not only so far fallen short of working out a veritable legal regime on meaningful human control over AWS but, what is worse, are currently facing a stalemate, mainly determined by the opposition of major military powers, including the US and the Russian Federation, to the adoption of any kind of international regulation on the matter.

Our proposal of relinquishing the quest for a one-size-fits-all solution to the MHC issue in favor of a suitably differentiated approach may help sidestep current stumbling blocks. Diplomatic and political discontent about an MHC requirement that is overly restrictive with respect to the limited autonomy of some weapons systems might indeed be mitigated by recognizing the possibility of negotiating exceptions to L1–L2 human control, by identifying weapons systems and contexts of use where milder forms of human control will suffice to ensure the fulfillment of the fail-safe, accountability, and moral agency properties whose preservation generally underpins the normative concerns about weapons' autonomy in critical targeting functions.

In a broader perspective, a differentiated approach to MHC may be of some avail as regards the general issue of human control over intelligent machines operating in ethically and legally sensitive domains, insofar as the MHC language has recently been used about autonomous vehicles [76, 77] and surgical robots.

Funding Information: Open access funding provided by Università degli Studi di Cagliari within the CRUI-CARE Agreement.

Compliance with Ethical Standards

Conflict of Interest: The authors declare that they have no conflict of interest.

Human and Animal Rights and Informed Consent: This article does not contain any studies with human or animal subjects performed by any of the authors.

Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

References

Papers of particular interest, published recently, have been highlighted as: Of importance / Of major importance.

1. Rudakevych P, Ciholas M. PackBot EOD firing system. SPIE Proceedings, Volume 5804, Unmanned Ground Vehicle Technology VII. 2005.
2. Egozi A. Robotics: Israel Aerospace Industries (IAI) ground robotic systems. Asia-Pac Defence Rep. 2016;42(7):46–7.
3. Wells P, Deguire D. TALON: a universal unmanned ground vehicle platform, enabling the mission to be the focus. SPIE Proceedings, Volume 5804, Unmanned Ground Vehicle Technology VII. 2005.
4. Visnevski NA, Castillo-Effen M. A UAS capability description framework: reactive, adaptive, and cognitive capabilities in robotics. 2009 IEEE Aerospace Conference, Big Sky, Montana. 2009.
5. Alston P. Report of the United Nations Special Rapporteur on extrajudicial, summary or arbitrary executions. Addendum. Study on targeted killings. UN Doc. A/HRC/14/24/Add.6. 28 May 2010.
6. O'Connell ME. The choice of law against terrorism. J Natl Sec Law Policy. 2009;4:343–68.
7. Melzer N. Human rights implications of the usage of drones and unmanned robots in warfare. Report requested by the European Parliament's Subcommittee on Human Rights. EXPO/B/DROI/2012/12. May 2013.
8. US Department of Defense. Directive 3000.09 "Autonomy in Weapons Systems". 21 November 2012. The Directive crucially contributed to the AWS debate by introducing a workable definition of AWS and by establishing the principle that "appropriate levels of human judgment over the use of force" should always be ensured over weapons systems.
9. International Committee of the Red Cross (ICRC). Views on autonomous weapon system. Paper submitted to the Informal Meeting of Experts on lethal autonomous weapons systems of the Convention on Certain Conventional Weapons (CCW), Geneva. 11 April 2016.
10. Campaign to Stop Killer Robots. Urgent action needed to ban fully autonomous weapons. Press release. 23 April 2013.
11. Boulanin V, Verbruggen N. Mapping the development of autonomy in weapon systems. Solna: SIPRI Report; 2017.
12. Stoner RH. R2D2 with attitude: the story of the Phalanx Close-In weapons. 2009. Available at: www.navweaps.com/index_tech/tech-103.htm.
13. NBS MANTIS Air Defence Protection System. Available at: https://www.army-technology.com/projects/mantis/.
14. Landau EB, Bermant A. Iron Dome protection: missile defense in Israel's security concept. In: Kurz A, Brom S, editors. The lessons of Operation Protective Edge. Tel Aviv: Institute for National Security Studies; 2014. p. 37–42.
15. 30 mm (1.2″) Goalkeeper SGE-30. Available at: http://www.navweaps.com/Weapons/WNNeth_30mm_Goalkeeper.php.
16. SAAB Group. LEDS full spectrum active protection for land vehicles. Available at: https://saab.com/globalassets/commercial/land/force-protection/active-protection/leds/leds-product-sheet.pdf.
17. Trophy Active Protection System. 10 April 2007. Available at: https://defense-update.com/20070410_trophy-2.html.
18. Parkin S. Killer robots: the soldiers that never sleep. BBC Future. 16 July 2015. Available at: www.bbc.com/future/story/20150715-killer-robots-the-soldiers-that-never-sleep.
19. UK Royal Air Force. Aircraft & weapons. 2007. p. 87.
20. Gettinger D, Michel AH. Loitering munitions. Center for the Study of the Drone; 2017.
21. Scharre P. Robotics on the battlefield part II. The coming swarm.
32. Sharkey NE. Cassandra or false prophet of doom: AI robots and war. IEEE Intell Syst. 2008;23(4):14–7.
33. Sharkey NE. Saying 'no!' to lethal autonomous targeting. J Military Ethics. 2010;9(4):369–83.
34. Sharkey NE. The evitability of autonomous robot warfare. Int Rev Red Cross. 2012;94:787–9. This article provides the most comprehensive outline, from a roboticist's perspective, of the ethical and legal concerns raised by autonomy in weapons systems.
35. Autonomous weapons: an open letter from AI & robotics researchers. 28 July 2015. Available at: https://futureoflife.org/open-letter-autonomous-weapons/?cn-reloaded=1.
36. Human Rights Watch. Stopping killer robots. Country positions on banning fully autonomous weapons and retaining human control. August 2020.
37. Final report of the 2019 Meeting of the High Contracting Parties to the CCW. UN Doc. CCW/MSP/2019/CRP2/Rev1, Annex III. 15 November 2019.
38. Heyns C. Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions. UN Doc. A/HRC/23/47. 9 April 2013. Heyns' report is a milestone in the AWS debate: it raised awareness within the United Nations as to the ethical and legal implications of autonomy in weapons systems, by illustrating, concisely but also comprehensively, the main normative issues at stake.
39. Schmitt MN, Thurnher JS. "Out of the loop": autonomous weapon systems and the law of armed conflict. Harvard Natl Secur J. 2013;4:231–81.
40. Sassòli M. Autonomous weapons and international humanitarian
law: advantages, open technical questions and legal issues to be Center for a New American Security; October 2014. clarified. Int Law Stud. 2014;90:308–40. 22. Brehm M, De Courcy Wheele A. Swarms. Article36 discussion 41. Anderson K, Waxman M. Debating autonomous weapon systems, paper for the Convention on Certain Conventional Weapons their ethics, and their regulation under international law. In: (CCW), Geneva. March 2019. Brownsword R, Scotford E, Yeung F, editors. The Oxford hand- 23. Verbruggen M. The question of swarms control: challenges to en- book of law, regulation, and technology. New York: Oxford suring human control over military swarms. Non-Proliferation and University Press; 2017. p. 1097–117. Disarmament Paper No. 65; December 2019. 42. Krupyi T. Of souls, spirits and ghosts: transposing the application of 24. Ekelhof MAC, Persi Paoli G. Swarm robotics: technical and oper- the rules of targeting to lethal autonomous robots. Melbourne J Int ational overview of the next generation of autonomous systems. Law. 2015;16(1):145–202. United Nations Institute for Disarmament Research (UNIDIR); 43. Brehm M. Defending the boundary: constraints and requirements 2020. on the use of autonomous weapon systems under international hu- 25. Amoroso D. Autonomous Weapons Systems and International manitarian and human rights law. Geneva Academy of International Law. A study on human-machine interactions in ethically and le- Humanitarian Law and Human Rights, Academy briefing No. 9: gally sensitive domains. Naples/Baden-Baden: ESI/Nomos; 2020. May 2017. 26. Roff HM, Moyes R. Meaningful Human Control, Artificial 44. Geiss R, Lahmann H. Autonomous weapons systems: a paradigm Intelligence and Autonomous Weapons. Article36 briefing paper shift for the law of armed conflict? In: Ohlin JD, editor. Research prepared for the CCW informal meeting of experts on lethal auton- handbook on remote warfare. Cheltenham: Edward Elgar omous weapons systems. April 2016. Publishing; 2017. p. 
371–404. 27. Moyes R. Key elements of meaningful human control. Article 36 45. Sparrow R. Killer robots. J Appl Philos. 2007;24(1):62–77 Background paper to comments prepared for the CCW Informal Interested readers find here a first, ground-breaking illustra- Meeting of Experts on Lethal Autonomous Weapons Systems. tion of the risk that increasing autonomy in weapons system April 2016. creates “accountability gaps” in case of unlawful targeting 28. Arkin RC. Governing Lethal Behavior in Autonomous Robots. decisions. Boca Raton: CRC Press; 2009. Arkin’s book represents the 46. Amoroso D, Giordano B. Who is to blame for autonomous first—and, so far, the best articulated—attempt by a roboticist weapons systems’ misdoings? In: Lazzerini N, Carpanelli E, edi- to argue for the desirability of AWS from an ethical and legal tors. Use and misuse of new technologies. Contemporary perspective. It deeply influenced the subsequent debate by pro- Challenges in International and European Law. The Hague: viding major military powers with a convenient argument to Springer; 2019. p. 211–32. assert the legality of AWS. 47. McDougall C. Autonomous weapon systems and accountability: 29. Arkin RC. Lethal autonomous systems and the plight of the non- putting the cart before the horse. Melbourne J Int Law. combatant. AISB Quarterly. 2013;137:1–9. 2019;20(1):58–87. 30. Arkin RC. A roboticist’s perspective on lethal autonomous weapon 48. Asaro P. On banning autonomous weapon systems: human rights, systems. In: Perspectives on lethal autonomous weapon systems. automation, and the dehumanization of lethal decision-making. Int UNODA Occasional Papers No 30; November 2017: p. 35–47. Rev Red Cross. 2012;94:687–709. 31. Sharkey NE. Automated killers and the computing profession. 49. Leveringhaus A. Ethics and autonomous weapons. London: Computer. 2007;40(11):122–3. Palgrave; 2016. 194 Curr Robot Rep (2020) 1:187–194 50. Heyns C. 
Autonomous weapons in armed conflict and the right to a autonomous weapons systems. Working paper submitted to the dignified life: an African perspective. South African J Hum Rights. Group of Governmental Experts on lethal autonomous weapons 2017;33(1):46–71. of the CCW, Geneva. UN Doc. CCW/GGE.2/2018/WP.4. 28 51. Birnbacher D. Are Autonomous weapons systems a threat to human August 2018. dignity? In: Bhuta N, et al., editors. Autonomous weapons systems. 66. SAE International. Taxonomy and definitions for terms related to Law, ethics, policy. Cambridge: Cambridge University Press; 2016. driving automation systems for on-road motor vehicles. 15 p. 105–21. June 2018. Available at: https://www.sae.org/standards/content/ 52. Altmann J, Sauer F. Autonomous weapon systems and strategic j3016_201806/. stability. Survival. 2017;59(5):117–42. 67. Yang G-Z et al. Medical robotics – regulatory, ethical, and legal 53. Scharre P. Army of none. Autonomous weapons and the future of considerations for increasing levels of autonomy. Sci Robot. 2017: war. New York/London: W.W. Norton; 2018. This volume repre- 2(4). sents one of the few book-size studies taking stock of the discus- 68. DNV GL. Autonomous and remotely operated ships. September sion and attempting to delineate workable solutions to the prob- 2018. lems raised by autonomy in weapons systems. As an added 69. Sharkey NE. Staying the Loop: human supervisory control of value, it has been written by a key-actor in the AWS debate, weapons. In: Bhuta N, et al., editors. Autonomous weapons sys- since Scharre coordinated the Drafting process that eventually tems. Law, ethics, policy. Cambridge: Cambridge University Press; brought about the US DoD Directive 3000.09. 2016. p. 23–38. Sharkey’s chapter offers a valuable comparison 54. ICRC. Ethics and autonomous weapon systems: an ethical basis for of strengths and weaknesses of human and machine decision- human control? 
Working paper submitted to the Group of making processes, paying particular attention to implications Governmental Experts on lethal autonomous weapons of the for autonomous robotics in military applications. Also, by pro- CCW, Geneva. UN Doc. CCW/GGE.1/2018/WP.5. 29 viding a clear taxonomy of possible forms of human-weapon March 2018. partnership, it laid the groundwork for subsequent research on 55. International Panel on the Regulation of Autonomous Weapons MHC. (iPRAW). Focus on Human Control. August 2019. 70. Cummings ML. Automation and accountability in decision support 56. Dutch Advisory Council on International Affairs (AIV) and system interface design. J Technol Stud. 2006:23–31. Advisory Committee on Issues of Public International Law 71. Switzerland. A “compliance-based” approach to autonomous (CAVV). Report on Autonomous weapon systems: the need for weapon systems. Working paper submitted to the Group of meaningful human control. No. 97 AIV / No. 26 CAVV. 2015. Governmental Experts on lethal autonomous weapons of the 57. Roorda M. NATO’s targeting process: ensuring human control over CCW, Geneva. UN Doc. CCW/GGE.1/2017/WP.9. 10 November and lawful use of ‘autonomous’ weapons. In: Williams A, Scharre 2017. P, editors. Autonomous systems: issues for defence policymakers. 72. United States. Statement for the General Exchange of Views. Group The Hague: NATO Communications and Information Agency; of Governmental Experts on lethal autonomous weapons of the 2015. p. 152–68. CCW, Geneva. 9 April 2018. 58. Ekelhof MAC. Moving beyond semantics on autonomous weapons 73. Israel. Statement on Agenda Item 6(d). Group of Governmental systems: meaningful human control in operation. Glob Policy. Experts on lethal autonomous weapons of the CCW, Geneva. 29 2019;10(3):343–8. August 2018. 59. Akerson D. The illegality of offensive lethal autonomy. In: Saxon 74. Amoroso D, Tamburrini G. What makes human control over weap- D, editor. 
International humanitarian law and the changing technol- on systems “meaningful”? ICRAC working paper #4. August 2019. ogy of war. Leiden/Boston: Brill/Nijhoff; 2013. p. 65–98. 75. Amoroso D, Tamburrini G. Filling the empty box: a principled 60. Chengeta T. Defining the emerging notion of ‘meaningful human approach to meaningful human control over weapons systems. control’ in autonomous weapon systems. New York J Int Law ESIL Reflections. 2019:8(5). Politics. 2017;49:833–90. 61. Horowitz MC, Scharre P. Meaningful human control in weapon 76. Santoni de Sio F, Van Den Hoven J. Meaningful human control systems: a primer, CNAS Working Paper, March 2015. over autonomous systems: a philosophical account. Front Robotic 62. International Committee for Robot Arms Control. Statement on AI. 2018. technical issues. Informal meeting of experts on lethal autonomous 77. Mecacci G, Santoni de Sio F. Meaningful human control as reason- weapons, Geneva. 14 May 2014. responsiveness: the case of dual-mode vehicles. Ethics Inf Technol. 63. Schwarz E. The (Im)possibility of meaningful human control for 2020;22:103–15. lethal autonomous weapon systems. Humanitarian Law & Policy 78. Ficuciello F, et al. Autonomy in surgical robots and its meaningful (Blog of the ICRC). 29 August 2018. human control. Paladyn J Behav Robot. 2019;10(1):30–43. 64. Skitka LJ, Mosier KL, Burdick M. Does automation bias decision- making? Int J Hum-Comput Stud. 1999;51(5):991–1006. 65. United States. Human-machine interaction in the development, de- Publisher’s Note Springer Nature remains neutral with regard to jurisdic- ployment and use of emerging technologies in the area of lethal tional claims in published maps and institutional affiliations.