The Importance of Flexibility in Regulating Lethal Autonomous Weapon Systems

Written by Tarisa Yasin

 

__________

The debate on the development and use of lethal autonomous weapon systems (LAWS), which will also be referred to as weapon systems, has been going on for several years now. A LAWS, as defined by the International Committee of the Red Cross, is ‘a weapon that can select (i.e. search for, detect, identify, track or select) and attack (i.e. use force against, neutralize, damage or destroy)’ targets with little to no human intervention. Therefore, a key issue in the debate is whether it is necessary for human control to be retained over the use of a LAWS to ensure that it complies with existing international humanitarian law (IHL) principles such as the principles of distinction and proportionality as well as the prohibition on indiscriminate attacks.

The exploration of this key issue has also led to discussions around whether new rules within IHL need to be drafted in order to effectively regulate LAWS. However, if further regulations governing the development and use of LAWS are to be drafted, there are important factors such as levels of autonomy in weapon systems, the stages of a weapon system’s lifecycle, and the various forms of human control in each stage that need to be considered. In light of these factors, there is a need for regulations on LAWS to be flexible.

It has been argued that there is an implied requirement for ‘meaningful human judgment’ in decisions to use lethal force. Because increased automation can, over time, diminish the control human operators have over the use of force, this requirement needs to be made explicit. Future regulations on LAWS must therefore address their novel aspect: the ability to independently identify, select, and attack targets with little to no human intervention. It is this novelty, rather than how technically advanced a weapon system is, that raises the question of whether an appropriate level of human control is retained, and it should be the focus of future regulations. That being said, when considering the importance of flexibility in regulating LAWS, existing lethal weapon systems with autonomous functions, such as close-in weapon systems on naval ships, should also be taken into account. This will ensure that such regulations are effective in addressing current and future lethal weapon systems with autonomous functions, even where they are not ‘fully’ autonomous.

In examining the potential regulation of weapon systems, the important factors are the levels of autonomy that different weapon systems possess and the various stages of a weapon system’s lifecycle (research and development, deployment, and operation), as different forms of human control apply at each stage. Figure one depicts an automation scale illustrating how different weapon systems possess various levels of autonomy, with no definitive line between what is considered a semi-autonomous weapon system, a supervised weapon system, and a fully autonomous weapon system. Figure two takes the three overarching components of human control described by Horowitz and Scharre and places them in the context of the stages of a weapon system’s lifecycle. Those overarching components are that:

  1. human operators are making informed, conscious decisions about the use of weapons;
  2. human operators have sufficient information to ensure the lawfulness of the action they are taking, given what they know about the target, the weapon, and the context for action; and
  3. the weapon is designed and tested, and human operators are properly trained, to ensure effective control over the use of the weapon.

The table also considers other elements of human control and places them in the context of the lifecycle.

The Flexible Scale

Figure one provides a rough visual representation of the flexible scale of automation, reflecting the various degrees of automation that exist in current weapon systems. As one moves up and down the scale, the degree of automation increases or decreases, and the form of human control over the weapon system varies accordingly. The three points identified in figure one each represent a level of automation; they are highlighted only to describe certain points along the scale, not because they are the only three points on it, as ‘there are no discrete levels of machine autonomy in reality’. Moreover, both the table of general requirements, discussed further below, and the automation scale for weapon systems should be considered simultaneously when developing regulations for LAWS, as demonstrated by figure two.

Figure One: Automation Scale for Weapon Systems

Contextual information for Figure Two

The table in figure two focuses on three key stages of a weapon system’s lifecycle and the three highlighted points on the automation scale, proposing general requirements for each stage in the lifecycle. There are two reasons why the table is structured this way. First, in each stage of the lifecycle, the form of control a human operator has over the weapon system is different. For example, in the research and development phase, developers, whether engineers or coders, design and build the weapon, and others conduct weapon reviews where necessary to ensure that it complies with IHL. That form of human control takes place early in the lifecycle. A different form of human control applies when a commander decides to deploy the weapon system or when an operator launches the weapon. Second, each category of weapon system has varying degrees of automation, and the level of human-machine interaction will vary accordingly. Essentially, it would be better to create a set of general regulations covering all weapon systems rather than detailed regulations that individually address every weapon system at each stage.

Neil Davidson has suggested that control over a weapon system can take various forms and degrees at the three key stages of its lifecycle. During the research and development stage, software engineers and programmers should ensure that the predictability and reliability of the weapon system are verified and tested in environments that accurately reflect its intended conditions of use. This includes testing whether there is a way for human operators to supervise and intervene after the weapon system is deployed, whether sufficient restrictions are placed on the use of the weapon system to ensure that it complies with IHL, and whether commanders and operators know how to use the weapon system properly, allowing for transparency. These elements of human control can be considered when conducting a review under article 36 of Additional Protocol I. Conducting a legal review of weapons gives State and military officials the opportunity to determine whether the use of a weapon system is prohibited or restricted by IHL.

The deployment or activation stage involves a commander or operator deciding to employ a weapon system for a purpose that must be ‘based on sufficient knowledge and understanding of the weapon’s function in the given circumstances’. This is when the commander considers the principles of distinction and proportionality to ensure that the target is a lawful military objective and that any expected collateral damage will not be excessive in relation to the anticipated military advantage. This is therefore the stage where retaining human control over the weapon system is essential, as it is the commander or operator who will make the assessment and apply the principles of IHL before deciding to deploy a weapon system.

At the operation stage, Davidson notes that humans can maintain control over weapon systems by supervising them through a ‘two-way communication link’. This would allow operators to adjust the ‘engagement criteria’, for example by re-directing the weapon system to the correct target if it has misidentified one, or to terminate an attack. However, this may not work for all weapon systems. Underwater weapon systems such as encapsulated torpedo weapons, for example, are difficult for humans to communicate with and deactivate. Even where a two-way communication link is not feasible during the operation stage, there should be adequate human supervision and control in the earlier stages so that the commander or operator can make an informed decision based on what they know about the target, the weapon, and the reason for taking action.

Figure Two: General Requirements at Each Stage of the Weapon System Lifecycle

Categories of Weapon Systems

1. Semi-autonomous
2. Supervised
3. Autonomous

Research and Development

General Requirements:

a) Weapon systems are to be designed in a way that ensures human operators have control over how and when the weapon is used.

b) Weapon systems are to be tested to ensure that they can be used in a manner that complies with IHL.

c) Elements of human control that should be considered in the development stage to ensure that a weapon system is predictable and reliable are:

  • The ability for human supervision and intervention;
  • Operational restrictions; and
  • Transparency.

Deployment

General Requirements:

a) Commanders, operators and others who take part in planning an attack should have sufficient information to confirm the lawfulness of the actions taken.

b) Commanders, operators and others taking part in the planning of an attack should make ‘informed, conscious decisions’ on the use of a weapon before its deployment.

c) Initiating an attack requires a positive action by an operator.

Operation

General Requirements:

a) An adequate form of monitoring of the weapon system and/or its payload (appropriate to the type of weapon system) to ensure that the correct target is hit.

b) Commanders and operators having all relevant information about the target, the weapon system and the reason for using force, so that they can make an informed decision and remain accountable for their actions.

Conclusion

Whether the regulations take the form of a separate treaty, an additional protocol to the Convention on Certain Conventional Weapons, or a manual like the Tallinn Manual on cyber warfare, an instrument is needed that addresses the development and use of current weapon systems effectively and allows regulations and definitions to adapt to future weapon systems.


Tarisa Yasin is a PhD candidate and Assistant Teaching Fellow in the Faculty of Law at Bond University. Her PhD thesis focuses on the challenges of the increasing development and use of lethal autonomous weapon systems to international humanitarian law and the concept of human control over such weapon systems. Other research she is involved in includes examining various possible methods for regulating the development and use of lethal autonomous weapon systems to address gaps in accountability within international humanitarian law and international criminal law.
