ANZSIL Conversation post – Law and the Future of War update on the REAIM Summit

Written by Dr Lauren Sanders, Rain Liivoja and Dr Brendan Walker-Munro

 

__________

On 15 and 16 February 2023, the governments of the Netherlands and the Republic of Korea co-hosted a summit on Responsible Artificial Intelligence in the Military Domain (REAIM) at the World Forum in The Hague. Attracting over 1,800 attendees from academia, industry, and – importantly – governments, the Summit was the first of its kind in terms of scale, scope and ambition.

Against the background of a decade of slow-moving debates about ‘lethal autonomous weapon systems’ in Geneva, the REAIM Summit seemed to send two signals:

First, the current international and multidisciplinary attention on autonomy used in defence technology may need to be turned to the broader issue of AI in the military.

Second, that the Group of Governmental Experts on Lethal Autonomous Weapons Systems (CCW GGE on LAWS), meeting under the auspices of the Convention on Certain Conventional Weapons since 2017, may be on its last legs.

The Summit featured a diverse representation of perspectives on the use of military AI. Perhaps the starkest reminder of the divergence in views on this issue was the lively opening plenary panel discussion, which included the Netherlands Chief of Defence, the Chief Technology Officer of Lockheed Martin (a major defence manufacturer) and the Head of Amnesty International. They brought rather different perspectives to the central issues of what responsibility and trust look like in the use and deployment of AI-enabled military systems.

Despite this divergence of views, which continued to be on display throughout the two-day Summit, a number of themes emerged, demonstrating a degree of convergence in the current debates. For one, it was common ground that AI-enabled capabilities differ fundamentally from conventional military capabilities, which is relevant to the future regulation of these technologies. 

Several panels focused on how military AI requires regulation, and how such regulation needs to lean on military-specific values and be considered during the design of these technological capabilities. This broad framing of the use of AI in the military goes well beyond the unclear and unsettled definition of autonomous weapons systems (AWS). It also assumes – rightly in our view – that the potentially ubiquitous nature of AI in military systems requires special treatment and consideration from a regulatory perspective, rather than limiting the discussion to self-contained weapons that function autonomously.

While the current CCW GGE on LAWS has not settled on a definition of what constitutes a lethal autonomous weapon system, the 11 Guiding Principles agreed by GGE participants specify that ‘systems capable of acting without any form of human supervision or dependence on a command chain by setting their own objectives or by modifying, without any human validation, their initial programme (rules of operation, use, engagement) or their mission framework’ should be excised from the list of lethal autonomous weapon systems capable of being lawfully used.

Turning to the language used at the Summit, the prevailing term ‘responsible AI’ is interesting in itself and reflects the current zeitgeist. The term ‘responsibility’ appears in a number of national and supra-national military AI frameworks (such as those of the OECD, the UK, the US and, in Australia’s case, the military services) and thus enjoys considerable support as a useful overarching paradigm to capture the requirements that military AI must meet to be palatable for deployment.

The underlying presumption is that being responsible enables the lawful use of this technology. Previous framings have turned on issues of ‘trust’ or ‘meaningful human control’. As for the former, one speaker argued in some detail that the concept of trust is ill-suited to describing the relationship between a human and a machine; the latter, meanwhile, means different things to different people.

Several speakers, including Sir Lawrence Freedman and General Onno Eichelsheim, the Chief of Defence of the Netherlands Armed Forces, sought to articulate a distinction between the use of military AI in defensive and offensive modes, raising the possibility that this distinction could shape the methodology for regulation. While it may be easier to avoid legal and ethical challenges when autonomy is used in a defensive posture, from an international humanitarian law perspective there is no meaningful legal distinction between the two.

This appears to turn on the implicit understanding that ethical and moral justifications, along with critical law of armed conflict principles such as the proportionality calculus, the assessment of military necessity and questions of humanity, can be more easily assessed, and thus devolved to a machine, in a situation of obvious defence of life.

While this legal distinction is not strictly relevant to the justification or authorisation for the use of force in an armed conflict, it does make the decision to use force more apparent and grounds it in a simpler set of objective criteria that a machine could conceivably identify.

A number of panels emphasised the need to approach military AI capability design having regard to the proposed context of use and the operational system within which the capability will be utilised. These two aspects of design implicitly recognise that AI may have impacts beyond its immediate use environment, which trigger legal or regulatory concerns. For example, the operation of a decision support system can raise legal issues – even though it is not in and of itself a method or means of warfare – because it forms part of the ‘kill chain’.

Used outside an armed conflict situation, the same decision support system may raise other legal issues relating to privacy, or trigger different human rights considerations. In all cases, this requires special legal attention as the AI functionality subsumes or influences judgments undertaken by a human actor.

Presenters reinforced that the testing of military AI requires rigour, and necessitates a methodology that accurately depicts the complex, ‘dirty’ and unpredictable nature of armed conflict.  At the same time, multiple panels reinforced the Clausewitzian adage that while the character of war may change, its nature remains the same. Hence, the emergence of AI as a novel and disruptive technology will not change the fundamental human and political nature of conflict, but the manner in which conflict is fought (its character) will be irrevocably changed by AI systems.

Many acknowledged that civilian and commercial uses of AI should not be ignored, but rather considered for their impact on how AI will be used in armed conflict. The power of industry in shaping how military AI is designed, developed and used – and the thousands of small decisions taken by designers and developers in creating algorithms for military use – must be properly understood in order to assess how the AI might perform.

Equally, the societal implications of AI technology being embraced, augmented and used by the civilian population will shape the manner in which AI is utilised by the military. Understanding and borrowing from both sides of the military and civilian dual-use coin will assist in creating a balanced and future-proof regulatory framework.

There was little discussion of export controls as a mechanism to control military AI. While there are existing state-based frameworks in place to address third-party trade in advanced technologies, such as the US Foreign Military Sales (FMS) and International Traffic in Arms Regulations (ITAR) regimes, these mechanisms only control technology with a nexus to the originating state (i.e., ITAR and FMS only relate to US technologies). Some defence export control experts are calling for a fifth multilateral weapons counter-proliferation regime to supplement the existing multilateral framework and enable the proper regulation of advanced technologies, including autonomy.

Finally, the Summit reinforced the view that the best pathway for the regulation of military AI is not clear, partly because the scope of issues that require regulation also remains unclear. Discussants at the Summit proposed numerous methodologies to progress the regulation of military AI, addressing all aspects relevant to its responsible use: cyber security controls, legislation incorporating principles- and values-based approaches, self-regulation by the military and even calls for independent certification bodies. The debate between treaty-based or other legally binding solutions and policy-driven or even industry-led self-regulation continues without real resolution.

The divergence of views on the appropriate mechanism to achieve responsible use of military AI still appears to be the most significant barrier to concrete action being taken to moderate, mitigate or control the use of this technology in the military domain.

The Summit resulted in the release, by participating States, of a ‘Call to Action’, which called for ongoing cooperation and the sharing of knowledge and practices in the research and development of AI for use in the military context. However, despite this positive step, it is unclear whether any concrete outcomes will follow this inaugural Summit. While the Republic of Korea will host a follow-up event in 2024, it remains to be seen whether REAIM will become a regular event on the calendar of international debate relating to the regulation of military AI.

The richness of discussion, the diversity of participants and the breadth of content all go to show that there are many unsettled regulatory issues pertaining to military AI that require attention. The Summit itself also draws attention to the relative urgency of the task at hand: armed forces are already prolific users of AI, with the use of higher-end autonomous systems being reported in current conflicts. Many Summit discussants referred to the autonomous weapons system ‘train’ having left the station, with the regulatory framework still catching up.

Nevertheless, this Summit clearly recognised the importance and the urgency of this issue. The significant investments that the Governments of the Netherlands and the Republic of Korea have made to further the debate are heartening. They at least speak of interest by two technologically and militarily advanced states in taking incremental steps to ensure that the inevitable development and use of military AI happens in a responsible (and legally acceptable) manner.


This contribution expresses the personal views of the authors, and does not necessarily reflect the views or opinions of the Australian Government or any of its departments.

Dr Lauren Sanders

Dr Lauren Sanders is a Senior Research Fellow with the TC Beirne School of Law, The University of Queensland, in the Law and the Future of War project. Her doctoral studies were in international criminal law accountability measures, and her expertise is in international criminal law and the practice of international humanitarian law, including advising on the accreditation and use of new and novel weapons technology. She has over two decades of military experience and has advised the ADF on the laws applicable to military operations in Iraq and Afghanistan and domestic terrorism operations.

Rain Liivoja

Rain Liivoja is a Professor at the University of Queensland Law School, where he leads the Law and the Future of War research group. Rain is also a Senior Fellow with the Lieber Institute for Law and Land Warfare at the United States Military Academy at West Point, and a Visiting Legal Fellow with the Department of Foreign Affairs and Trade. He has published on a range of international law issues relating to military operations, and his current research focuses on the legal challenges associated with military applications of science and technology.

Dr Brendan Walker-Munro

Dr Brendan Walker-Munro is a Senior Research Fellow with the University of Queensland's Law and the Future of War research group. Brendan's research focus is on criminal and civil aspects of national security law, and the role played by intelligence agencies, law enforcement and the military in investigating and responding to critical incidents. He is also interested in the national security impacts of law on topics such as privacy, identity crime and digital security.
