NATO’s Role in Responsible AI Governance in Military Affairs

Abstract and Keywords

This chapter explores a role for the North Atlantic Treaty Organization (NATO) in the emerging military artificial intelligence (AI) governance architecture. As global powers compete for the capabilities that AI can offer, NATO has the challenging task of recalibrating strategic relationships in the coming years. NATO has begun to recognize technological change as a key variable and, in turn, to adapt its organizational composition and strategy to increase the Alliance’s capacity to meet emerging security challenges. As NATO bodies and Allies prepare for the impact of AI on future military operations, NATO has its own responsibility to steward AI in ways that promote harmonization among Allies and advance the NATO mission. Toward this effort, the chapter highlights two governance mechanisms within NATO’s competency—strategic and policy planning, and standards and certification—as practices that exemplify NATO’s power to shape the trajectory of technological development. We operationalize these governance tools by examining three pillars that are particularly challenging for AI governance: ethics and values, legal norms, and safety and security. Within each pillar, we examine how NATO can facilitate strategic policy planning and standards and certification to emerge as a leader in responsible technological development and, ultimately, help build a more secure international environment. The chapter finds there is space for NATO to pursue its agenda of maintaining technological superiority not just to protect and defend its way of life, but also to build on these AI governance pillars to steward military innovation on a responsible trajectory.

Keywords: international security, military technology, global governance, international relations, international law, international organizations, North Atlantic Treaty Organization

Introduction

The introduction of artificial intelligence (AI) as a general-purpose technology has prompted analysts and researchers to reconsider its implications for warfare. As this Handbook illustrates, AI has shaped, and will continue to shape, global dialogue, policy, and governance structures in international politics, including for future military operations.

In this chapter, we explore a role for the North Atlantic Treaty Organization (NATO) in the emerging military AI governance architecture. NATO (or the Alliance) is a military and political alliance among 30 contributing member states committed to collective security. Much of NATO’s original purpose and current core tasks arguably leave the Alliance’s role uncertain in international governance regimes contending with the impact of emerging technology on international politics.1 As global powers compete for the economic and military capabilities that AI can offer, the Alliance has the enormously challenging task of navigating the varying political realities and capabilities of Allies, all while effectively recalibrating strategic relationships in the coming years. Recognizing technological change as a key variable, NATO has begun to adapt its organizational composition and strategic footing to increase the Alliance’s capacity to meet the security challenges arising from the military capability development trends of both its own members and of competitors or adversaries.

New power distributions around AI and adjacent dual-use technologies are among the factors prompting the Alliance to reconsider whether its technological superiority may be threatened in the years ahead, as reflected in the 2019 Emerging and Disruptive Technologies (EDTs) Roadmap2 and, more recently, the NATO 2030 process.3 NATO navigates these changes, and approaches AI-accelerated shifts in the international security environment, in a highly political context. Notably, in 2019, French President Emmanuel Macron surprised many European counterparts by declaring NATO “brain-dead,” a warning nested within a larger one about trans-Atlantic security divisions.4 The critique that NATO is a “brain-dead” or “irrelevant” institution has existed in some form since the end of the Cold War.5 As NATO combats perceptions of organizational irrelevance, there is reason to push for bureaucratic adaptation to better manage technology-driven change in the future. As such, despite some warnings to the contrary, Allies have an incentive to keep NATO a relevant military institution and ensure that it adapts to emerging threats and future military contexts. President Macron’s comment helped prompt the NATO 2030 agenda, which is currently taking shape to increase the Alliance’s role as a political actor and as an organization with a greater focus on EDTs.6

As NATO bodies and Allies prepare for the impact of AI on future military operations, the Alliance has its own responsibility to steward AI in ways that, inter alia, promote cohesion between democratic countries, prevent risks, shore up interoperability, project deterrence, and ensure stability.7 To achieve these aims, cooperation and alignment are critical for the Alliance to maintain a competitive edge and promote further innovation in alignment with shared values.

With these incentives in mind, we argue that an examination of NATO as a governance stakeholder is due, to complement other literature on how humans, social structures, and institutions shape how technology develops. More specifically, this chapter borrows from two fields of scholarship that set the theoretical foundations for how institutions such as NATO affect technological trajectories, and thus bear a responsibility to govern the technology accordingly. The two fields—science, technology, and society (STS) studies and military innovation literature—have different parameters, but both explore key questions that help establish the ways in which institutions exert influence on the development, deployment, and diffusion of technologies like AI. We argue that this influence is a form of institutional power, building on Seth Lazar’s definition of governance in this Handbook. Lazar writes that governance is “the use of power to make, implement, and enforce the constitutive norms of an institution.”8 In the context of this definition, this chapter examines AI governance as an instrument of power linking NATO’s responsibility and capacity to shape the future security environment in parallel with its own organizational interests.

To be sure, NATO is far from the only institution that shapes military AI governance and its security implications. Indeed, international technology governance is inherently complex because it includes diverse stakeholders in a system of “organizations, regimes, and other forms of principles, norms, regulations, and decision-making procedures” with a shared interest and responsibility in a given issue-area of world politics.9 Existing discussions of the impact of AI on international security have looked to nation-states, regional institutions such as the European Union (EU), or international bodies such as the United Nations Convention on Certain Conventional Weapons (UN CCW).10 Without expanding on the role of these other stakeholders, this chapter begins to explore pressing questions for NATO and international relations scholars that illustrate NATO’s role in AI governance, a role that has not yet received comprehensive analysis.11

To begin to fill this gap, the analysis in this chapter centers on two AI governance mechanisms that NATO has at its disposal, and subsequently explores the Alliance’s capacity to use these mechanisms to exert influence in key pillars of AI governance. Of the many possible AI governance mechanisms available to NATO, this chapter offers a deeper assessment of two: (1) strategic and policy planning and (2) standards and certification. We treat these mechanisms as the primary components connecting NATO technology governance measures and responsible AI use.12 To illustrate NATO’s capacity to govern AI, we then examine three pillars, or foundational issue areas, which we believe represent critical elements of technology governance. We argue that, within each pillar, NATO is uniquely situated to facilitate cooperation via its governance mechanisms, with a view to shaping the future of AI for the Alliance and maintaining a competitive edge. Each pillar—(1) ethics and values, (2) legal norms, and (3) safety and security—is an area where researchers and analysts have acknowledged significant governance challenges, both at the national level and for international organizations like NATO. Each pillar, discussed in depth below, illustrates NATO’s potential as a governance stakeholder that can encourage multinational alignment on policy and standards for safer and better outcomes in future operations.

The rest of this chapter proceeds as follows. First, we establish how STS studies and scholarship on military innovation focus on different aspects of technological advancement and governance outlooks. Second, this theoretical basis is applied to NATO to provide readers with an understanding of the institution’s entities and responsibilities related to AI governance. Next, the chapter discusses ways that NATO can leverage these mechanisms to ensure responsible use of AI in military operations grounded in ethics, law, safety, and security. Finally, the chapter concludes with reflections on NATO’s AI governance tools and, more broadly, roles for international organizations in the AI governance space.

AI Governance and Military Affairs: Tensions in Existing Literature

Academic literature has long grappled with the intersection of emerging technology and security organizations.13 Two branches of literature that tackle core questions of technological trajectories and their relationship to human and social structures—a critical question of governance for military technology—are STS studies and military innovation scholarship. Although the theoretical approaches in STS and military innovation studies differ, both share the important assumption that technology does not have its own innate logic, and instead measure technological change by its impact on social structures and interactions with humans. In other words, both fields treat technology as an enabler within broader structures. The term technology is ubiquitous enough that it has no single definition, but it is often defined in relation to human intention and purpose. Alex Roland describes technology as a “purposeful human manipulation of the material world” to “serve some human purpose.”14 If we extend this basic idea of technology to technological innovation, then both STS studies and military innovation scholarship lend relevant criteria.

Both academic fields are also relevant because, in the policy space, AI governance stakeholders are pursuing responsible research and innovation (RRI), which comes from STS studies, and defense stakeholders are similarly focused on responsible innovation and responsible use. More traditionally, the direct study of military adoption of technology falls within the separate scholarship of military innovation, which includes a school of thought that focuses on cultural and organizational factors. Between these two fields, an interdisciplinary approach is helpful here to carry STS approaches to AI governance, including RRI, over to the space of military innovation.

However, this is complicated by the reality that military organizations that see technological superiority as a core element of deterrence and defense, including NATO, engage in forms of technological determinism that STS scholars squarely reject. These respective views on technological determinism—the position that technology shapes society as a largely autonomous process with limited human agency—thus create a tension for governance prospects.15 To spotlight the aspects of military innovation related to governance, this section briefly expands on the overlaps and tensions between the STS and military innovation literatures.

Science, technology, and society (STS) studies

STS scholarship is helpful for understanding how technologies such as AI develop relative to the human, social, and political structures that shape them, rather than as independent entities to which humans must adapt.16 In this vein, AI is not just a computational process involving software, hardware, and data,17 but a socio-technical system that encompasses “human, social, and organizational factors.”18 Together, these factors enable a focus on the trajectory of technological development relative to social structures and power dynamics. STS scholars have also helped develop RRI frameworks that seek to guide technological development in anticipatory, participatory, and adaptive ways to achieve desirable outcomes and prevent undesirable ones.19 RRI is a structured approach to innovation in which stakeholders identify and act on their “collective commitment of care for the future through responsive stewardship of science and innovation in the present.”20 RRI already informs the civilian AI ecosystems of NATO Allies, which will in turn indirectly affect NATO.21

Responsible stewardship, or governance, of science and technology (S&T) requires stakeholders to change their approaches to technological development as circumstances themselves change.22 In his book The Social Control of Technology, David Collingridge identified the double bind that makes technology governance (what he then referred to as social control) difficult: exerting social control over a nascent technology is easy but poorly informed, because its evolution and eventual impacts are unknowable, while by the time the technology matures and its impact is realized, entrenched decisions will make future control more difficult.23 For now, AI remains a relatively immature technology, meaning circumstances will change as knowledge emerges and norms progressively develop. Collingridge also suggested the necessity of “corrigibility of innovation,” which refers to the “capacity to change shape or direction in response to stakeholder and public values and changing circumstances.”24 When applied to current RRI frameworks, the concept of corrigibility obligates governance stakeholders to shape the trajectory of a technology’s development and impact through social structures, both in anticipation of change and in response to decisions made in error.25 In short, stakeholders have to adopt corrigible practices to responsibly govern technology as it develops, and thus must claim their agency in guiding innovation even as technological development becomes increasingly entrenched in previously made decisions and their subsequent outcomes.

This is important for AI governance because technological advancement is making AI-accelerated risks clearer, including in the military space. Risks—especially as related to AI-enabled autonomous systems, poisoning of information environments, cyberattacks, unpredictable failure modes, and emergent behavior—will evolve in form and scale as the technology matures and diffuses. If AI evolution means more entrenchment and less corrigibility, the STS foundations remind governance stakeholders how to course-correct and adapt to changing risk assessments and the overall impact of AI in the international system.26

Nevertheless, while STS scholars study how the decision-making that shapes the trajectory of technological innovation becomes entrenched, the field largely rejects the premise of technological determinism. Maintaining the centrality of human agency, as exerted also through social structures and institutions, is antithetical to determinist perspectives in which technology develops on its own path independent of intervention. As Allan Dafoe, another contributor to this Handbook, has argued, the STS academic community’s refusal to engage with technological determinism severely limits the field’s applicability to empirics.27 As discussed below, this has implications for the ability of the STS field to impart responsibility to governance stakeholders in an area such as AI and international security.

Military innovation literature

The scholars who examine how military stakeholders manage technology and shape its development trajectory write predominantly on military innovation.28 These scholars measure technology adoption through changes in doctrine, organizational structures, and operational concepts, rather than seeing the technology as an end in and of itself.29 From this perspective, technology subsequently shapes human and social structures and organizations. To take an example similar to Roland’s definition of technology, Jonathan Shimshoni’s concept of “military entrepreneurship” involves the active manipulation of technology, doctrine, and war plans.30 In this sense, new technology adoption has tangible and observable effects on the operational environment. Similarly, Thomas Mahnken illustrates that military services shape technology to their respective purposes, rather than the other way around.31 The purpose this manipulation, or molding, of technology serves is the creation, and ideally sustainment, of a comparative military advantage.32

Still, the way this military advantage is defined is relevant here because the metrics of success differ from other scholarship dealing with innovation. Military innovation scholarship importantly examines the relationships and social structures that form between technology and military bureaucracies. Yet as a field, it does not necessarily extend this analysis to militaries’ status as stakeholders in wider technology governance regimes.33 For instance, in his review of the different schools of military innovation, Adam Grissom offers a consensus definition of military innovation that inherently links it to effectiveness in the battlespace.34 Grissom clarifies that “measures that are administrative or bureaucratic in nature, such as acquisition reform, are not considered legitimate innovation unless a clear link can be drawn to operational praxis.”35 This reinforces the idea that technology on its own does not constitute an innovation if it is not observable in military operational practice or in battlefield advantage.

The relatively narrow operational focus of military innovation scholarship means that it misses some of the uses of military power implicit in the governance of military technology. Neither the bureaucratic entrenchment of technological advancement nor the literature focusing on it necessarily addresses governance as an instrument of power in the military context. This may make sense for purely military technologies, but whether it discounts the agency that military bureaucracies have in the governance of a pervasive, general-purpose technology like AI is worth separate consideration. As such, the operational measurement of the adoption and diffusion of technology as an instrument of military power likewise limits our understanding of how military technology management structures relate to governance.

Implications for military AI governance

Overall, STS offers much of the necessary groundwork on governance mechanisms and the impact of social structures on technology governance; however, it refuses to engage with technological determinism, or the independent influence of technology, which is often a driving force in military innovation. Recognizing that a comprehensive governance regime also needs to translate to the stakeholders engaged in the practice of governing AI, this study of NATO sees military innovation scholarship as a helpful complement for applying these STS foundations to practitioners’ perspectives. But scholarship on military innovation has its own flaws, in that it examines the management of technology exclusively as a means of exploiting a comparative operational advantage. Measuring military innovation in relation to operational praxis makes sense for detecting how military adoption of technology affects operational excellence and has upstream impacts on military strategy, but it also makes it challenging to apply the empirics to non-operational ways that military organizations exert influence. Non-operational influence includes governance, the core topic this chapter addresses.

Despite these differences, borrowing from the layered frameworks in STS and military innovation studies still helps contextualize innovation trajectories. Indeed, select scholars have attempted to bridge the gap between the social constructivist angle in STS studies and the technologically “optimistic”36 assumptions that frame technologically deterministic undercurrents, as seen in case studies on military innovation.37 Thomas Hughes examined these undercurrents in the defense sector as part of his theory of “technological momentum,”38 which argued that military organizations are subject to inaction in S&T decision-making because the entrenchment of previous investments and decisions constrains the course of future technological development. Steven Fino expands on Hughes with the idea of “technological dislocations,” an alternative reconciliation mechanism that acknowledges that technological determinism may operate beneath the surface of a technology’s maturation trajectory, while still allowing for socially driven perturbations that “dislocate” the “otherwise logical evolutionary patterns” of that technology.39

Dafoe similarly attempts to widen the scope of technological determinism by placing it as an endpoint on a spectrum, with social constructivism at the other end. The purpose of this spectrum is to create space for engagement with disciplines that heavily emphasize power dynamics, including military affairs and business, in what he terms “military-economic adaptationism.”40 Unfortunately, both Fino and Dafoe concede that attributing agency and causality to technological developments is best “conducted after the fact”41 or “on longer timescales,” respectively.42 AI governance cannot benefit from such hindsight, as it is fundamentally a question of how to project and adapt to forces of ongoing change. For governance, this inertia places military organizations at odds with the responsiveness required by responsible technology governance frameworks. Our aim is not to reconcile these differences in this chapter, but rather to highlight how they frame one current governance challenge for military stakeholders such as NATO: how can they engage with the socio-technical foundations of RRI frameworks to shape, adapt to, and respond to technology-accelerated changes, while simultaneously pursuing their traditional aims of adopting technology to deter and defend?

On this note, it is worth mentioning that NATO itself has historically convened scholars from both STS and military innovation backgrounds to understand socio-technical changes to their operating environment.43 The Alliance also takes socio-technical factors into account in its S&T work on emerging technologies—including human–systems integration, technology monitoring, and forecasting work.44 This interest in socio-technical systems relating to effectiveness suggests scope for the Alliance to leverage technology governance as an instrument of its influence, as picked up in the next section.

NATO’s Mechanisms to Govern AI

NATO’s increasing interest in EDTs introduces the need to consider how governance priorities can help reinforce the Alliance’s influence. The STS and military innovation literatures provide the theoretical foundations for NATO’s stewardship of AI, as they draw attention to “the role that institutions play in shaping technological trajectories.”45 As AI development continues, the actions that NATO and its members take will have important implications for their capacity to adopt, respond to, and shape their future operating environment. Particularly for democracies, this confers on military stakeholders a dual responsibility: to prevent and manage risks, and to proactively shape their approach to technological development anchored in democratic values and security. As a multinational alliance with an incentive to drive cooperation and alignment, NATO is situated to define and operationalize norms, as well as promote standards that help shape the contours of future military effectiveness and technological competition.

In an RRI framework, this is not only an institutional role but also an institutional responsibility. Applied to NATO’s stewardship of AI, the institutional interplay between technology, structure, and concepts is a form of socio-technical system with important implications for AI governance, because it links the ways an institution uses its power to adopt and shape AI’s trajectory to its respective ends.

Already, several mechanisms are built into military bureaucracies to ensure that technology is adopted in alignment with responsible engineering practices and responsible state behavior.46 The Alliance is organized to harmonize efforts among Allies so that their contributions enhance military effectiveness and political cohesion between like-minded democracies. We argue that these effectiveness-centric mechanisms likewise empower NATO to exert influence in technology governance. More specifically, this entails the Alliance helping steward technological development toward a more predictable strategic environment and enhanced democratic clout, ensuring that the exploitation of technology reinforces the rule of law. For NATO, we focus on strategic and policy planning, as well as standards and certification, because they reflect the Alliance’s particular strengths and interests in S&T. These practices are relevant to governance insofar as they exemplify an institution’s power to shape the trajectory of technological development—but this selection is by no means exhaustive.47

Instead, our aim is to explore how these mechanisms are operationalized at the Alliance level. In this vein, Table 69.1 dissects the roles that NATO’s various bodies play in managing technology, promulgating and operationalizing standards, and leading change through policy. The role of NATO in this equation is largely shaped by its members’ own approaches to technology, and member-state-driven processes are complemented by “policy entrepreneurs” and technical experts in the International Staff and related bodies.48 Table 69.1 does not list the ways that AI affects the various functions of NATO, but rather spotlights the entities that together operationalize AI governance through cumulative processes on policy and standardization.

Table 69.1: NATO entities relevant to AI governance

| NATO Entity | General Mission | Relevance to AI Governance |
| --- | --- | --- |
| Allied Command Transformation (ACT) | One of two strategic commands (the other being Allied Command Operations); leads adaptation and transformation efforts related to operational concepts, structures, and interoperability | Significant focus on the ways AI impacts the future military context from 2014 onward; developed the Emerging and Disruptive Technologies Roadmap and led an away day for the North Atlantic Council in 2018 to help create momentum for the NATO AI agenda |
| Command, Control and Consultation Board (C3B) | Senior multinational policy body reporting to the North Atlantic Council and Defence Planning Committee on information sharing and interoperability, including cyber defense, information assurance, and joint ISR | Coordination of AI-related policies, including TEVV frameworks and responsible development processes outside of NATO’s remit |
| Communications and Information Agency (NCI Agency) | Agency focused on Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance (C4ISR) technology and communications capabilities for decision-making and mission support | Acquisition of and experimentation with software and AI systems |
| Conference of National Armaments Directors (CNAD) | Senior committee of armaments directors who meet biannually to promote armaments cooperation and harmonize military requirements | Alignment of acquisition policies and processes focused on procurement and sustainment of AI systems (for instance, whether procurement guidance includes legal and ethical reviews or stage-gating) |
| Defence Innovation Accelerator for the North Atlantic (DIANA) | Civil-military technology accelerator announced in June 2021, to be stood up by 2023 [NB: a NATO Innovation Fund was also announced; governance of the venture capital-styled fund is not yet clear] | Funding and coordination of TEVV activities for emerging and disruptive technologies, including AI |
| Innovation Board | Board composed of senior staff and chaired by the Deputy Secretary General to enable NATO staff to understand the implications of new technology and innovation [NB: not a decision-making committee] | Dissemination of NATO responsibilities in AI governance to stakeholders across the Alliance so they can understand new risks, implications, and opportunities |
| Innovation Unit and Data Policy Unit | Established in late 2019/early 2020 in the Emerging Security Challenges Division at NATO HQ to facilitate the innovation ecosystem internally and externally | Implementation of the Emerging and Disruptive Technologies Roadmap, the Data Exploitation Framework Policy, and the forthcoming NATO AI Strategy |
| North Atlantic Council (NAC) | Political decision-making body overseeing all political and military processes; the Defence Policy and Planning Committee is also responsible for defense planning on behalf of the NAC, including coordination of the NATO Defence Planning Process | Main forum for member states to address AI governance priorities, including setting the level of ambition on the ethical and legal basis for AI exploitation in military affairs and setting the agenda for operationalization of principles, norms, and standards |
| Office of Legal Affairs (OLA) | Provides legal advice to the Secretary General and International Staff on policy issues, leads legal defense of the Alliance’s interests, and ensures compliance for multinational operations | Allied compliance with relevant international legal regimes for AI (including international humanitarian law), coalition legal interoperability, and national and international litigation |
| Science & Technology Board (STB) and Science and Technology Organization (STO) | STB: “promote NATO-wide coherence of NATO S&T”; STO: entity with a large network focused on maintaining “NATO’s scientific and technological advantage by generating, sharing, and utilizing advanced scientific knowledge, technological developments and innovation to support the Alliance’s core tasks,” as per the 2018 S&T Strategy | Complements ACT’s work from 2014–2018 to provide a technical baseline for the impact of AI on future military operations and the strategic context [Subset: the Centre for Maritime Research and Experimentation—which is working on C2 systems for unmanned systems as a basis for more automated or AI-enabled C2 developments—also falls under STO auspices] |
| NATO Standardization Office (NSO) | Independent office that leads standardization activities under the Committee for Standardization and supports the NATO Defence Planning Process | Promulgation of standards on practices related to documentation, safety, security, and ethics for training data and models, and related governance practices |

Strategic and policy planning

NATO structures around strategic and policy planning both set Allied ambitions and priorities and have the competency to implement them through NATO’s many consultative bodies, coordination formats, and, albeit to a lesser extent, technology foresight capacities. NATO has facilitative power among Allies, both for defense planning and for the conduct of operations. A cornerstone of the modern international security architecture is coalition warfare, or, more broadly, joint operations. Working with military partners has become a critical feature of modern security policy: there is power in numbers, but also in having allies that lend political and practical legitimacy to deterrence and operations.49 NATO is vital to that effort for many reasons, not least because its facilitative power is significant in promoting coordination and cooperation. Simply put, partners and allies are a necessary feature of modern military behavior, and strategic and policy planning are necessary functions to encourage and underpin cohesion in alliance settings. This matters for AI governance because the nature of AI poses new strategic challenges and will require multilateral approaches and some degree of cohesion to effectively incorporate RRI frameworks into policy planning. As such, the necessity of working with security partners extends to the AI-policy frontier.

A number of NATO entities carry out strategic and policy planning, recognizing the importance of policy alignment to sustaining political strength and military effectiveness. As relates to S&T, Allies’ representations to NATO, defense ministries, and policy entrepreneurs from the relevant entities summarized in Table 69.1 support and negotiate how the Alliance approaches EDTs. NATO’s strategic documentation and forward-looking policy analysis incorporate hints of technological determinism, including noting how technological change inevitably shapes the future strategic and operating environment. Further, the connections between technology and competitive advantage over adversaries and competitors are embodied in the Alliance’s desire to maintain its “technological edge” as the “foundation upon which NATO’s ability to deter and defend against potential threats ultimately rests.”50 This places technology squarely within NATO’s core purpose of deterrence and defense—and while this signals NATO’s express commitment to technology through these channels, this reliance on technology also obscures whether NATO’s governance capacity will be adaptive, anticipatory, or participatory. This position of technological determinism may impose further limitations on AI governance.

Standards and certification

To maintain its relevance in a security architecture increasingly concerned with the ways technology shifts power dynamics and scales threats to international security, NATO has an incentive to foster cooperation, promote standards of practice, and incentivize Allied AI harmonization. Facilitating dialogue and engagement among Allies on AI is strategically salient, but using NATO’s position to facilitate Allied cooperation on standards is practically important for preserving the Alliance’s ability to interoperate in future operations. NATO standards aim to enhance interoperability among partners and support the successful implementation of strategy.

More specifically, standards and certification are used to establish and implement requirements aligned with the safe development and responsible use of technology. In addition to purely technical standards, NATO has operational standards that specify “conceptual, organizational or methodological requirements to enable materiel, installations, organizations or forces to fulfil their functions or missions.”51 In line with the definitions from STS and military innovation scholarship, standards can thus be seen as a mechanism to translate responsibility-derived state and organizational AI policy into actionable functions. In fact, NATO has set certain standards for the Allies, and these standards have subsequently become the norm.

Within NATO, it is the NATO Standardization Office (NSO) that coordinates thousands of experts to align technological development with military requirements that can help enhance effectiveness, interoperability, and cohesion.52 While the NSO is primarily responsible for setting standards, other NATO entities, including the NATO Science and Technology Organization (STO), play important roles in implementing them and coordinating between national approaches.53 Certification frameworks and the promulgation of best practices can similarly help incentivize the transposition of RRI into military organizations, even if standardization is by no means a purely military governance tool.
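To make concrete how a documentation standard might translate into an “actionable function,” consider the following minimal sketch, written in Python for illustration. It is purely hypothetical: the ModelCard fields and check_compliance requirements are our own assumptions for this example and do not correspond to any actual NATO standard, certification procedure, or promulgated STANAG.

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """Hypothetical documentation record for an AI-enabled capability."""
    system_name: str
    intended_use: str
    training_data_provenance: str   # e.g., sources, collection dates, licensing
    legal_review_completed: bool    # e.g., a weapons-review-style legal check
    tevv_report_attached: bool      # testing, evaluation, validation, verification
    human_oversight_mechanism: str  # e.g., "operator confirmation before action"

def check_compliance(card: ModelCard) -> list:
    """Return a finding for each unmet documentation requirement."""
    findings = []
    if not card.training_data_provenance:
        findings.append("Training data provenance is undocumented.")
    if not card.legal_review_completed:
        findings.append("Legal review has not been completed.")
    if not card.tevv_report_attached:
        findings.append("No TEVV report attached; certification cannot proceed.")
    if not card.human_oversight_mechanism:
        findings.append("No human oversight mechanism is documented.")
    return findings

# Example: an incomplete record fails two of the four checks.
card = ModelCard(
    system_name="example-image-triage",
    intended_use="imagery triage support for analysts",
    training_data_provenance="Allied-contributed imagery, 2019-2021",
    legal_review_completed=True,
    tevv_report_attached=False,
    human_oversight_mechanism="",
)
for finding in check_compliance(card):
    print(finding)
```

The point is not the specific fields but the mechanism: once a standard is expressed in machine-checkable form, compliance can be verified consistently across nationally contributed systems, which is one route from promulgated standards to operational practice.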

Both mechanisms, strategic policy planning and standards and certification, provide options for NATO to participate in AI governance regimes focused on international security. NATO’s operationalization of these tools may hold important implications for implementing successful AI governance regimes for Allies and other defense stakeholders. In the next section, we consider each mechanism within foundational issues, or pillars, to illustrate NATO’s role in AI governance.

NATO and Technological Change: Three Pillars of AI Governance

This section considers three pillars where NATO has the procedures and competency to operationalize AI governance through both mechanisms of policy alignment and standards, and to enhance security in the international environment. The pillars reflect foundational issue areas constitutive of governance, but they are also issue areas that previous scholars have flagged as particularly challenging in the AI governance space. The three pillars—(1) ethics and values, (2) legal norms, and (3) safety and security—are meant to illustrate three conditions for NATO to facilitate policy and standards harmonization. Importantly, these pillars are not an exhaustive list of the areas in which NATO will need to consider governance structures to responsibly implement AI technology; rather, they highlight particular issues that researchers and analysts acknowledge as significant hurdles in navigating AI governance (see Table 69.2).54

Table 69.2: Cross-tabulating NATO’s governance mechanisms with pillars of AI governance

| | Ethics and Values | Legal Norms | Safety and Security |
| --- | --- | --- | --- |
| NATO Policy and Strategic Planning | Core, shared values at the foundation of the Alliance’s political cohesion that inform civilian oversight of operations and overall institutional effectiveness; included in principles | Alignment between differing legal interpretations among Allies, particularly as they affect the ability of forces to communicate and interoperate in dynamic contexts; constant calibration of policies based on legal interpretations | Strategic planning for maintaining the integrity of information in military operations and transparency measures that reinforce democratic accountability; setting priorities on defensive systems and countermeasures to protect against motivated attacks and intentional failure modes of AI-enabled weapon systems |
| NATO Standardization and Certification | Basis of standards reinforcing responsible state behavior (see legal norms) and responsible engineering practices (see safety and security); human-centric views of responsibility and accountability also embedded in technical adoption measures | Precedents of legal standards dealing with international humanitarian law, including detention standards and training publications | Certification frameworks and technical, human, and procedural standards to prevent emergent behavior and enhance the robustness and resilience of systems to behave predictably in conflict environments |

The first pillar considers NATO’s role in the evolution of ethical and values-driven AI. One ongoing debate regarding AI as a military technology concerns the ethical implications and baseline values the Allies, and others, want infused in the development and adoption of AI. The Allies themselves lack consensus on numerous substantial ethical questions on the use of AI, as most clearly seen in the adjacent area of the ethics of autonomy in weapons systems, including lethal autonomous weapons systems (LAWS). In this discussion, we spotlight NATO’s role in facilitating and shaping ethical harmonization as an operational requirement to ensure successful future missions.

The second pillar examines legal norms as a domain wherein legal uncertainty regarding AI has tangible implications for Allied legal interoperability, a subset of larger coalition interoperability. Thus far, the legal debate regarding AI has been largely fixed on the issue of a treaty banning the use of LAWS. In this section, we advocate for a more nuanced legal picture in which NATO can facilitate legal coordination and tackle some of the foundational legal issues that would otherwise prevent successful legal interoperability in future operations.

The third pillar identifies the safety and security of AI systems as a prerequisite for trustworthy and responsible AI in any context, but especially so for the conduct of military activity. At the NATO level, Allied forces must ensure their systems interoperate safely and predictably, both to ensure effective command and control (C2) internally and to prevent disruptions from attacks. This is a foundational facet of coordination that shows the overlap between NATO’s interest in military effectiveness and its incentive for responsible innovation.

Ethics and values

One vital aspect of AI that has garnered significant global attention is its ethical implications as a military technology—an issue that has divided much of the global community, including NATO member states. As a starting point, researchers and analysts have considered the implications of emerging military technology in terms of ethical responsibility and regulation, especially as states and organizations continue to release AI ethical principles, guidelines, and standards.55

We explore how NATO can operationalize the debate around the ethics and values of military AI to foster coordination and continued progress on EDT harmonization among partners. Building on the theoretical discussion from the STS and military innovation literature above, the adoption of technologies that reinforce values serves NATO’s strategic interest in shaping technological innovation against current waves of illiberalism.

Additionally, infusing AI development with certain ethical principles and values can have operational advantages and benefits, and NATO, in particular, can promote ethical principles as operational standards for the Allies. A common critique within the ethics debate is that approaching new technology from an ethical or democratic values-driven perspective translates into comparative military disadvantage. Essentially, if an adversary develops technology without the constraints of ethical principles, then the constrained military will suffer diminished effectiveness on the battlefield.56 We find this critique unfounded because it rests on a false trade-off between ethics and effectiveness; instead, we argue ethical foundations are built into the architecture of modern warfare.57 Ethics is thus a background condition for battlefield effectiveness, one already infused in military decision-making and helping to guide the boundaries of international humanitarian law. As such, ethical guidelines do not have to detract from a military’s capacity or competency to devise means and methods of warfare that will serve its national or coalition interest.58 If anything, a first-mover advantage can incentivize ethical and values-driven AI to establish the threshold of technological standards globally.59

The political dimension of the Alliance rests on the bedrock of a shared commitment to the “principles of democracy, individual liberty and the rule of law,” as enshrined in the foundational North Atlantic Treaty of 1949.60 Shared values are important for NATO operations because they help constitute their legitimacy. In addition to the North Atlantic Council exerting civilian oversight over NATO operations, legitimacy also includes respect for international legal principles, including the core principles of international humanitarian law (the laws of armed conflict): distinction, proportionality, and necessity. Without political oversight and legitimacy, NATO’s military power would be less effective at shaping norms and promoting stability in the international system. The introduction of AI means that NATO has a moral and strategic imperative to adopt technologies in ways that confer legitimacy and reflect responsible innovation.61 Acting on a shared commitment to democratic values is vital to the political cohesion of the NATO Alliance, just as much as it is a determinant of military effectiveness in a predictable security environment.

Put simply, shared values are important to both political and operational coherence between Allies. In its 2018 Framework for Future Alliance Operations, the strategic command Allied Command Transformation urged discussion of the legal and ethical dimensions of technological advancement, both to understand how it would impact NATO decision-making and to prepare the Alliance to address adversaries who do not share that vision.62 As such, NATO is contending with the ways that ethical AI affects its own cohesion internally and how differences between Allies may project outward in the face of competitors whose ethical frameworks and commitment to the rule of law differ. Internally, there is strong national government commitment to responsible AI. Recent transatlantic cooperation has initiated partnerships, largely among NATO states, committed to advancing responsible AI with goals of data sharing and future interoperability.63 These AI defense partnerships are not restricted to military innovation but rather aim to facilitate civilian innovation cooperation for defense purposes.

Externally, as AI-enabled autonomous systems enter the arsenals of more technologically advanced countries, uncoordinated ethical frameworks between Allies could pose operational risks. Without wider alignment, AI systems will have “varying technical specifications based on the legal and policy decisions made by individual governments when answering the key questions.”64 Further, although one motivation for autonomous systems is increasing the safety of military personnel by removing them from dangerous situations, a lack of alignment could lead some Allies to perceive other capitals’ deployments of unmanned forces as a lack of commitment to put lives on the line, thereby posing credibility risks for Allies seeking to assure one another.65 These credibility risks can be mitigated by accountability and verification standards and procedures that NATO can implement for multinational operations, and efforts to institutionalize these procedures for AI are underway.66 While the NATO AI Strategy is expected to create a common foundation for the Alliance’s pursuit of AI, it is the implementation of principles for safe, ethical, legal, and interoperable AI that will reveal how coherent different national perspectives are. As of August 2021, only the United States and France had publicly issued their military AI strategies.67 Other Allies, including Canada and the United Kingdom, have emerging views on responsible military AI, but little official information about how they implement their ethical risk assessments is publicly available.68

NATO’s influence on the functioning of joint and multinational military operations situates the Alliance to coordinate how Allies implement ethical principles in their own national AI development. Specifically, NATO is well situated to advocate for transparency, accountability, and data governance, among other values; these are also adoption factors that can translate into operational benefits.69 For example, these factors can promote coordination among Allies on ethical guidelines for the development and use of AI, a necessary foundation for any future joint operation that uses this technology. As one analysis argues: “The transatlantic partnership must focus on coordinating these core principles and systematic governance to ensure AI systems development aligns with the rule of law and democracy. In particular, this must ensure answering questions about human dignity, human control, and accountability … NATO remains the organization that can bring these two (U.S. and EU) together and establishes the ethical bottom line.”70 The issues of transparency and accountability will define the scope of future implementation.

Many remaining questions and uncertainties will be addressed in NATO’s AI ethical principles guidelines, but the guidelines adopted in 2021 do not address every ethical dilemma. Regarding accountability especially, major questions will likely continue to confront the Alliance. As Assistant Secretary-General for Emerging Security Challenges David van Weel recently clarified, NATO will offer a framework of responsible use for the Allies—but the question of accountability for member states, as opposed to civilian technology manufacturers for example, is one principle that will not have an easy solution.71

International legal norms

In certain respects, the legal debate mirrors much of the ethical debate surrounding AI, as the two address many of the same issues. International law is a values-based system embedded in certain principles and practices agreed upon within the international system. This section acknowledges the complementarity of the ethical discussion surrounding AI, but it also illustrates where the legal debate departs from ethical considerations to address different sorts of legal challenges facing the Alliance.

Lawyers, researchers, and civil society grapple with existing legal regimes relevant to military operations and the uncertainty and ambiguity surrounding automated decision-making, particularly lethal decision-making. Thus far, the legal dialogue has been heavily anchored in the applicability of international humanitarian law (IHL), and other relevant legal regimes, to lethal autonomous weapons systems.72 IHL, also known as the laws of war or the laws of armed conflict, regulates the means and methods of warfare and, as such, is pivotal to how emerging military technology disrupts existing legal structures. The legal debate often revolves around the prospect of a “treaty ban” on LAWS.73

But the legal debate is much more nuanced than the likelihood of international treaties banning any particular weapon system. Because NATO is not a regulatory body, it cannot institute measures to regulate emerging technology for the Allies. Instead, NATO’s function in the legal domain may be more effective outside the traditional legal debates around emerging military technology and more embedded in fostering cooperation and coordination among military partners.

Other avenues of legal regulation may fall short of an international convention or prohibition, but they nevertheless factor significantly in regulating and delineating state policies. Additionally, non-lethal applications of AI, as well as applications of AI that do not figure into autonomous systems, raise important legal questions under international law. Arguably, norms around non-lethal applications are more urgent because their development is more advanced, harder to define, and less controversial to integrate.74 Ultimately, NATO’s facilitative power can help ensure that the integration of EDTs like AI into military capabilities and multinational coalition operations is consistent with member states’ legal obligations.

One vital and unique contribution for NATO is facilitating legal interoperability among the Allies to resolve some of the most pressing legal barriers to AI implementation in future Allied operations. Legal interoperability, a subset of larger coalition interoperability, refers to operational coordination around partners’ legal obligations and interpretations.75 It ensures “that within a military alliance, military operations can be conducted effectively consistent with the legal obligations of each nation.”76 Legal interoperability is a critical component of multilateral operations that has thus far been under-examined, despite its centrality to successful military operations. This is largely because “legal factors have a bearing on everything in alliances and coalition operations—from determining basic ‘troop-to-task’ considerations to decisions regarding the targets to be engaged—and the types of ordinances that may be used.”77

To enhance legal interoperability, NATO can exert influence on how Allies develop and deploy AI consistent with their legal obligations through its unique standardization capacities. Historically, NATO has taken significant steps to bridge the legal gap between Allies on critical procedures that connect responsible state behavior with such “troop-to-task” considerations. One instructive example from past operations is detention policies in non-international armed conflicts.78 The promulgation of detention standards illustrates the operational significance of NATO’s common legal procedures, even for coalitions of the willing that formally operate outside NATO structures. By way of background, the U.S.-led coalition in Afghanistan had internal debates regarding the 96-hour security detention time period.79 The United States advocated extending the 96-hour rule, whereas coalition partners insisted on adhering to the NATO standard, even though it was not a NATO operation.80 Generally, the detention example illustrates NATO legal standards providing clarity to non-NATO operations; in some cases, Allies adopt NATO standards as accepted thresholds that continue to inform coalition policies beyond NATO structures and operations.

Implementing AI in future military operations will almost certainly complicate legal interoperability because, as in the detention example, there is a lack of uniform standards. Even some of the more basic implementation measures will generate legal uncertainty, and Allies will inevitably navigate with minimal legal clarity and no standard procedures. Despite the roots of the legal debate in the question of lethality, the most pressing (and urgent) legal issues will concern the integration of necessary AI enablers, such as data gathering and sharing.

Furthermore, NATO has coordinated initiatives to promote awareness of Allies’ legal obligations and has a dedicated office focused on legal affairs. This centralizes the institutional capacity to focus not only on alignment between the policies of NATO Allies, but also on coherence with the international community more broadly. These initiatives include, among others, the NATO Legal Practitioners’ Workshop and inter-organizational dialogue among NATO, the UN, and the International Committee of the Red Cross (ICRC), the last of which has a delegation to NATO that provides legal training and education to practitioners.81 The NATO Office of Legal Affairs (OLA) itself can also play a central role in navigating the challenges to legal interoperability. As the example of detention standards illustrates, NATO has been successful in implementing legal standards that translated into operational clarity and coalition policy outside NATO operations.

As part of its focus on responsibility in its EDT agenda, NATO has opportunities to facilitate AI legal standard-setting and coalition policies to ensure safer and responsible use of AI in Allied operations.

Safety and security

For humans to meet ethical and legal commitments when developing and deploying AI, the systems themselves must be safe, secure, and reliable. More simply put, if the humans and institutions interacting with AI do not have confidence that the systems will perform as expected, then they cannot assure that their development and deployment are responsible. This makes safety and security a key pillar of responsible AI governance for any actor.82 As this section explores for NATO in particular, safety and security are indispensable to the Alliance’s stated goal of focusing its approach to EDTs on “deterrence and defense, capability development, legal and ethical norms, and arms control aspects.”83

Politically, democratic militaries using AI cannot be accountable to their citizenries or their coalition partners if they lack mechanisms to trace and explain how their systems are reliable. Accidents and interference with AI systems could likewise create political risks for the Alliance. For example, if deepfakes and micro-targeted information attacks compromise confidence in the integrity of information used to build a common operating picture, the resulting operational difficulties could also erode political trust between Allies in a few key ways. In the North Atlantic Council, disagreement about the integrity of information could slow the decision-making body’s ability to react to fast-changing operational realities.84 Further, compromised AI systems may make it harder for forces not only to prevent harm to non-combatants, but also to prevent friendly fire. In this way, coalition forces arguably face even higher obligations to coordinate on the reliability of their systems, relative to adversaries and near-peer competitors that tend to operate alone. As such, responsible AI governance is not purely technical; policy alignment and strategic planning are likewise necessary to draw attention to risk management above the tactical level.

Even without being attacked, governability of AI in a NATO context also means understanding how AI-enabled and autonomous systems developed by the 30 Allies—and other partners—will interact with one another. NATO has expressed interest in governability as an AI principle, entailing the ability “to disengage or deactivate in case of unintended behavior,”85 which echoes the U.S. Department of Defense definition of governable AI.86 The ability to disengage or deactivate systems is important for maintaining de-escalation measures in conflict. For NATO, interoperability between systems also relates to governable AI because Allies must consider how interactions among the 30 Allies’ own AI-enabled and autonomous systems may result in unintended or emergent behavior.87 This means that NATO has a responsibility to coordinate activities—be they technical exchanges, standardization efforts, or training and exercises—to build confidence that the systems perform as humans intend.88 Without this coordination, a lack of interoperability between Allied systems could lead to accidents, and the attendant loss of operational effectiveness would also present vulnerabilities for adversaries to exploit.

In addition to governability, NATO and its Allies are assessing the risks that bias, attacks, and lack of interpretability can introduce in relation to the anticipated uses of a given AI system.89 In security and defense, new and heightened risks include poisoning of the information environment, deception systems and techniques, uncertainty about the performance of systems in new and unknown environments, and the possibility that tensions or accidents escalate at a faster tempo than humans and institutions can process, among others. These risks can manifest either as motivated attacks or as unintentional failure modes.90 In both cases, assuring and certifying that military assets are safe and secure is important given the inherently high risks of operational environments, in which forces must presume that an adversary is disrupting their systems, whether by directly attacking the AI systems themselves or by disrupting the broader command, control, and communications systems under which the AI systems operate.91

Mitigating these types of risks typically occurs in testing, evaluation, validation, and verification (TEVV) and in experimentation activities.92 Yet AI cannot be validated and verified the way traditional software systems are, both because there is no guarantee that an AI system will perform in the real world as it does in a testing environment and because lifelong-learning systems will perform differently over their lifecycle. Having robust assurance and TEVV processes in place is also important for operators to build trust in the systems they are meant to use, as well as for citizenries and coalition partners at large to see that accountability procedures still apply. As such, building institutional procedures to govern AI safety and security is necessary to build trust in the use of the technology—as well as to develop countermeasures and defensive systems that protect against adversarial threats.

NATO thus has an institutional responsibility to prevent and mitigate these intentional and unintentional failures when using AI in operations and mission support.93 As Table 69.1 shows, the Alliance also has a range of relevant entities to coordinate national approaches to AI safety and security, as well as to facilitate safety measures as part of responsible use in the Alliance-wide ecosystem.

NATO has an important role to play in military standardization and Allied policy planning for safe, secure, and interoperable AI. This includes the coordinating role of the Conference of National Armaments Directors and the Consultation, Command and Control Board in implementing complementary acquisition processes that fuse AI adoption measures with safety responsibilities. Furthermore, entities including the STO and NSO have a significant role in setting the technical baseline and promulgating materiel standards that provide the technical framework for safety and security. Although their staffs are small, both bodies convene hundreds, if not thousands, of subject matter experts in working groups, offering unique technical networks that can help shape safety and security in ways that minimize operational risk. NATO’s resources and leadership are vital to using standards and coalition policy to instill safe and secure technological development, a necessary condition for interoperable and successful future operations.

Conclusion

At its core, this chapter argues that NATO is well positioned to steward the development of military AI and to institute governance mechanisms that promote coalition-wide adoption of responsible AI while maintaining incentives for comparative advantage. Using the three pillars—ethics and values, legal norms, and safety and security—as issue areas that present AI governance challenges, we show that NATO has space to emerge as a leader in AI governance and to contribute to the responsible adoption of EDTs in the international security environment. This argument builds on foundations, drawn from both STS and military innovation studies, that derive NATO’s responsibilities to govern AI from its values, legal obligations, and institutional interests, and that offer ways for the Alliance to activate its existing governance mechanisms to exert influence in new ways. Not only is this influence important for the Alliance to bolster its institutional relevance in an evolving international security architecture, but it also dovetails with its capacity to shore up military effectiveness and interoperability as Allies modernize their arsenals and associated concepts into the frontier of AI.

Importantly, we do not argue that NATO is the only—or even the most important—actor shaping AI governance in international security. Other contributors to this Handbook impressively detail efforts at both the state and regional levels. Our aim has been to convince sceptics that NATO has a role that is not replicated by other stakeholders in the international security environment. NATO has particular influence, procedures, and competency to institute certain governance mechanisms—namely standardization and policy planning—that it can build on without expending time constructing new institutions from scratch. Beyond merely having a role, NATO is incentivized to emerge as a steward of AI governance and to use these mechanisms for future operations, should the Alliance wish to maintain its unique position as a leader encouraging policy alignment, defense planning, and military standardization.

More broadly, this chapter illustrates that regional and international organizations have high stakes in military AI governance. As the development, procurement, and implementation of AI accelerate, it is imperative that international organizations facilitate cooperation among states and industry partners to guide responsible military AI implementation aligned with core values and legal obligations. The convening and coordinating power of international organizations, among other governance tools, is necessary for state cooperation and policy alignment. How exactly NATO interacts with other international organizations in the security architecture, including the UN and EU, is a political question with important implications for the composition of international technology governance regimes, and is a subject for further research.

On that note, NATO, like any other international organization, is not exempt from these political hurdles. As EDTs increasingly become a geopolitical focal point, any approach to AI governance in the international security environment will have global political undertones. This will undoubtedly be a significant hurdle for NATO as it balances responsible AI development with Allied coordination and cooperation in a changing geopolitical landscape. Indeed, political realities may well represent the greatest challenge and may disincentivize NATO from emerging as a leader in responsible military AI. Nevertheless, the three pillars indicate that NATO is an institution with considerable opportunity to shape responsible AI governance. More specifically, this entails urging and facilitating Allied standards and policies that establish foundations for emerging military technology built on informed and ethical principles, thereby enhancing the international security environment.

In any discussion of AI as an emerging military technology, it is necessary to strike a balance between acknowledging the transformative potential of AI in the security environment and recognizing the “hype” that may, thus far, be unfounded. But some conclusions are clear. The risks and opportunities of military AI pose significant challenges for future military operations, which means there are many stakeholders with a vested interest in developing, promoting, and implementing responsible military AI. Because multinational coalitions and military operations are foundational to security policy for much of the world, NATO is also a stakeholder with a vital interest in promoting safe and secure technology among its partners, both traditional and non-traditional. As the international security environment continues to shift, there is space for NATO to pursue its agenda to maintain technological superiority not just to protect and defend its way of life, but also to build on its pillars of AI governance to steward military innovation on a responsible trajectory.

Acknowledgments

Zoe Stanley-Lockman was previously an Associate Research Fellow at the Institute of Defence and Strategic Studies at the S. Rajaratnam School of International Studies (RSIS) and contributed to this chapter in a personal capacity. Lena Trabucco is a Postdoctoral Researcher at the Centre for Military Studies at the University of Copenhagen. Both authors contributed equally to this chapter. The authors wish to thank Matthijs Maas, Joanna van der Merwe, and Simona Soare for their helpful comments.

Notes:

(1) The three core tasks, as set out in the 2010 Strategic Concept, are collective defense, crisis management, and cooperative security. NATO is currently updating its Strategic Concept, to be prepared for adoption in 2022. The forthcoming concept will contend with significant changes to the strategic environment in the intervening years, including adaptation toward increasing technology-related threats and opportunities, as well as the rise of China.

(2) Gilli, Andrea. (2021, February). “NATO, technological superiority, and emerging and disruptive technologies.” In Thierry Tardy (Ed.), NATO 2030: new technologies, new conflicts, new partnerships (p. 6). NATO Defense College, Rome.

(3) Sprenger, Sebastian. (2020, October 9). NATO chief seeks technology gains in alliance reform push. Defense News.

(4) The Economist. (2019, November 7). Emmanuel Macron warns Europe: NATO is becoming brain-dead. The Economist. https://www.economist.com/europe/2019/11/07/emmanuel-macron-warns-europe-nato-is-becoming-brain-dead.

(5) See for example, Williams, Michael John. (2013). Enduring, but irrelevant? Britain, NATO and the future of the Atlantic alliance. International Politics 50; Hellmann, Gunther, & Wolf, Reinhard. (1993). Neorealism, neoliberal institutionalism, and the future of NATO. Security Studies 3(1).

(6) For more on AI as a disruptive technology, see Liu, Hin-Yan, et al. (2020). Artificial intelligence and legal disruption: A new model for analysis. Law, Innovation and Technology 12(2); Sprenger, Sebastian. (2020, October 9). NATO chief seeks technology gains in alliance reform push. Defense News. https://www.defensenews.com/global/europe/2020/10/09/nato-chief-seeks-technology-gains-in-alliance-reform-push/.

(7) There is a host of literature that looks at the ways that AI enables and scales up risks in the security and defense environment, including in the field of existential risk. Other chapters of this Handbook introduce how AI is an instrument of national power and what concomitant risks this introduces into international politics. For more information on AI-related risks in security and defense, see Kreps, Sarah, & Arsenault, Amelia. (in press). AI, international politics, and governance. In Justin Bullock (Ed.), The Oxford Handbook of AI governance. Oxford University Press; Horowitz, Michael, & Pindyck, Shira. (in press). AI, national security strategy, and the international balance of power. In Justin Bullock (Ed.), The Oxford Handbook of AI governance. Oxford University Press.

(8) Lazar, Seth. (in press). Power in political philosophy: Nature and justification. In Justin Bullock (Ed.), The Oxford Handbook of AI governance. Oxford University Press.

(9) Biermann, Frank, et al. (2009). The fragmentation of global governance architectures: A framework for analysis. Global Environmental Politics 9(4), 15.

(10) Further, while beyond the scope of this chapter, it is worth noting that the OECD plays an important role in the international technology governance regime. The OECD AI Principles have been adopted by several transatlantic countries, including the United States, and Franco-Canadian leadership in both the OECD and the G7 has spurred important initiatives on AI governance (including the establishment of the Global Partnership on AI, or GPAI). Influences of these other stakeholders on NATO’s adoption and governance of AI may ultimately be similar to the way it imports characteristics of nation-state and EU approaches to governance, but they are not discussed at length because they do not focus on defense. See Franke, Ulrike. (2021, June 21). Artificial intelligence diplomacy: Artificial intelligence governance as a new European Union external policy tool. European Parliament Policy Department for Economic, Scientific and Quality of Life Policies Directorate-General for Internal Policies. https://www.europarl.europa.eu/thinktank/en/document.html?reference=IPOL_STU%282021%29662926; Boulanin, Vincent, Brockmann, Kolja, & Richards, Luke. (2020, November). Responsible artificial intelligence research and innovation for international peace and security. Stockholm International Peace Research Institute. https://www.sipri.org/publications/2020/other-publications/responsible-artificial-intelligence-research-and-innovation-international-peace-and-security.

(11) A notable exception is Gilli, Andrea. (2020). “NATO-Mation” strategies for leading in the age of artificial intelligence. NATO Defense College Research Paper 15.

(12) Use is one, but not the only, phase of adoption. We intentionally focus on use here to echo NATO’s own language on responsible use, understanding that the Alliance has more authority in encouraging and coordinating activities than it does in development. Regulation is an important mechanism that is beyond the scope of this chapter because NATO is not a regulatory body.

(13) Drezner, Daniel W. (2019). Technological change and international relations. International Relations 22(2); Horowitz, Michael. (2018). Artificial intelligence, international competition, and the balance of power. Texas National Security Review 1(3); Milner, Helen. (2006). The digital divide: The role of political institutions in technology diffusion. Comparative Political Studies 39(2); Drezner, Daniel W. (2004). The global governance of the internet: Bringing the state back in. Political Science Quarterly 119(3).

(14) Roland, Alex. (2016). War and technology: A very short introduction (p. 5). Oxford University Press.

(15) Dafoe, Allan. (2015). On technological determinism: A typology, scope and conditions, and a mechanism. Science, Technology & Human Values 40(6).

(16) MacKenzie, Donald, & Wajcman, Judy. (1999). The social shaping of technology. Open University Press; Stilgoe, Jack, Owen, Richard, & Macnaghten, Phil. (2013). Developing a framework for responsible innovation. Research Policy 42, 1568–1580; Arthur, Brian W. (2009). The nature of technology: What it is and how it evolves. Penguin Books.

(17) Dignum, Virginia, Muller, Catelijne, & Theodorou, Andreas. (2020, February). First analysis of the EU whitepaper on AI. ALLAI. http://allai.nl/first-analysis-of-the-eu-whitepaper-on-ai/ and Hwang, Tim. (2018, March 23). Computational power and the social impact of artificial intelligence. SSRN, p. 2. https://ssrn.com/abstract=3147971.

(18) Baxter, Gordon, & Sommerville, Ian. (2011). Socio-technical systems: From design methods to systems engineering. Interacting with Computers 23(1), 4–17.

(19) Boulanin et al. (2020); Genus, Audley, & Stirling, Andy. (2018). Collingridge and the dilemma of control: Towards responsible and accountable innovation. Research Policy 47(1), 61–69; see also Verbruggen, Maaike. (2019). The role of civilian innovation in the development of lethal autonomous weapon systems. Global Policy 10(3).

(20) Owen, Richard, Stilgoe, Jack, Macnaghten, Phil, Gorman, Mike, Fisher, Erik, & Guston, Dave. (2013). A framework for responsible innovation. In Maggy Heintz, J. Bessant, & Richard Owen (Eds.), Responsible Innovation: Managing the responsible emergence of science and innovation in society (p. 36). John Wiley & Sons.

(21) Military adoption of commercially driven technologies is sometimes referred to as a “spin on” process. Here, the legal, societal, and cultural elements that impact the development of a dual-use technology would similarly spin on, or be imported, to the military adopter. For more on the term “spin on,” see Stowsky, Jay. (1991, May). From spin-off to spin-on: Redefining the military’s role in technology development. Working Paper.

(22) Stilgoe, Jack, Owen, Richard, & Macnaghten, Phil. (2013). Developing a framework for responsible innovation. Research Policy 42, 1572.

(23) Collingridge wrote: “When change is easy, the need for it cannot be foreseen; when the need for change is apparent, change has become expensive, difficult and time consuming.” See Collingridge, David. (1980). The social control of technology (p. 11). St. Martin’s Press.

(26) Stanley-Lockman, Zoe. (2021, March 10). Emerging AI governance for international security: The stakes and stakeholders of responsibility. Azure Forum. https://www.azureforum.org/the-azure-forum-strategic-insights/emerging-ai-governance-for-international-security-the-stakes-and-stakeholders-of-responsibility/.

(27) Dafoe (2015), 1048–1049.

(28) In the literature on the adaptation of doctrine, structures, tactics, techniques, and procedures, case studies typically focus on military services or non-state insurgents rather than on supra-national organizations.

(29) Horowitz, Michael. (2010). The diffusion of military power: Causes and consequences for international politics. Princeton University Press; Kier, Elizabeth. (1997). Imagining war: French and British military doctrine between the wars. Princeton University Press.

(30) Shimshoni, Jonathan. (1990–1991). Technology, military advantage, and World War I: A case for military entrepreneurship. International Security 15(3), 187–215.

(31) Mahnken, Thomas. (2008). Technology and the American way of war since 1945 (p. 219). Columbia University Press.

(33) For more on AI governance more broadly, see Cihon, Peter, Maas, Matthijs M., & Kemp, Luke. (2020). Fragmentation and the future: Investigating architectures for international AI governance. Global Policy 11(5).

(34) Grissom, Adam. (2006). The future of military innovation studies. Journal of Strategic Studies 29(5), 907, citing Barnett, Correlli. (1963). The swordbearers: Studies in supreme command in the First World War (p. 11). Eyre & Spottiswoode.

(35) Barnett (1963), 11; Grissom (2006), 907.

(37) Raudzens, George. (1990). War-winning weapons: The measurement of technological determinism in military history. Journal of Military History 54(4), 403–433; Phillips, Gervase. (2002). The obsolescence of the Arme Blanche and technological determinism in British military history. War in History 9(1), 39–59; Roland, Alex. (2010). Was the nuclear arms race deterministic? Technology and Culture 51(2), 444–461; Rogers, Clifford J. (2011). The development of the longbow in late medieval England and “technological determinism”. Journal of Medieval History 37(3), 321–341; Pavelec, Sterling Michael. (2012). The inevitability of the weaponization of space: Technological constructivism versus determinism. Astropolitics 10(1), 39–48; Kuo, Kendrick. (2021). Military innovation and technological determinism: British and U.S. ways of carrier warfare, 1919–1945. Journal of Global Security Studies 6(3), 1–19.

(38) Hughes, Thomas. (1987). The evolution of large technological systems. In W. Bijker, T. Hughes, & T. Pinch (Eds.), The social construction of technological systems. MIT Press; Fino, Steven. (2015, January). All the missiles work: Technological dislocations and military innovation: A case study in U.S. Air Force air-to-air armament post-World War II through Operation Rolling Thunder. Air University School of Advanced Air and Space Studies, 22–23.

(39) Fino (2015), 41.

(40) Dafoe (2015), 1049.

(41) Fino (2015), 45.

(42) Dafoe (2015), 1069.

(43) The authors thank Maaike Verbruggen for pointing us toward the NATO Advanced Research workshop on “Social Responses to Large Technical Systems: Regulation, Management, or Anticipation” that took place in Berkeley, California from October 17–21, 1989 as one example of this engagement.

(44) Examples include the socio-technical “NATO Human View” framework as part of the 2010 technical report “Human Systems Integration for Network Centric Warfare,” as well as the “Futures Assessed alongside socio-Technical Evolutions (FATE) Method” developed from 2018–2021.

(45) Leonardi, Paul M., & Barley, Stephen R. (2010). What’s under construction here? Social action, materiality, and power in constructivist studies of technology and organizing. The Academy of Management Annals 4(1), 3.

(46) Christie, Edward Hunter. (2020, November 24). Artificial intelligence at NATO: Dynamic adoption, responsible use. NATO Review. https://www.nato.int/docu/review/articles/2020/11/24/artificial-intelligence-at-nato-dynamic-adoption-responsible-use/index.html.

(47) The Organisation for Economic Co-operation and Development (OECD) has identified 10 instruments in its work on the “ethics of emerging technologies” policy theme, which offers a broader list of possible technology governance mechanisms. They are regulatory oversight and ethical advice bodies, formal consultation of stakeholders or experts, national strategies/agendas/plans, emerging technology regulation, policy intelligence (e.g., evaluations, benchmarking, and forecasts), networking and collaborative platforms, creation or reform of governance structure or public body, standards and certification for technology development and adoption, information services and access to datasets, and grants for business R&D and innovation. See Organisation for Economic Co-operation and Development. (2021). Technology governance (accessed April 29, 2021). https://www.oecd.org/sti/science-technology-innovation-outlook/technology-governance/.

(48) Deni, John R. (2020). Security threats, American pressure, and the role of key personnel: How NATO’s defence planning process is alleviating burden sharing. U.S. Army War College Press.

(49) Lawrence, Christie, & Cordey, Sean. (2020, August). The case for increased transatlantic cooperation on artificial intelligence. Harvard Kennedy School Belfer Center for Science and International Affairs, 85. https://www.belfercenter.org/publication/case-increased-transatlantic-cooperation-artificial-intelligence.

(50) NATO. (2020, November). NATO 2030: United for a New Era: Analysis and Recommendations of the Reflection Group Appointed by the NATO Secretary General, p. 29. https://www.nato.int/nato_static_fl2014/assets/pdf/2020/12/pdf/201201-Reflection-Group-Final-Report-Uni.pdf.

(51) NATO Standardization Office. (2004, January 13). NATOTerm (definition). https://nso.nato.int/natoterm/Web.mvc.

(52) Beckley, Paul. (2020, July). Revitalizing NATO’s once robust standardization programme. NATO Defense College, 3–4.

(53) See, for instance, the NATO Modelling and Simulation Group within the STO, which focuses on standards related to testing and experimentation, among other functions. See Stanley-Lockman, Zoe. (2021, August). Military AI cooperation toolbox: Modernizing defense science and technology partnerships for the digital age. Center for Security and Emerging Technology, 36–37.

(54) For example, arms control is a vital pillar of AI governance that we do not address explicitly in this chapter.

(55) Jobin, Anna, Ienca, Marcello, & Vayena, Effy. (2019, September). The global landscape of AI ethics guidelines. Nature Machine Intelligence 1, 389–399.

(56) This argument about ethical constraints putting democratic forces at a competitive disadvantage is usually part of a broader framing of an AI arms race, and often specifically focuses on autonomy in weapons rather than just AI ethics. See Boudreaux, Benjamin. (2019, January 11). Does the U.S. face an AI ethics gap? Real Clear Defense. https://www.realcleardefense.com/articles/2019/01/11/does_the_us_face_an_ai_ethics_gap_114095.html; Thornton, Rod. (2019, June 17). One to ponder: The UK’s ethical stance on the use of Artificial Intelligence in weapons systems. Defence in Depth. https://defenceindepth.co/2019/06/17/one-to-ponder-the-uks-ethical-stance-on-the-use-of-artificial-intelligence-in-weapons-systems/; Roper, Will. (2020, October 24). There’s no turning back on AI in the military. Wired. https://www.wired.com/story/opinion-theres-no-turning-back-on-ai-in-the-military/; Morgan, Forrest E., Boudreaux, Benjamin, Lohn, Andrew J., Ashby, Mark, Curriden, Christian, Klima, Kelly, & Grossman, Derek. (2020). Military applications of artificial intelligence: Ethical concerns in an uncertain world. RAND Corporation, 2(11–12), 41. https://www.rand.org/pubs/research_reports/RR3139-1.html.

(57) The National Security Commission on Artificial Intelligence (NSCAI) acknowledged this point in its 2019 report: “Everyone desires safe, robust, and reliable AI systems free of unwanted bias, and recognizes today’s technical limitations. Everyone wants to establish thresholds for testing and deploying AI systems worthy of human trust and to ensure that humans remain responsible for the outcomes of their use. Some disagreements will remain, but the Commission is concerned that debate will paralyze AI development. Seen through the lens of national security concerns, inaction on AI development raises as many ethical challenges as AI deployment. There is an ethical imperative to accelerate the field of safe, reliable, and secure AI systems that can be demonstrated to protect the American people, minimize operational dangers to U.S. service members, and make warfare more discriminating, which could reduce civilian casualties.” National Security Commission on Artificial Intelligence. (2019, November). Interim report. Washington, D.C., 16–17.

(58) We thank Kate Devitt for raising this point.

(59) See Horowitz (2018) for more on AI and first-mover advantage. See also Liivoja, R., & McCormack, T., (Eds). (2016). Routledge Handbook of the Law of Armed Conflict. Routledge.

(60) North Atlantic Treaty (April 4, 1949): “founded on the principles of democracy, individual liberty and the rule of law” and “promoting conditions of stability and well-being.”

(61) Gilli, Andrea, & Stanley-Lockman, Zoe. (2020). Ethical purpose: Ethics and values. In NATO-Mation: Strategies for Leading in the Age of Artificial Intelligence (pp. 29–30). NDC Research Paper.

(62) Allied Command Transformation. (2018). Framework for future Alliance operations, 15–16.

(64) van der Merwe, Joanna. (2021). Establishing NATO ethical AI principles is the first step toward both technical and political alignment. CEPA. https://cepa.org/nato-leadership-on-ethical-ai-is-key-to-future-interoperability/.

(65) Wong, Yuna Huh, Yurchak, John M., et al. (2020). Deterrence in the age of thinking machines. RAND Corporation 6, 60.

(66) For more on governance tools such as accountability and verification procedures, see Salamon, Lester. (2002). The tools of government: A guide to the new governance. Oxford University Press.

(67) Hill, Steven. (2020). AI’s impact on multilateral military cooperation: Experience from NATO. Presented at Symposium: How will Artificial Intelligence Affect International Law? (p. 148); Stanley-Lockman, Zoe. (2021, August). Responsible and ethical military AI: Allies and Allied perspectives. Center for Security and Emerging Technology. https://cset.georgetown.edu/publication/responsible-and-ethical-military-ai/.

(69) See for example Gilli & Stanley-Lockman (2020); Gilli, A., Pellegrino, M., & Kelly, R. (2019). Intelligence machines and the growing importance of ethics. In A. Gilli (Ed.), The brain and the processor (pp. 45–54). NATO; Tiell, S. C., & Metcalf, B. (2016). Universal principles of data ethics: 12 guidelines for developing ethics codes. Accenture; Rao, A., Palaci, F., & Chow, W. (2019). A practical guide to responsible artificial intelligence. PricewaterhouseCoopers.

(71) Sprenger, Sebastian. (2021, April 27). NATO tees up negotiations on artificial intelligence in weapons. C4ISRnet. https://www.c4isrnet.com/artificial-intelligence/2021/04/27/nato-tees-up-negotiations-on-artificial-intelligence-in-weapons/.

(72) While the debate has been heavily focused on IHL applicability issues, there is significant uncertainty surrounding international human rights law and domestic legal obligations. See for example, Donahoe, Eileen, & Metzger, Megan MacDuffee. (2019). Artificial intelligence and human rights. Journal of Democracy 30(2); Raso, Filippo, Hilligoss, Hannah, Krishnamurthy, Vivek, Bavitz, Christopher, & Kim, Levin Yerin. (2018, September 25). Artificial intelligence & human rights: Opportunities & risks. Berkman Klein Center Research Publication No. 2018-6. https://ssrn.com/abstract=3259344 and http://dx.doi.org/10.2139/ssrn.3259344.

(73) Proponents of an international treaty that prohibits LAWS argue that a multilateral ban would effectively remove ambiguity and uncertainty around AI in lethality questions and set clear guidelines and expectations about the limits of autonomous weapon systems. However, skeptics of an international treaty ban point to the political hurdles inherent to any international treaty, let alone one for a technology that is difficult to define, verify, and enforce. Importantly, skeptics are not necessarily against the regulation of autonomous lethal decision-making, but believe that a blanket ban could also preclude positive outcomes associated with technology adoption, including the reduction of human error. For academic commentary, see Garcia, D. (2015). Killer robots: Why the U.S. should lead the ban. Global Policy 6(1); Crootof, Rebecca. (2014–2015). The killer robots are here: Legal and policy implications. Cardozo Law Review 36; Goose, Stephen, & Wareham, Mary. (2016). The growing international movement against killer robots. Harvard International Review 37(4).

(74) For instance, the International Committee of the Red Cross position paper on AI and machine learning in armed conflict identifies three main categories of AI use by parties to conflict that pose risks from a humanitarian perspective: increasing autonomy, new means of cyber and information warfare, and the changing nature of decision-making in conflict. See International Committee of the Red Cross. (2021, March 1). Artificial intelligence and machine learning in armed conflict: A human-centred approach. https://international-review.icrc.org/articles/ai-and-machine-learning-in-armed-conflict-a-human-centred-approach-913.

(75) Coalition interoperability refers to “the ability to act together coherently, effectively and efficiently, to achieve Allied tactical, operational, and strategic objectives.” https://nso.nato.int/natoterm/Web.mvc. Another definition, from North Atlantic Treaty Organization, Allied Joint Doctrine for Air and Space Operations, AJP-3.3 Ed. B Version 1 (April 2016), para. 2.2.1, describes interoperability as “the effectiveness of Allied forces in peace, crisis or conflict, depends on the ability of forces provided to operate together coherently effectively and efficiently.”

(76) Hill, Steven, & Lemétayer, David. (2016). Legal issues of multinational military operations: An Alliance perspective. Military Law and the Law of War Review 55, 13.

(77) Kelly, Col. Michael. (2005). Legal factors in military planning for coalition warfare and military interoperability: Some implications of the Australian Defence Force. Australian Army Journal 2(2), 690.

(78) Hill, Steven, & Holzer, Leonard. (2019). Detention operations in non-international armed conflicts between international humanitarian law, human rights law and national standards: A NATO perspective. Israel Yearbook on Human Rights 49.

(79) For more on legal issues regarding NATO’s detention standards, see NATO Legal Gazette. (2019, May). Significant issues for the NATO legal community 39. https://www.act.nato.int/application/files/7616/0999/3871/legal_gazette_39.pdf. This illustration is not meant to minimize the legal uncertainties that did exist for the detention regime; legal complexities such as the legal basis for detention and the status of the armed conflict were central legal issues for the coalition. Rather, it illustrates that the NATO standard helped provide a standard procedure for implementing a detention regime, even in the face of legal uncertainty about the conflict.

(80) Trabucco, Lena. (2020). Judges on the Battlefield? Judicial Observer Effects in US and UK National Security Policies. PhD Diss. Northwestern University (p. 257).

(81) Hill, Steven, & Lemétayer, David. (2016). Legal issues of multinational military operations: An Alliance perspective. Military Law and the Law of War Review 55, 13.

(82) For a categorization of AI safety problems that has been influential in the field, see Amodei, Dario, Olah, Chris, Steinhardt, Jacob, Christiano, Paul, Schulman, John, & Mané, Dan. (2016). Concrete problems in AI safety. arXiv preprint, arxiv:1606.06565. https://arxiv.org/abs/1606.06565.

(83) North Atlantic Treaty Organization. (2019, November 29). NATO: Ready for the future: Adapting the Alliance (2018–2019), p. 17. https://www.nato.int/nato_static_fl2014/assets/pdf/pdf_2019_11/20191129_191129-adaptation_2018_2019_en.pdf.

(84) Valášek, Tomáš. (2017, August 31). How artificial intelligence could disrupt alliances. Carnegie Europe. https://carnegieeurope.eu/strategiceurope/72966.

(86) According to the United States, “DoD AI systems should be designed and engineered to fulfill their intended function while possessing the ability to detect and avoid unintended harm or disruption, and for human or automated disengagement or deactivation of deployed systems that demonstrate unintended escalatory or other behavior.” See Defense Innovation Board. (2019, October 31). AI principles: Recommendations on the ethical use of artificial intelligence by the Department of Defense: Supporting document, p. 38.

(87) Reding and Eaton (2020), 66; Konaev, Margarita, Chahal, Husanjot, Fedasiuk, Ryan, Huang, Tina, & Rahkovsky, Ilya. (2020, October). U.S. military investments in autonomy and AI. Center for Security and Emerging Technology, p. 15. See also Soare, Simona. (2021). What if … the military AI of NATO and EU states is not interoperable? In Florence Gaub (Ed.), What if … not? The cost of inaction. European Union Institute for Security Studies.

(88) This assumes that the humans designing the system accurately transpose their intended outcome into the reward signals, which is not necessarily a given, but is also beyond the NATO remit because these are national decisions.

(89) Stanley-Lockman, Zoe. (2021, August). Responsible and ethical military AI: Allies and Allied perspectives. Center for Security and Emerging Technology.

(90) Technical failures can occur unintentionally, such as when performance is technically correct but produces unsafe consequences, or when ethics and safety considerations are not sufficiently built into the design phase and sustained over the life cycle of the system. They can also occur intentionally, when motivated actors attack the system to “misclassify the result, infer private training data, or to steal the underlying algorithm.” See Siva Kumar, Ram Shankar, O’Brien, David, Snover, Jeffrey, Albert, Kendra, & Viljoen, Salome. (2019, November 11). Failure modes in machine learning. Microsoft Corporation. https://docs.microsoft.com/en-us/security/engineering/failure-modes-in-machine-learning.

(91) French Ministry of Armed Forces. (2019, September). Artificial intelligence in support of defence, p. 9.

(92) See for example, Flournoy, Michèle A., Haines, Avril, & Chefitz, Gabrielle. (2020, October). Building trust through testing: Adapting DOD’s test & evaluation, validation & verification (TEVV) enterprise for machine learning systems, including deep learning systems. WestExec Advisors.