Artificial Intelligence

Artificial intelligence (AI) is rapidly reshaping military decision-making, accelerating operations and expanding the role of machine-driven analysis on the battlefield. As AI systems influence choices once reserved for humans, the risks of miscalculation, unintended escalation and legal or ethical violations grow. Understanding these dynamics is essential to strengthen governance and prevent destabilising outcomes.

AI in military decision-making: The global governance challenge

The race to integrate AI into military command systems is accelerating — and changing how warfare is conducted. International law and norms governing the use of force and war are being ignored. The global community remains unprepared to address serious threats to international security. Although efforts to develop global governance continue, they are too slow and fragmented to keep up with the rapid technological advancements.

Denise Garcia, Advisor Global Challenges Foundation

Denise Garcia is a Professor of Political Science and International Affairs at Northeastern University and a founding faculty member of its Experiential Robotics Institute. She is also a Commissioner at the Global Commission on Responsible AI in the Military.

The current state of play

Across the world’s major military powers, AI is rapidly moving from experimental laboratories into operational command-and-control systems. Since 2017, advances in machine learning and other computational techniques, along with many countries’ decisions to incorporate AI into their military operations, have accelerated the militarisation of AI. This has led to the gradual integration of Decision Support Systems (DSS) into the battlefield, with many already active in military missions, guided by the goal of improving situational awareness to gain a strategic military advantage.

The promise is tempting: AI systems can quickly analyse large amounts of battlefield data, identify patterns invisible to human analysts and allow commanders to act faster than enemies. Supporters say this could lower casualties and improve targeting accuracy. Critics argue that the speed and automation pose serious new risks to peace, diplomacy and international stability by undermining long-standing ethical principles and conduct norms, resulting in blatant violations of international law.

Data to decision-making — AI’s expanding role in the battlefield

As AI increasingly shapes decision-making in conflict, its rapid integration into military systems raises profound challenges for safety, accountability and ethics. AI-driven tools, especially DSS, often operate with limited predictability and transparency, making it difficult for users to understand and trust their analyses and outputs. The competitive drive among states and actors to adopt these technologies risks premature deployment before they are sufficiently tested, potentially leading to grave operational and humanitarian consequences. Moreover, as machine learning systems take on roles traditionally held by humans, they risk eroding human judgment — the foundation of ethical and legal accountability in warfare.

Ultimately, determining responsibility for battlefield decisions must remain a human function, grounded in contextual understanding rather than technical indicators alone, to ensure compliance with international humanitarian law (IHL) and the preservation of moral agency in war.

Heightened escalation dynamics and nuclear AI dangers

The integration of AI into military decision-making creates a dangerous paradox: while militarily advanced countries adopt these systems to reduce uncertainty on the battlefield, they simultaneously introduce new sources of unpredictability stemming from data vulnerabilities and the brittleness of algorithmic systems, opening the door to manipulation by adversaries and to accidents. The gravest risk, and the starkest governance challenge, arises from the integration of AI into the command and control of nuclear arsenals. AI in early warning systems, intelligence analysis and missile defence could threaten nuclear assets, creating multiple pathways for miscalculation and crisis instability and lowering states’ thresholds for nuclear use during a conflict.

Moreover, the speed at which AI systems operate compresses decision timelines. In a crisis scenario involving nuclear-armed states, AI-enabled systems might accelerate the tempo of operations to a pace where human leaders feel compelled to preemptively authorise responses before fully understanding the situation.

Accountability gaps and human oversight

IHL requires that human actors foresee, govern and constrain the use of weaponry. Yet, as AI systems evolve in sophistication and operate at unprecedented speeds, the scope for genuine human oversight diminishes significantly. Traditional legal frameworks presume human moral agency and deliberate decision-making; however, when an AI system is involved, assigning accountability becomes far more complex, compounding the risks of automation bias and over-reliance on AI-generated outputs.

The complexity of this issue is exacerbated by the inherent black box nature of many advanced machine learning systems. Despite their strong performance in testing environments, their underlying reasoning remains largely opaque. This lack of transparency in AI decision-making processes compromises the crucial human oversight required to uphold legal and ethical standards in military operations.

Military AI systems inherently depend on vast amounts of data for training, real-time operation and continuous learning. This dependence creates multiple vulnerabilities that adversaries can exploit. Consequently, an AI system that performs robustly in controlled testing environments may behave unpredictably in operational settings when confronted with manipulated or adversarial inputs. Data bias represents another critical concern. If AI systems are trained predominantly on data from specific operational environments or on particular adversary signatures, they may fail catastrophically when confronted with novel situations.

In sum, what is at risk is the erosion of the moral and legal boundaries that limit the use of force: a widening gap between human accountability and emerging AI-driven military systems, with destabilising effects.

The private sector: Blurring civilian-military boundaries

Military AI is created primarily by the private tech sector. Leading companies have made significant breakthroughs with both civilian and military applications, building sophisticated systems that militaries then adapt to their needs. The dual-use and distributed nature of AI technology creates new challenges for establishing global governance.

The global nature of the AI industry further complicates governance and is driving the militarisation of civilian AI research, potentially limiting academic freedom and international cooperation. Private companies control the development and deployment of AI, which could significantly alter global power dynamics. Power disparities between the advanced North and the developing South are likely to widen, as the vast majority of developing countries lack the resources to compete for AI leadership or the power to shape inclusive, just and fair rules for all.

Global governance: Significant gaps and concrete pathways forward

There are no universal rules or norms regarding the use of AI in military applications. However, efforts to regulate AI in the military began in 2017, following significant breakthroughs in machine learning and deep learning. Three diplomatic processes are under way. The first is state-led and focused on creating a new treaty on autonomous weapons at the UN in Geneva, involving all the major military powers. However, talks remain mired in definitional disputes and geopolitical tensions, and because the process operates by consensus, breakthroughs are hard to achieve. Two key questions remain unresolved: (1) what constitutes meaningful human control over AI-enabled weapons? and (2) how should IHL apply to AI decision-support systems? These talks could continue at the UN General Assembly, which allows for a more inclusive process and requires only a two-thirds majority, but this approach may fail to secure the major military powers’ buy-in.

The second is led by coalitions of middle powers and small states calling for the responsible use of AI in the military, which convened two summits, in 2023 and 2024. This process presents an innovative opportunity to forge new global governance that draws on the voices of more actors.

The third centres on the first UN resolution on autonomous weapons, a breakthrough adopted in New York in December 2023 with 164 votes in favour. Subsequently, on November 6, 2024, a second resolution, Resolution 79/239 on artificial intelligence in the military domain and its implications for international peace and security, received overwhelming support from UN Member States: 165 in favour and only two against. Middle powers and small states are likely to continue leading international efforts to develop norms.

However, several governance gaps remain unaddressed. First, there is no universally accepted risk framework for AI in military contexts. Second, confidence-building measures remain underdeveloped. Third, transparency around military AI development is severely limited. Nations keep their AI capabilities as closely guarded secrets, making it impossible for others to assess intentions or adjust their own responses. This opacity fuels worst-case assumptions and promotes destabilising military race dynamics.

Pathways forward

Effective governance of AI in military decision-making requires a comprehensive approach across multiple domains and actors. Creating permanent institutional mechanisms for global cooperation and multi-stakeholder dialogue would foster trust through confidence-building measures and allow lessons from high-stakes military AI applications and risk-mitigation strategies to be shared. All of this could be guided by a responsibility-by-design framework that integrates ethical and legal compliance from the earliest development stages through the entire system lifecycle and into the socio-technical institutions where AI is used, while protecting human dignity.

The concrete governance framework for military AI should involve international confidence-building measures, transparency, legal accountability, technical safety safeguards and multi-stakeholder oversight. These steps aim to manage AI risks, prevent escalation, assign accountability and promote responsible development and deployment.

Conclusion

The integration of AI into military decision-making offers significant benefits, such as faster responses and fewer casualties, but also poses serious risks to stability and legal principles. The global community’s current governance systems are inadequate to manage these rapid technological advances, creating a troubling gap between AI development and regulatory frameworks.

Closing this gap requires sustained political will, creative institutional innovation and coordinated cooperation among nations with divergent interests and values. The stakes could not be higher. Left ungoverned, military AI could lower thresholds for conflict, compress decision timelines beyond human comprehension, blur the boundaries between peace and war, and ultimately undermine the institutions that have helped prevent great power war for eight decades.
