How is AI integrated in military decision-making?
Several countries now incorporate AI into their military operations through decision support systems (DSS) on the battlefield; this is no longer a distant or hypothetical prospect. AI-DSS analyses intelligence data and provides commanders with recommendations, and major military powers use such systems in pursuit of military dominance.
The risk is that accidental escalation could occur if these systems fail, respond poorly to novel interactions or changing circumstances, or are hacked. AI integration therefore raises the likelihood of miscalculation and misperception between adversaries.
What is at stake?
AI integration in the military increases the risk of escalation and may lead to responses that spiral out of control, because machine learning systems are inherently unreliable. Unlike traditional software that follows explicit rules, AI systems trained on data can produce unpredictable outputs when they encounter situations outside their training data. The risk of violating ethical principles and disregarding existing rules of international law, especially international humanitarian law (IHL), is high. Lowering the threshold for going to war or starting a conflict also poses a serious risk.
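To make that point concrete, the minimal sketch below uses purely synthetic data and a toy classifier (no real military system or dataset is implied) to show how a model trained on data can return confident answers for inputs far outside anything it has seen.

```python
# Illustrative sketch only: a toy classifier that remains confident
# far outside its training distribution. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Training data: two clusters near the origin (the only "world" the model knows).
class_a = rng.normal(loc=[-1.0, 0.0], scale=0.3, size=(200, 2))
class_b = rng.normal(loc=[+1.0, 0.0], scale=0.3, size=(200, 2))
X = np.vstack([class_a, class_b])
y = np.array([0] * 200 + [1] * 200)

model = LogisticRegression().fit(X, y)

# In-distribution query: close to the training clusters.
print(model.predict_proba([[0.9, 0.1]]))

# Out-of-distribution query: nothing like the training data.
# The model still returns an answer, typically with near-certain
# confidence, even though it has no basis for judging this input.
print(model.predict_proba([[40.0, -35.0]]))
```

The sketch is deliberately simple; the underlying issue, a system producing outputs without any signal that the input lies outside its experience, is what the paragraph above describes.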
What are the potential risks?
AI enables military forces to process information and coordinate responses more quickly than ever before, pushing adversaries to develop similar capabilities to avoid falling behind in the military AI race. This compression of decision-making time enables faster action but also shortens the window for human judgement, which is especially important in crisis situations, where careful consideration could prevent errors even as every second counts. Commanders may come to rely on AI recommendations rather than wait for detailed analysis. The risks of miscalculation and mismanagement are therefore significant.
The most serious long-term risk involves integrating AI into nuclear command, control and communications (NC3) systems. This integration poses significant risks, including weakening deterrence by eroding second-strike confidence, introducing vulnerabilities in command and control through cyber threats, and increasing the likelihood of false alarms in early warning systems. These problems could undermine nuclear stability and increase the risk of unintended confrontation.
At the deepest level, AI in military decision-making has already transformed the character of warfare and international relations. The speed of AI-enabled operations is making human judgement increasingly peripheral to conflict dynamics. Wars might be fought and decided at algorithmic speed, with humans relegated to observers of machine-driven escalation spirals they initiated but cannot control.
Cumulatively, these automation processes shift decision-making authority from humans to machines in ways that may not be intentionally chosen or fully understood. Additionally, AI systems are only as reliable as their training data, and military applications face specific data challenges. Bias in training data, such as datasets that over-represent certain regions, demographic groups or conflict types, can lead to distorted predictions. Integrating AI into military decision-making also creates an accountability gap: the challenge of assigning responsibility when AI systems cause harm.
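As a hypothetical illustration of the bias point (synthetic data only, no real datasets or systems), the sketch below trains a simple model on data dominated by one "region" and shows near-chance predictions for an under-represented one.

```python
# Illustrative sketch only: a training set that over-represents one
# region degrades predictions for another. Everything is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_region(n, sep_axis):
    """Two classes of points separated along the given axis (0 = x, 1 = y)."""
    mean0, mean1 = np.zeros(2), np.zeros(2)
    mean0[sep_axis], mean1[sep_axis] = -1.0, 1.0
    X0 = rng.normal(mean0, 0.4, size=(n, 2))
    X1 = rng.normal(mean1, 0.4, size=(n, 2))
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

# Region A dominates the training data; region B is barely represented.
XA, yA = make_region(500, sep_axis=0)
XB, yB = make_region(10, sep_axis=1)
X_train = np.vstack([XA, XB])
y_train = np.concatenate([yA, yB])

model = LogisticRegression().fit(X_train, y_train)

# Evaluate on fresh samples from each region: the model has learned
# region A's pattern and performs close to chance on region B.
XA_test, yA_test = make_region(200, sep_axis=0)
XB_test, yB_test = make_region(200, sep_axis=1)
print("accuracy, over-represented region A:", model.score(XA_test, yA_test))
print("accuracy, under-represented region B:", model.score(XB_test, yB_test))
```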

What is being done to govern this risk and the gaps?
Governance is complicated by the technology’s dual-use nature and its distribution across many segments of society. AI capabilities developed for civilian use, such as facial recognition, natural language processing and predictive analytics, can easily be adapted for military applications.
IHL requires that warring parties carefully apply the principles of distinction, proportionality and precaution to minimise civilian harm. These principles require contextual understanding, ethical judgement and flexible interpretation, which current AI systems lack. As militaries increasingly deploy AI in such contexts, the gap between legal obligations and actual practice grows, eroding legal and ethical barriers.
While new initiatives are beginning to address parts of the challenge, there are no universal norms or agreed-upon rules guiding how AI can or should be used in warfare. In particular, the absence of a shared risk framework leaves states without common standards for assessing and managing the dangers that military AI systems pose.
Current governance efforts mainly focus on broader AI applications in the civilian domain, with initiatives across various multilateral forums, including the United Nations. Governance in the military domain remains fragmented, with limited coordination across different bodies: the UN Group of Governmental Experts on Lethal Autonomous Weapons Systems has met periodically since 2017 but has not produced any binding agreements. The Political Declaration on Responsible Military Use of AI and Autonomy, endorsed by over 50 states, and the outcome documents of the Responsible AI in the Military Domain (REAIM) summits establish voluntary principles without enforcement mechanisms. While voluntary principles can help build consensus, countries face no penalties for non-compliance. The absence of a single authoritative body to oversee governance efforts allows inconsistencies and gaps to persist, with some applications covered by multiple frameworks and others left unaddressed.
The window for effective AI governance in military systems is closing due to increasing integration and institutional entrenchment, which make reversal efforts difficult. Success depends on recognising that AI governance is primarily a political issue that requires international cooperation.