Application of Artificial Intelligence in Military Decision-Making: Seizing Advantages and Mitigating Risks
This report examines the application of artificial intelligence in decision support systems at the operational level, constructing an evaluation framework built on three dimensions: scope of applicability, data foundation, and human-machine interaction. It also offers commanders five specific risk mitigation recommendations.
Published: 22/12/2025
Chapter Titles
- Executive Summary
- Introduction
- Historical Background and Efforts in Military Decision-Making Cognition and Prediction
- Global Adoption of AI for Military Decision Support: Current Status
- Types of Decision Support
- A Simplified Framework for Commander Evaluation of AI Military Decision Support Systems
- Scope of Applicability Considerations
- Data Considerations
- Human-Machine Interaction Considerations
- Recommendations
- Conclusion
- Appendix
Overview
As artificial intelligence is integrated into militaries worldwide at an accelerating pace, its potential to improve the quality and speed of decision-making at the operational level has drawn significant attention. Operational commanders face the formidable challenge of making life-and-death decisions amid vast, rapidly changing, and often incomplete flows of information. Artificial Intelligence Decision Support Systems (AI-DSS) are seen as a key tool for piercing this fog of war. Enthusiasm for the technology, however, must be balanced against its actual capabilities and limitations to ensure appropriate and effective deployment. This report systematically examines the current state, opportunities, and risks of AI military decision support, and provides military decision-makers with a practical framework for assessment and operation.
The report first traces the historical lineage of military decision support tools, from ancient reliance on oracles to modern campaign models and early warning systems, revealing humanity's enduring effort to improve situational awareness and prediction under uncertainty. Today, the major military powers, represented by the United States, China, Russia, and NATO, have publicly expressed high expectations for AI-DSS and committed substantial research and development resources. The market has likewise seen a proliferation of commercial and military systems with diverse functions, spanning tasks from situational awareness and planning and execution to predictive analysis. These systems blur the boundaries between tactical, operational, and strategic decision-making, complicating commanders' choices about selection and use.
To address this challenge, the report proposes a simplified three-dimensional evaluation framework for commanders to work through systematically when considering the deployment of an AI-DSS. First, scope of applicability: is the system operating within a clearly defined and well-understood context? Commanders must stay alert to context shift (inconsistency between the training and operational environments), distinguish predictions grounded in physical laws from those involving human behavior, manage the risks posed by flexible but poorly bounded systems, and fundamentally accept the irreducible uncertainty inherent in military decision-making. Second, data foundation: can the training data support the system's conclusions? Acquiring high-quality, high-fidelity data is especially difficult in the military domain; data bias (stemming from sensor limitations, enemy deception, and similar factors) and data scarcity (particularly regarding actual combat) can severely undermine the reliability of AI system outputs. Third, human-machine interaction: what are the capabilities and limitations of the human-machine system as a whole in a specific environment? The report highlights in particular the risks of unrealistic expectations of Large Language Models (LLMs), human cognitive biases (such as automation bias), and organizational biases (such as an excessive pursuit of speed and resource savings).
Based on this analysis, the report offers military organizations five personnel- and process-centric risk mitigation recommendations. First, establish context- and risk-based deployment standards that make conditions of use clear and reversible. Second, impose rigorous training and qualification certification on AI-DSS operators, particularly those involved in lethal decision-making. Third, institute a continuous certification cycle that periodically assesses system effectiveness and unit proficiency. Fourth, appoint a Responsible AI Officer within military units to raise AI literacy, report incidents, and facilitate communication. Fifth, systematically document AI system failures and human errors in a central knowledge repository to promote experience sharing, prevent repeated mistakes, and build transparency.
The report ultimately stresses that while AI can improve the quality and speed of battlefield decision-making, it cannot replace human judgment. Commanders and their teams must deeply understand the strengths and weaknesses of AI-DSS. Only through deliberate human-machine integration, guided by a clear framework, can they harness the technology's strategic advantages while strictly controlling the accompanying risks.