U.S. Department of Homeland Security: Report on Mitigating Cross-Cutting Risks at the Intersection of Artificial Intelligence and CBRN Threats
Pursuant to Executive Order 14110, this report addresses the misuse risks and defensive applications of artificial intelligence in the domain of chemical and biological threats, proposing a cross-departmental collaborative governance framework and policy recommendations.
Published: 23/12/2025
Key Chapters
- Executive Summary
- Introduction
- Background: Artificial Intelligence Development Trends
- Misuse of Artificial Intelligence in the Research, Development, and Production of Chemical, Biological, Radiological, and Nuclear Threats
- Benefits and Applications of Artificial Intelligence in Countering Chemical, Biological, Radiological, and Nuclear Threats
- Applications of Artificial Intelligence in Physical and Life Sciences
- Artificial Intelligence Governance and Oversight Trends
- Key Findings and Corresponding Policy Recommendations
- List of Abbreviations
Document Introduction
On October 30, 2023, the Biden administration signed Executive Order 14110, "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," which identified risks at the intersection of artificial intelligence and Chemical, Biological, Radiological, and Nuclear (CBRN) threats as a national security priority and directed the Department of Homeland Security to lead the assessment of those risks and propose governance solutions. This report, compiled primarily by the Department of Homeland Security's Countering Weapons of Mass Destruction Office (CWMD), is a direct response to Section 4.4 of that Executive Order and focuses on the dual impact of artificial intelligence in the fields of chemical and biological threats.
The report systematically reviews the current state of artificial intelligence development, including trends in generative AI, foundation models, and Bio-Design Tools (BDTs), and analyzes the innovative value and potential risks of these technologies in physical and life sciences research. It finds that while AI can accelerate progress in beneficial areas such as drug development and precision agriculture, it may also be misused by non-state actors (such as extremists) and state actors in the research, development, and production of CBRN weapons by lowering technical barriers and democratizing access to dangerous knowledge.
To ensure the comprehensiveness and authority of the analysis, the report's compilation drew on resources from across the U.S. government and extensively consulted experts from the Department of Energy, private AI laboratories, academia, think tanks, and third-party model evaluation organizations, yielding a risk assessment and solution set grounded in multiple perspectives. The report analyzes the threat pathways of AI misuse, identifies risk points across the entire chain from conceptualization and material acquisition to weaponization and attack execution, and proposes risk mitigation measures centered on pathway disruption.
At its core, the report presents nine key findings, covering central issues such as the lack of cross-agency risk consensus, security challenges posed by the proliferation of open-source models, limitations of existing regulatory systems, and the necessity of international collaboration. Based on these findings, the report offers a series of targeted policy recommendations, including strengthening cross-departmental intelligence sharing, developing safety guidelines for AI model releases, establishing vulnerability reporting mechanisms, improving laboratory safety oversight, and advancing international governance coordination. Together, these recommendations chart a concrete path toward an adaptive, iterative AI safety governance framework.
This report is not only a significant policy document for the U.S. government in addressing the intersecting risks of AI and national security but also a reference blueprint for balancing AI innovation and safety governance globally. It offers substantial decision-support and reference value for defense researchers, policymakers, intelligence practitioners, and scholars in related fields.