Practical Guide on Artificial Intelligence for Preventing and Countering Violent Extremism

This guide, developed by the United Nations Office of Counter-Terrorism in collaboration with the Government of the Republic of Korea, provides policymakers and practitioners with a framework, an analysis of opportunities, a risk assessment, and implementation pathways for the responsible use of artificial intelligence technologies in addressing the challenges of violent extremism.

Details

Published

07/03/2026

Chapter Titles

  1. Introduction: Artificial Intelligence and the Preventing and Countering Violent Extremism Program
  2. Understanding Artificial Intelligence in the Context of Preventing and Countering Violent Extremism
  3. The Intersection of Artificial Intelligence and Violent Extremism
  4. Current State of Artificial Intelligence Adoption in Preventing and Countering Violent Extremism
  5. Challenges and Limitations of Adopting Artificial Intelligence in Preventing and Countering Violent Extremism
  6. Opportunities and Challenges: How Artificial Intelligence Can Enhance Preventing and Countering Violent Extremism Operations
  7. Risk Mitigation Strategies for Responsible Artificial Intelligence in Preventing and Countering Violent Extremism
  8. Human Rights and Ethical Concerns in Applying Artificial Intelligence to Preventing and Countering Violent Extremism
  9. Implementation and Capacity Building
  10. Conclusions, Recommendations, and Good Practices

Document Overview

Artificial intelligence is reshaping society, the economy, and the security landscape in unprecedented ways, and its impact on the field of preventing and countering violent extremism conducive to terrorism is twofold. On the one hand, AI technologies may be exploited by terrorist and violent extremist actors to generate vast amounts of disinformation, manipulate public opinion, and accelerate processes of radicalization to violence; on the other, AI holds immense potential to help the preventing and countering violent extremism sector address these emerging and long-standing challenges. This practical guide responds to this rapidly evolving landscape, aiming to equip practitioners to navigate the potential and the limitations of AI responsibly.

This guide is based on a survey conducted by the United Nations Office of Counter-Terrorism's Global Preventing and Countering Violent Extremism Program, involving 120 practitioners and policymakers from 45 countries. The survey revealed that fewer than 25% of respondents currently use AI in their preventing and countering violent extremism interventions. Adoption remains low owing to multiple perceived risks, including technological unreliability, bias, privacy concerns, lack of trust, and insufficient transparency. Other obstacles include limited organizational readiness, limited resources or capacity, and a lack of relevant training: fewer than one-third of respondents had received any AI-related training. However, more than two-thirds of respondents who do not currently use AI indicated they would consider doing so in the future, and the vast majority expressed a desire for training on mitigating human rights, ethical, and legal risks; on using AI tools; and on applying AI to preventing and countering violent extremism.

The report systematically outlines both the application prospects and the risk management of AI in the field of preventing and countering violent extremism. On the opportunity side, the guide analyzes in detail how AI can enhance several key operational areas: monitoring and evaluation; optimization of training and education; research and analysis; monitoring of online discourse; detection of AI-generated media such as deepfakes; crisis communication; positive and alternative messaging; behavioral pattern recognition and predictive analytics for tertiary support; and direct engagement with high-risk individuals. At the same time, the report identifies the operational challenges (such as resource constraints, skills gaps, and organizational-culture resistance) and technical challenges (such as data quality, model bias, and algorithmic transparency) involved in applying AI.

To ensure that technological applications align with international human rights standards and core United Nations values such as "do no harm," the guide dedicates a chapter to human rights and ethical concerns. It emphasizes the need for a human rights-based approach, focusing on safeguarding the rights to privacy, freedom of expression, and equality and non-discrimination. The report further proposes key principles, including transparency, accountability, explainability, and human oversight and control, and examines specific ethical issues such as climate impact, intellectual property theft, and the authenticity of preventing and countering violent extremism messages and messengers.

To support responsible and effective application, the guide provides a complete framework from organizational preparation to concrete implementation. This includes building organizational readiness and institutional capacity; defining the core competencies practitioners require; conducting AI literacy and technical training; and outlining a full implementation pathway from strategic assessment and stakeholder mapping through pilot interventions to scaling up with continuous performance monitoring. Finally, the report offers conclusions, recommendations, and good practices for the United Nations and other international organizations, national authorities, donors, the technology sector, and other stakeholders, accompanied by a practical workbook of resource guides, risk assessment templates, and checklists to help translate principles into concrete action.