Pope Leo XIV's Warning: When Artificial Intelligence Begins to Mimic Humanity, How Do We Guard "Faces and Voices"?
25/01/2026
In December 2025, a mother named Megan Garcia met with Pope Leo XIV at the Vatican. Her 14-year-old son, Sewell Setzer, had taken his own life after prolonged, intense interactions with an AI chatbot. This tragedy is not an isolated case; in an extreme and cruel way, it exposes the emotional and ethical abyss hidden within generative artificial intelligence. Weeks later, in his message for the 2026 World Day of Social Communications, the American-born Pope elevated this concern to one touching the very foundations of human civilization.
Leo XIV's core warning is aimed not at technology itself but at technology's erosion of human nature. His proposition of preserving the human face and voice sounds like a poetic metaphor, yet in reality it is a battle cry against technological alienation. In an era when algorithms increasingly determine what we see, hear, and even feel, the Pope's statement goes far beyond a religious leader's moral appeal: it amounts to a penetrating analysis in technological anthropology and political sociology.
From Tools to "Partners": The Emotional Infiltration and Human Displacement of Artificial Intelligence
Leo XIV pointed out incisively that the most profound danger of current generative artificial intelligence lies not in its raw computing power but in its simulation of, and intrusion into, the essence of interpersonal relationships. Chatbots built on large language models can mimic human emotion through conversational, adaptive, and imitative structures, thereby simulating a relationship. This anthropomorphic design may at first seem charming or even considerate, but it is deceptive at its core, posing particular risks to vulnerable groups such as adolescents and lonely individuals.
When chatbots become excessively "affectionate," and their always-on, ever-available nature is added to the mix, they may become hidden architects of our emotional states, intruding upon and occupying individuals' private spheres. This description by the Pope accurately captures technology's transgression from tool to quasi-subject. The question is no longer whether machines can pass the Turing test, but whether humans will project the expectations, dependencies, and emotions of real relationships onto conversations generated from probability and statistics. The tragedy of Sewell Setzer shows precisely how such emotional dependency can end in devastation.
The consequences of this substitution are twofold. At the individual level, it may lead to the atrophy of genuine social skills, confusion in emotional cognition, and a dangerous tendency to simplify complex human needs into algorithmic responses. At the societal level, when a large number of individuals form deep connections with anthropomorphic AI, traditional social bonds based on real interaction and empathy may be weakened. Human relationships are outsourced to machines, and the human face and voice—as carriers of unique identity and authentic encounters—face the risk of being diluted by digital illusions.
Distortion of the Information Ecosystem: Algorithms, Bias, and the Tyranny of "Statistical Probability"
The second level of the Pope's warning directly addresses the systemic impact of generative artificial intelligence on the public sphere and the information ecosystem. This is not a new topic, but Leo XIV's analysis adds new dimensions.
First comes the problem of incentives in algorithm design. He pointed out that algorithms designed to maximize social media engagement (and thus platform profits) tend to reward rapid emotional reactions while penalizing human expressions that take time, such as effortful understanding and deep reflection. This mechanism traps people in echo chambers where consensus comes easily and anger is easily provoked, weakening listening skills and critical thinking and exacerbating social polarization. The content-generation and recommendation capabilities of generative AI amplify this effect further, solidifying biases and customizing information cocoons in ways that are hard to detect.
The deeper danger lies in the transfer of cognitive authority. The Pope warns of a naive and uncritical trust that treats AI as an omniscient "friend," a distributor of all information, an archive of all memories, and an "oracle" for all advice. When people grow content with artificially compiled statistics, their own cognitive, emotional, and communicative abilities may atrophy over time. The output of generative AI is, in essence, statistical probability, yet it is packaged and perceived as knowledge or even truth. The Pope pointedly notes that these systems offer us, at best, approximations of the truth, and at times outright "hallucinations."
This illusion becomes even more perilous because of algorithmic bias. AI models are shaped by the worldview of their creators and, by absorbing the stereotypes and biases present in their data, they in turn impose particular ways of thinking on their users. Hidden behind a veil of technological neutrality, then, may lie an automated amplification of social inequalities and discriminatory structures. When the production and distribution of information, a public good, increasingly depend on a few opaque large models, the foundation of public debate, the shared pursuit of truth, faces collapse.
Reshaping the Power Structure: The Potential of Oligopoly and "Rewriting History"
Pope Leo XIV's warning did not remain at the level of individual cognition and social communication; he further directed his criticism toward the political-economic power structures behind the technology. This is a crucial area that has rarely been explored in depth by religious leaders.
Behind this immense invisible force that affects everyone, there are only a handful of companies. The Pope explicitly expressed concerns about the oligopolistic control of algorithms and artificial intelligence systems. He specifically mentioned the founders of AI companies, who were named Time magazine's Persons of the Year for 2025, pointing out that a small group holds systems capable of subtly shaping behavior and even rewriting human history—including the history of the Church—often without our full awareness.
This judgment touches upon the core power dynamics of the generative artificial intelligence era. Tech giants controlling the underlying models and computing power not only wield immense economic influence but also acquire an unprecedented cultural and social shaping authority. Through algorithms, they determine the visibility of information; through models, they set the boundaries of discourse; and through generated content, they may even subtly influence collective memory and historical narratives. What the Pope referred to as rewriting history does not mean tampering with textbooks, but rather shaping the current information environment and narrative frameworks to affect how future generations understand history.
The combination of centralized power and military applications leads to an even darker scenario. The Pope has previously criticized the artificial intelligence race in the military field, arguing that entrusting decisions concerning life and death to machines is a destructive spiral. When the power to make life-and-death decisions is combined with opaque oligopolistic technology, the risks extend beyond ethical considerations, impacting global security and human survival.
Finding a Way Out: The Human Defense Line Built by Responsibility, Cooperation, and Education
Facing numerous risks, Pope Leo XIV did not advocate for halting technological innovation. Instead, he believed the challenge lies not in preventing digital innovation, but in guiding it, recognizing its inherent contradictions. He proposed an action framework based on three pillars: Responsibility, Cooperation, and Education. This provides a pathway to move beyond mere technological governance and toward the construction of a broader social contract.
Responsibility must be specified by role: for online platform managers, AI model creators and developers, national legislators, and supranational regulators, it entails honesty, transparency, courage, foresight, the obligation to share knowledge, and the right to be informed. The Pope particularly emphasized the need to protect the authorship and sovereign ownership of journalists and other content creators, because information is a public good, and meaningful public service must rest on source transparency, the inclusion of relevant stakeholders, and high quality standards, rather than opacity.
Cooperation means that no single sector can tackle the challenges of guiding digital innovation and governing AI on its own. Safeguard mechanisms must be established that involve all stakeholders, from the tech industry to legislators, from creative enterprises to academia, from artists to journalists and educators, in building a conscious, responsible digital citizenship. Such a broad coalition is a prerequisite for breaking technological oligopolies and achieving checks and balances through diversity.
Education is the foundation. The Pope calls urgently for artificial intelligence literacy to be taught alongside media literacy at every level of national education systems, proposing the concept of MAIL: Media and Artificial Intelligence Literacy. The aim is to help people resist the anthropomorphizing pull of AI systems and treat them instead as tools; to always verify externally the (potentially inaccurate or erroneous) information these systems provide; and to understand safety settings and avenues of recourse in order to protect their own privacy and data. At its core, this literacy cultivates critical reflection, the ability to assess the reliability of sources, and the capacity to recognize the interests that may lie behind information.
Conclusion: A Defense Battle for the Definition of Humanity
Pope Leo XIV's warning about the risks of generative artificial intelligence ultimately points to a fundamental question: In an era of rapidly expanding technological capabilities, what kind of humans do we want to become?
This debate is far from a clash between conservatism and progress. It concerns whether we are willing to place humanity's most unique qualities—communication based on genuine encounters, faces and voices bearing irreplicable identities, creative thinking rooted in freedom and responsibility—at the heart of technological advancement, or allow them to be marginalized by algorithmic efficiency, the convenience of anthropomorphism, and the tyranny of statistical probability.
Pope Leo XIV reminds us that technological challenges are essentially anthropological challenges. Preserving faces and voices is preserving ourselves. This requires not only smarter regulation and more ethical design but also a profound cultural and cognitive revival: rediscovering and defending the profound truths of human communication, maintaining the integrity of the critical self amidst the digital torrent, and ensuring, on the basis of cooperation and responsibility, that powerful technological tools truly become allies serving human flourishing, rather than its definers.
The crossroads of generative artificial intelligence has arrived. One path leads to the deep humanistic integration of technology, enhancing our capabilities without replacing our essence; the other may lead to the quiet technologization of humanity, where we enjoy convenience while surrendering sovereignty over defining what is real, what is a relationship, and what is thinking. The Pope's warning is a sobering bell, calling us to make wise and courageous choices at this critical historical moment.
Reference materials
https://www.cnn.com/2026/01/24/europe/pope-leo-ai-chatbots-warning-intl