After Musk's acquisition, hate speech on the platform increased by approximately 50%.
Annual Platform Data Analysis: Trends in Homophobic, Racist, and Transphobic Speech and Assessment of Fake Account Activity
Published: 23/12/2025
Key Chapters
- Research Background and Core Problem Statement
- Association Between Online Hate Speech and Offline Hate Crimes
- Timeline of Musk's Acquisition of Platform X and His Commitments
- Measurement Methods for English Hate Speech
- Evaluation Criteria for Fake Accounts and Bot-like Accounts
- Comparison of Weekly Hate Speech Incidence Before and After the Acquisition
- Trends in Specific Types of Hate Speech (Homophobia, Racism, Transphobia)
- Analysis of Changes in Likes on Hate Speech Posts
- Evolution of Fake Account Activity
- Comparison of Research Findings with Platform X's Public Statements
- Research Limitations and Cautions Regarding Causality
- Recommendations for Online Platform Safety and Content Moderation Policies
Introduction
As social media platforms grow increasingly influential in the public sphere, the spread of online hate speech and misinformation has become a critical issue for social safety and the public interest. Previous research has documented a link between online hate speech and offline hate crimes. Bots and bot-like accounts, meanwhile, can cause a range of harms by spreading misinformation and spam, including facilitating fraud, interfering with real-world election processes, and hindering public health campaigns.
On October 27, 2022, Elon Musk completed his acquisition of the former Twitter platform and assumed the role of CEO. Despite his pledge to reduce bot activity on the platform, existing studies found an initial increase in hate speech after the acquisition and no reduction in fake accounts. Whether these trends persisted until Musk stepped down as CEO in June 2023, however, had not been conclusively determined; that question is the core focus of this study.
To address this research gap, Daniel Hickey and colleagues at the University of California, Berkeley, applied validated measurement methods to systematically analyze English-language hate speech and fake account activity on Platform X between 2022 and 2023. Their quantitative analysis focused on key metrics: the incidence of hate speech before and after the acquisition, changes in specific types of hate speech, the reach of hate speech posts, and the evolution of fake account activity.
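To make the before/after comparison concrete, the sketch below illustrates the general shape of such an analysis on a hypothetical dataset. It is not the authors' actual pipeline: the file name posts.csv, the columns created_at, is_hate, and likes, the acquisition cutoff date, and the upstream classifier or lexicon that would produce the is_hate flag are all assumptions made purely for illustration.

```python
# Minimal illustrative sketch (not the study's actual pipeline):
# compare weekly hate-speech incidence and engagement before and
# after an acquisition date, given a hypothetical posts dataset.
import pandas as pd

ACQUISITION_DATE = pd.Timestamp("2022-10-27")  # date the acquisition closed

# Hypothetical input: one row per post, with a binary is_hate flag
# assumed to come from some upstream classifier or hate-speech lexicon.
posts = pd.read_csv(
    "posts.csv",  # assumed file; column names are illustrative
    parse_dates=["created_at"],
    usecols=["created_at", "is_hate", "likes"],
)

# Weekly incidence = share of posts flagged as hate speech per week.
weekly = (
    posts.set_index("created_at")
    .resample("W")["is_hate"]
    .mean()
    .rename("hate_rate")
)

before = weekly[weekly.index < ACQUISITION_DATE].mean()
after = weekly[weekly.index >= ACQUISITION_DATE].mean()
print(f"Weekly hate-speech rate before: {before:.4f}, after: {after:.4f}")
print(f"Relative change: {(after - before) / before:+.0%}")

# Engagement comparison: mean likes on flagged posts before vs. after.
hate_posts = posts[posts["is_hate"] == 1]
likes_before = hate_posts.loc[hate_posts["created_at"] < ACQUISITION_DATE, "likes"].mean()
likes_after = hate_posts.loc[hate_posts["created_at"] >= ACQUISITION_DATE, "likes"].mean()
print(f"Mean likes on hate posts: {likes_before:.1f} -> {likes_after:.1f}")
```

A relative change of roughly +0.50 in the weekly rate and a roughly +70% shift in mean likes would correspond to the headline figures reported below; the real study's definitions of incidence and engagement may of course differ in detail.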
The results indicate that the surge in hate speech observed on Platform X shortly before Musk's acquisition persisted until at least May 2023. Compared with the months preceding the acquisition, the weekly incidence of hate speech on the platform was approximately 50% higher afterward, with significant rises in specific types of hate speech, including homophobic, racist, and transphobic speech. Concurrently, the average number of likes on hate speech posts increased by about 70%, suggesting greater user engagement with and exposure to such harmful content. Notably, the number of bots and other fake accounts on the platform did not decrease and may in fact have increased.
These findings are inconsistent with Platform X's public claim that user exposure to hate speech declined after the acquisition. The researchers caution that, lacking information about specific internal policy changes at Platform X, they cannot establish a clear causal relationship between Musk's acquisition and the observed trends. Based on the results, however, they express concern about the safety of online platforms, call on Platform X to strengthen its content moderation, and recommend further research to fully understand how harmful content spreads on social media platforms.
The study authors emphasize that current policies aimed at reducing user exposure to harmful content appear insufficiently effective, a conclusion that provides an important empirical reference for subsequent platform governance and policy-making. The findings were published in the open-access journal PLOS One on February 12, 2025. The study holds significant academic and practical value for understanding social media platform governance, harmful content control, and the safety of digital spaces.