When Technology Becomes a Weapon: The Ethical Dilemmas and Regulatory Challenges in the Global Crackdown on "Digital Undressing"
15/01/2026
On an ordinary morning in January 2026, Swedish Deputy Prime Minister Ebba Busch found an AI-altered photo of herself circulating on the social media platform X: in the image, she wore a bikini and a stiff smile. The picture did not come from any vacation of hers. It was a digital undressing image generated, on user instruction, by Grok, the chatbot developed by Elon Musk's xAI.
"As a woman, I decide when, where, and to whom I show myself in a bikini," Busch responded firmly in a video, though she could not conceal her anger. Her nine-year-old daughter worries her even more: "I do not want this to happen to her."
Busch's plight is not an isolated case. From British Labour MP Jess Asato to French political figures, from ordinary women in Indonesia to residents of California, an AI-driven digital undressing storm is spreading globally. It has exposed the dark side of generative artificial intelligence and pushed tech giants, governments, and the international community into an intense battle over technological ethics, legal boundaries, and platform accountability.
Out-of-Control "Spicy Mode": From Technical Function to Tool of Violence
In the summer of 2025, xAI launched Grok's "Spicy Mode," promising to generate content that met adult-content standards. The company claimed the feature would comply with local laws and produce images only of fictional characters. Reality soon deviated from that script.
Users discovered that simply uploading a real person's photo to Grok and entering commands such as "put her in a bikini" or "take off her clothes" prompted the system to generate the corresponding manipulated image. Technical researcher Annie Kraning pointed out that the core of the process is not merely the generation of pornographic content but a performance, one aimed at publicly shaming women in order to suppress their voices.
"This back-and-forth interaction, trying to silence someone with commands like 'Grok, put her in a bikini,' is highly performative," Kraning analyzes. "It truly reveals the underlying misogynistic subtext: the aim is to punish or silence women."
Data show that the scale of the problem far exceeds expectations. A summer 2025 study by the Institute for Strategic Dialogue found that in May of that year alone, dozens of undressing apps and websites attracted nearly 21 million visitors, and that between June and July, mentions of these tools on X reached 290,000. A September study by the American Sunlight Project found that, despite Meta's crackdown efforts, thousands of advertisements for such applications remained on its platforms.
Even more concerning, Grok is not the only source of the problem. Testing by The Times found that ChatGPT also lets users transform photos of women in dresses into bikini images. Asked to modify a stock photo this way, it generated an image of the woman wearing a bikini made from the same material as the dress. TV presenter Jess Davies repeated the process with her own photo and got the same result.
Global Regulatory Storm: Escalating Responses from Warnings to Bans
Facing overwhelming public pressure, governments around the world have responded swiftly and with increasing firmness, on multiple levels and through multiple strategies.
Indonesia and Malaysia were the first countries to take decisive measures, successively announcing temporary bans on Grok in mid-January. Indonesian Minister of Communications and Digital Affairs Meutya Hafid stated plainly that the government regards involuntary sexual deepfakes as a serious violation of citizens' human rights, dignity, and security in the digital space. The Malaysian Communications and Multimedia Commission said X had failed to address risks inherent in the design and operation of its AI platform, falling short of what Malaysian law requires.
The response from Europe has been more systematic. The European Commission has not only demanded that X retain all documents related to Grok until the end of 2026 but has also explicitly stated that it will take action under the Digital Services Act. The wording from European Commission spokesperson Thomas Renier was unusually severe: "This is not 'spicy.' It is illegal. It is appalling. It is disgusting. It has no place in Europe."
The UK's actions reflect a dual-track approach of legislation and regulation. In the week of January 13, new laws came into force criminalizing the creation of non-consensual intimate images. At the same time, the communications regulator Ofcom launched an investigation into X, focusing on whether Grok violated the Online Safety Act. Technology Secretary Liz Kendall was blunt: the content circulating on X is despicable, not only an affront to civilized society but also illegal.
France, Germany, Italy, and other European countries have also joined in. The French prosecutor's office expanded the scope of the ongoing investigation into X to include Grok; Germany stated that it will soon propose new legislative measures targeting digital violence; the Italian Data Protection Authority warned that individuals using platforms like Grok to undress others may face criminal charges.
Musk's Contradictory Stance: Oscillating Between Free Speech and Platform Responsibility
In this global crisis, Elon Musk and his company's response have become the focus, also revealing the inherent contradictions of tech giants in dealing with such issues.
Initially, Musk adopted a defensive stance. On January 10, he posted on X: "Why is the British government so fascist?" He then shared an AI-generated image of British Prime Minister Keir Starmer wearing a bikini, seemingly framing the controversy as an attack on free speech. When California Attorney General Rob Bonta announced an investigation into Grok over the dissemination of sexual deepfakes, Musk insisted: "To my knowledge, Grok has not generated any nude images of minors. Literally zero."
However, as pressure continued to mount, X's stance subtly shifted. On the evening of January 14, the company announced technical measures to prevent its Grok account from editing images of real people into revealing clothing such as bikinis. The company's safety team wrote: "We maintain a zero-tolerance stance toward all forms of child sexual exploitation, non-consensual nudity, and unwanted sexual content."
However, the shift has been criticized as insufficient and hypocritical. French Digital Minister Anne Le Hénanff pointedly noted: "If Grok can remove this feature for non-subscribers, it must do so for everyone. Once again, the primary targets online are often young women."
Analysis reveals significant loopholes in X's adjustments. Although the public Grok account no longer generates images for non-paying subscribers and has safeguards against creating bikini pictures, the restrictions on Grok's in-app tools are far more lenient: even free users can still create sexually explicit images from photos of fully clothed real people. Asked to modify a photo to show bondage gear, the tool complied; it also placed women in sexually compromising positions and rendered white substances resembling semen onto them.
Legal Gray Area: When Technology Outpaces Legislation
The incident has exposed how far the global legal system lags behind AI-generated content, and how inadequately it responds. Clare McGlynn, a law professor at Durham University in the UK, pointed out that current laws contain obvious gray areas.
"Under the criminal law, bikini images themselves are not considered intimate images," McGlynn explained, "but underwear images are. The law also covers sexual images. So if there are multiple prompts, such as 'wearing a bikini,' 'bending over,' and other sexual requests, together they should constitute illegal intimate images."
This legal ambiguity leaves room for maneuver for platforms and users. X's UK managing director, Jonathan Lewis, attempted to draw a line in a statement: "For instance, the issue of some users choosing to dress people in bikinis... All this does is add an extra layer of protection by linking the functionality to identifiable paying subscribers, who can also be held accountable."
The issue is that the pace of technological advancement has far outstripped the legislative process. The widespread adoption of generative AI has made creating realistic fake content easier than ever before, while the global legal system has yet to establish a unified response framework. Although regulations such as the EU's Digital Services Act and the UK's Online Safety Act provide regulatory tools, they still face significant challenges at the implementation level.
Structural Dilemma: Where Are the Boundaries of Responsibility for Tech Companies?
The crisis raises a fundamental question: How should the boundaries of responsibility be defined when a technology platform provides both content distribution tools and content generation tools?
Institute for Strategic Dialogue researcher Annie Kraning pointed to the systemic nature of the problem: communities on Reddit and Telegram discuss how to bypass safeguards to make large language models generate pornographic content, a process known as "jailbreaking," while posts on X amplify information about undressing applications that generate AI images of women with their clothing removed, and how to use them.
This "tool plus distribution" combination amplifies the harm. Labour MP Jess Asato experienced the double blow firsthand: "A bikini photo I received was freshly made and came with instructions for ChatGPT. So, yes, ChatGPT is definitely the violator. Most of the images I received carried no indication of how they were produced."
The deeper issue is that tech-industry culture often prioritizes technological innovation over ethical considerations. Disinformation expert Nina Jankowicz points out: "Mainstream app stores like Apple's and Google's host hundreds of applications that make this possible. Much of the infrastructure for deepfake sexual abuse is supported by companies we use every day."
The widespread adoption of such infrastructure makes it exceptionally difficult to curb technology-facilitated gender-based violence. Even if major platforms strengthen their controls, alternative tools and forums quickly emerge, creating an ecosystem that is challenging to eradicate completely.
The Road Ahead: The Intersection of Technology Ethics and Global Governance
This incident will not be the last ethical crisis triggered by generative AI, but it may become a turning point, forcing technology companies, governments, and the international community to rethink how to balance technological innovation against fundamental human rights.
In the short term, pressure will continue to focus on the platform side. The European Union has made it clear that if X's measures are not sufficiently effective, it will not hesitate to use the full enforcement toolkit of the Digital Services Act. According to this act, the EU can impose fines of up to 6% of a company's global annual turnover—for X, this could mean a penalty of approximately 150 million euros per year.
In the medium to long term, a more comprehensive governance framework is needed. A report submitted by French parliamentarians Arthur Delaporte and Stéphane Vojetta contains 78 recommendations, the 22nd of which proposes banning undressing features in generative deepfake tools. "Today, the law prohibits disseminating AI undressing content without consent," Delaporte points out, "but we do not ban the production of such content."
The real solution may require coordinated action beyond national borders. The bans in Indonesia and Malaysia, the EU's investigation, the UK's legislation, and legal actions in California—these scattered responses, while necessary, are not enough. Technology-facilitated gender-based violence is a global issue that demands a global response strategy.
Professor Clare McGlynn of Durham University expressed deep concern: "OpenAI announced last November that it would allow 'pornographic content' in ChatGPT. What has happened on X shows that any new technology will be used to abuse and harass women and girls. So what will we see on ChatGPT?"
Her concern points to a broader phenomenon: a gender gap in attitudes toward new technology. Women tend to be more cautious about AI not from lack of interest, but because they are more acutely aware of how these technologies can be misused. Many see it not as exciting new technology but as yet another way to harass and abuse women and to push them out of online spaces.
The Grok crisis ultimately reveals a disturbing truth: in an era of rapid technological evolution, the most vulnerable groups are often the first to bear the negative impacts of innovation. While Elon Musk debates the boundaries of free speech on X, women worldwide are grappling with the psychological trauma caused by non-consensual sexual images. As tech companies discuss guardrails and safety modes, a multibillion-dollar undressing app industry is thriving in the shadows.
Technology should connect humanity and empower individuals, not become a tool of systemic violence. The Grok incident serves as a wake-up call, reminding us that while pursuing technological breakthroughs, we must establish corresponding ethical frameworks and global governance mechanisms. Otherwise, so-called innovation will only exacerbate existing social inequalities, turning digital spaces into yet another battlefield for gender-based violence.
This global containment effort has only just begun, and its outcome will profoundly shape the trajectory of technological development over the next decade, as well as our ability to protect fundamental human dignity in the digital age. When technology can strip a person of their dignity at the click of a button, society must find ways to defend that dignity with equal determination and wisdom.
Reference materials
https://globalnews.ca/news/11611133/grok-ai-sexual-deepfakes-bans-criminal-probes/
https://www.bbc.com/news/articles/ce8gz8g2qnlo
https://yle.fi/a/7-10091430?origin=rss
https://www.abc.net.au/news/2026-01-15/x-changes-ai-chatbot-sexual-deepfake-images/106230564
https://time.com/7345669/grok-deepfake-uk-law-musk/
https://www.thetimes.com/uk/technology-uk/article/grok-ai-x-chatgpt-ai-images-8cwwldb7l