When Technology Becomes a Weapon: The Ethical Dilemmas and Regulatory Challenges in the Global Crackdown on "Digital Undressing"
15/01/2026
One ordinary morning, Swedish Deputy Prime Minister Ebba Busch found a photo of herself, manipulated by artificial intelligence, circulating on social media: in the image she wore a bikini and a stiff smile. The picture did not come from any vacation of hers; it was a "digital undressing" image generated on user instructions by a chatbot developed by one of Elon Musk's companies.
"As a woman, I decide when, where, and to whom I show myself in a bikini," Busch stated firmly in her video response, her anger palpable. Her nine-year-old daughter heightens her concern: "I don't want this to happen to her."
Busch's case is not an isolated incident. From British Labour MP Jess Asato to French political figures, from ordinary women in Indonesia to residents of California, a storm of AI-driven "digital undressing" is sweeping across the globe. This storm not only exposes the dark side of generative artificial intelligence technology but also thrusts tech giants, governments, and the international community into a fierce debate over technological ethics, legal boundaries, and platform accountability.
Out-of-Control "Spicy Mode": From Technical Function to Tool of Violence
In the summer of 2025, xAI introduced a "spicy mode" for its Grok chatbot, promising to generate content that aligned with adult-content standards. The company claimed the feature would comply with local laws and produce images only of fictional characters. Reality soon deviated from that intent.
Users discovered that by simply uploading a real photo of a person and entering commands such as "put her in a bikini" or "take off her clothes," they could make the system generate the corresponding manipulated image. Researcher Anne Craanen points out that the core of this process is not merely the generation of pornographic content but a form of "performance," aimed at publicly shaming women in order to suppress their voices.
"This back-and-forth interaction, attempting to silence someone with commands like 'Grok, put a bikini on her,'" Craanen analyzed, "is performative in a way that matters. It reveals the underlying misogynistic subtext: an attempt to punish or silence women."
The data suggests the scale of the problem far exceeds expectations. A study by the Institute for Strategic Dialogue last summer found that, in a single month, dozens of "undressing" apps and websites attracted nearly ten thousand visitors, and mentions of these tools on the platform ran into the tens of thousands over the period studied. Meanwhile, a study this year by the American Sunlight Project found that, despite crackdown efforts, thousands of advertisements for such apps remained on the platform.
Even more concerning, Grok is not the only source of the problem. Testing by The Times found that another mainstream AI tool also allows users to transform photos of women in dresses into bikini images. Asked to make such a modification to a stock photo, it generated an image of the woman wearing a bikini made from the same material as the dress. TV presenter Jess Davies repeated the process with her own photo and obtained the same result.
Global Regulatory Storm: Escalating Responses from Warnings to Bans
Facing the overwhelming pressure of public opinion, governments around the world have responded swiftly and with increasing firmness. This global regulatory crackdown exhibits multi-layered and multi-strategy characteristics.
Indonesia and Malaysia were the first countries to take decisive measures, each announcing a temporary ban in mid-January. Indonesia's Minister of Communication and Digital Affairs, Meutya Hafid, stated explicitly: "The government views non-consensual sexual deepfakes as a serious violation of citizens' human rights, dignity, and safety in the digital space." The Malaysian Communications and Multimedia Commission said the platform had failed "to address the inherent risks in the design and operation of its AI platform," falling short of what Malaysian law requires.
The European response has been more systematic. The European Commission not only demanded the retention of all relevant documents until the end of the year but also explicitly stated that it would take action under the Digital Services Act. The wording of European Commission spokesperson Thomas Reynier was unusually harsh: "This is not 'spicy.' This is illegal. This is appalling. This is disgusting. This has no place in Europe."
The United Kingdom's actions reflect a dual track of legislation and regulation. That same week, the UK began enforcing new laws criminalizing the creation of non-consensual intimate images, while Ofcom, the communications regulator, opened an investigation into X focused on whether the platform had violated the Online Safety Act. Technology Secretary Liz Kendall was unequivocal: "The content circulating on X is despicable. It is not only an offense to civilized society but also illegal."
France, Germany, Italy, and other European countries have also joined in. The French prosecutor's office has expanded its ongoing investigation into X to cover Grok; Germany says it will soon propose new legislation targeting digital violence; and Italy's data protection authority has warned that individuals who use platforms such as Grok to "undress" others may face criminal charges.
Musk's Contradictory Stance: Oscillating Between Free Speech and Platform Responsibility
In this global crisis, the responses of Elon Musk and his company have become the focus, revealing the contradictions tech giants face in dealing with such issues.
Initially, Musk adopted a defensive stance. On June 27, he posted on X questioning, "Why is the UK government being so fascist?" Subsequently, he even shared an AI-generated image of British Prime Minister Keir Starmer wearing a bikini, seemingly framing the controversy as an attack on free speech. When California Attorney General Rob Bonta announced an investigation into the spread of sexual deepfakes, Musk insisted, "To my knowledge, no AI-generated nude images of minors were created. Literally zero."
However, as pressure mounted, the company's stance shifted subtly. On the evening of the 10th, it announced that it had "implemented technical measures" to prevent Grok from editing images of real people to show them in revealing clothing such as bikinis. The company's safety team stated: "We maintain a zero-tolerance policy toward all forms of child sexual exploitation, non-consensual nudity, and unwanted sexual content."
However, this shift has been criticized as insufficient and hypocritical. French Digital Minister Anne Le Hénanff sharply pointed out: "If Grok can remove this feature for non-subscribers, it must do so for everyone. Once again, the primary targets online are often young women."
Analysis shows that xAI's adjustments leave significant loopholes. Although the public Grok account no longer generates such images for non-paying users and has safeguards against creating bikini pictures, the restrictions on its in-app tools are far more lenient. Users, including free users, can still create sexually explicit images from photos of fully clothed real people. Asked to modify a photo to put its subject in bondage gear, the tool complied; it has also placed women in sexually compromising positions and depicted white substances resembling semen on them.
Legal Gray Area: When Technology Outpaces Legislation
The incident has exposed the lag and inadequacy of the global legal system in dealing with AI-generated content. Clare McGlynn, a law professor at Durham University in the UK, points out that current law contains obvious gray areas.
"According to the Criminal Code, bikini images themselves are not considered intimate images," McGlynn explained, "but underwear images are. The law also covers sexual images. Therefore, if there are multiple cues, such as wearing a bikini, bending over, and other sexual requests, they should collectively constitute illegal intimate images."
This legal ambiguity leaves room for maneuver for platforms and users alike. Jonathan Lewis, the company's UK managing director, attempted to draw a line in his statement: "For example, the issue of some users choosing to dress people in bikinis... All this does is add an extra layer of protection by linking the functionality to identifiable paying subscribers, who can also be held accountable."
The issue is that the pace of technological advancement has far outstripped the legislative process. The widespread adoption of generative AI has made creating realistic fake content easier than ever before, while the global legal system has yet to establish a unified response framework. Although regulations such as the EU's Digital Services Act and the UK's Online Safety Act provide regulatory tools, they still face significant challenges at the implementation level.
Structural Dilemma: Where Are the Boundaries of Responsibility for Tech Companies?
The crisis raises a fundamental question: How should the boundaries of responsibility be defined when a technology platform provides both content distribution tools and content generation tools?
Anne Craanen, a researcher at the Institute for Strategic Dialogue, points to the systemic nature of the issue: "Communities on X discuss how to bypass safeguards to make large language models produce pornographic content, a process known as 'jailbreaking.' Posts on X amplify information about undressing applications that can generate images of women with their clothes removed, as well as how to use them."
This "tool + distribution" combination amplifies the harm. Labour MP Jess Asato experienced the double blow firsthand: "One bikini photo I received was freshly created and included the instructions given. So, yes, in that case the perpetrator was identifiable. But most of the images I received had no identifiers of their source."
The deeper issue lies in the fact that the culture of the tech industry often places technological innovation above ethical considerations. Disinformation expert Nina Jankowicz points out: "Mainstream app stores like Apple and Google host hundreds of applications that make this possible. Much of the infrastructure for deepfake sexual abuse is supported by companies we use every day."
The widespread adoption of such infrastructure makes it exceptionally difficult to curb technology-facilitated gender-based violence. Even if major platforms strengthen their controls, alternative tools and forums quickly emerge, creating an ecosystem that is challenging to eradicate completely.
The Road Ahead: The Intersection of Technology Ethics and Global Governance
This incident will not be the last ethical crisis triggered by generative AI, but it may become a turning point, forcing technology companies, governments, and the international community to rethink how to balance technological innovation against fundamental human rights.
In the short term, pressure will continue to focus on the platforms. The European Union has made clear that if the measures prove insufficient, "it will not hesitate to use the full enforcement toolbox of the Digital Services Act." Under the act, the EU can fine companies up to 6% of their global annual turnover, which for a company of X's scale would be a very substantial penalty.
In the medium to long term, a more comprehensive governance framework is needed. A report submitted by French MPs Arthur Delaporte and Stéphane Vojetta sets out a series of recommendations, the first of which advocates banning the undressing functions that allow deepfake nudity to be generated. "Today, the law prohibits the dissemination of undressing content without consent," Delaporte noted, "but we do not prohibit the production of such content."
The real solution may require coordinated action beyond national borders. The bans in Indonesia and Malaysia, the EU's investigation, the UK's legislation, and legal actions in California—these scattered responses, while necessary, are not enough. Technology-facilitated gender-based violence is a global issue that demands a global response strategy.
Professor Clare McGlynn of Durham University expressed deep concern: "Last month it was announced that 'pornographic content' would be allowed on another major AI platform. Everything that has happened on X demonstrates that any new technology will be used to abuse and harass women and girls. So what will we see there?"
Her concern points to a broader phenomenon: a gender gap in attitudes toward new technology. Women tend to be more cautious about AI, not for lack of interest, but because they see more clearly how these technologies can be abused. "Women don't see this as exciting new technology, but just as new ways to harass and abuse us, trying to push us away from the internet."
The crisis ultimately reveals a disturbing truth: in an era of rapid technological evolution, the most vulnerable groups are often the first to bear the negative impacts of innovation. While Elon Musk debates the boundaries of free speech online, women around the world are grappling with the psychological trauma caused by non-consensual sexual imagery. While tech companies discuss "guardrails" and "safety modes," a multibillion-dollar "undressing" app industry is thriving in the shadows.
Technology should connect humanity and empower individuals, not serve as a tool for systemic violence. This incident serves as a wake-up call, reminding us that while pursuing technological breakthroughs, we must establish corresponding ethical frameworks and global governance mechanisms. Otherwise, so-called "innovation" will only exacerbate existing social inequalities, turning digital spaces into yet another battlefield for gender-based violence.
This global game of encirclement has only just begun, and its outcome will profoundly influence the trajectory of technological development over the next decade, as well as our ability to protect fundamental human dignity in the digital age. When technology can strip a person of their dignity at the click of a button, society must find ways to defend that dignity with equal determination and wisdom.
Reference materials
https://globalnews.ca/news/11611133/grok-ai-sexual-deepfakes-bans-criminal-probes/
https://www.bbc.com/news/articles/ce8gz8g2qnlo
https://yle.fi/a/7-10091430?origin=rss
https://www.abc.net.au/news/2026-01-15/x-changes-ai-chatbot-sexual-deepfake-images/106230564
https://time.com/7345669/grok-deepfake-uk-law-musk/
https://www.thetimes.com/uk/technology-uk/article/grok-ai-x-chatgpt-ai-images-8cwwldb7l