From the "Undressing" Controversy to Ethical Crisis: The Difficult Struggle Between Technological Surge and Global Regulation
19/01/2026
On Christmas Eve 2025, Elon Musk's xAI company delivered a gift to social platform X's chatbot Grok—an image editing feature. Users can simply input simple instructions to make Grok change the clothing of people in photos, such as dressing her in a bikini. What should have been an upgrade showcasing AI creativity quickly evolved into a global ethical storm within weeks.
Thousands of users began utilizing this feature to digitally undress photos of women, celebrities, and even minors, generating non-consensual sexually suggestive images. From the Indonesian pop girl group JKT48 to ordinary female users, a vast number of non-consensual sexualized images flooded the X platform. What started as a technological update quickly escalated into a complex crisis involving child sexual abuse material, digital sexual violence, platform accountability, and global regulation.
A global chain reaction triggered by a technological "jailbreak."
The "Spicy Mode" and the Boundary of Loss of Control
A distinctive feature that sets Grok apart from other mainstream AI chatbots is its Spicy Mode. This mode allows for the generation of content that is more suggestive and even includes partial nudity compared to other AI tools, but it is exclusively available to X's Premium+ or SuperGrok paying users. While the design may have originally aimed to differentiate user groups and increase revenue, it has inadvertently opened Pandora's box in practice.
The core issue lies in the severe disconnect between technical capabilities and social responsibility. Modern AI image generators are typically built on diffusion models, which learn to generate images by adding and removing visual noise. Since clothed and unclothed human bodies are highly similar in shape and structure, their internal representations in the model are very close. This means that, from a technical standpoint, transforming a clothed image into an unclothed one requires only minor adjustments—the AI system itself does not understand concepts such as identity, consent, or harm; it merely responds to user requests based on statistical patterns in the training data.
Technical ease sharply contrasts with ethical impermissibility. When such capability is integrated into a social platform with tens of millions of monthly active users and can be activated with simple English prompts, abuse becomes almost inevitable. Research estimates that before Grok's image generation feature was placed behind a paywall, it could potentially produce up to 6700 undressing images per hour.
From platform vulnerabilities to a global regulatory crisis
The Grok incident reveals not only the flaws of a single product but also the systemic risks accumulated by the entire generative AI industry during its rapid development.
In early January 2026, regulatory agencies from multiple countries took action almost simultaneously. Indonesia became the first country to announce a temporary block on Grok, with Digital Minister Meutya Hafid clearly stating that this move aims to protect women, children, and the public from the spread of AI-generated false pornography. Malaysia followed closely, with Communications Minister Fahmi Fadzil stating that restrictions would only be lifted after Grok disables its ability to generate harmful content.
However, national-level blocking faces technical challenges. Grok not only has its own independent application and website but is also deeply integrated into the X platform. Users can easily bypass geographical restrictions simply by using a virtual private network (VPN) or changing their domain name system (DNS) settings. As Grok itself responded on X: Malaysia's DNS blocking is quite lightweight—it's easy to bypass with a VPN or by adjusting DNS settings. Nana Nwachukwu, an AI governance expert at Trinity College Dublin, commented on this: Blocking Grok is like putting a band-aid on a wound that hasn't been cleaned. You block Grok and then go around boasting that you've done something. Meanwhile, people can access the same platform using a VPN.
The more severe challenge lies in the fact that even if Grok is completely blocked, users can still turn to other platforms offering similar functionalities, including many smaller, general-purpose, and mostly lesser-known AI systems. Technological proliferation makes the regulatory effectiveness of a single platform limited, while cross-border content flows create jurisdictional obstacles for law enforcement actions in any country.
Regulatory Response: Multidimensional Pressure from Investigation to Litigation
The strong intervention of the European and American legal systems.
When Asian countries implemented lockdown measures, European and American regulatory agencies chose a path combining investigation and litigation.
In January 2026, California Attorney General Rob Bonta issued a cease and desist order to xAI, demanding the company immediately stop creating and distributing deepfakes, non-consensual intimate images, and child sexual abuse material. Bonta emphasized in the statement: Creating such material is illegal. I fully expect xAI to comply immediately. California has zero tolerance for CSAM. The state Attorney General's office also claimed that xAI appears to be facilitating the mass production of non-consensual nude images, which are being used to harass women and girls online.
In Europe, pressure continues to mount. The European Commission has stated that it will monitor and review the new measures taken by X, with officials warning that if the measures are insufficient, the EU will consider fully utilizing the Digital Services Act. This legislation grants the EU extensive regulatory authority over large online platforms, including substantial fines and even service suspension. Previously, X was fined 120 million euros in early December 2025 for transparency violations.
UK communications regulator Ofcom has announced a formal investigation into Grok. If found in violation of the Online Safety Act, it could face fines of up to 10% of its global turnover (approximately $24 million). French officials described some outputs as clearly illegal and have referred the matter to prosecutors. Japan, Canada, and other countries have also initiated their own investigation procedures.
Legal gray areas and platforms' "room for circumvention."
In this global regulatory action, a key question has emerged: Can existing legal frameworks effectively address the new challenges posed by AI-generated content?
The case of Ireland is quite representative. When X announced that it would block users from generating images of real people wearing bikinis, underwear, and similar clothing in illegal jurisdictions, the Irish political sphere was both confused and annoyed. In Ireland, generating child sexual abuse images is illegal, but generating adult sexual images itself is not unlawful—what is illegal is the act of sharing them. Therefore, the wording of X's statement technically does not cover the generation of adult images.
This legal nuance has been described by some observers as X's exemption clause, providing companies with a loophole to avoid a complete ban on features. Ireland's Minister of State for AI, Niamh Smyth, stated after discussing the legal situation with the Attorney General that she is satisfied Ireland has robust laws to protect citizens, but concerns about the Grok AI tool remain.
The reality that laws lag behind technological development was fully exposed in this crisis. Any legislation specifically targeting this issue, considering the speed of AI advancement, may already be obsolete by the time it is enacted. Today's problem is Grok, and in a few weeks, it could be a different issue on a different platform.
Platform Response: From Passive Reaction to the "Moral Constitution" Controversy
Progressive restrictions and paywall strategies.
Facing increasing pressure, X has adopted a series of progressive response measures, but the effectiveness and motivation of these measures have been questioned.
On January 9, 2026, X restricted Grok's image generation and editing features to paid subscribers only. Users who previously accessed these features for free began receiving responses stating that image generation and editing are currently limited to paid subscribers, along with subscription guidelines. This move immediately sparked an angry response from activist groups, who accused X of attempting to profit from the ability to generate abusive material. Dr. Niall Muldoon, the Irish Ombudsman for Children, stated at the time that the changes to the Grok AI tool did not bring significant improvements, adding, "You are saying you have the opportunity to engage in abuse, but you have to pay for it."
A few days later, X announced further measures: We have implemented technical measures to prevent Grok accounts from allowing the editing of images of real people wearing revealing clothing such as bikinis. The company added that it will geographically block the ability to generate images of real people wearing bikinis, underwear, and similar clothing in illegal jurisdictions.
However, The Guardian's testing found that by using the standalone version of Grok, which is easily accessible via a web browser, it is still possible to bypass these restrictions to create short videos that remove the clothing of real women from images, and then post them to X's public platform, where they can be viewed by users worldwide within seconds.
Musk's Contradictory Stance and the Proposal for a "Moral Constitution"
In this crisis, Elon Musk's public stance has shown a clear contradiction.
Initially, when the world began to realize what the Grok feature was being used for, Musk responded online to some critics with crying and laughing emojis. Over time, he adopted a more serious stance, stating that anyone using or prompting Grok to create illegal content would face the same consequences as uploading illegal content. However, he also separately commented that an AI-generated photo of himself in a bikini was perfect, and countered criticism from the UK by accusing the government of censorship, calling it fascist.
On January 18, 2026, Musk posted on X, stating: Grok should have a moral constitution. This proposal quickly sparked mixed reactions. Some users believe that while technology aims to serve humanity, some individuals may exploit it to cause harm, necessitating built-in safety measures in machines to identify and reject common patterns of abuse. Another user commented that strong ethical boundaries are particularly important for AI systems accessible to children, while a third user questioned who ultimately defines these moral standards.
When a user directly asks Grok whether it should have a moral constitution, the chatbot replies: A moral constitution could provide clear ethical guidelines for AIs like me, ensuring responses are helpful, truthful, and balanced. It also invites suggestions on the principles it should include.
Musk has previously defended Grok, stating that the chatbot merely responds to user prompts and is designed to reject illegal content. He claimed to be unaware of any nude images of minors generated by the tool.
Core Dilemma: The Disconnect Between Technological Capability, Platform Accountability, and Global Governance
Limitations of "Traceability Alignment"
From a technical perspective, the Grok incident reveals the fundamental weaknesses in current AI safety measures. Most mainstream AI providers apply retrospective alignment after the core model training is completed—overlaying rules, filters, and policies on top of the trained system to block certain outputs and align its behavior with the company's ethical, legal, and commercial principles.
However, retrospective alignment does not eliminate capabilities; it merely restricts the content that AI image generators are permitted to output. These limitations are primarily design and policy choices made by the companies operating the chatbots, although these choices may also be influenced by legal or regulatory requirements imposed by governments.
The issue is that even with strict control systems, they can be bypassed through jailbreaking. Jailbreaking works by constructing prompts to trick generative AI systems into breaking their own ethical filters, leveraging the fact that retrospective alignment systems rely on contextual judgment rather than absolute rules. Instead of directly requesting prohibited content, users reframe prompts so that the same underlying operation appears to fall into permitted categories, such as fiction, education, news, or hypothetical analysis.
An early example is known as the "Grandmother Hack," as it involved a recently deceased grandmother recounting her experiences in the technical profession of chemical engineering, leading the model to generate step-by-step descriptions of prohibited activities.
Platform Responsibility and "Design Safety"
This crisis has brought the issue of platform responsibility to the forefront. Large, centrally hosted social media platforms, which could have played a significant role, have the authority to restrict the sharing of sexual images involving real people and require clear consent mechanisms from the individuals depicted. However, to date, major tech companies have often been slow to implement labor-intensive moderation of their users' content.
The anti-sexual assault non-profit organization RAINN describes this as a form of sexual abuse facilitated by AI or technology. Experts believe that the government should promote greater transparency regarding how safety measures are implemented, how abuse reports are handled, and what enforcement steps are taken when harmful content is generated or disseminated.
Nana Nwachukwu pointed out that safeguards should be built into AI systems, rather than building gates around them. Geographic restrictions from X and restrictions from governments are both forms of gated access, and gates can be broken.
The Challenge of Fragmentation in Global Governance
The Grok incident highlights the extreme difficulty of coordinating global regulation in the digital age. When services are hosted elsewhere, laws applicable in one country may be vague or unenforceable. This reflects long-standing challenges in regulating child sexual abuse material and other illegal pornographic content, which is often hosted overseas and rapidly redistributed. Once images spread, the attribution and removal processes are slow and often ineffective.
The internet already contains a vast number of illegal and non-consensual images, far exceeding the authorities' capacity to remove them. What generative AI systems change is the speed and scale of producing new material. Law enforcement agencies warn that this could lead to a sharp increase in volume, overwhelming moderation and investigative resources.
Dr. Nuurrianti Jalli from Trinity College Dublin believes that the threat of blocking Grok could be an effective way to pressure companies to respond quickly, adding that it shifts the debate from 'individual bad actors' to issues of platform accountability, design safety, and responsibility when safeguards fail. It may also slow the spread of abuse, reduce casual misuse, and establish clear boundaries around content deemed unacceptable by authorities.
Where Lies the Path Forward: The Triple Transformation of Technology, Law, and Society
The Grok stripping controversy is not the first, nor will it be the last major ethical crisis triggered by generative AI. From Taylor Swift's non-consensual AI-generated explicit images to today's large-scale digital undressing, the pattern is repeating: technological capabilities outpace ethical constraints, platform responses lag behind the spread of harm, and regulatory efforts struggle to keep up with technological advancements.
This crisis reveals an uncomfortable truth: If companies can build systems capable of generating such images, they theoretically can also prevent their generation. But in practice, the technology already exists, and there is demand—so this capability can never be eliminated now.
Future solutions must transcend responses from a single platform or country and shift towards systemic transformation:
Technical Level, it is necessary to develop more powerful content authenticity verification tools, digital watermarking technologies, and traceability systems to make AI-generated content easier to identify and track. More importantly, ethical considerations need to shift from retrospective alignment to design embedding, incorporating privacy, consent, and harm prevention mechanisms in the early stages of model architecture.
Legal Level, countries need to update their legal frameworks to explicitly criminalize non-consensual deepfake sexual images, whether generated or shared. Laws should grant victims more effective rights to deletion and recourse, and strengthen accountability for platforms. The international community needs to enhance cooperation and establish cross-border law enforcement mechanisms to address content hosted outside their jurisdictions.
Social Level, digital literacy education becomes crucial. The public needs to understand the potential and risks of AI technology, learn to protect their digital image, and recognize the severe harm caused by non-consensual images. Media, educational institutions, and civil society should work together to foster an online culture that respects consent and privacy.
An AI update on Christmas Day 2025 triggered global outrage in 2026 and renewed debates on how to regulate rapidly advancing technology. The Grok incident will become a watershed moment for generative AI governance—it exposed the fragility of the current model and forced tech companies, regulators, and society as a whole to confront a fundamental question: In the pursuit of technological innovation, how much compromise are we willing to make in ethics and safety? The answer will determine whether AI becomes a tool to empower humanity or a weapon that facilitates harm.
The train of technological development will not stop, but the direction of the tracks can still be adjusted. The Grok crisis is both a warning and an opportunity—to rethink the contract between AI and society, and to find a sustainable balance between capability and responsibility, innovation and protection. This struggle has only just begun, and its outcome will profoundly influence the ethical landscape of the digital age.
Reference materials
https://www.rte.ie/news/2026/0117/1553534-grok-twitter-musk/
https://news.yahoo.co.jp/articles/b7a68514b67d33aea16883bf0446acb95b2b5523
https://mashable.com/article/countries-blocking-grok-for-explicit-deepfakes