
UK to Amend the Crime and Policing Bill: Tech Platforms Face a 48-Hour Deadline to Remove Intimate Images

19/02/2026

The UK requires tech companies to delete intimate images within hours.

On February 18, the UK government announced that it will amend the Crime and Policing Bill to introduce a new statutory duty: major technology platforms must remove non-consensually shared intimate images within 48 hours of receiving a report. Companies that fail to comply could face fines of up to 10% of their global annual revenue, and their services could even be blocked in the UK.

This amendment, promoted by Prime Minister Keir Starmer, represents a significant move by the UK in governing online content, particularly targeting online violence against women and girls. It directly addresses the growing issue of intimate image abuse within the UK and may also influence how other countries regulate digital platforms.

From Repeated Reporting to One-Time Resolution

The core of the amendment is to change victims' current predicament. In the UK, publishing such images is already illegal, yet many victims find it difficult to get the content removed entirely, often having to file repeated reports across different platforms in a game of whack-a-mole. The new rules aim to break this deadlock in two ways.

First, the amendment establishes the principle of "report once, remove across the entire network." Victims need only report to one platform; that platform then has a legal obligation to remove the images from its own services and to collaborate with other major platforms to prevent the images from reappearing elsewhere. Technology Secretary Liz Kendall stated: "The days of tech companies having a 'free pass' are over. Women should not be forced to contact different platforms one by one just to remove a single image." This shifts the burden of enforcement from individual victims to the platforms, which have the technology and resources to act.

Second, the amendment grants regulators greater authority to impose penalties. Fines will be calculated on the basis of qualifying worldwide revenue as defined by the regulator Ofcom, sums large enough to deter even the biggest tech companies. Additionally, Ofcom is considering classifying the non-consensual sharing of intimate images at the same severity level as child sexual abuse material and terrorist content. This means platforms may need to use technologies such as hash matching to automatically identify and block the re-upload of such content. Ofcom has said it will accelerate the relevant decision-making, and the new measures could take effect in the summer of 2026.

The Starmer Government's Strategy and Global Regulatory Trends

Prime Minister Keir Starmer said in a statement: "The online world is the frontline in the 21st-century fight against violence targeting women and girls." The remark underscores the issue's prominence on his government's agenda and links it to national security and civil rights. Starmer, who previously served as Director of Public Prosecutions for England and Wales, said he has witnessed the unimaginable, often lifelong suffering caused by such violence, adding a personal dimension to his policy advocacy.

From a broader perspective, this legislation represents the UK's latest effort to take the initiative in the field of technology regulation. In recent years, governments worldwide have been strengthening oversight of tech companies—from the EU's Digital Services Act to Australia's consideration of banning social media for children under 16. While the UK already has the Online Safety Act as a framework, this amendment sets stricter and faster operational standards specifically in the area of intimate image abuse. This appears to be a strategic move: first achieving a breakthrough in an area with high social consensus and clear harm, thereby accumulating experience and legitimacy for future regulation of more complex and contentious areas, such as political disinformation or algorithmic transparency.

The timing of the bill's proposal is also noteworthy. In January 2026, the UK government engaged in a public dispute with Elon Musk's X platform after its built-in Grok AI chatbot was used to create fake nude images of women. This controversy directly prompted the UK to legislate in early February, making non-consensual deepfake images illegal. The recent 48-hour takedown order amendment can be seen as an extension and escalation of that dispute, sending a clear message to tech companies: operating in the UK means complying with the country's increasingly stringent rules.

Execution Challenges and Technical Limitations

Although the legislative intent is clear, there will be many challenges in actual implementation. The first is the issue of enforcement. How are major platforms defined? Which services will be brought under regulation? How will Ofcom verify whether platforms have effectively prevented the re-upload of images globally? All of these require detailed regulatory guidelines. The government has stated that the Department for Science, Innovation and Technology will issue guidance on how internet service providers should block rogue websites hosting such content, but these websites may fall outside the scope of the Online Safety Act. This involves complex content-blocking technologies and could spark debates about net neutrality and excessive censorship.

On the technical side, hash matching is not perfect. If an image is even slightly edited (cropped, color-adjusted, or watermarked), an exact cryptographic hash changes completely, letting the copy evade detection; perceptual hashes tolerate small edits but can still be defeated by heavier manipulation. The explosion of AI-generated content introduces further threats: deepfake tools make it easy to create realistic intimate images, and generative AI can produce entirely fictional images targeting specific individuals. Whether existing technology can accurately identify and handle such massive, constantly evolving content within 48 hours remains uncertain.
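The fragility of exact hash matching described above can be illustrated with a minimal sketch. This is not how platforms' actual systems work (production systems typically use perceptual hashes such as PDQ or PhotoDNA, and hash decoded image content rather than raw bytes); it only shows why a blocklist of cryptographic hashes misses even trivially edited copies. The byte strings here are hypothetical stand-ins for image files.

```python
import hashlib

def exact_hash(data: bytes) -> str:
    """Cryptographic hash of the kind used in exact-match blocklists."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical stand-ins for image bytes; real systems hash image files.
original = b"\x89PNG...original image bytes..."
edited = original + b"\x00"  # simulates a trivial edit, e.g. a metadata tweak

# A blocklist of hashes of known abusive images.
blocklist = {exact_hash(original)}

print(exact_hash(original) in blocklist)  # True: an exact re-upload is caught
print(exact_hash(edited) in blocklist)    # False: one changed byte evades the list
```

A single altered byte yields a completely different SHA-256 digest, which is why regulators and platforms look instead to perceptual hashing, where visually similar images produce similar hashes that can be compared by distance rather than equality.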

The data also reveals the complexity of the issue. A parliamentary report from May 2025 indicated that reports of intimate image abuse in the UK rose by 20.9% in 2024. Meanwhile, a government report from July 2025 found that young men and boys are the primary targets of sextortion, in which victims are blackmailed for money under the threat of having intimate images shared. This suggests the harm is not limited to women, and while current legislative and policy discussions focus mainly on protecting women and girls, a more comprehensive response framework may be needed in the future.

Potential Global Impact

The impact of this UK action will not remain confined within its borders. As a globally significant financial, technological, and media hub, the rules established by the UK often produce spillover effects. A fine of up to 10% of global revenue essentially constitutes a form of extraterritorial jurisdiction, compelling multinational tech companies to adjust their global or regional content moderation policies for the UK market. This could lead to an outcome where, to meet one of the strictest standards set by the UK, companies might extend the same practices to other regions with more lenient regulations, thereby inadvertently raising the global threshold for content governance.

From a broader perspective, the United Kingdom is expanding the definition of online safety from traditional child protection and counter-terrorism to explicitly include gender-based online violence. Placing non-consensual intimate images alongside child sexual abuse material and terrorist content sends a strong political signal. It aims to build an international consensus that gender-based violence in the digital age is a public harm as severe as the aforementioned crimes, requiring the same level of technological and regulatory resources to combat. This may encourage allies with similar regulatory inclinations, such as the European Union, Canada, and Australia, to adopt follow-up measures.

Of course, this approach will also encounter resistance. In addition to lobbying from technology companies, civil liberties groups may worry that overly powerful automatic removal and blocking mechanisms could harm freedom of speech and due process. Finding a balance between protecting personal dignity and safeguarding digital rights will be a core challenge for the UK regulator Ofcom in the coming years.

As the bill advances through the British Parliament, a law attempting to reshape behavior in cyberspace takes form. Forty-eight hours: this brief window carries a nation's declaration of war against one malady of the digital age, and marks the beginning of a global experiment in power, technology, and human nature. Its outcome will affect not only women's sense of safety in the UK but may also be written into the history of global internet governance.