Privacy

Intimate Images: The UK’s Planned Takedown Rule

Published: Feb. 24, 2026

The UK government plans to introduce legislation requiring tech companies that allow users to share content to remove intimate images shared without consent within 48 hours of receiving notification.

The offence of sharing intimate images without consent is already designated as a “priority offence” under the Online Safety Act 2023, requiring regulated platforms to take proactive steps to mitigate the risk of such content appearing.

The plans are to be implemented through an amendment to the Crime and Policing Bill, which remains subject to parliamentary passage before it becomes law.  

For tech companies operating in or serving the UK market, this is more than a content moderation tweak. The plans signal a structural shift in regulatory expectations: one that blends reactive takedown mandates with proactive prevention requirements and meaningful enforcement risk.

The plans, which share significant similarities with the United States’ TAKE IT DOWN Act, also reflect an emerging consensus on how to respond to online non-consensual intimate imagery. The TAKE IT DOWN law and these UK proposals are designed to apply to both real and digitally manipulated images, and both impose strict obligations on covered platforms that receive reports of such images, including a 48-hour takedown requirement.

From Notice to Takedown

The 48-hour takedown requirement would formalize a strict timeline: once a victim has reported an image and the platform is notified, the platform would be required to remove the non-consensual intimate image within 48 hours. While many major platforms already operate faster than this in practice, the amendment, if enacted, would create a legal obligation backed by significant fines. Furthermore, the proposal would require that once a non-consensual image is reported, platforms take reasonable steps to ensure that the same image is removed across services and automatically blocked from being re-uploaded.
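In engineering terms, the timeline above amounts to a deadline attached to each report at the moment of notification. The following is a minimal, hypothetical sketch of such a tracker; the class and field names are illustrative and not drawn from any statutory text.

```python
from datetime import datetime, timedelta, timezone

# Illustrative 48-hour removal window, per the proposed rule described above.
TAKEDOWN_WINDOW = timedelta(hours=48)

class TakedownReport:
    """Hypothetical record of one notified image and its removal deadline."""

    def __init__(self, content_id: str, notified_at: datetime):
        self.content_id = content_id
        self.notified_at = notified_at
        # The clock starts on notification, not on upload.
        self.deadline = notified_at + TAKEDOWN_WINDOW
        self.removed_at: datetime | None = None

    def mark_removed(self, when: datetime) -> None:
        self.removed_at = when

    def is_compliant(self, now: datetime) -> bool:
        """Removed within the window, or the window has not yet expired."""
        if self.removed_at is not None:
            return self.removed_at <= self.deadline
        return now <= self.deadline

# Usage: a report notified at noon on 1 March must be actioned by noon on 3 March.
report = TakedownReport("img-123", datetime(2026, 3, 1, 12, 0, tzinfo=timezone.utc))
report.mark_removed(datetime(2026, 3, 2, 9, 30, tzinfo=timezone.utc))
print(report.is_compliant(datetime(2026, 3, 5, 0, 0, tzinfo=timezone.utc)))  # True
```

A real moderation pipeline would of course layer queueing, triage, and audit logging on top of this; the sketch only captures the deadline arithmetic the proposal implies.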

Enforcement With Teeth

The non-compliance risks would be substantial, including fines of up to 10% of global revenue and, in certain cases, services being blocked in the UK.

Operational Implications for Platforms

For tech industry professionals, several operational questions arise:

  1. Detection Systems: Are existing moderation tools sufficiently tuned to identify non-consensual intimate imagery, including AI-generated “nudified” images? The rise of generative AI tools significantly complicates classification and enforcement.
  2. Hash-Sharing Infrastructure: The government has signalled support for digital marking systems that prevent re-uploads. Platforms may need to invest in or expand cross-platform hash-sharing partnerships, similar to those used for CSAM detection.
  3. User Reporting Flows: The 48-hour clock starts upon notification. Platforms must ensure reporting mechanisms are accessible, efficient, and capable of triaging cases quickly and accurately.
  4. Cross-Jurisdictional Harmonization: The UK’s proposal aligns with steps being taken by other governments. Regulators in the EU, Australia, and the U.S. are increasingly focused on image-based abuse and AI-facilitated exploitation. Companies should evaluate whether a globally harmonized approach is more sustainable than piecemeal regional compliance.
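To make point 2 concrete, hash-based re-upload blocking can be sketched as follows. This is a simplified, hypothetical example: production systems use perceptual hashes (such as PDQ or PhotoDNA) that survive resizing and re-encoding, whereas the cryptographic SHA-256 used here only catches byte-identical copies. All names are illustrative.

```python
import hashlib

class ReuploadBlocklist:
    """Hypothetical store of fingerprints for images removed after a takedown."""

    def __init__(self) -> None:
        self._hashes: set[str] = set()

    @staticmethod
    def fingerprint(image_bytes: bytes) -> str:
        # A real deployment would use a perceptual hash here; SHA-256 is
        # used only to keep this sketch self-contained.
        return hashlib.sha256(image_bytes).hexdigest()

    def add(self, image_bytes: bytes) -> None:
        """Record a confirmed non-consensual image after removal."""
        self._hashes.add(self.fingerprint(image_bytes))

    def should_block(self, image_bytes: bytes) -> bool:
        """Check a new upload against known takedown fingerprints."""
        return self.fingerprint(image_bytes) in self._hashes

# Usage: an exact re-upload of a removed image is blocked; other uploads pass.
blocklist = ReuploadBlocklist()
blocklist.add(b"...reported image bytes...")
print(blocklist.should_block(b"...reported image bytes..."))  # True
print(blocklist.should_block(b"different upload"))            # False
```

Cross-platform hash-sharing, as used for CSAM detection, extends this idea by exchanging the fingerprint set (not the images themselves) between services.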

AI and the Expanding Definition of Harm

Notably, the announcement also contemplates coverage of AI “nudification” tools and chatbots used to generate abusive imagery. This reflects a broader regulatory trend: lawmakers are no longer treating generative AI harms as speculative future risks. They are seeking to mitigate them directly through statutory obligations.

For AI developers and platforms integrating generative capabilities, this reinforces the need for guardrails at the model and product design level, not just at the moderation layer. 

A Broader Policy Signal

At a policy level, this move reflects the UK government’s framing of violence against women and girls as a national priority that extends into digital environments. The comparison of intimate image abuse to other high-severity online harms suggests regulators are recalibrating how they weigh dignity, privacy, and psychological harm against traditional speech considerations. For industry leaders, whilst the detailed statutory language and enforcement mechanisms are still being finalised, the takeaway is clear: intimate image abuse is becoming a top-tier regulatory focus. Reactive moderation alone will not suffice.