The UK has brought into force a law making the creation of nonconsensual AI-generated intimate images a criminal offense, tightening platform accountability and accelerating regulatory action against deepfake abuse linked to emerging AI tools.
The United Kingdom is moving forward with stricter regulations to address the rapid spread of nonconsensual AI-generated intimate images, formally bringing into force a law that criminalizes the creation and solicitation of deepfake nudes. The decision follows mounting public and regulatory concern over the misuse of generative AI tools, including the circulation of images linked to the Grok AI chatbot on the X platform.
Under provisions of the Data Act passed last year, producing or requesting nonconsensual intimate images generated through artificial intelligence will now constitute a criminal offense. The government confirmed that the measure will take effect this week, reinforcing the UK’s broader effort to regulate harmful digital content and strengthen protections for victims of online abuse.
Liz Kendall, the UK’s Secretary of State for Science, Innovation and Technology, announced that the offense will also be classified as a priority violation under the Online Safety Act. This designation significantly increases the responsibilities of online platforms, requiring them to take proactive steps to prevent illegal deepfake content from appearing rather than responding only after harm has occurred.
The move places added pressure on technology companies and social media platforms that host or enable AI-generated content. Services found failing to comply with the Online Safety Act may face enforcement actions, including substantial financial penalties.
Ofcom, the UK’s communications regulator, has already initiated a formal investigation into X over the circulation of deepfake images allegedly produced using Grok. If violations are confirmed, the regulator has the authority to mandate corrective measures and impose fines of up to £18 million or 10 percent of a company’s qualifying global revenue, whichever amount is higher.
Government officials have emphasized the urgency of the investigation. Kendall stated that the public, and particularly those affected by the creation of nonconsensual AI-generated images, expect swift and decisive action. She added that regulatory proceedings should not be allowed to stretch on indefinitely, signaling a tougher stance on enforcement timelines.
In response to scrutiny, X has reiterated its policies against illegal content. The platform stated that it removes unlawful material, permanently suspends offending accounts, and cooperates with law enforcement agencies when necessary. It also warned that users who prompt AI systems such as Grok to produce illegal content would face the same consequences as those who directly upload such material.
Earlier this month, X introduced new restrictions on image generation using Grok, limiting certain public image-creation features to paying subscribers. However, independent testing suggested that workarounds still exist, allowing users to create or modify images, including sexualized content, without a subscription.
The UK’s latest action reflects a broader global push to address the societal risks posed by advanced generative AI technologies. As AI image tools become more accessible and realistic, regulators are increasingly focused on preventing misuse while holding platforms accountable for how their systems are deployed.
By criminalizing deepfake nudes and strengthening enforcement mechanisms, the UK aims to set a clear precedent for responsible AI governance and reinforce legal protections against digital exploitation.