Britain moves to criminalise AI “nudification” apps

New legislation would make the creation and sharing of non-consensual deepfake sexual images a crime, tightening the UK’s approach to synthetic-media harms.

The British government said on December 18th that it intends to ban the use of artificial-intelligence applications designed to generate fake nude images of real people without their consent.

Under the proposed measures, creating or sharing such images would become a criminal offence, punishable by severe penalties including prison sentences. The initiative, announced in London, is framed as an extension of the government’s efforts to curb online harms, particularly those disproportionately affecting women and girls.

The policy targets so-called “nudification” apps, which use generative AI models to alter photographs by removing clothing or fabricating explicit imagery. These tools rely on the same underlying techniques that power legitimate image-generation systems, but are marketed for a narrow purpose: producing realistic sexual images of identifiable individuals. Campaigners and ministers argue that this practice constitutes a form of abuse, causing psychological distress and reputational damage to victims.

The proposed ban builds on existing British law. The Online Safety Act already makes it illegal to distribute non-consensual intimate images, whether real or synthetic, and grants the communications regulator Ofcom powers to fine platforms that fail to remove such content. The new measures would go further by explicitly criminalising the creation of the images themselves, shifting legal liability closer to the source of the harm rather than focusing solely on platforms that host or distribute it.

High-profile incidents have accelerated political momentum. In November, Ofcom fined Itai Tech Ltd £50,000 for failing to put effective age checks in place on Undress.cc, a so-called “nudification” website that uses AI to make people in real photographs appear nude.

The watchdog said the fine reflected breaches of the Online Safety Act, which requires sites offering pornographic content in the UK to verify that users are over 18. Ofcom also imposed an additional £5,000 penalty after the company failed to comply with a statutory request for information, though it noted that Itai Tech had quickly blocked access from UK IP addresses once the investigation began.

A separate case in Scotland illustrates how existing laws are already being applied to AI-assisted image manipulation. In August, a Glasgow court heard one of the country’s first prosecutions involving deepfake “nudification”.

Callum Brooks, 25, pleaded guilty at Glasgow Sheriff Court to disclosing intimate images without consent after using AI-enabled software and photo-editing tools to alter two fully clothed photographs taken from a former school friend’s Instagram account. The manipulated images made the woman appear partially nude and were shared with two other people without her knowledge.

The court accepted the Crown’s position that there was no sexual motivation, with Brooks claiming he had acted to demonstrate the software’s capabilities. Nonetheless, the case established that altering images using AI can still fall squarely within existing offences around non-consensual intimate imagery.

Sheriff Simone Sweeney fined Brooks £335, marking an early example of how courts are beginning to confront the misuse of generative tools even before the introduction of more explicit AI-specific legislation.

In early 2024 manipulated images purporting to depict the American singer Taylor Swift circulated widely online, prompting renewed scrutiny of how easily such content can be produced and shared. Women’s rights organisations have long argued that existing laws are ill-suited to address AI-enabled abuse, which can scale rapidly and cross borders with little friction.

Public opinion appears to favour tougher action. Surveys cited by British officials suggest strong support for prohibiting AI tools that generate non-consensual sexual imagery. Human-rights groups warn that victims, particularly young people, often experience anxiety, fear and withdrawal from public life, yet are reluctant to report incidents because of stigma or doubts about enforcement.

The European Union’s AI Act focuses primarily on classifying and managing systemic risks in AI systems, while the United States has so far relied on a patchwork of state laws and platform policies. By declaring certain AI applications illegitimate by design, the UK is testing a more application-specific approach to regulation.

Criminalising AI “nudification” is ultimately about changing where responsibility sits. Until now the focus has fallen largely on platforms, the websites and social networks told to take down harmful content after it appears.

Under the new proposals, making and sharing these fake nude images would be a crime in itself, so the problem is tackled earlier, at the point where the images are created. That will not stop the technology from existing, but it raises the risks of using it in harmful ways and gives police and regulators clearer grounds to step in.

For this story, Lion Herald journalists used generative AI as a research tool. An editor verified the accuracy of the information before publishing.
