
Nudification Apps Ban Planned as UK Moves to Tackle Online Misogyny


The UK government says that it will ban ‘nudification’ apps following calls to crack down on online misogyny.

The new laws — revealed on Thursday as part of a broader strategy intended to halve violence against women and girls — will criminalize the creation and supply of AI tools that allow users to alter pictures so that someone appears to have had their clothing removed, producing images known as deepfakes.

The new offences would add to current legislation covering sexually explicit deepfakes and the abuse of intimate images, the government said.

“Women and girls should be safe online as they are offline,” said Liz Kendall, the technology secretary.

“It is not right that they are put in a position to be manipulated. We will not wait for technology and tech companies to respond by taking down images or video after the damage has already been done.”

Under the Online Safety Act, it is already a crime to produce or share explicit deepfake images of an individual without their consent.

Ms Kendall said the new offence – which bans the manufacture or sale of nudifying apps – would mean “those who profit from them or allow their use will feel the full force of the law”.

NUDIFICATION

Nudification, or “de-clothing”, refers – as the name suggests – to a group of apps that use generative AI to make it convincingly appear as though someone’s clothing has been removed in an image or video.

Experts had warned of the growing popularity of such apps and the potential for fake nude imagery to cause extensive damage – particularly when used in the creation of child sexual abuse material (CSAM).

In April, the Children’s Commissioner for England Dame Rachel de Souza called for nudification apps to be banned outright.

“The creation of such an image is rightly illegal — the technology behind it should be too,” she said in a report.

The government on Thursday said it would “work with tech companies” to develop solutions for tackling intimate image abuse.

This would involve furthering its collaboration with UK safety tech firm SafeToNet, it said.

The UK-founded company developed AI software that it said could detect and block sexual content, and block device cameras when it detects that sexual content is being filmed.

That technology does not yet exist, but would be built on existing filters used by platforms including Meta to spot and flag potential nudity in imagery – usually with the aim of stopping children taking or sharing intimate images of themselves.

The news comes after child safety charities urged the government to crack down on the tech.

The Internet Watch Foundation (IWF) – which runs the Report Remove helpline for under-18s to anonymously report explicit images of themselves online – said that among verified reporters, 19% said some or all of their imagery had been altered.

Its chief executive, Kerry Smith, said the announcement was good news.

“We are pleased also to see active movement on a ban which will stop these so-called nudification apps that have no reason for being offered as a product,” she said.

“Apps like these make real children even more vulnerable to abuse, and we know that the images created are being circulated in some of the darkest parts of the internet.”

Children’s charity the NSPCC also welcomed the news, although its director of strategy, Dr Maria Neophytou, said it was “disappointed” not to see a similar “ambition” on requiring protection at device level.

The charity is also one of a number of groups urging the government to force tech firms to find simpler ways of detecting and blocking CSAM on their services, including in private messages.

The government announced plans on Thursday to make it “impossible” for children to take, share or view a nude picture on their phones.