Internet charity in Bucks calls for Government to toughen up AI image-modifying regulation

It comes as the technology is increasingly used to generate nude deepfake images

Author: Zoe Head-Thomas | Published 22nd Oct 2024

A leading UK online safety organisation has called on the Government to ban "nudifying" AI tools after a new report revealed that over half a million UK teenagers have encountered AI-generated nude deepfakes.

The report, "The New Face of Digital Abuse: Children’s Experiences of Nude Deepfakes," created by Internet Matters, highlights the ease with which explicit, non-consensual images of children and women can be produced and shared using generative AI technologies.

The survey of 2,000 parents and 1,000 children aged 9 to 17 found that 13% of teenagers had had an experience involving nude deepfakes, whether by seeing, creating, or sharing one, or knowing someone who has.

This means that four children in every class of 30 are estimated to have had such encounters.

The report also shows that girls are most likely to be the victims of these images, while boys are more often the creators.

Carolyn Bunting, co-CEO of Internet Matters, based in Buckinghamshire, said: "We’ve seen an emerging trend with children and young people being exposed to deepfake technology.

"AI has made it possible to produce highly realistic deepfakes of children with the click of a few buttons.

"Nude deepfakes are a profound invasion of bodily autonomy and dignity, and their impact can be life-shattering."

One of the most alarming findings is that over half of teenagers (55%) believe it would be worse to have a deepfake nude created of them than to have a real image shared.

Many children are deeply fearful of this form of abuse, with Ms Bunting explaining: "What children tell us is they’re really worried about the potential for someone to create a deepfake nude of them.

"To them, it can be worse than creating an image themselves because they’ve lost complete control."

Internet Matters is calling for immediate government action, including banning nudifying apps and tools.

The organisation is also advocating for reforms to the Online Safety Act to bring AI-generated harms, such as deepfake nudes, within its scope.

In addition to legislative action, the report highlights the need for better education about the risks of AI technology.

Only 11% of teenagers have been taught about deepfakes in school, with just 6% learning specifically about nude deepfakes.

Ms Bunting said: "We’d really like to see this included in the classroom to make sure it’s covered as a topic in school.

"Children aren’t really getting any education about this issue."

The report urges both government and tech companies to take stronger action to protect children from this growing form of digital abuse.

Parents are also encouraged to discuss these concerns with their children and be part of the education process around harmful online content.

What does the Government say?

Minister for Safeguarding and Violence Against Women and Girls, Jess Phillips MP, said:

"This government welcomes the work of Internet Matters, which has provided an important insight into how emerging technologies are being misused.

"The misuse of AI technologies to create child sexual abuse material is an increasingly concerning trend. Online deepfake abuse disproportionately impacts women and girls online, which is why we will work with Internet Matters and other partners to address this as part of our mission to halve violence against women and girls over the next decade.

"Technology companies, including those developing nudifying apps, have a responsibility to ensure their products cannot be misused to create child sexual abuse content or nonconsensual deepfake content."
