Staffordshire AI expert warns parents not to post photos of their children online
It comes as concerns grow over new 'nude deepfake' apps and software
Parents across Staffordshire and Cheshire are being warned not to post photographs of their children online to protect their safety.
It comes as major concerns grow over AI-powered "nudifying" apps which can create non-consensual explicit images of people, including children.
Internet Matters has called on the Government to strengthen the Online Safety Act to ban them after a study from the group estimated that as many as half a million children have encountered such images online.
It said its research had found a growing fear among young people over the issue, with 55% of teenagers saying it would be worse to have a deepfake nude of them created and shared than a real image.
"It's a real call to action for parents in particular to not post pictures of their children online," said Professor Catherine Flick from the University of Staffordshire, whose expertise includes artificial intelligence and child protection.
"Especially concerning is if there's a long history of posting photos. These artificial intelligence generative models need a lot of data to work from, the more pictures they have access to the more realistic it's going to be, which is absolutely horrifying.
"Posting to Instagram or something like that about going for a walk today, or being with your kids at the park or at the beach, not only do you get this collection of images, but they can be manipulated in ways that would absolutely horrify you.
"There's a consent issue there too, these children usually aren't consenting for their pictures to be put up online. Imagine having a whole back catalogue of your photographs available, especially 10-20 years down the line.
"You've got to be careful with this technology. It's a Pandora's box and we've opened it now. We're already seeing the lives that it's ruining."
"It absolutely terrifies me" - Professor Flick
Internet Matters said stronger online safety laws and new legislation banning nudifying tools are necessary because current legislation is not keeping pace. It argued that while possession of an AI-generated sexual image of a child is a criminal offence, the AI models used to generate such images are not currently illegal in the UK.
Earlier this month, online safety watchdog the Internet Watch Foundation (IWF) warned that AI-generated child sexual abuse content is now being increasingly found on the open, public web, rather than hidden away on dark web forums.
Internet Matters said it estimates that 99% of deepfake nudes feature women and girls, and warned the content is being used to facilitate child-on-child sexual abuse, adult perpetrated sexual abuse, and sextortion.
Internet Matters co-chief executive Carolyn Bunting said: "AI has made it possible to produce highly realistic deepfakes of children with the click of a few buttons.
"Nude deepfakes are a profound invasion of bodily autonomy and dignity, and their impact can be life-shattering.
"With nudifying tools largely focused on females, they are having a disproportionate impact on girls.
"Children have told us about the fear they have that this could happen to them without any knowledge and by people they don't know. They see deepfake image abuse as a potentially greater violation because it is beyond their control.
"Deepfake image abuse can happen to anybody, at any time. Parents should not be left alone to deal with this concerning issue.
"It is time for Government and industry to take action to prevent it by cracking down on the companies that produce and promote these tools that are used to abuse children."
The safety organisation's study involved surveying 2,000 parents of children aged three to 17, and 1,000 children aged nine to 17, in the UK.
It found that teenage boys are twice as likely as girls to report an experience with a nude deepfake. However, boys are more likely to be the creators of deepfake nudes, and girls are more likely to be the victims.
The study also indicated support among both children and parents for more education around deepfakes, with 92% of teenagers and 88% of parents saying they believe children should be taught about the risks of the technology in school.
Minister for safeguarding and violence against women and girls Jess Phillips said: "This Government welcomes the work of Internet Matters, which has provided an important insight into how emerging technologies are being misused.
"The misuse of AI technologies to create child sexual abuse material is an increasingly concerning trend.
"Online deepfake abuse disproportionately impacts women and girls, which is why we will work with Internet Matters and other partners to address this as part of our mission to halve violence against women and girls over the next decade.
"Technology companies, including those developing nudifying apps, have a responsibility to ensure their products cannot be misused to create child sexual abuse content or non-consensual deepfake content."