Childline: The very real risks to children from the growth of and access to AI tools


It’s a concern shared by the public too. More than three-quarters of those surveyed in the North East (79%) said they wanted to see child safety checks built into new generative AI products before they go public, even if that causes a delay in releasing them.
The survey also found most of the public (89% nationally, 87% in the region) have some level of concern that “this type of technology may be unsafe for children”.
NSPCC research has already found that generative AI is being used to create sexual abuse images of children, groom children and give misinformation or harmful advice to young people, and our Childline counsellors have been hearing concerns from children and young people about AI since as early as 2019.
One boy aged 14 told counsellors: “I’m so ashamed of what I’ve done, I didn’t mean for it to go this far. A girl I was talking to was asking for pictures and I didn’t want to share my true identity, so I sent a picture of my friend’s face on an AI body.
“Now she’s put that face on a naked body and is saying she’ll post it online if I don’t pay her £50. I don’t even have a way to send money online, I can’t tell my parents, I don’t know what to do.”
Like the internet, AI can be a brilliant tool if used right. It can help innovate, create and produce all kinds of content that can help young people.
But it can also be used to create material which can have a devastating and corrosive impact on the lives of children.
To have any chance of preventing this kind of horrific abuse, the Government must learn from the mistakes made when dealing with unregulated social media platforms and ensure safeguards are designed in from day one, long before generative AI spirals out of control and damages more young lives.