(CTN News) – A UK NGO has reported that children are using AI image generators to create explicit pictures of one another.
Although it has received only "a small number of reports" from schools, the UK Safer Internet Centre (UKSIC) urged immediate action to stop the problem from worsening.
The centre said children may need help to understand that what they are creating could be considered child abuse material.
The organization’s goal is for parents and educators to collaborate.
It noted that, under UK law, it is always illegal to create, possess, or share such images, whether they are real or AI-generated, even though young people may be driven more by curiosity than by malicious intent.
According to the report, children may share the content online without understanding the seriousness of what they have created. It also warned that the images could be used for blackmail.
A recent study by RM Technology, an educational technology company, polled 1,000 students and found that just under a third are using AI "to look at inappropriate things online."
It also discovered that educators were divided on who should be held accountable for informing students about the dangers of such content: governments, schools, or parents.
The UKSIC's position is that responsibility should be shared, with parents and schools working together.
“[We] need to see steps being taken now, before schools become overwhelmed and the problem grows,” said David Wright, director of UKSIC.
“When new technology, such as AI generators, become more widely available to the public, we should expect these kinds of damaging behaviors from young people, who aren’t always cognizant of the gravity of their actions.
“An increase in criminal content being made in schools is something we never want to see, and interventions must be made urgently to prevent this from spreading further.”
Victoria Green, head of the Marie Collins Foundation, a nonprofit that supports victims of sexual abuse, issued a stark warning about the "lifelong" harm that could result.
“While the children may not have intended for the images to cause harm, they run the risk of falling into the wrong hands and ending up on abuse websites once they are shared.
“There is a real risk that the images could be further used by sex offenders to shame and silence victims.”
In September, an app launched that appears to remove a person's clothing in a photo, demonstrating how artificial intelligence can enable young people to produce extreme content.
Risks of AI-Generated Images Falling into Wrong Hands
Twenty girls in Spain, aged 11 to 17, have come forward as victims, alleging that such an app was used to make fake nude photos of them.
They had no idea the pictures were circulating on social media. No charges have yet been filed against the boys who created the images.
AI-powered automated software, commonly known as bots, began appearing on social media platforms in 2019 offering "declothing" services, particularly on the messaging service Telegram.
While apps like the one used in Spain were once fairly crude, advances in generative AI have made them capable of producing remarkably lifelike fake nude photos.
Nearly 50,000 people have subscribed to the Spanish bot, suggesting a large base of users who pay to make images, typically after generating a few for free.
The bot's creator declined to comment when approached by the BBC.
The use of "declothing" apps is on the rise, according to Javaad Malik, a cyber-security expert at IT security firm KnowBe4, who told the BBC that the difficulty of distinguishing real from AI-generated photos is a contributing factor.
“It’s got mass appeal unfortunately, so the trend is just going up and we’re seeing a lot of revenge porn-type activities where cultural or religious beliefs cause a lot more issues for victims,” according to him.