In the last three years, text-to-image artificial intelligence tools have made a huge leap in the quality of the results they generate, which are increasingly photorealistic. Among the dangers of being able to create any type of image from a simple text description, misinformation is usually cited as the main one, but far more serious, harmful and inhumane is the generation of child sexual abuse images. As the Internet Watch Foundation (IWF, a British watchdog) stated this Friday, the number of AI-generated child abuse images found on the internet is reaching a ‘tipping point’.
The IWF works with police forces and technology providers to track the images it finds online and helps remove hundreds of thousands each year. Now it says artificial intelligence is making its job much more difficult.
‘I find it really chilling, as I feel like we are at a turning point,’ said ‘Jeff’, a senior IWF analyst who uses a false name to protect his identity.
Over the past six months, Jeff and his team have detected more AI-generated images of child abuse than in the entire previous year. Many of the AI-created images they see are highly realistic. ‘Before, we could say with certainty what an AI-generated image was, but we’re getting to the point where even a trained analyst would have a hard time knowing if it’s real or not,’ says Jeff.
As the IWF explains, to make AI images so realistic, the software is trained on existing images of sexual abuse. It is worth remembering that explicit images of children, even when generated by AI, are just as illegal as real ones.
‘AI-generated child sexual abuse material causes horrific harm, not only to those who might see it, but also to survivors who are revictimized every time their images and videos of abuse are mercilessly exploited for the twisted enjoyment of online predators,’ notes Derek Ray-Hill, interim executive director of the IWF.
More worrying is the fact that almost all of the content detected by the IWF was not hidden on the dark web, but on the publicly accessible internet, available to anyone.
IWF analysts upload the URLs of web pages containing AI-generated images of child sexual abuse to a list that is shared with the tech industry so the sites can be blocked. AI images are also given a unique code, a digital fingerprint, so they can be automatically tracked even if they are deleted and re-uploaded elsewhere.
More than half of the AI-generated content found by the IWF in the last six months was hosted on servers in Russia and the United States, with a significant amount also found in Japan and the Netherlands.