Some artificial intelligence (AI) image-generators were trained on thousands of images of child sexual abuse, according to a new report, The Associated Press reported.
The report revealed that these images have made it easier for AI systems to produce realistic and explicit imagery of fake children, as well as to transform social media photos of fully clothed real teens into nudes, much to the alarm of schools and law enforcement around the world.
Until recently, anti-abuse researchers thought the only way that some unchecked AI tools produced abusive imagery of children was by essentially combining what they’ve learned from two separate buckets of online images — adult pornography and benign photos of kids.
But the Stanford Internet Observatory found more than 3,200 images of suspected child sexual abuse in the giant AI database LAION, an index of online images and captions that’s been used to train leading AI image-makers such as Stable Diffusion.
The watchdog group based at Stanford University worked with the Canadian Centre for Child Protection and other anti-abuse charities to identify the illegal material and report the original photo links to law enforcement.
On the eve of the release of the Stanford Internet Observatory’s report, LAION told The Associated Press it was temporarily removing its datasets.
LAION said in a statement that it “has a zero-tolerance policy for illegal content and in an abundance of caution, we have taken down the LAION datasets to ensure they are safe before republishing them.”
The Stanford report stressed that any photos of children — even the most benign — should not be fed into AI systems without their family’s consent.