A new law has been introduced that addresses AI-generated child sexual abuse material (CSAM), declaring it harmful even though it doesn’t involve real victims.
The law highlights that all forms of CSAM, including AI-generated content, can be used to groom children by making them believe sexual activity with adults is normal.
However, it places special emphasis on the dangers of AI-generated CSAM due to how AI systems are trained.
According to the law, AI-generated CSAM is harmful because these systems are trained on datasets that include thousands of images of real victims of CSAM.
This process essentially “revictimizes these children by using their likeness to create AI-generated CSAM indefinitely.”
The law defines “artificial intelligence” as a machine-based system that, with varying levels of autonomy, can process input and produce output that influences digital or physical environments.
Recently, the Sacramento Valley Internet Crimes Against Children (ICAC) Task Force launched an investigation after receiving a tip from the National Center for Missing and Exploited Children.
The tip led to the discovery of 18 CSAM files shared online, which expanded to reveal a total of 134 videos. Investigators traced these files to a local resident and cartoonist, Darrin Bell.
On Wednesday, police searched Bell’s home and claimed to find “evidence related to the case,” including AI-generated CSAM. Bell was arrested and is being held on $1 million bail. This marks the first time the Sacramento Valley ICAC has charged someone with possessing AI-generated CSAM.
Darrin Bell, a well-known cartoonist, won a Pulitzer Prize in 2019 for editorial cartoons addressing issues affecting marginalized communities and exposing political corruption. Bell has previously stated that his goal as an artist is to encourage respect for human dignity.
This case has drawn attention due to Bell’s reputation and the implications of using AI to create harmful content. The investigation is ongoing.