The world’s leading AI companies commit to protecting children’s safety online

Leading artificial intelligence companies, including OpenAI, Microsoft, Google, Meta and others, have jointly committed to preventing their AI tools from being used to exploit children and generate child sexual abuse material (CSAM). The initiative was led by child safety group Thorn and All Tech Is Human, a nonprofit focused on responsible technology.

Commitments from AI companies, Thorn said, “have set a groundbreaking precedent for the industry and represent a significant step forward in efforts to defend children from sexual abuse through the deployment of generative AI.” The aim of the initiative is to prevent the creation of sexually explicit material involving children and to remove it from social media platforms and search engines. More than 104 million files of suspected child sexual abuse material were reported in the United States in 2023 alone, according to Thorn. In the absence of collective action, generative AI is poised to make this problem worse and overwhelm law enforcement agencies that are already struggling to identify real victims.

On Tuesday, Thorn and All Tech Is Human released a new paper, “Safety by Design for Generative AI: Preventing Child Sexual Abuse,” which outlines strategies and recommendations for companies that build AI tools, search engines, social media platforms, hosting services and developer tools to prevent generative AI from being used to harm children.

One of the recommendations, for example, asks companies to choose the datasets used to train AI models carefully and to avoid those containing not only instances of CSAM but also adult sexual content, because of generative AI’s propensity to combine the two concepts. Thorn is also calling on social media platforms and search engines to remove links to websites and apps that allow people to “nudify” images of children, creating new AI-generated child sexual abuse material online. According to the paper, a flood of AI-generated CSAM will make it harder to identify real victims of child sexual abuse by worsening the “haystack problem” – a reference to the volume of content that law enforcement must currently sift through.

“This project was about making it very clear that you don’t need to give up,” Rebecca Portnoff, Thorn’s vice president of data science, told The Wall Street Journal. “We want to be able to change the course of this technology in a way that the existing harms of this technology are cut off at the knees.”

Some companies, Portnoff said, had already agreed to separate images, video and audio involving children from datasets containing adult content to prevent their models from combining the two. Others also add watermarks to identify AI-generated content, but the method is not foolproof: watermarks and metadata can be easily removed.
