The biggest AI companies agree to crack down on child abuse images

  • 📰 The Verge

Companies including Amazon, Google, Meta, Microsoft, and OpenAI have committed to a set of principles aimed at removing, and avoiding the inclusion of, child sexual abuse material in the datasets used to train AI models.

Tech companies like Google, Meta, OpenAI, Microsoft, and Amazon committed today to reviewing their AI training data for child sexual abuse material and removing it from use in any future models. The companies signed on to a new set of principles meant to limit the proliferation of CSAM. They promise to ensure training datasets do not contain CSAM, to avoid datasets with a high risk of including CSAM, and to remove CSAM imagery or links to CSAM from data sources.

Stanford researchers released a report in December that found a popular dataset used to train some AI models contained links to CSAM imagery. Researchers also found that a tip line run by the National Center for Missing and Exploited Children, already struggling to handle the volume of reported CSAM content, is quickly being overwhelmed by AI-generated CSAM images.
