The biggest AI companies agree to crack down on child abuse images

Source: The Verge


Amazon, Google, Meta, Microsoft, OpenAI, and other companies have committed to a set of principles aimed at removing child sexual abuse material from the datasets used to train AI models and avoiding datasets likely to contain it.

Tech companies like Google, Meta, OpenAI, Microsoft, and Amazon committed today to reviewing their AI training data for child sexual abuse material and removing it from use in any future models. The companies signed on to a new set of principles meant to limit the proliferation of CSAM. They promise to ensure training datasets do not contain CSAM, to avoid datasets with a high risk of including CSAM, and to remove CSAM imagery or links to CSAM from data sources.

Stanford researchers released a report in December that found a popular dataset used to train some AI models contained links to CSAM imagery. Researchers also found that a tip line run by the National Center for Missing and Exploited Children, already struggling to handle the volume of reported CSAM content, is quickly being overwhelmed by AI-generated CSAM images.
