The biggest AI companies agree to crack down on child abuse images

  • 📰 verge

Companies including Amazon, Google, Meta, Microsoft, and OpenAI have committed to a set of principles aimed at removing child sexual abuse material from the datasets used to train AI models and keeping it out of future ones.

Tech companies like Google, Meta, OpenAI, Microsoft, and Amazon committed today to reviewing their AI training data for child sexual abuse material and removing it from use in any future models. The companies signed on to a new set of principles meant to limit the proliferation of CSAM. They promise to ensure training datasets do not contain CSAM, to avoid datasets with a high risk of including CSAM, and to remove CSAM imagery or links to CSAM from data sources.

Stanford researchers released a report in December that found a popular dataset used to train some AI models contained links to CSAM imagery. Researchers also found that a tip line run by the National Center for Missing and Exploited Children, already struggling to handle the volume of reported CSAM content, is quickly being overwhelmed by AI-generated CSAM images.
