The biggest AI companies agree to crack down on child abuse images

Source: The Verge


Companies including Amazon, Google, Meta, Microsoft, and OpenAI have committed to a set of principles aimed at removing and avoiding child sexual abuse material in the datasets used to train AI models.

Tech companies like Google, Meta, OpenAI, Microsoft, and Amazon committed today to reviewing their AI training data for child sexual abuse material and removing it from use in any future models. The companies signed on to a new set of principles meant to limit the proliferation of CSAM. They promise to ensure training datasets do not contain CSAM, to avoid datasets with a high risk of including CSAM, and to remove CSAM imagery or links to CSAM from data sources.

Stanford researchers released a report in December that found a popular dataset used to train some AI models contained links to CSAM imagery. Researchers also found that a tip line run by the National Center for Missing and Exploited Children, already struggling to handle the volume of reported CSAM content, is quickly being overwhelmed by AI-generated CSAM images.

