startup, has built a service that alerts companies and organizations when their users post offensive content.
To date, many companies have tasked human moderators with keeping an eye out for slurs, racist attacks, fake news, and other harmful content.
"Now I would never be personally insulted if somebody referred to a vehicle as a paddy wagon," Quinn, who is of Irish descent, said. "But it is externalizing a population and associating a given ethnicity with criminal behavior."

"White supremacists have become masters of this in how they post content using code words, using certain kinds of punctuation," Quinn said.
I don’t get how this is supposed to help. The reason it’s “difficult to tackle” is that people can’t agree on what should and shouldn’t be offensive. An AI program is still designed by people, so it’s still people determining what’s offensive.
Source: Business Insider