startup, has built a service that alerts companies and organizations when their users post offensive content.
To date, many have tasked human moderators with the job of keeping an eye out for slurs, racist attacks, fake news, and other harmful content.
"Now, I would never be personally insulted if somebody referred to a vehicle as a paddy wagon," Quinn, who is of Irish descent, said. "But it is externalizing a population and associating a given ethnicity with criminal behavior."

"White supremacists have become masters of this in how they post content, using code words and certain kinds of punctuation," Quinn said.
I don’t get how this is supposed to help. The reason it’s “difficult to tackle” is that people can’t agree on what should and shouldn’t be offensive. An AI program is still designed by people, so it’s still a person determining what counts as offensive.
Source: BusinessInsider