Why Tech Companies Keep Making Racist Mistakes With AI

  • 📰 Gizmodo


I Created a Biased AI Algorithm 25 Years Ago—Tech Companies Are Still Making the Same Mistake.

Earlier research had shown that skin-colored areas of an image could be extracted in real time, so we decided to focus on skin color as an additional cue for the tracker. I used a digital camera – still a rarity at that time – to take a few shots of my own hand and face, and I also snapped the hands and faces of two or three other people who happened to be in the building. It was easy to manually extract some of the skin-colored pixels from these images and construct a statistical model of the skin colors.
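The article doesn't give the details of that statistical model, but a common approach of the era was to fit a single Gaussian to the sampled skin pixels in a color space and classify new pixels by distance to it. A minimal sketch, assuming made-up sample values in normalized (r, g) chromaticity and an arbitrary distance threshold:

```python
import numpy as np

# Hypothetical skin-pixel samples in normalized (r, g) chromaticity space.
# These values are illustrative, not from the article.
skin_samples = np.array([
    [0.45, 0.31], [0.47, 0.30], [0.44, 0.32],
    [0.46, 0.29], [0.43, 0.31], [0.48, 0.30],
])

# Fit a Gaussian model: sample mean and covariance of the skin pixels.
mean = skin_samples.mean(axis=0)
cov = np.cov(skin_samples, rowvar=False)
cov_inv = np.linalg.inv(cov)

def is_skin(pixel, threshold=9.0):
    """Classify a pixel as skin if its squared Mahalanobis distance
    to the skin-color mean falls below the threshold."""
    d = np.asarray(pixel) - mean
    return float(d @ cov_inv @ d) < threshold

print(is_skin([0.45, 0.31]))  # near the sampled mean -> True
print(is_skin([0.10, 0.80]))  # far from the sampled tones -> False
```

The bias the author describes falls directly out of this construction: the Gaussian is centered wherever the training pixels happen to lie, so skin tones far from the sampled people are rejected just like background.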

In the age of AI, that knapsack needs some new items, such as "AI systems won't give poor results because of my race." The invisible knapsack of a white scientist would also need: "I can develop an AI system based on my own appearance, and know it will work well for most of my users." One suggested remedy for white privilege is to be actively anti-racist. For the 1998 head-tracking system, it might seem obvious that the anti-racist remedy is to treat all skin colors equally.

Scientists also face a nasty subconscious dilemma when incorporating diversity into machine learning models: diverse, inclusive models perform worse than narrow models. A simple analogy can explain this. Imagine you are given a choice between two tasks. Task A is to identify one particular type of tree – say, elm trees. Task B is to identify five types of trees: elm, ash, locust, beech and walnut.

In the same way, an algorithm that tracks only white skin will be more accurate than an algorithm that tracks the full range of human skin colors. Even if they are aware of the need for diversity and fairness, scientists can be subconsciously affected by this competing need for accuracy. My creation of a biased algorithm was thoughtless and potentially offensive. Even more concerning, this incident demonstrates how bias can remain concealed deep within an AI system.
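The tree analogy can be made concrete with a toy model of my own invention (not from the article): a fixed-form classifier that accepts any measurement inside one interval. Covering five species instead of one forces the interval wider, so more non-tree distractors slip in as false positives:

```python
# Toy illustration: hypothetical leaf-length ranges (in cm).
elm_range = (4.0, 6.0)            # narrow model: elm leaves only
five_tree_range = (2.0, 12.0)     # broad model: elm, ash, locust, beech, walnut

# Hypothetical non-leaf distractor measurements.
distractors = [1.0, 3.0, 5.5, 7.0, 9.0, 11.0, 13.0]

def false_positives(interval, samples):
    """Count distractors the classifier wrongly accepts."""
    lo, hi = interval
    return sum(lo <= s <= hi for s in samples)

print(false_positives(elm_range, distractors))        # 1 (only 5.5)
print(false_positives(five_tree_range, distractors))  # 5 (3.0 through 11.0)
```

With the same fixed capacity, the inclusive model pays an accuracy cost, which is exactly the subconscious pressure the article describes: the numbers reward narrowness.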

The good news is that a great deal of progress on AI fairness has already been made, both in academia and in industry. Microsoft, for example, has a research group known as FATE (Fairness, Accountability, Transparency, and Ethics in AI).
