Google employees call company’s ChatGPT rival a “pathological liar” and “worse than useless”

Source: mybroadband


Internal messages from Google employees reveal that the company’s ChatGPT rival, Google Bard, is being rushed out and is delivering poor-quality information.

Shortly before Google introduced Bard, its AI chatbot, to the public in March, it asked employees to test the tool.

The Alphabet Inc.-owned company had pledged in 2021 to double its team studying the ethics of artificial intelligence and to pour more resources into assessing the technology’s potential harms. Google is aiming to revitalize its maturing search business around the cutting-edge technology, which could put generative AI into millions of phones and homes around the world — ideally before OpenAI, with the backing of Microsoft, beats the company to it.

The company was wary of the technology’s power and the ethical considerations that would go hand in hand with embedding it into search and other marquee products, the employees said. But Google may not have time to wait for perfection in other areas, Jen Gennai, who leads Google’s responsible innovation team, advised in the meeting. “‘Fairness’ may not be, we have to get to 99 percent,” Gennai said, referring to the company’s term for reducing bias in its products. “On ‘fairness,’ we might be at 80, 85 percent, or something” would be enough for a product launch, she added.

That group then “determined it was appropriate to move forward for a limited experimental launch with continuing pre-training, enhanced guardrails, and appropriate disclaimers,” she said. Chatbots like Bard are built on large language models trained on vast troves of publicly available text, whose patterns they learn to imitate. That means that by their very nature, the products risk regurgitating offensive, harmful or inaccurate speech. The challenge of developing cutting-edge artificial intelligence in an ethical manner has long spurred internal debate at Google. The company has faced high-profile blunders over the past few years, including an embarrassing incident in 2015 when its Photos service mistakenly labeled images of a Black software developer and his friend as “gorillas.”

Samy Bengio, a computer scientist who oversaw the work of Timnit Gebru and Margaret Mitchell, the former co-leads of Google’s ethical AI team, and several other researchers ended up leaving for competitors in the intervening years. AI research touching on delicate areas such as biometrics, identity features, or children is given a mandatory “sensitive topics” review by Gennai’s team, but other projects do not necessarily receive ethics reviews, though some employees consult the ethical AI team even when they are not required to.

“Can they actually be the leaders and challenge OpenAI at their own game?” Recent developments, such as reports that Samsung is considering replacing Google with Microsoft’s Bing, whose technology is powered by ChatGPT, as the default search engine on its devices, have underscored the first-mover advantage in the market right now.

 

