Former OpenAI Safety Employee Warns Company’s Approach to AI Is ‘Building the Titanic’

Source: Breitbart News

A former OpenAI safety employee has raised concerns about the company’s approach to artificial general intelligence, likening it to the ill-fated Titanic’s prioritization of speed over safety. William Saunders, who worked for three years as a member of OpenAI’s “superalignment” safety team, has voiced his apprehensions about the company’s trajectory in the field of artificial intelligence.

The former safety employee’s concerns stem from what he perceives as a shift in priorities within OpenAI. Saunders explained that during his tenure, he often questioned whether the company’s approach was more akin to the meticulous and risk-aware Apollo space program or the ill-fated Titanic. As time progressed, he felt that OpenAI’s leadership was increasingly making decisions that prioritized “getting out newer, shinier products” over careful risk assessment and mitigation.

The analogy extends beyond mere comparison. Saunders fears that OpenAI might be overly reliant on its current measures and research for AI safety, much like the Titanic’s creators, who believed their ship to be unsinkable. “Lots of work went into making the ship safe and building watertight compartments so that they could say that it was unsinkable,” he said. “But at the same time, there weren’t enough lifeboats for everyone. So when disaster struck, a lot of people died.”

Saunders’ concerns are not isolated. In early June, a group of former and current employees from Google’s DeepMind and OpenAI, including Saunders himself, signed an open letter. The letter calls for a “right to warn about artificial intelligence” and asks for a commitment to four principles around transparency and accountability.
