Former OpenAI Safety Employee Warns Company’s Approach to AI Is ‘Building the Titanic’

  • 📰 BreitbartNews



A former OpenAI safety employee has raised concerns about the company’s approach to artificial general intelligence, likening it to the ill-fated Titanic’s prioritization of speed over safety. William Saunders, who worked for three years as a member of OpenAI’s “superalignment” safety team, has voiced his apprehensions about the company’s trajectory in the field of artificial intelligence.

The former safety employee’s concerns stem from what he perceives as a shift in priorities within OpenAI. Saunders explained that during his tenure, he often questioned whether the company’s approach was more akin to the meticulous and risk-aware Apollo space program or the ill-fated Titanic. As time progressed, he felt that OpenAI’s leadership was increasingly making decisions that prioritized “getting out newer, shinier products” over careful risk assessment and mitigation.

The analogy extends beyond mere comparison. Saunders fears that OpenAI might be overly reliant on its current measures and research for AI safety, much like the Titanic’s creators, who believed their ship to be unsinkable. “Lots of work went into making the ship safe and building watertight compartments so that they could say that it was unsinkable,” he said. “But at the same time, there weren’t enough lifeboats for everyone. So when disaster struck, a lot of people died.”

Saunders’ concerns are not isolated. In early June, a group of former and current employees from Google’s DeepMind and OpenAI, including Saunders himself, signed an open letter. The letter calls for a “right to warn about artificial intelligence” and asks for a commitment to four principles around transparency and accountability.

 


Similar News: You can also read similar news articles that we have collected from other sources.

Safety concerns spark changes at the top of OpenAI, one of America’s leading AI companies: Growing fears about the dangers of artificial intelligence have sparked a leadership shake-up at one of America’s leading AI companies.
Source: WashTimes. Read more »