Former OpenAI Safety Employee Warns Company’s Approach to AI Is ‘Building the Titanic’

  • 📰 BreitbartNews


A former OpenAI safety employee has raised concerns about the company’s approach to artificial general intelligence, likening it to the ill-fated Titanic’s prioritization of speed over safety. William Saunders, who worked for three years as a member of OpenAI’s “superalignment” safety team, has voiced his apprehensions about the company’s trajectory in the field of artificial intelligence.

The former safety employee’s concerns stem from what he perceives as a shift in priorities within OpenAI. Saunders explained that during his tenure, he often questioned whether the company’s approach was more akin to the meticulous and risk-aware Apollo space program or the ill-fated Titanic. As time progressed, he felt that OpenAI’s leadership was increasingly making decisions that prioritized “getting out newer, shinier products” over careful risk assessment and mitigation.

The analogy extends beyond mere comparison. Saunders fears that OpenAI might be overly reliant on its current measures and research for AI safety, much like the Titanic’s creators who believed their ship to be unsinkable. “Lots of work went into making the ship safe and building watertight compartments so that they could say that it was unsinkable,” he said. “But at the same time, there weren’t enough lifeboats for everyone. So when disaster struck, a lot of people died.”

Saunders’ concerns are not isolated. In early June, a group of former and current employees from Google’s DeepMind and OpenAI, including Saunders himself, signed an open letter. The letter calls for a “right to warn about artificial intelligence” and asks for a commitment to four principles around transparency and accountability.

