OpenAI Safety Worker Quit Due to Losing Confidence Company 'Would Behave Responsibly Around the Time of AGI'

  • 📰 futurism


Kokotajlo said that he had lost confidence that the Sam Altman-led company will "behave responsibly around the time of AGI," the theoretical point at which an AI can outperform a human. In several followup posts on the forum LessWrong, Kokotajlo explained the "disillusionment" that led to him quitting, which was related to a growing call to pause research that could eventually lead to AGI, an AI that exceeds the cognitive capabilities of humans.

The Superalignment team, which Saunders was part of at OpenAI for three years, was cofounded by computer scientist and former OpenAI chief scientist Ilya Sutskever and his colleague Jan Leike. It's tasked with ensuring that "AI systems much smarter than humans follow human intent," according to OpenAI's website.

Instead of having a "solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue," the company is hoping that "scientific and technical breakthroughs" could lead to an equally superhuman alignment tool that can keep systems that are "much smarter than us" in check.

The debate surrounding the dangers of an unchecked superintelligent AI may have played a role in the firing and eventual rehiring of CEO Sam Altman last year. Sutskever used to sit on the original board of OpenAI's non-profit entity. Despite predictions by experts that AGI is only a matter of years away, there's no guarantee that we'll ever reach a point at which an AI could outperform humans.
