OpenAI Safety Worker Quit After Losing Confidence Company 'Would Behave Responsibly Around the Time of AGI'

📰 futurism


OpenAI safety worker Daniel Kokotajlo quit the company, writing that he had lost confidence that the Sam Altman-led company will "behave responsibly around the time of AGI," the theoretical point at which an AI can outperform a human. In several follow-up posts on the forum LessWrong, Kokotajlo explained the "disillusionment" that led to him quitting, which was related to a growing call to put a pause on research that could eventually lead to the establishment of AGI, an AI that exceeds the cognitive capabilities of humans.

The Superalignment team, which Saunders was part of at OpenAI for three years, was cofounded by computer scientist and former OpenAI chief scientist Ilya Sutskever and his colleague Jan Leike. It's tasked with ensuring that "AI systems much smarter than humans follow human intent," according to OpenAI's website.

Instead of having a "solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue," the company is hoping that "scientific and technical breakthroughs" could lead to an equally superhuman alignment tool that can keep systems that are "much smarter than us" in check.

The debate surrounding the dangers of an unchecked superintelligent AI may have played a role in the firing and eventual rehiring of CEO Sam Altman last year. Sutskever, who used to sit on the original board of OpenAI's non-profit entity, was among the board members behind Altman's ouster. And despite predictions by experts that AGI is only a matter of years away, there's no guarantee that we'll ever reach a point at which an AI could outperform humans.

 
