OpenAI Safety Worker Quit Due to Losing Confidence Company 'Would Behave Responsibly Around the Time of AGI'

  • 📰 futurism

Daniel Kokotajlo said that he had lost confidence that the Sam Altman-led company would "behave responsibly around the time of AGI," the theoretical point at which an AI can outperform a human. In several followup posts on the forum LessWrong, Kokotajlo explained the "disillusionment" that led to him quitting, which was related to a growing call to put a pause on research that could eventually lead to the establishment of AGI, an AI that exceeds the cognitive capabilities of humans.

The Superalignment team, which Saunders was part of at OpenAI for three years, was cofounded by computer scientist and former OpenAI chief scientist Ilya Sutskever and his colleague Jan Leike. It's tasked with ensuring that "AI systems much smarter than humans follow human intent," according to OpenAI's website.

Instead of having a "solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue," the company is hoping that "scientific and technical breakthroughs" could lead to an equally superhuman alignment tool that can keep systems that are "much smarter than us" in check.

The debate surrounding the dangers of an unchecked superintelligent AI may have played a role in the firing and eventual rehiring of CEO Sam Altman last year. Sutskever, who used to sit on the original board of OpenAI's non-profit entity, was reportedly involved in that ouster. And despite predictions by experts that AGI is only a matter of years away, there's no guarantee that we'll ever reach a point at which an AI could outperform humans.
