After leaving OpenAI under a dark cloud, founding member and former chief scientist Ilya Sutskever is starting his own firm to bring about "safe" artificial superintelligence.

The announcement's promise of a lean, "cracked" team raises questions. Did Sutskever mean a "crack team"? Or is his new team "cracked" in some way? Regardless, in an interview about the new venture, Sutskever offered at least a hint of what "safe" means to him: "At the most basic level, safe superintelligence should have the property that it will not harm humanity at a large scale," he said.
Though not stated explicitly, that comment harkens back somewhat to the headline-grabbing Altman sacking that Sutskever led last fall. It remains unclear exactly why Sutskever and some of his fellow former OpenAI board members turned against Altman in last November's boardroom shakeup, but with the emphasis on "safety" in Sutskever's new venture making its way into the project's very name, it's easy to see a link between the two.
"Out of all the problems we face," Gross told the outlet, "raising capital is not going to be one of them.", its founders' resumes lend it a certain cachet — and its route to incorporation has been, it seems, paved with some lofty intentions.