After leaving OpenAI under a dark cloud, founding member and former chief scientist Ilya Sutskever is starting his own firm to bring about "safe" artificial superintelligence. Questions abound. Did Sutskever mean a "crack team"? Or is his new team "cracked" in some way? Regardless, in an interview he offered a definition of his goal: "At the most basic level, safe superintelligence should have the property that it will not harm humanity at a large scale," he told the outlet.
Though not stated explicitly, that comment harks back somewhat to the headline-grabbing Altman sacking that Sutskever led last fall. It remains unclear exactly why Sutskever and some of his fellow former OpenAI board members turned against Altman last November. But with the emphasis on "safety" in Sutskever's new venture making its way into the project's very name, it's easy to see a link between the two.
"Out of all the problems we face," Gross told the outlet, "raising capital is not going to be one of them.", its founders' resumes lend it a certain cachet — and its route to incorporation has been, it seems, paved with some lofty intentions.