Former OpenAI Chief Scientist Announces New Safety-Focused Company

  • 📰 TIME


OpenAI co-founder and former chief scientist Ilya Sutskever announced on Wednesday that he’s launching a new venture dubbed Safe Superintelligence Inc. Sutskever said on X that the new lab will focus solely on building a safe “superintelligence”—an industry term for a hypothetical system that’s smarter than humans.

Sutskever is joined at Safe Superintelligence Inc. by co-founders Daniel Gross, an investor and engineer who worked on AI at Apple until 2017, and Daniel Levy, another former OpenAI employee. Sutskever was one of OpenAI’s founding members and served as chief scientist during the company’s meteoric rise following the release of ChatGPT. In November, Sutskever took part in the board’s short-lived ouster of CEO Sam Altman. Safe Superintelligence Inc. says it will only aim to release one product: the system in its name.

“Our singular focus means no distraction by management overhead or product cycles,” the announcement reads, perhaps subtly taking aim at OpenAI. In May, another senior OpenAI member, Jan Leike, who co-led a safety team with Sutskever, resigned while publicly criticizing the company’s approach to safety. OpenAI responded to Leike’s accusations by acknowledging there was more work to be done, saying “we take our role here very seriously and carefully weigh feedback on our actions.” Sutskever has since elaborated on Safe Superintelligence Inc.’s plans.
