Former OpenAI Chief Scientist Announces New Safety-Focused Company

  • 📰 TIME


OpenAI co-founder and former chief scientist Ilya Sutskever announced on Wednesday that he is launching a new venture dubbed Safe Superintelligence Inc. Sutskever said on X that the new lab will focus solely on building a safe “superintelligence,” an industry term for a hypothetical system that is smarter than humans.

Sutskever is joined at Safe Superintelligence Inc. by co-founders Daniel Gross, an investor and engineer who worked on AI at Apple until 2017, and Daniel Levy, another former OpenAI employee. Sutskever was one of OpenAI’s founding members and served as chief scientist during the company’s meteoric rise following the release of ChatGPT. In November, Sutskever took part in the board’s short-lived ouster of CEO Sam Altman. Safe Superintelligence Inc. says it will only aim to release one product: the system in its name.

“Our singular focus means no distraction by management overhead or product cycles,” the announcement reads, perhaps subtly taking aim at OpenAI. In May, another senior OpenAI member, Jan Leike, who co-led a safety team with Sutskever, resigned, saying that safety work had taken a backseat to product development at the company. OpenAI responded to Leike’s accusations by acknowledging there was more work to be done, saying “we take our role here very seriously and carefully weigh feedback on our actions.” In a subsequent interview, Sutskever elaborated on Safe Superintelligence Inc.

 
