Former OpenAI Chief Scientist Announces New Safety-Focused Company

  • 📰 TIME


OpenAI co-founder and former chief scientist Ilya Sutskever announced on Wednesday that he's launching a new venture dubbed Safe Superintelligence Inc. Sutskever said on X that the new lab will focus solely on building a safe "superintelligence," an industry term for a hypothetical system that's smarter than humans.

Sutskever is joined at Safe Superintelligence Inc. by co-founders Daniel Gross, an investor and engineer who worked on AI at Apple until 2017, and Daniel Levy, another former OpenAI employee. Sutskever was one of OpenAI's founding members and served as chief scientist during the company's meteoric rise following the release of ChatGPT. In November, Sutskever took part in the board effort that briefly ousted CEO Sam Altman. Safe Superintelligence Inc. says it will aim to release only one product: the system in its name.

"Our singular focus means no distraction by management overhead or product cycles," the announcement reads, perhaps subtly taking aim at OpenAI. In May, another senior OpenAI member, Jan Leike, who co-led a safety team with Sutskever, resigned, publicly criticizing the company's safety practices. OpenAI responded to Leike's accusations by acknowledging there was more work to be done, saying "we take our role here very seriously and carefully weigh feedback on our actions."

 


