Former OpenAI Chief Scientist Announces New Safety-Focused Company

Source: TIME


OpenAI co-founder and former chief scientist Ilya Sutskever announced on Wednesday that he is launching a new venture dubbed Safe Superintelligence Inc. Sutskever said on X that the new lab will focus solely on building a safe "superintelligence," an industry term for a hypothetical system that is smarter than humans.

Sutskever is joined at Safe Superintelligence Inc. by co-founders Daniel Gross, an investor and engineer who worked on AI at Apple until 2017, and Daniel Levy, another former OpenAI employee.

Sutskever was one of OpenAI's founding members and served as chief scientist during the company's meteoric rise following the release of ChatGPT. In November, Sutskever took part in the board effort that briefly ousted CEO Sam Altman, a decision he later said he regretted. Safe Superintelligence Inc. says it will aim to release only one product: the system in its name.

"Our singular focus means no distraction by management overhead or product cycles," the announcement reads, perhaps subtly taking aim at OpenAI. In May, another senior OpenAI member, Jan Leike, who co-led a safety team with Sutskever, resigned from the company, saying that safety had taken a back seat to product development. OpenAI responded to Leike's accusations by acknowledging there was more work to be done, saying "we take our role here very seriously and carefully weigh feedback on our actions." In a subsequent interview, Sutskever elaborated on Safe Superintelligence Inc.
