Former OpenAI Chief Scientist Announces New Safety-Focused Company

  • 📰 TIME

OpenAI co-founder and former chief scientist Ilya Sutskever announced on Wednesday that he’s launching a new venture dubbed Safe Superintelligence Inc. Sutskever said on X that the new lab will focus solely on building a safe “superintelligence,” an industry term for a hypothetical system that’s smarter than humans.

Sutskever is joined at Safe Superintelligence Inc. by co-founders Daniel Gross, an investor and engineer who worked on AI at Apple until 2017, and Daniel Levy, another former OpenAI employee. Sutskever was one of OpenAI’s founding members and served as chief scientist during the company’s meteoric rise following the release of ChatGPT. In November, Sutskever took part in the OpenAI board’s brief ouster of CEO Sam Altman, a move he later said he regretted. Safe Superintelligence Inc. says it will only aim to release one product: the system in its name.

“Our singular focus means no distraction by management overhead or product cycles,” the announcement reads, perhaps subtly taking aim at OpenAI. In May, another senior OpenAI member, Jan Leike, who co-led a safety team with Sutskever, resigned after criticizing the company’s approach to safety. OpenAI responded to Leike’s accusations by acknowledging there was more work to be done, saying “we take our role here very seriously and carefully weigh feedback on our actions.” Sutskever has since elaborated further on Safe Superintelligence Inc.
