Former OpenAI leader says safety has 'taken a backseat to shiny products' at the AI company


Jan Leike, who resigned earlier this week, said building 'smarter-than-human machines is an inherently dangerous endeavor.'

A former OpenAI leader who resigned from the company earlier this week said Friday that safety has "taken a backseat to shiny products" at the influential artificial intelligence company.

"However, I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point," wrote Leike, whose last day was Thursday. "OpenAI must become a safety-first AGI company," wrote Leike, using the abbreviated version of artificial general intelligence, a futuristic vision of machines that are as broadly smart as humans or at least can do many things as well as people can.

The company also confirmed Friday that it had disbanded Leike's Superalignment team, which was launched last year to focus on AI risks, and is integrating the team's members across its research efforts.

 
