A former OpenAI safety employee has raised concerns about the company’s approach to artificial general intelligence, likening it to the ill-fated Titanic’s prioritization of speed over safety. William Saunders, who worked for three years as a member of OpenAI’s “superalignment” safety team, has voiced his apprehensions about the company’s trajectory in the field of artificial intelligence.
The former safety employee’s concerns stem from what he perceives as a shift in priorities within OpenAI. Saunders explained that during his tenure, he often questioned whether the company’s approach was more akin to the meticulous and risk-aware Apollo space program or the ill-fated Titanic. As time progressed, he felt that OpenAI’s leadership was increasingly making decisions that prioritized “getting out newer, shinier products” over careful risk assessment and mitigation.
The analogy extends beyond mere comparison. Saunders fears that OpenAI might be overly reliant on its current measures and research for AI safety, much like the Titanic’s creators, who believed their ship to be unsinkable. “Lots of work went into making the ship safe and building watertight compartments so that they could say that it was unsinkable,” he said. “But at the same time, there weren’t enough lifeboats for everyone. So when disaster struck, a lot of people died.”
Saunders’ concerns are not isolated. In early June, a group of former and current employees from Google’s DeepMind and OpenAI, including Saunders himself, published an open letter. The letter calls for a “right to warn about artificial intelligence” and asks for a commitment to four principles around transparency and accountability.