FILE - The OpenAI logo is seen displayed on a cell phone with an image on a computer monitor generated by ChatGPT's Dall-E text-to-image model, Dec. 8, 2023, in Boston. A group of OpenAI's current and former workers is calling on the ChatGPT-maker and other artificial intelligence companies to protect whistleblowing employees who flag safety risks about AI technology.
Ziegler said in an interview Tuesday that he didn't fear speaking out internally during his time at OpenAI from 2018 to 2021, during which he helped develop some of the techniques that would later make ChatGPT so successful. But he now worries that the race to rapidly commercialize the technology is putting pressure on OpenAI and its competitors to disregard the risks.
The letter has 13 signatories, most of them former OpenAI employees and two of them current or former employees of Google's DeepMind. Four are listed as anonymous current employees of OpenAI. The letter asks that companies stop making workers sign "non-disparagement" agreements, which can punish departing employees who criticize the company by stripping them of a key financial perk: their vested equity.
The letter comes as OpenAI has said it is beginning to develop the next generation of the AI technology behind ChatGPT. It formed a new safety committee just after losing a set of leaders, including co-founder Ilya Sutskever, who were part of a team focused on safely developing the most powerful AI systems.