Researchers from Stanford University have published a report ranking how transparent artificial intelligence models are and found them wanting. Among the models that Stanford HAI evaluated in its Foundation Model Transparency Index are Stability AI's image generator Stable Diffusion, Meta's Llama 2, and OpenAI's ChatGPT.
Meta's Llama 2, a generative text model, scored highest at 54 out of 100. However, the researchers note that even this score does not come close to providing "adequate transparency," which, they say, reveals a "fundamental lack of transparency in the AI industry."
Stable Diffusion, for example, received 14% in the "Impact" category, which looks at the effect a model has on its users and the policies that govern its use. The researchers found that none of the models' creators disclose any information about the technology's impact on society, including where users can lodge complaints about privacy, copyright, or bias.
Rishi Bommasani, society lead at the Stanford Center for Research on Foundation Models and one of the researchers behind the index, wants it to serve as a benchmark for governments and companies. "What we're trying to achieve with the index is to make models more transparent and disaggregate that very amorphous concept into more concrete matters that can be measured," Bommasani says. Regulation of foundation models is still in the works and could force AI companies to be more open about how their models are built.