Researchers from Stanford University have published a report, the Foundation Model Transparency Index, ranking how transparent artificial intelligence models are, and found them wanting. Among the models that Stanford HAI looked at are Stability AI's image generator Stable Diffusion, Meta's Llama 2, and OpenAI's ChatGPT.
Meta's Llama 2, a generative text model, scored highest with 54 out of 100. However, the researchers note that even this score falls well short of "adequate transparency," which, they say, reveals a "fundamental lack of transparency in the AI industry."
Stable Diffusion, meanwhile, received 14% in the "Impact" category, which looks at the effect a model has on its users and the policies that govern its use. The researchers found that none of the models' creators disclose any information about the technology's impact on society, including where users can complain about privacy, copyright, or bias issues.
Rishi Bommasani, society lead at the Stanford Center for Research on Foundation Models and one of the researchers behind the index, wants the index to provide a benchmark for governments and companies. "What we're trying to achieve with the index is to make models more transparent and disaggregate that very amorphous concept into more concrete matters that can be measured," Bommasani tells Axios. Regulation that could force AI companies to be more open about how their models are built is still in the works.
Source: Axios