SEOUL — Sixteen companies at the forefront of developing artificial intelligence pledged on Tuesday at a global meeting to develop the technology safely, at a time when regulators are scrambling to keep up with rapid innovation and emerging risks.

The companies included US leaders Google, Meta, Microsoft and OpenAI, as well as firms from China, South Korea and the United Arab Emirates.
They committed to publishing safety frameworks for measuring risks, to avoiding models whose risks could not be sufficiently mitigated, and to ensuring governance and transparency.

"It's vital to get international agreement on the 'red lines' where AI development would become unacceptably dangerous to public safety," said Beth Barnes, founder of METR, a group promoting AI model safety, in response to the declaration.