SEOUL — Sixteen companies at the forefront of developing artificial intelligence pledged on Tuesday at a global meeting to develop the technology safely, at a time when regulators are scrambling to keep up with rapid innovation and emerging risks. The companies included US leaders Google, Meta, Microsoft and OpenAI, as well as firms from China, South Korea and the United Arab Emirates.
They committed to publishing safety frameworks for measuring risks, to avoiding models whose risks could not be sufficiently mitigated, and to ensuring governance and transparency. "It's vital to get international agreement on the 'red lines' where AI development would become unacceptably dangerous to public safety," said Beth Barnes, founder of METR, a group promoting AI model safety, in response to the declaration.