SEOUL — Sixteen companies at the forefront of developing artificial intelligence pledged on Tuesday at a global meeting to develop the technology safely, at a time when regulators are scrambling to keep up with rapid innovation and emerging risks.

The companies included US leaders Google, Meta, Microsoft and OpenAI, as well as firms from China, South Korea and the United Arab Emirates.
They committed to publishing safety frameworks for measuring risks, avoiding models whose risks could not be sufficiently mitigated, and ensuring governance and transparency.

"It's vital to get international agreement on the 'red lines' where AI development would become unacceptably dangerous to public safety," said Beth Barnes, founder of METR, a group promoting AI model safety, in response to the declaration.