As artificial-intelligence tools show promise in transforming healthcare, technology companies, healthcare providers, drugmakers and other players are also focused on their potentially damaging side effects.
“ChatGPT is pretty exciting, but it writes things definitively, and as human beings we’re used to taking definitive statements very seriously,” Cris Ross, chief information officer at the Mayo Clinic, said during an AI-focused panel at the conference, which was organized by Politico. “A lot of the responses from these tools still require human introspection, inspection, evaluation and so on,” he said.
AI advancements have raised a host of questions about ethics and equity in healthcare. Large language models such as ChatGPT, for example, are trained on data from books, articles and other sources that may lack diversity and representation in their authorship and subject matter, and could thereby exacerbate existing societal biases, researchers from Indiana University and the University of Houston wrote in a recent report in the journal Health Affairs.
The accuracy of information coming from generative AI is also critical in healthcare, executives said. While generative AI is incredible, “there are also a lot of limitations,” said Palantir’s Jain. “Doing computation and generating something that is factually true is really hard when the generative component is there.”