As artificial-intelligence tools show promise in transforming healthcare, technology companies, healthcare providers, drugmakers and other players are also focused on their potentially damaging side effects.
“ChatGPT is pretty exciting, but it writes things definitively, and as human beings we’re used to taking definitive statements very seriously,” Cris Ross, chief information officer at the Mayo Clinic, said during an AI-focused panel at the conference, which was organized by Politico. “A lot of the responses from these tools still require human introspection, inspection, evaluation and so on,” he said.
AI advancements have raised a host of questions about ethics and equity in healthcare. Large language models such as ChatGPT, for example, are trained on books, articles and other sources that may lack diversity and representation in their authorship and subject matter, and may therefore exacerbate existing societal biases, researchers from Indiana University and the University of Houston wrote in a recent report in the journal Health Affairs.
The accuracy of information coming from generative AI is also critical in healthcare, executives said. While generative AI is incredible, “there are also a lot of limitations,” said Palantir’s Jain. “Doing computation and generating something that is factually true is really hard when the generative component is there.”
Source: VanityFair