One company is using artificial intelligence to experiment with digital mental health care, shedding light on ethical gray areas around the use of the technology.
ChatGPT is a variant of GPT-3, which creates human-like text based on prompts; both were created by OpenAI. Critics said the experiment may have run afoul of a federal policy that mandates human subjects provide informed consent before involvement in research. "Wow I would not admit this publicly," tweeted Christian Hesketh, who describes himself on Twitter as a clinical scientist. "The participants should have given informed consent and this should have passed through an IRB [institutional review board]."
The company's cofounder countered: "This imposed no further risk to users, no deception, and we don't collect any personally identifiable information or personal health information."

ChatGPT and the mental health gray area

He added that people with mental illness "require special sensitivity in any experiment," including "close review by a research ethics committee or institutional review board prior to, during, and after the intervention."
enfynyty: I played with it some. It is nowhere near full-scale deployment. It couldn't even correctly write a 5th-grade book report; it kept getting the chapter names and content all messed up. The big-picture intro and conclusion were spot on. 💭 ChatGPT is NOT smarter than a 5th grader.
Anything is an improvement. The main issue with healthcare is the burnout rate (about 3 months for most nurses and CNAs). AI likely has a better bedside manner than any medical professional.