News from NYU Langone Health
*Subscription required. Please see full text at end of report.
STAT News – August 25
-Cristina M. Gonzalez, MD, professor, Department of Medicine, Division of Hospital Medicine, and Department of Population Health
-Yindalon Aphinyanaphongs, MD, PhD, assistant professor, Departments of Population Health, and Medicine, Division of Hospital Medicine
-Paawan V. Punjabi, MD, clinical assistant professor, Department of Medicine, Division of Hospital Medicine
PIX 11 – August 24
-NYU Langone Health
Associated Press – August 24
-George D. Thurston, PhD, professor, Department of Medicine, Division of Environmental Medicine, and Department of Population Health
This article was picked up by news websites across the country.
CNN – August 24
-Nieca Goldberg, MD, clinical associate professor, Department of Medicine, the Leon H. Charney Division of Cardiology
Healio Allergy/Asthma – August 24
-George D. Thurston, PhD, professor, Department of Medicine, Division of Environmental Medicine, and Department of Population Health
(Free log-in required)
Medscape – August 24
-John A. Carucci, MD, PhD, professor, the Ronald O. Perelman Department of Dermatology
HCP Live – August 24
-John A. Carucci, MD, PhD, professor, the Ronald O. Perelman Department of Dermatology
HCP Live – August 24
-Binita Shah, MD, associate professor, Department of Medicine, the Leon H. Charney Division of Cardiology
Medical News Today – August 24
-NYU Langone Health
SheFinds – August 25
-Doris Day, MD, clinical associate professor, the Ronald O. Perelman Department of Dermatology
News from NYU Langone Hospital–Long Island
The Garden City News – August 24
-Division of Trauma and Acute Care Surgery, NYU Langone Hospital–Long Island
*STAT News, August 25, 2023 - NYU launched private ChatGPT for its health data, and set its staff loose to experiment - A fourth-year medical student, a music therapist, a child psychiatrist, and a physician-researcher stared at their laptops, puzzling over the combination of words that would make a supposedly intelligent system – NYU Langone's customized version of ChatGPT – think about health care problems in a way that was useful to them.
As part of a "prompt-a-thon" in August at the medical center's science building, the group had been charged with analyzing a patient record around the theme of equity using NYU's HIPAA-compliant implementation of the buzzy OpenAI technology that can interpret language and generate text based on queries.
After a morning of mini-lectures, participants broke off into assigned groups and dove into NYU Langone's newly launched prompting interface. Representatives from Microsoft, which makes the artificial intelligence tool accessible through its cloud services, were on hand to ensure everything ran smoothly as about 70 workshop participants from across the academic medical center put prompts into the system around themes including research, clinical applications, and patient education.
The equity group's initial experiments with NYU's GPT turned out to be illuminating because of the roadblocks they hit. NYU's GPT couldn't identify any instances of bias in the text of the patient record, which Cristina Gonzalez, who studies implicit bias in medicine, confirmed after reading it herself. Later the group turned to a research article assessing disparities in outcomes during the Covid-19 pandemic in New York City. While NYU's GPT was able to identify some issues of bias raised by the article, it faltered when asked to examine the paper's methodology, noting that no sampling method had been specified. Gonzalez explained that was because no sampling method had been employed – all comers were included in the research.
Gonzalez, a physician who also sees patients at an NYU Langone center in Brooklyn, said those first experiments highlighted for her "the importance of having the right use cases" and also of having "a human in the loop" who can identify when the GPT system isn't delivering useful information. Gonzalez applied to the prompt-a-thon because she was familiar with an NYU medical school tool that surfaces educational resources relevant to cases trainees are seeing in practice and wondered if something similar might not be done for resources around social determinants of health and other health equity topics.
The prompt-a-thon is the latest in a series of steps that the NYU Langone informatics department's leadership has taken to harness ideas like Gonzalez's by cultivating use of generative AI technology among staff. The hope is that in the long run, these efforts might lead to tools that create improvements in efficiency, care, and more. While the use of generative AI in health care is still new, NYU Langone's move to quickly embrace broad experimentation with ChatGPT could serve as a model for other institutions that hope to explore the technology.
In recent months, OpenAI-powered generative AI tools for answering messages from patients and creating clinical documentation have found their way into health systems like Stanford and UC San Diego through vendors like Epic, Nuance, and Microsoft. Health systems like Massachusetts General Hospital and Cleveland Clinic have revealed homegrown experiments with GPTs. But NYU is notable for seemingly opening the internal floodgates to the technology by inviting health care use cases from all its employees.
"The ideas aren't going to come from me, they're going to come from everyday folks who are thinking about their own problems, who are doing things for themselves," said Yindalon Aphinyanaphongs, an assistant professor who leads the predictive analytics unit in NYU Langone's department of informatics. "And one advantage to GPT is that it's incredibly democratizing with a low barrier to entry."
Generative AI started gaining popular traction last year after OpenAI released the simple ChatGPT interface that allowed just about anyone to interact with its technology, and Aphinyanaphongs said that by this spring, it became clear that people across the organization were using ChatGPT.
The first thing the health system did was issue a policy barring the use of ChatGPT with private health data and confidential business information. The public version of ChatGPT is not compliant with the health privacy law HIPAA, and data that's submitted is stored by OpenAI. But rather than block access to the website domain altogether and discourage its use, Aphinyanaphongs said that they saw an opportunity.
"We think this is transformative, so let's start to figure out how to engage the community," he said.
In March, when Microsoft announced that managed versions of ChatGPT that could be private and HIPAA compliant would be available through its cloud platform Azure, Aphinyanaphongs put in an application, and was quickly approved for use with the most advanced OpenAI GPT models. Aphinyanaphongs' predictive analytics unit put out a call offering the technology to people in the NYU Langone community who were interested. Within weeks, they had over a hundred applications and Aphinyanaphongs said that his group "reflexively" gave people four weeks of access to the platform to experiment.
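For readers curious what a "managed" deployment means in practice, the sketch below shows the general shape of a chat request to an Azure OpenAI endpoint, which lives inside an institution's own Azure resource rather than behind OpenAI's public API. The resource name, deployment name, and prompt are illustrative placeholders, not NYU Langone's actual configuration.

```python
import json

# Placeholders for illustration only; a real institution would use its own
# private Azure resource and deployment names.
RESOURCE = "example-health-system"   # hypothetical Azure OpenAI resource
DEPLOYMENT = "gpt-4-private"         # hypothetical model deployment name
API_VERSION = "2024-02-01"

def chat_completions_url(resource: str, deployment: str, api_version: str) -> str:
    """Azure routes requests to a named deployment the institution controls,
    not to a public model endpoint."""
    return (f"https://{resource}.openai.azure.com/openai/deployments/"
            f"{deployment}/chat/completions?api-version={api_version}")

def build_payload(user_prompt: str) -> str:
    """Assemble a chat-completions request body as JSON."""
    return json.dumps({
        "messages": [
            {"role": "system",
             "content": "You are an assistant reviewing clinical text."},
            {"role": "user", "content": user_prompt},
        ],
        "temperature": 0,  # keep review-style output as consistent as possible
    })

url = chat_completions_url(RESOURCE, DEPLOYMENT, API_VERSION)
payload = build_payload("Identify any language suggesting bias in this note.")
```

Requests are authenticated with an institution-held API key and stay within the organization's Azure tenancy, which is the basis for offering the service in a HIPAA-compliant arrangement.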
Some of those applications were identified as worthy of mentorship, including projects that aim to convert complex radiology reports into simple language that patients can understand, or to flag instances in which certain kinds of drugs are noted in a patient's care plan but not in their active medication list.
Though there are concerns about runaway use of generative AI in health care before it's been adequately tested in clinical contexts, Aphinyanaphongs said that the application process, in which people must spell out their use case, serves as a guardrail – as does the applicant's employment status.
"We haven't read any that have been like, OK, this seems really questionable," he said.
Moreover, the organization is being more selective about who can have access that would allow someone to use the technology for more than individual prompting sessions.
As it doled out access to staff, Aphinyanaphongs' team realized that despite the simplicity of entering a few queries, it was taking people a long time to get going with GPT chat. The prompt-a-thon grew out of an effort that includes office hours and road shows for different constituencies across the organization aimed at broadening the pool of people who are equipped to experiment with the technology.
Liza Wu, another member of the group tackling equity, was aware of the excitement around ChatGPT but had barely any experience with the technology ahead of the prompt-a-thon. A music therapist who works with adults undergoing inpatient rehabilitation for stroke, traumatic brain injury, and other conditions, she applied because she wondered if it could help her come up with music that was relevant to patients from cultures she wasn't familiar with.
Wu said one of her takeaways was understanding that it might not be appropriate to use ChatGPT if she wasn't going to be able to evaluate whether or not the technology's output was reliable. And that one should always ask whether using GPT would really be helpful for a task by making things more efficient or better.
She said the event made her wonder if there might not be ways to use prompting to help people whose motor skills have been impacted to make music by simply speaking descriptions of what they want it to sound like.
"I need to talk to the NYU IT people and say, hey, help me make this happen!" she said.
Not everyone at the event was a novice. Arys Nogueron, a director of fundraising analytics at NYU Langone who works with machine learning models, was in a group that had quickly used NYU's GPT system to extract abnormal lab results from a note for a patient who had been admitted to the emergency department, and asked it to order them by how much the numbers varied from normal. They were able to instruct the system to write a patient note from the lab values and then later to analyze the documented treatment plan to see if anything had been missed.
Of course, Nogueron and his partner weren't doctors, so they called over Paawan Punjabi, a physician and prompt-a-thon mentor, to have a look at the output. He noted that NYU's GPT gave what seemed to be appropriate treatment considerations, if not exactly spelling out potential errors or contraindications.
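For context, the first step of that exercise, ranking out-of-range results by how far they stray from the reference interval, is a small enough task to sketch deterministically. The lab values and reference ranges below are illustrative examples, not data from the workshop.

```python
def rank_abnormal_labs(labs, reference_ranges):
    """Return labs outside their reference range, sorted by relative
    deviation from the nearest bound (largest deviation first)."""
    flagged = []
    for name, value in labs.items():
        low, high = reference_ranges[name]
        if value < low:
            deviation = (low - value) / low
        elif value > high:
            deviation = (value - high) / high
        else:
            continue  # within range; not flagged
        flagged.append((name, value, round(deviation, 2)))
    return sorted(flagged, key=lambda item: item[2], reverse=True)

# Illustrative values and reference ranges only.
labs = {"potassium": 6.2, "sodium": 139, "creatinine": 2.4}
ranges = {"potassium": (3.5, 5.0), "sodium": (135, 145), "creatinine": (0.6, 1.2)}
result = rank_abnormal_labs(labs, ranges)
```

In the workshop, of course, the ordering was delegated to the model through a prompt; a sketch like this only makes the underlying task concrete.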
After teams wrapped up the hands-on portion of the afternoon and awards had been given out, Aphinyanaphongs left attendees with a call to action.
"Once you identify something that you might think would be viable or something you can work on, we're here to help move your ideas into potentially products, into research, into other kinds of things," he said, adding: "We think one day, every piece of text that's generated for or by NYU will go through a language model of some sort doing something to support the work of delivering clinical care and the academic work that we do."
Speaking after the event, Gonzalez said that she'd gotten good results asking NYU's GPT to identify redundancies in a grant application she was working on with a looming deadline. It identified her use of the term implicit bias, the subject of her work, as a repeated term – something she would have to live with. But the system also suggested places where she might vary her language or where she inadvertently explained the same concept in multiple places.
"It was really nice to have a fresh set of eyes in a non-human form," she said.