News from NYU Langone Health
STAT News September 5
-Michael P. Recht, MD, the Louis Marx Professor of Radiology, chair, Department of Radiology
*Subscription required. Please see full text at end of report.
USA Today September 5
-Nicole M. Ali, MD, clinical associate professor, Department of Medicine, Division of Nephrology
Daily Record September 4
-Lisa Ganjhu, DO, clinical associate professor, Department of Medicine, Division of Gastroenterology and Hepatology
Epicenter NYC September 1
-Nadia S. Islam, PhD, associate professor, Department of Population Health, Institute for Excellence in Health Equity
SciTech Daily September 2
-Joanne Bruno, MD, PhD, fellow, Department of Medicine, Division of Endocrinology, Diabetes, & Metabolism
-Jose O. Aleman, MD, PhD, assistant professor, Department of Medicine, Division of Endocrinology, Diabetes & Metabolism
Cancer Network September 4
-Joshua K. Sabari, MD, assistant professor, Department of Medicine, Division of Hematology and Medical Oncology, Perlmutter Cancer Center
Health Digest September 2
-Ashley S. Roman, MD, the Silverman Professor of Obstetrics and Gynecology, vice chair for clinical affairs-Obstetrics, Department of Obstetrics and Gynecology
Medscape September 5
-Julia Greenberg, MD, resident, Department of Neurology
Women's Health September 5
-Nicole Lund, MPH, RDN, clinical nutritionist, Department of Rehabilitation Medicine, Sports Performance Center
Martha Stewart September 1
-Nina Blachman, MD, associate professor, Department of Medicine, Division of Geriatric Medicine & Palliative Care
The Straits Times September 5
-Benjamin M. Brucker, MD, associate professor, Departments of Urology, and Obstetrics and Gynecology
FOX News September 3
-Michael B. Whitlow, MD, clinical associate professor, the Ronald O. Perelman Department of Dermatology
-Doris Day, MD, clinical associate professor, the Ronald O. Perelman Department of Dermatology
-Marc K. Siegel, MD, clinical professor, Department of Medicine, Division of General Internal Medicine
FOX News Rundown Podcast September 1
-Marc K. Siegel, MD, clinical professor, Department of Medicine, Division of General Internal Medicine
USA Today September 5
-Marc K. Siegel, MD, clinical professor, Department of Medicine, Division of General Internal Medicine
-Lenard A. Adler, MD, professor, Departments of Psychiatry, and Child and Adolescent Psychiatry
News from NYU Langone Hospital-Long Island
LongIsland.com September 4
-Erika Banks, MD, professor, Department of Obstetrics and Gynecology, NYU Long Island School of Medicine
-NYU Langone Hospital-Long Island
The Island 360 September 1
-Erika Banks, MD, professor, Department of Obstetrics and Gynecology, NYU Long Island School of Medicine
-NYU Langone Hospital-Long Island
News from NYU Langone Hospital-Brooklyn
Popsugar September 1
-Meleen Chuang, MD, clinical associate professor, Department of Obstetrics and Gynecology, Family Health Centers at NYU Langone
-Veleka Willis, MD, clinical assistant professor, Department of Obstetrics and Gynecology
*STAT News, September 5, 2023 - Moving beyond ChatGPT: How generative AI is inspiring dreams of a health data revolution - The world's largest technology companies are racing to build generative AI into every corner of health and medicine.
Microsoft has formed an alliance with the electronic health records vendor Epic to wire the technology into dozens of health software products. Google is infusing it into tools used by hospitals to collect and organize data on millions of patients. Not to be outdone, Amazon has unveiled a service to help build clinical note scribes, and is separately working to embed generative AI in drug research and development.
All of this has unfolded faster than federal regulators could blink or answer questions about how the technology should be tested and evaluated, whether it will help or hurt patients, and how it will impact privacy and the use of personal data.
To keep tabs on its use, STAT built a tracker designed to catalog the emerging applications and experiments, and trace the alliances quickly forming between the builders of generative AI models and health businesses eager to save time and money and score marketing points from their use.
"There is a lot of froth and hype out there," said Brian Anderson, chief digital health physician at Mitre Corp. and co-founder of the Coalition for Health AI, an industry group developing standards for AI in medicine. "The concern many of us have is that, particularly with generative AI in a consequential space like health, it is inappropriate to use a tool that we don't have an agreed-upon set of standards or best practices for."
Trained on vast amounts of data, these AI systems use pattern recognition to produce human-like responses to questions posed in just about any kind of language, from written text, to imaging data, to computer code. Dozens of businesses are applying generative models built by Google or OpenAI, the developer of ChatGPT, to perform tasks in health care delivery, drug research, and medical billing. Most of the work involves bureaucratic jobs that human workers would like to offload, such as answering patient emails or filling out medical records. But some users are beginning to apply the technology to core medical tasks, such as early disease detection, diagnosis, and treatment.
Efforts to apply it are constantly invoked in press releases and advertisements by health businesses wanting to appear on the cutting edge, even as its ability to improve care remains speculative, and its harms largely hidden. "There's this disconnect between the flashy headlines and claims that have yet to be verified, and presumably unflashy applications that can have immediate value to patients and people in the health care system," said Nigam Shah, chief data scientist for Stanford Health Care.
The work ahead will only get harder. Clinicians and researchers must expose the technology's biases and blind spots and develop strategies to fix them before the models are used to deliver information to patients, or help doctors make decisions about their care. Here's a closer look at the projects underway.
Generative AI's ability to answer questions provided the first inkling of its potential. But the next wave of uses is already unfolding, as hospitals test its ability to surface the most salient information in voluminous patient records.
HCA Healthcare, a for-profit hospital chain based in Nashville, is developing a tool using Google's PaLM model to write the messages nurses send one another about patients at shift change.
The goal is to relieve nurses of a time-consuming search through records to find details on medication changes, lab results, and other crucial information. It is a purely bureaucratic task, but also one that can lead to harm, or sloppy care, if done incorrectly.
The question facing the AI is not just whether it makes the nurses' jobs easier, but whether its use results in fewer errors and better patient care relative to the current manual process. Mike Schlosser, a physician and vice president of care transformation and innovation at HCA, said the system is testing the tool by comparing the automated and manually produced notes side by side.
"We're still learning," he said. "Right now we're using a lot of human-in-the-loop to ensure that it's providing the right output."
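The evaluation HCA describes, AI-drafted and nurse-written handoff notes reviewed side by side, can be sketched as a simple review queue. This is a minimal illustration, not HCA's system; the class, field names, and scoring scale are assumptions:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class HandoffReview:
    """One side-by-side comparison for a human reviewer."""
    patient_id: str
    manual_note: str   # note written by the outgoing nurse
    ai_note: str       # draft produced by the generative model
    reviewer_score: Optional[int] = None  # e.g. 1-5, filled in by a nurse


def pending_reviews(reviews: list) -> list:
    """Return comparisons that still need a human in the loop."""
    return [r for r in reviews if r.reviewer_score is None]


queue = [
    HandoffReview("p1", "Manual: med change at 14:00", "AI draft: ...", 4),
    HandoffReview("p2", "Manual: lab results pending", "AI draft: ..."),
]
```

The point of the structure is the one Schlosser makes: nothing reaches the chart until a human has scored the AI draft against the manual note.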
At Northwestern Medicine in Chicago, clinicians are working with Epic, through its partnership with Microsoft, to use generative AI to review and summarize records to help flag emerging medical issues and help clinicians be more responsive to treatment needs.
Doug King, chief information officer at Northwestern, said the technology offers an enormous opportunity to relieve clinicians of administrative tasks that soak up hours and reduce job satisfaction. But he also emphasized that its benefits still have to be proved.
"What people forget is that these things aren't free," King said. "They are not free to train, and you have to maintain them. Balancing the costs against the true benefit is a conversation that every health system is going to have to have."
Michael Recht, chief of radiology at NYU Langone Health, is leveraging ChatGPT to help patients understand their imaging results.
The idea came from a project he's been working on to give his patients short videos explaining the abnormalities or problems found in their images. The hardest part of that work, he said, is getting radiologists to translate jargon-heavy reports into plain language.
So far, ChatGPT appears highly adept at boiling it down. But Recht said its accuracy depends on the effectiveness of the prompt provided to the AI.
"I'd love to tell you I have this great logical way of doing it," Recht said. "But to be honest with you, it's been trial and error." He said he's spent hours tinkering with prompts at night as one of many clinicians at NYU using the technology to take on a wide array of tasks. In some situations, he just asks the AI directly: "Why are you getting this wrong? How can I make it easier for you to understand?"
"Sometimes that does help. It gives me some clues on how to simplify my prompt," Recht said. Radiologists at NYU are still monitoring its output to ensure any information delivered to patients is accurate.
"If we can't do that, then we won't use it," Recht said. "We think we can solve it, but we're still working on it, and that's why it's not in production at this point."
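The workflow Recht describes amounts to iterating on a prompt until the model reliably turns a jargon-heavy report into plain language. A minimal sketch of how such a prompt might be assembled for a chat-style model; the function name, instructions, and reading-level target are illustrative assumptions, not NYU's actual prompt:

```python
def build_plain_language_messages(report_text: str) -> list:
    """Assemble a chat prompt asking a model to rewrite a radiology
    report in plain language. The instructions are illustrative; the
    article notes the real prompt emerged from trial and error."""
    system = (
        "You are helping a radiologist explain results to a patient. "
        "Rewrite the report below in plain language at roughly an "
        "eighth-grade reading level. Do not add findings that are not "
        "in the report, and do not speculate about prognosis."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": report_text},
    ]


messages = build_plain_language_messages(
    "Impression: 4 mm noncalcified pulmonary nodule in the right upper lobe."
)
```

In production the messages would be sent to a chat-completion endpoint, and, as Recht stresses, a radiologist would still review the output before it reached a patient.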
In Boston, Mass General Brigham is already exploring a longer-term goal for generative AI: using it to help physicians diagnose patients and make the best treatment decisions.
A recent study by researchers at the health system showed ChatGPT, the popular model developed by OpenAI, was 77% accurate when diagnosing published clinical cases. However, the model was only 60% accurate in making differential diagnoses, or coming up with a list of plausible illnesses based on a patient's symptoms. It wasn't much better, at 68% accuracy, in making treatment decisions, reflecting the technology's difficulty dealing with uncertainty.
"If we're going to use this, we need to know where it's good and where it's not," said Marc Succi, a physician and associate chair of innovation and commercialization at Mass General Brigham. "It's got to be applied in very specific situations."
The health system is also pursuing lower-risk applications to draft responses to emails and automatically load appointment details into medical records. Succi said it could take many years of experimentation, and trust building, before the technology is used directly in patient care. The fastest way to do that, he said, is for model developers to be more open about how they're iterating their models to achieve better results.
"Companies should be sharing that data with hospitals if they want to get adoption," he said, adding that clinical users will likely need to fine-tune the AI systems before using them. "I do envision that, in 5, 10, 15 years, we'll be training sub-specialty chatbots as well as a generalist chatbot."
The largest category of generative AI tools publicly adopted by health systems is the clinical note scribe, which creates a transcript of a recorded doctor's visit and summarizes it into a note for the patient's medical record. While the tools have been in use for years, the release of GPT-4 in March has supercharged both the technology and demand for it.
"It became very clear pretty quickly that voice-to-text powered by GPT-4 especially, but also other models, was a very compelling use case right off the bat," said Byron Crowe, an internal medicine physician at Beth Israel Deaconess Medical Center and digital health researcher tracking generative AI. It performs well at creating visit summaries, and it's possible to manage the risk of mistakes by having a human review notes.
But some health systems are pushing the systems to do even more, piloting scribes that skip the review step and fully automate documentation. Northwestern Medicine is one of the few health systems that has acknowledged its planned testing of DAX Express, Nuance's GPT-4-powered automated scribe, in a bid to reduce some of the clinician burnout that can stem from documentation.
"You have caregivers leaving the industry. You have an aging population," said Northwestern's King. "Technology is one of the main levers, if not the only lever, that we can pull to try to take tasks off of clinicians."
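The scribe workflow described above, transcription followed by summarization into a draft note that a clinician either reviews or, in fully automated pilots, accepts as-is, can be sketched as a small pipeline. The function below is a stand-in for vendor services such as Nuance's DAX, not a real API; the summarizer is a toy:

```python
def draft_visit_note(transcript: str, summarize, require_review: bool = True) -> dict:
    """Turn a visit transcript into a draft note. `summarize` stands in
    for a generative model call; `require_review` mirrors the choice
    between human-reviewed and fully automated documentation."""
    note = summarize(transcript)
    return {
        "note": note,
        "status": "pending_clinician_review" if require_review else "finalized",
    }


# Toy summarizer standing in for a GPT-4-class model.
def toy_summarize(transcript: str) -> str:
    return "Visit summary: " + transcript.split(".")[0] + "."


draft = draft_visit_note("Patient reports mild cough. No fever.", toy_summarize)
```

The `require_review` flag is the crux of the debate in this section: Crowe's risk-management approach keeps it on, while the fully automated pilots Northwestern describes turn it off.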
Most clinical tools apply generative AI's superpowers to text in medical records. But in the future, large language models will be directly applicable to more than just text, said Jeffrey Ferranti, senior vice president and chief digital officer of Duke Health, pointing out GPT-4's ability to analyze images.
Cedars-Sinai says it is using generative AI to create physician avatars that communicate with patients, with software developed by Acolyte Health. The idea is that patients will engage more with important preventive health care strategies, appointment follow-up information, and reminders when it's delivered by a trusted human, or at least something that looks like a human.
"It's to the point where I think most patients wouldn't identify that it's generated," said Craig Kwiatkowski, chief information officer at Cedars-Sinai. "It's a little similar to a deep fake, and I've seen some of those where I can't tell."
This year, its first pilot of the technology sent videos to patients encouraging them to follow up on colorectal cancer screening, using the likeness and voice of Cedars-Sinai physician Caroline Goldzweig. About 40% of those patients returned their screening tests, said Kwiatkowski, a response rate about twice the industry benchmark for screening outreach.
It plans to expand the technology to more physicians and use cases, including preventive care prompts for diabetes, and in the long run may use AI to generate more personalized video scripts by incorporating data from patients' clinical records. But that would incur more risk, Kwiatkowski said. "We're proceeding deliberately and cautiously," he said. "The content is established, curated, and authenticated so that we don't have AI that's just making up things that the doctor could say."
Many health systems aren't diving in head first, choosing a strategy of watchful waiting as research progresses.
Duke Health, which founded the Health AI Partnership, a group that advises hospitals on AI use, intends to use generative tools, and is working to establish the necessary infrastructure and training. But it doesn't have any generative AI tools in production today, said Duke's Ferranti. Neither does Mount Sinai, a press officer told STAT.
And when more risk-averse health systems do start to experiment with the technology, they'll likely stay with safer administrative tasks rather than anything patient-facing. "From a health care system standpoint, I don't see the reason for being super excited about clinical use," said Stanford's Shah. "There's a whole bunch of other stuff we can do that is easy, straightforward, and of direct value," including scheduling, translating patient instructions into the language of their choice, and acting as an interpreter during a visit.
"This new generative AI technology that we are seeing is a world-changing technology, and it really has the potential to impact all aspects of what we do in medicine, from the bench-side research in the laboratory, to the bedside, to some of the administrative things that we do," said Ferranti. "But I think we all acknowledge there's a lot of real risks associated with these new technologies."