NYU Langone Health’s MCIT Department of Health Informatics will hold the first Generative AI Prompt-A-Thon in Health Care on August 18. During the event, teams of clinicians, educators, and researchers will work together to find artificial intelligence (AI)–powered solutions to healthcare challenges using real-world, de-identified patient data.
The event centers on large language models (LLMs), which predict likely options for the next word in any sentence, paragraph, or essay based on how real people have used words in context billions of times in documents on the internet. Also called generative AI, such systems randomly sample from a mix of probable next words to give a feeling of variety and creativity. A side effect of this next-word prediction is that the models are “skillful” at summarizing long texts, extracting key information from databases, and generating human-like conversations as chatbots.
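The next-word prediction described above can be illustrated with a toy sketch. This is not how a real LLM is implemented (real models compute probabilities with a neural network over a huge vocabulary); the words and probabilities below are invented purely to show how weighted random sampling produces varied output:

```python
import random

# Invented toy distribution: an LLM assigns a probability to each candidate
# next word given the context, then samples from that distribution.
next_word_probs = {
    "patient": 0.40,
    "doctor": 0.25,
    "treatment": 0.20,
    "hospital": 0.15,
}

def sample_next_word(probs, rng=random):
    """Pick one candidate word at random, weighted by its probability."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

# Because sampling is random, repeated calls can return different words,
# which is what gives generated text its feeling of variety.
print(sample_next_word(next_word_probs))
```

Always picking the single most probable word would make every response identical; sampling is what the article means by models that “randomly fill in a mix of probable next words.”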
Despite these advancements, such AI programs do not think and can produce conclusions and references that do not exist, say the event organizers. Thus, they require close supervision by human users, especially in healthcare, where mistakes carry high stakes even as the technology holds the potential to increase safety and improve care.
Anticipating the boom in the generative AI field, NYU Langone in March requested access from Microsoft, a partner of OpenAI (the company that created ChatGPT), to Microsoft’s latest generative AI tool. Azure is Microsoft’s cloud computing platform, through which it offers private instances of GPT-4, a newer relative of the model behind the famous ChatGPT chatbot, to clients like NYU Langone. This arrangement gives the health system’s teams secure access to software and servers that enable the tool to meet federal privacy standards.
“We have in place one of the nation’s first privately managed, secure, and HIPAA-compliant GPT-4 ecosystems in a healthcare organization,” said Nader Mherabi, executive vice president, vice dean, and chief digital and information officer at NYU Langone. “This has enabled the launch of a large-scale effort to test potential healthcare uses of large language models like GPT-4 in a safe and responsible manner.”
NYU Langone doctors, nurses, and administrators, having agreed to strict conditions, can now write prompts to the institution’s private instance of GPT-4 and assess how well it generates patient-friendly explanations, suggests improvements in care plans, or flags potential safety issues.
“Equally important is the ability of our workforce to identify these models’ limitations,” added Jonathan S. Austrian, MD, associate chief medical information officer for inpatient informatics. Even as new uses are found for GPT-4, he says, every AI output will serve only to augment care providers’ work.
Prompt-a-Thon Details
During the August 18 event, teams made up of clinicians, educators, and researchers will work together to test GPT-4-based solutions to healthcare challenges using real-world, de-identified patient data. The event day will start with “lightning round” talks by the following NYU Langone experts:
- The associate dean for educational informatics, on the new world of AI in healthcare
- Tim Requarth, PhD, lecturer in science and writing, on using AI tools for scientific writing
- Lavender Jiang, BSc, a doctoral student at the NYU Center for Data Science, on the NYUTron “AI Doctor” project
- An assistant professor, on the ethics of generative AI in healthcare
NYU Langone’s director of operational data science and machine learning, who leads a team within the Department of Health Informatics, will preview the day’s activities.
During lunch (11:30 AM–12:30 PM), the attendees (80 registrants selected from more than 400 submissions) will join their preassigned four-person teams to decide which problems to tackle during the workshop portion of the event. Attendees were previously grouped into teams based on their generative AI interests and backgrounds. Common interests included patient education, diagnosis and treatment, and diversity and equity. Research-themed groups will focus on grant-writing support and summarizing research literature. Beyond shared interests, the working groups were also organized to mix comfort levels with GPT-4, from no experience to seasoned experimentation. Following lunch, attendees will head to their meeting rooms to begin work on their projects (12:30–2:30 PM). AI mentors, along with fellow participants, will support each team during their “prompt journey.”
Large-Scale AI Effort
The Prompt-a-Thon is part of a larger effort at NYU Langone to educate the workforce about the potential benefits of generative AI. The MCIT Department of Health Informatics has given more than 200 doctors, nurses, researchers, and educators exploratory access to the health system’s private instance of GPT-4. Additionally, more than 100 people have submitted formal project requests, with approved efforts receiving expert AI mentorship to ready their solutions for real-world use.
“Given the tool’s recent arrival and the degree of interest, many of those with generative AI ideas have not yet had individual sessions with the AI leadership team,” said Paul A. Testa, MD, JD, MPH, chief medical information officer for NYU Langone. “So our goals for the Prompt-a-Thon are to give many more employees the chance to explore their ideas, and connect with other members of the NYU Langone Health generative AI community.”
Among the promising generative AI solutions in development at NYU Langone is a project led by Fritz François, MD, chief of hospital operations, who is exploring a generative AI model that reviews clinical notes to find instances where two medication types, anticlotting drugs and immunosuppressive drugs, were documented in the care plan but absent from the patient’s active medication list. In these cases, the AI alerts the physicians to the potential discrepancy. This project recently went live across the health system, becoming its first deployed generative AI intervention in clinical care.
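The core reconciliation check behind that project can be sketched in plain code. This toy version compares structured medication lists rather than reading free-text notes with GPT-4 as the real system does, and all drug names, classes, and record fields below are invented for illustration:

```python
# Hypothetical medication classes the check watches for, per the project
# description: anticlotting and immunosuppressive drugs.
WATCHED_CLASSES = {"anticoagulant", "immunosuppressant"}

def find_discrepancies(care_plan_meds, active_meds):
    """Return watched medications named in the care plan but missing
    from the patient's active medication list."""
    active_names = {m["name"].lower() for m in active_meds}
    return [
        m for m in care_plan_meds
        if m["class"] in WATCHED_CLASSES and m["name"].lower() not in active_names
    ]

# Invented example records: the care plan mentions an anticoagulant that
# never made it onto the active medication list.
care_plan = [
    {"name": "Warfarin", "class": "anticoagulant"},
    {"name": "Lisinopril", "class": "antihypertensive"},
]
active_list = [{"name": "Lisinopril"}]

for med in find_discrepancies(care_plan, active_list):
    print(f"Alert: {med['name']} is in the care plan but not on the active list.")
```

The hard part in practice is the first step this sketch skips: extracting medication mentions from unstructured clinical notes, which is where the generative AI model comes in.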
“While such mismatches are exceedingly rare, generative AI promises to help eliminate them entirely,” added Dr. Testa. “It is important that while NYU Langone is already among the safest of healthcare systems, we are working to be a responsible steward of new technologies that can further drive safety.”
NYU Langone’s rapid adoption of generative AI is built on years of experience applying more traditional machine learning models to clinical care. These include natural language processing models such as NYUTron, an NYU-specific LLM that reads physicians’ notes to accurately estimate patients’ length of hospital stay and other factors important to care. Other rules-based and pattern-recognizing machine learning projects scan imaging results and EKG readings to flag potentially unidentified disease in patients (such as prediabetes using the single-lead EKGs found in Apple Watches). Some older AI models have the potential to be “supercharged” by combining them with generative AI.
In addition, LLMs are coming online at NYU Langone as part of a health system that spent 10 years building 83 data informatics dashboards that together monitor about 750 measures of the safety and effectiveness of care (for example, flagging a spike in infections on a certain hospital floor).
Follow updates from the Generative AI Prompt-A-Thon in Health Care on Twitter/X with the hashtags #NYUAI and #PromptAThon.
Media Inquiries
David March
Phone: 212-404-3500
david.march@nyulangone.org