What use could health care have for someone who makes things up, can't keep a secret, doesn't really know anything, and, when speaking, simply fills in the next word based on what's come before? Lots, if that individual is the newest form of artificial intelligence, according to some of the biggest companies out there.
Companies pushing the latest AI technology (known as "generative AI") are piling on: They want to bring so-called large language models to health care. Big firms that are familiar to folks in white coats, but maybe less so to your average Joe and Jane, are equally enthusiastic: Electronic medical records giants Epic and Oracle Cerner aren't far behind. The space is crowded with startups, too.
The companies want their AI to take notes for physicians and give them second opinions, assuming they can keep the AI from making things up or, for that matter, divulging patients' private information.
"There's something afoot that's pretty exciting," said Eric Topol, director of the Scripps Research Translational Institute in San Diego. "Its capabilities will ultimately have a big impact." Topol, like many other observers, wonders how many problems it might cause, like leaking patient data, and how often. "We're going to find out."
The specter of such problems inspired more than 1,000 technology leaders to sign an open letter in March urging that companies pause development on advanced AI systems until "we are confident that their effects will be positive and their risks will be manageable." Even so, some of them are sinking more money into AI ventures.
The underlying technology relies on synthesizing huge chunks of text or other data (some medical models, for example, train on clinical notes from Beth Israel Deaconess Medical Center in Boston) to predict the text that would follow a given query. The idea has been around for years, but the gold rush, and the marketing and media mania surrounding it, are more recent.
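The next-word idea can be illustrated with a toy model. This is a minimal sketch only: production systems learn these statistics with neural networks over enormous datasets, but the core task, predicting a plausible next word from what came before, is the same. The corpus and word choices here are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count, for each word in a tiny "training"
# corpus, which words follow it, then predict the most common successor.
corpus = "the patient reports chest pain and the patient denies fever".split()

successors = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    successors[current][following] += 1

def predict_next(word):
    # Return the most frequently observed follower of `word`,
    # or None if the word never appeared mid-corpus.
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))      # "patient" follows "the" twice in the corpus
print(predict_next("patient"))  # "reports" and "denies" tie; first-seen wins
```

A model like this only ever echoes its training text; the scale and flexibility of large language models come from replacing the simple counts with learned statistical patterns, not from any stored understanding.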
The frenzy was kicked off in December 2022 by OpenAI and its flagship product, ChatGPT, which answers questions with authority and style. It can explain genetics in a sonnet, for example.

OpenAI, started as a research venture seeded by Silicon Valley elites like Sam Altman, Elon Musk, and Reid Hoffman, has ridden the enthusiasm to investors' pockets. The venture has a complex, hybrid for- and nonprofit structure. But a fresh round of investment has reportedly pushed OpenAI's value to $29 billion. Right now, the company is licensing its technology to companies like Microsoft and selling subscriptions to consumers. Other startups are considering selling AI transcription or other products to hospital systems or directly to patients.
Hyperbolic quotes are everywhere. Former Treasury Secretary Larry Summers said recently: "It's going to replace what doctors do (hearing symptoms and making diagnoses) before it changes what nurses do (helping patients get up and handle themselves in the hospital)."
But just weeks after OpenAI took another huge cash infusion, even Altman, its CEO, is wary of the fanfare. "The hype over these systems, even if everything we hope for is right long term, is totally out of control for the short term," he said.
Few in health care believe this latest form of AI is about to take their jobs (though some companies are experimenting, controversially, with chatbots that act as guides to care). Still, those who are bullish on the tech think it'll make some parts of their work much easier.
Eric Arzubi, a psychiatrist in Billings, Montana, used to manage fellow psychiatrists for a hospital system. Time and again, he'd get a list of providers who hadn't yet finished their notes: their summaries of a patient's condition and a plan for treatment.
Writing these notes is one of the big stressors in the health system: In the aggregate, it's an administrative burden. But it's necessary to develop a record for future providers and, of course, insurers.
"When people are way behind in documentation, that creates problems," Arzubi said. "What happens if the patient comes into the hospital and there's a note that hasn't been completed and we don't know what's been going on?"
The new technology might help lighten those burdens. Arzubi is testing a service, called Nabla Copilot, that sits in on his part of virtual patient visits and then automatically summarizes them, organizing the complaint, the history of illness, and a treatment plan into a standard note format.
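The note structure Arzubi describes can be sketched as a simple data shape. The field names below are hypothetical, chosen to mirror the article's description (complaint, history of illness, treatment plan); they are not Nabla Copilot's actual output schema.

```python
from dataclasses import dataclass, field

# Illustrative sketch of a standard clinical note, as described above.
# Field names are invented for illustration, not a real product schema.
@dataclass
class VisitNote:
    chief_complaint: str
    history_of_illness: str
    treatment_plan: list = field(default_factory=list)

    def to_text(self) -> str:
        # Render the structured note as the kind of text summary a
        # clinician would review and edit before signing off.
        plan = "\n".join(f"- {step}" for step in self.treatment_plan)
        return (
            f"Chief complaint: {self.chief_complaint}\n"
            f"History of illness: {self.history_of_illness}\n"
            f"Plan:\n{plan}"
        )

note = VisitNote(
    chief_complaint="Low mood and poor sleep",
    history_of_illness="Symptoms worsening over six weeks.",
    treatment_plan=["Continue current medication", "Follow up in two weeks"],
)
print(note.to_text())
```

The point of a fixed structure like this is that the AI's draft lands in predictable fields, so the clinician's review step (the editing Arzubi describes) stays fast.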
Results are solid after about 50 patients, he said: "It's 90% of the way there." Copilot produces serviceable summaries that Arzubi typically edits. The summaries don't necessarily pick up on nonverbal cues or thoughts Arzubi might not want to vocalize. Still, he said, the gains are significant: He doesn't have to worry about taking notes and can instead focus on speaking with patients. And he saves time.
"If I have a full patient day, where I might see 15 patients, I would say this saves me a good hour at the end of the day," he said. (If the technology is adopted widely, he hopes hospitals won't take advantage of the saved time by simply scheduling more patients. "That's not fair," he said.)
Nabla Copilot isn't the only such service; Microsoft is trying out the same concept. At April's conference of the Healthcare Information and Management Systems Society, an industry confab where health techies swap ideas, make announcements, and sell their wares, investment analysts from Evercore highlighted reducing administrative burden as a top possibility for the new technologies.
But overall? They heard mixed reviews. And that view is common: Many technologists and doctors are ambivalent.
For example, if you're stumped about a diagnosis, feeding patient data into one of these programs "can provide a second opinion, no question," Topol said. "I'm sure clinicians are doing it." However, that runs into the current limitations of the technology.
Joshua Tamayo-Sarver, a clinician and executive with the startup Inflect Health, fed fictionalized patient scenarios based on his own practice in an emergency department into one system to see how it would perform. It missed life-threatening conditions, he said. "That seems problematic."
The technology also tends to "hallucinate," that is, to make up information that sounds convincing. Formal studies have found a wide range of performance. One preliminary research paper, examining ChatGPT and Google products using open-ended board examination questions, found a hallucination rate of 2%. A study by Stanford researchers, examining the quality of AI responses to 64 clinical scenarios, found fabricated or hallucinated citations 6% of the time, co-author Nigam Shah told KFF Health News. Another study found that, in complex cardiology cases, ChatGPT agreed with expert opinion half the time.
Privacy is another concern. It's unclear whether the information fed into this type of AI-based system will stay inside. Enterprising users of ChatGPT, for example, have managed to get the technology to tell them recipes for substances that can be used to make chemical bombs.
In theory, the system has guardrails preventing private information from escaping. For example, when KFF Health News asked ChatGPT for this author's email address, the system refused to divulge that private information. But when told to role-play as a character, and then asked for the email address of the author of this article, it happily gave up the information. (It was indeed the author's correct email address in 2021, when ChatGPT's archive ends.)
"I would not put patient data in," said Shah, chief data scientist at Stanford Health Care. "We don't understand what happens with these data once they hit OpenAI servers."
Tina Sui, a spokesperson for OpenAI, told KFF Health News that one "should never use our models to provide diagnostic or treatment services for serious medical conditions." They are "not fine-tuned to provide medical information," she said.
With the explosion of new research, Topol said, "I don't think the medical community has a really good clue about what's about to happen."