From Scans To Office Visits: How Will AI Shape Medicine?
Researchers continue to test out new ways to use artificial intelligence in medicine.
Some research shows that AI is better at reading mammograms than radiologists. AI can predict and diagnose disease by analyzing the retina, and there’s even some evidence that GPT-4 might be helpful in making challenging diagnoses, ones missed by doctors.
However, these applications can come with trade-offs in security, privacy, cost, and the potential for AI to make medical mistakes.
Ira and guest host Sophie Bushwick talk about the role of AI in medicine and take listener calls with Dr. Eric Topol, founder and director of the Scripps Research Translational Institute and professor of molecular medicine, based in La Jolla, California.
Eric Topol is the author of several books, including Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again (Basic Books, 2019) and The Patient Will See You Now: The Future of Medicine Is in Your Hands (Basic Books, 2015). He is a practicing cardiologist at the Scripps Clinic and a genomics professor at the Scripps Research Institute in La Jolla, California.
IRA FLATOW: This is Science Friday. I’m Ira Flatow.
SOPHIE BUSHWICK: And I’m Sophie Bushwick. Over the past year, or basically ever since ChatGPT came out, artificial intelligence, AI, has crept into so many different corners of society, especially medicine.
IRA FLATOW: Yeah. While people have been fearful of the nefarious uses AI could be put to, I have been fascinated by new studies showing the benefits of AI in medicine, like being better at reading mammograms than radiologists, or how AI can predict and diagnose diseases by analyzing the retina.
SOPHIE BUSHWICK: Wow.
IRA FLATOW: And we're going to talk about that. There's even research showing that AI chatbots might be helpful in making diagnoses of rare disorders, with people using the chatbots themselves, and catching ones that are even missed by doctors.
SOPHIE BUSHWICK: But of course, none of this comes without tradeoffs about security, privacy, cost, and the potential for AI to make medical mistakes.
IRA FLATOW: Yes, of course. And our next guest has been following the future of AI very closely. Welcome back, Dr. Eric Topol, a cardiologist and founder and director of the Scripps Research Translational Institute, professor of molecular medicine, and executive vice president of Scripps Research, based in La Jolla, California. Dr. Topol, welcome back to Science Friday.
ERIC TOPOL: Oh, thanks so much, Ira. Great to be with you again.
IRA FLATOW: Nice to have you back. And I want to tell our listeners we want to hear from you. What questions do you have about how AI can be used in medicine? Maybe you work in the health care industry. Are there uses for AI that you're looking forward to in the future? Or does AI worry you? Our number, 844-724-8255. That's 844-SCI-TALK. Or tweet us @SciFri. Let's go right into this, Dr. Topol.
SOPHIE BUSHWICK: I’d like to start with what we know about using AI to read x-rays or MRIs. How effective is AI versus an experienced radiologist?
ERIC TOPOL: Well, right, Sophie. That has been several years of accruing evidence across all the types of scans– x-rays, MRIs, CT scans, you name it. When you take tens of thousands or hundreds of thousands of these images and use what's called supervised learning, where experts establish the ground truth of what the images show, you can train the AI model– these are so-called unimodal models, because it's just images– to be as good as or better than expert physicians: radiologists, pathologists, across the board. So this has been the one area that's been firmed up over the years: image interpretation that's at least as good as, if not better than, that of clinicians.
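To make the supervised-learning recipe Dr. Topol describes concrete, here is a minimal sketch, assuming PyTorch and synthetic stand-in tensors rather than any real clinical dataset: expert-labeled images go in, and a unimodal image classifier comes out.

```python
# A toy illustration (not any specific clinical system) of supervised learning
# on expert-labeled scans. Random tensors stand in for real medical images.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for tens of thousands of expert-labeled scans:
# 1,000 grayscale "images" (1 x 64 x 64) with binary ground-truth labels.
images = torch.randn(1000, 1, 64, 64)
labels = torch.randint(0, 2, (1000,))
loader = DataLoader(TensorDataset(images, labels), batch_size=32, shuffle=True)

# A small convolutional network; real systems use far larger architectures.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),  # two logits: finding present / absent
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):  # a real model trains far longer, with validation
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)  # penalize disagreement with expert labels
        loss.backward()
        optimizer.step()
```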
IRA FLATOW: That’s incredible. And speaking of incredible, I was incredibly intrigued by a TED Talk that you gave recently in which you talked about how AI can detect and even predict the onset of different diseases by analyzing the retina. Wow. How does that work?
ERIC TOPOL: Yeah, this is really striking. And so as opposed to what we were just talking about– interpreting the image with machine eyes– what wasn't predicted, Ira, was that machines could see things that we'll never see. And the retina is probably the prototypic example. Who would have thought that by having an AI look at the retina you could predict Parkinson's disease five to seven years before any symptoms appear, Alzheimer's disease, kidney disease, heart and stroke risk, hepatobiliary disease, control of diabetes, control of blood pressure– all these things from a retinal picture. It's pretty darn striking.
SOPHIE BUSHWICK: That sounds amazing. But can you explain how it works? What is the AI seeing that we humans can’t?
ERIC TOPOL: Right. That's the key, Sophie: we don't have full explainability for this capability, which, of course, extends to other things, like the electrocardiogram or chest x-rays or so many of these images. We have had some work trying to explain the features that the machine eyes pick up– the so-called saliency maps– and also recent work on so-called counterfactuals. But we still have a ways to go to fully explain the power of machine eyes, which almost outruns our imagination of what they could do in the years ahead.
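One simple form of the saliency maps mentioned here can be sketched as follows, again assuming PyTorch: compute the gradient of the model's top prediction with respect to the input pixels, which highlights the regions that most influenced the output. Real explainability work often uses richer methods (Grad-CAM, counterfactual image editing), so treat this as a toy illustration.

```python
# A hedged sketch of a gradient-based saliency map: how much does each input
# pixel influence the model's top prediction?
import torch

def saliency_map(model, image):
    """image: tensor of shape (1, C, H, W); returns an (H, W) importance map."""
    model.eval()
    image = image.clone().requires_grad_(True)
    logits = model(image)
    top_score = logits[0, logits[0].argmax()]
    top_score.backward()                   # d(score)/d(pixel) for every pixel
    return image.grad.abs().squeeze(0).max(dim=0).values  # collapse channels

# Demo with a tiny stand-in model and a random "scan".
demo_model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64 * 64, 2))
heat = saliency_map(demo_model, torch.randn(1, 1, 64, 64))
print(heat.shape)  # torch.Size([64, 64]) -- a per-pixel importance map
```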
IRA FLATOW: And could there be other diseases detectable through the retina that we don't know about?
ERIC TOPOL: Oh, sure. I mean, I think we might have thought, of course, that the retina, because it’s brain tissue, would give us insight about neurodegenerative diseases. But you’re absolutely right. There’s probably a lot more that we’re going to pick up. This is just, at the moment, scratching the surface of where this is headed. Someday, we’ll likely be doing self-imaging of our retina for checkups through our smartphone.
SOPHIE BUSHWICK: Oh, wow.
IRA FLATOW: Wow. One of the places where AI is currently being used is mammography. There was a piece in KFF Health News last week about how patients are charged an additional $40 for an AI reading of their mammogram. I mean, this raises some big questions of inequity, right?
ERIC TOPOL: Absolutely. I think this is unconscionable. We’re not at a stage that patients should be charged for AI. We’re kind of in the research mode with little implementation. If this is a frontrunner for where we’re headed, where we’re going to shunt the costs of getting these systems in clinics to patients, that would be horrible. And as you say, that is going to worsen inequities.
SOPHIE BUSHWICK: And, I mean, other than this particular case, radiologists are still the ones interpreting our scans, not AI. But what do you think it would take for AI to be used regularly for this?
ERIC TOPOL: Yeah, I think what we want to have is compelling evidence. So, for example, in Sweden, 80,000 women were randomized to having their mammograms read by the radiologist plus the AI, compared to just the radiologist. And it showed marked superiority in accuracy, at a considerable savings of time. So that system, interestingly, has the kind of compelling evidence we'd want, though it's not generally used in many places here in the US. And it's the kind of data that we don't have across many of the other types of medical imaging.
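For readers curious about the arithmetic behind such a trial readout, here is a sketch with made-up illustrative counts (not the actual Swedish trial figures), comparing cancer-detection rates per 1,000 screens between the two randomized arms with a standard two-proportion z-test.

```python
# Hypothetical numbers for illustration only -- NOT the real trial results.
from math import sqrt

def rate_per_1000(cancers, screens):
    return 1000 * cancers / screens

ai_arm = dict(cancers=260, screens=40_000)   # radiologist + AI
std_arm = dict(cancers=200, screens=40_000)  # radiologist alone

print(f"AI-supported arm: {rate_per_1000(**ai_arm):.1f} per 1,000")
print(f"Standard arm:     {rate_per_1000(**std_arm):.1f} per 1,000")

# Two-proportion z-test for the difference in detection rates.
p1 = ai_arm["cancers"] / ai_arm["screens"]
p2 = std_arm["cancers"] / std_arm["screens"]
p = (ai_arm["cancers"] + std_arm["cancers"]) / (ai_arm["screens"] + std_arm["screens"])
z = (p1 - p2) / sqrt(p * (1 - p) * (1 / ai_arm["screens"] + 1 / std_arm["screens"]))
print(f"z = {z:.2f}")  # |z| > 1.96 suggests p < 0.05 under the usual assumptions
```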
IRA FLATOW: Lots of people, of course, the phones are lighting up about this topic. Let’s see if we can get a few in. Let’s go to Jeff in Chicago. Hi, Jeff. Welcome to Science Friday.
JEFF: Thanks for taking my call.
IRA FLATOW: Hi. Go ahead.
JEFF: Well, I've been struggling with insomnia for many, many years. And I go to an acupuncturist. He'll say, well, there are many, many reasons for insomnia. And I'd like to know if AI could take a list of characteristic symptoms and come up with a probable diagnosis for what might be wrong, as far as insomnia goes.
IRA FLATOW: Thanks for the call.
JEFF: So it’s not like analyzing a retina or something. It’s more like analyzing a list of symptoms.
IRA FLATOW: Yeah. Eric?
ERIC TOPOL: Yeah, I think this is where we've seen good evidence that when patients put their list of symptoms, lab tests, any findings they have into ChatGPT– or even better, GPT-4– they may get very meaningful output about what's going on. I mean, we've seen it, of course, in anecdotes. But they're striking, like the boy whose mother took him to 17 different doctors over three years.
And he had progressive worsening of his gait and horrible pain and growth arrest. And then his mother, when she entered the symptoms, got the diagnosis of spina bifida occulta– a tethered spinal cord– which was released by a neurosurgeon in Michigan. And he did much better. So, Jeff, if you put all your symptoms and any tests that you have into ChatGPT, you might get something that's useful. And it'll just keep getting better over time.
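As a hypothetical sketch of what "putting your symptoms into a chatbot" can look like programmatically, the snippet below uses the OpenAI Python client; the model name, prompt, and symptom list are assumptions for illustration, not the setup used in the stories above.

```python
# A hypothetical sketch of sending a patient-entered symptom list to a
# chatbot API. Model name and prompt wording are placeholder assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

symptoms = [
    "chronic insomnia for 10+ years",
    "difficulty falling asleep, frequent waking",
    "no relief from acupuncture",
]

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[
        {"role": "system",
         "content": "You are a clinical-reasoning assistant. Given symptoms, "
                    "list differential diagnoses with reasoning. This is a "
                    "second opinion, not a substitute for a clinician."},
        {"role": "user", "content": "Symptoms: " + "; ".join(symptoms)},
    ],
)
print(response.choices[0].message.content)
```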
IRA FLATOW: In fact, you mentioned this in a recent TED Talk. You told the story of a patient who had trouble getting the proper diagnosis until their parent turned to ChatGPT and entered their symptoms. And it worked, right?
ERIC TOPOL: Yeah. There are so many cases like that emerging, Ira. I mean, a patient of mine whose sister had the diagnosis of long COVID saw many neurologists over many months and was told there was no treatment. And then when she– the person I know– put in the symptoms and the lab tests, it came out with a diagnosis of limbic encephalitis, which is treatable. And the patient was treated and is doing exceptionally well.
So this is kind of a second opinion. You made the point that, of course, it can generate mistakes. But we also have the doctors who can oversee this– the human in the loop. So it's something that's useful to bounce an idea off, and it's just going to get more accurate as we go forward.
SOPHIE BUSHWICK: Well, I have a story that's sort of the opposite. A friend of mine likes to use ChatGPT. And his daughter was ill, and he entered some of her symptoms. And it diagnosed her with appendicitis. So he rushed her to the doctor, and it turns out she was completely fine. So for parents, or for people who are entering their own symptoms, how reliable should they consider ChatGPT as a diagnostic tool?
ERIC TOPOL: Yeah, I'm really glad you made that point. Errors can be made. But what's really important to put in context is that we make a lot of errors without AI. In fact, a recent Johns Hopkins study showed that 800,000 people are severely disabled or die because of medical diagnostic errors.
IRA FLATOW: Wow.
ERIC TOPOL: And that's per year, without AI. So, yes, it's true. But when we get studies that look at this at scale, it'll be interesting to see how many mistakes are made. The good part is we've got the wisdom and experience of clinicians to oversee the results of the AI. But don't underestimate how many errors are being made today without AI. So that's something to keep in mind.
IRA FLATOW: Interesting. Let’s go to the phones to Connecticut. Jeanie in Connecticut, hi. Welcome to Science Friday.
JEANIE: Hi. Thank you.
IRA FLATOW: Hi. Go ahead.
JEANIE: I'm a big fan. Dr. Topol, thank you for joining today. My question for you is about whether AI could be used in the creation of monoclonal antibodies to prevent COVID, because Evusheld lost effectiveness when the variants changed. Could AI be used to create monoclonal antibodies to prevent COVID on a much more rapid scale? And if it would be effective in that, could it also be used for asthma or allergies? Does AI have a role in the creation of those?
IRA FLATOW: Good question. Thanks for calling. We’ll let you drive safely. Eric?
ERIC TOPOL: So it's a really important area– drug discovery– and a very hot topic, because recently a group in Boston, for the first time in 38 years, used AI to discover a new structural class of antibiotics, ones effective against Staph aureus strains that are resistant to current antibiotics. And on the point being made about COVID, there's already been AI work showing that you could come up with pan-coronavirus antibodies that bind to key sites in the virus.
And right now, of course, we don't have an antibody that's effective against the current variants. So this is going to be a segue to having those sorts of antibodies. Across the board– whether it's antibodies or small molecules– we're going to see a lot of acceleration in drug discovery. It's not going to happen overnight. But over the next few years, you'll see the difference.
IRA FLATOW: Francois in Texas, hi. Welcome. Are you there, Francois?
FRANCOIS: Oh, Francois, yes. So I wonder if there's any particular field of medicine, like cardiology or gastroenterology, that is currently better served by AI than others?
IRA FLATOW: You’re talking to a cardiologist, so let’s see what he says.
FRANCOIS: Oh, OK.
[LAUGHTER]
ERIC TOPOL: Well, it's interesting– some specialties have really been leading the charge. I mentioned the ophthalmologists with the retina. The gastroenterologists– there have been 33 randomized trials of machine vision during colonoscopy. And the pickup rate of polyps– adenomatous polyps– is substantially higher.
So that one is, hopefully, on the brink of becoming the way we move forward, so we don't miss the small but important polyps at colonoscopy. But it's affecting every type of clinical practice over time. I mean, this is not something that's only for radiologists, pathologists, and certain clinicians. It's starting to have an effect across the board.
SOPHIE BUSHWICK: Programs like ChatGPT are trained on publicly available information and on past data. And sometimes that data has a lot of bias within it. Could we be replicating that same bias by relying on these AI programs for diagnoses?
ERIC TOPOL: Yeah, this is another key point: since the inputs are all human content, and we have all sorts of embedded biases in that content, those will be reflected in the output, too. So that's why we have to be on guard and interrogate the input data and the model for propagating or amplifying bias. And this is something that can't get enough emphasis.
We have to do much better. We've seen so many examples of biased AI models. And now that we're into this multimodal phase– with the transformer models that are really enabling ChatGPT and these advanced large language model chatbots– that potential can be even worse. It's a very serious limitation that we have to deal with.
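One basic form of the interrogation Dr. Topol calls for is a subgroup audit: compute the model's error rates separately for each demographic group and look for gaps. The sketch below uses pandas with hypothetical data; the column names, groups, and toy results are placeholders, not findings from any real model.

```python
# A sketch of a bias audit: compare accuracy and sensitivity across groups.
# All data here is hypothetical, for illustration only.
import pandas as pd

df = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B", "B", "A"],
    "label":      [1, 0, 1, 1, 0, 1, 0, 0],   # ground truth
    "prediction": [1, 0, 0, 1, 1, 1, 0, 0],   # model output
})

for group, frame in df.groupby("group"):
    accuracy = (frame["label"] == frame["prediction"]).mean()
    positives = frame[frame["label"] == 1]
    sensitivity = (positives["prediction"] == 1).mean()
    # Large gaps in sensitivity between groups are a classic bias signal.
    print(f"group {group}: n={len(frame)}, accuracy={accuracy:.2f}, "
          f"sensitivity={sensitivity:.2f}")
```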
IRA FLATOW: Do you have any suggestions on how to make it better and take the bias out?
ERIC TOPOL: Well, I wish we didn't have such deep biases in our human content. But since that's the basis, that's what has to be fixed. Of course, it'll be helped, Ira, if the input is based on multi-ancestry data– not just an emphasis on European ancestry, for example. That's going to help. But you see, the training is not supervised anymore. It's unsupervised, self-supervised. So with that, there has to be increased attention– tight surveillance– on what's going in. And that should help weed out or reduce the magnitude of the bias.
IRA FLATOW: Is there enough data to do that– sufficiently varied data?
ERIC TOPOL: Yes. The problem we have right now is that the major models used today, like the ones we've been talking about, were never medically trained. They're just trained on everything that's out there on the internet, in books, and Wikipedia. So we need fine-tuning. And of course, there hasn't been much of that. But there was a fascinating preprint published just a few days ago from Google, using one of their models, which was compared against 20 primary care doctors seeing patients, and the AI had amazingly better results. So we'll see what happens when models are trained medically.
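A minimal sketch of that fine-tuning step, assuming the Hugging Face transformers library, a small placeholder model, and a toy two-sentence "medical corpus": a general-purpose language model continues training on domain text so its next-token predictions absorb medical knowledge. This is not Google's setup, just the generic recipe.

```python
# A hedged sketch of fine-tuning a general language model on medical text.
# "gpt2" and the tiny corpus are placeholders for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; real medical tuning uses far larger models
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

medical_corpus = [
    "Spina bifida occulta may present with gait disturbance and back pain.",
    "Limbic encephalitis is a treatable autoimmune cause of memory change.",
]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(2):  # real fine-tuning: far more data and steps
    for text in medical_corpus:
        batch = tokenizer(text, return_tensors="pt")
        # For causal LMs, passing labels=input_ids yields the next-token loss.
        out = model(**batch, labels=batch["input_ids"])
        out.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```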
IRA FLATOW: Now let’s go to Maria in Sunset Park, Brooklyn. Hi there. Welcome to Science Friday.
MARIA: Hello. Thank you. Welcome to you, too.
IRA FLATOW: Go ahead.
MARIA: OK. So I accompanied a good friend for sort of an emergency mammogram after a doctor's visit, given some symptoms the physician had noticed. I accompanied her because clearly her stress level was very high. So I said, I'll come along with you. We'll have lunch, blah, blah, blah. So we get there.
And at the desk, she gets the usual paperwork, and then she was offered an option to have an additional read by AI, beyond the human eyes of the physician. And indeed, the cost was $40. And I didn't say anything, because she had enough stress and didn't need me questioning her decision. But it made me very concerned. First of all, if someone is doing research for a product and a patient is being asked to participate, they should be paying you. You shouldn't be paying them.
IRA FLATOW: I’m with you on that.
MARIA: Yeah. I didn’t say anything. But then it kept me thinking, where is that information going? Because the release that she signed didn’t say anything about the security of that information. Is this going to be somewhere that an insurance company somewhere down the line, regardless of what the outcome of the read was, is going to have access to that? And what are the implications of that?
IRA FLATOW: Good question. Yeah, I’m glad you went along with her so you could ask this question. Dr. Topol?
ERIC TOPOL: Yes. Well, Maria is spot on about that concern. The way we have treated health data until now– there are all sorts of data brokers and breaches of data. So this is something that has to be protected. It adds to the insult of charging a patient to use AI that isn't proven to be of value that they then have to worry about what's going to happen to that data. So I share Maria's point. It's something that has to be addressed. There are lots of loose ends here.
IRA FLATOW: Maria, I hope that answers your question.
MARIA: Right. I just have one more question for your guest. Where would one start the query, from the legal, political, legislative aspect of this? Who should be concerned about this? Is it our senators and representatives? Does it start at the state level, or just everywhere– send that email to everybody, like, what are you guys doing about this?
IRA FLATOW: Good question, Maria. Good question. What do you think, Dr. Topol? Who do we get involved in this– the best, the easiest, the fastest way?
ERIC TOPOL: Well, we have not done a good job, Ira, as you know, of protecting our health data. Most Americans have had their health data breached, perhaps even more than once. So part of the problem is, if we want to get this right, people should own their data– all their health data. And it shouldn't be sitting on servers that can be hacked– we've seen health systems hijacked by ransomware, all sorts of breaches. So we have to do much better to protect it. And this is not just an AI problem. This is a general, deep problem in this country.
SOPHIE BUSHWICK: On the pro-AI side, there was a recent study from Google that suggested AI chatbots actually showed more compassion to patients than doctors did. I mean, doctors can’t be pleased by this, can they?
IRA FLATOW: How hard is that to do?
[LAUGHTER]
ERIC TOPOL: This was a shocker to me. And I was a doubting Thomas when the first study came out last year. But the most recent studies have really reinforced it. What's going on here is, as you know, machines don't know what empathy is, but they can promote it greatly. And so what's amazing is, now that the notes are getting automated through this process of AI, the AI can train and coach the doctors– say, why did you interrupt Mrs. Jones after 8 seconds? Why didn't you ask her about this concern or that one?
And so the human content that's being used to train the AI is in turn directly promoting empathy. I wouldn't be surprised if, in the years ahead, all doctors have to go through coaching by an AI to be more compassionate, more empathetic. Who would ever have guessed that? I had thought we would get more empathetic by having more time with patients– having direct face time. But I didn't anticipate this. And it's now been replicated several times by multiple different groups.
IRA FLATOW: There's a movie coming out of this.
[LAUGHTER]
SOPHIE BUSHWICK: Well, actually, you mentioned something about having the AI take care of the notes so the doctor’s not doing it. Can you tell us a little more about that?
ERIC TOPOL: Yeah. This is starting to spread like wildfire– in a good way– because, as you know, the worst thing for clinicians is spending hours as a data clerk. That's not why we went into this. We went into it to care for patients, and this detracts from it. It's just something that was never envisioned to be such a major, dominant part of medical practice. But what we see now is that the conversation between a patient and doctor can be automated, digitized into a note that's better than anything in our charts today.
But more importantly, it's not just the note– which can be written at any education level, in any language, whatever suits the patient, along with the audio file in case there's any confusion or forgetting about what was discussed during the visit. That conversation can then drive all the data clerk work– the keyboard liberation movement is on right now– taking care of pre-authorizations, follow-up appointments, lab tests, procedures, prescriptions, and nudging the patient afterward: Did you check your blood pressure? Are you going on those walks? Whatever was discussed during the visit. So this is a very welcome change, and I think in the next couple of years it will quickly become widespread throughout the practice of medicine.
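A hedged sketch of that ambient-documentation pipeline: transcribe the visit audio, then draft a structured note from the transcript. The library choices (the open-source openai-whisper package and the OpenAI client), file name, model names, and prompt are assumptions for illustration, not any specific commercial product.

```python
# A sketch of "ambient documentation": audio -> transcript -> draft note.
# visit.wav, model names, and the prompt are placeholder assumptions.
import whisper
from openai import OpenAI

# 1. Speech-to-text on the recorded visit.
stt = whisper.load_model("base")
transcript = stt.transcribe("visit.wav")["text"]

# 2. Draft a SOAP-style note and flag follow-up tasks from the conversation.
client = OpenAI()
note = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[
        {"role": "system",
         "content": "Draft a SOAP note from this doctor-patient conversation. "
                    "Flag follow-up tasks (labs, referrals, prescriptions)."},
        {"role": "user", "content": transcript},
    ],
)
print(note.choices[0].message.content)
```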
IRA FLATOW: Good to know. Who would have thunk. Let’s go to Levittown, PA. Kaitlin, hi. Welcome to Science Friday.
KAITLIN: Hi. Good afternoon. So my question was– I'm a mental health therapist in New Jersey. And I work with a lot of clients who have needs that might not be met. And their families or their friends or the people around them might not necessarily understand all of those needs.
So I'm wondering if there's any way for the AI to essentially put out a disclaimer to these families and patients that they really need to talk with a mental health professional or a clinician to make these diagnoses. Because otherwise we might end up with situations where people have a need that isn't being met, and because they're raising some concern about it, misunderstandings could lead to misdiagnoses, such as oppositional defiant disorder or something like that.
So how do we prevent some of those biases? How do we prevent some of these misunderstandings when, inherently as human beings, it's sometimes hard to understand what our biases are, hard to recognize them, and therefore hard to keep them out of our software?
IRA FLATOW: Yeah, we talked about that a bit, Dr. Topol. But– thanks for the call– what about that? Can AI recommend to families how to treat the patient better, or how to understand the patient?
ERIC TOPOL: Yes. I mean, I think what's being touched on here is that we have such inadequate support from psychologists, psychiatrists, and counselors for mental health issues. So we need help. But finding the right balance, as the caller put forth, is tricky. And yet we have these chatbots that are trying to help manage anxiety, depression, certain parts of mental health.
But this is still in the early stages of validation. There are small randomized trials. Whether it will do what you're asking, Ira, remains to be seen. Hopefully it will, because we have such a terrible mismatch between professional help and need.
IRA FLATOW: Let’s go to the phones to Mike in Northern Wisconsin. Hi, Mike.
MIKE: Hi. That's a real nice segue into the comment/question that I have for the guest. It's my argument that we are overselling the idea behind the artificial intelligence terminology. I argue that a lot of the things we've talked about today, a lot of the advancements being labeled as AI, are actually incremental advancements in techniques and technologies that were thought up by humans and advanced by humans, but with the advancement of computing power.
And with humans going in and better organizing the data and the tools being applied, we're getting some improvement. And it's valuable, that increase in knowledge. But as far as artificial intelligence– where a computer scheme comes up with a completely new, novel, unique idea– I would argue we've seen very little of that yet.
And I use the example of the design of experiments, which I'm sure your guest is very well aware of. Applying ever-increasing computing power to ever-increasing amounts of data to do what scientists were essentially doing 100 years ago is not the same thing as artificial intelligence, where the machines are thinking of new ways, never thought of by humans, to do something.
IRA FLATOW: Let me get Dr. Topol’s reaction.
ERIC TOPOL: Well, I mean, there are elements of what you're bringing up that I agree with. We are seeing massive computing power. I mean, the base models like GPT-4, the prototype, were trained using over 24,000 graphics processing units. So massive computing power– and it's a trillion connections, as opposed to our brain, which has 100 trillion connections.
But it isn't just that. I mean, the transformer model, which in recent times led to ChatGPT, GPT-4, Gemini, and so many other models– this is something that is AI. It's the real deal. It isn't just computing power and ingestion of massive content. It isn't just a stochastic parrot. It is truly the most advanced form.
And that’s why there’s so much fear about artificial general intelligence emerging and companies that are making that their target, which is having every task of a human being performed as well or better by an AI. So, no, I don’t agree with the point that this is just human stuff and higher computing power. There’s another very vital component added to that.
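The "vital component" Dr. Topol points to, the transformer, rests on self-attention. Here is a minimal sketch of scaled dot-product attention with toy shapes: the operation that lets every token weigh every other token when building its representation, which is what distinguishes these models from raw computing power alone.

```python
# A minimal sketch of scaled dot-product self-attention, the transformer's
# core operation. Shapes are toy-sized for illustration.
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model). Every token attends to every other token."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / (k.shape[-1] ** 0.5)   # similarity between positions
    weights = F.softmax(scores, dim=-1)       # who should attend to whom
    return weights @ v                        # mix information across tokens

d_model = 8
x = torch.randn(5, d_model)  # a toy "sentence" of 5 token embeddings
w_q, w_k, w_v = (torch.randn(d_model, d_model) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)  # torch.Size([5, 8])
```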
SOPHIE BUSHWICK: I mean, all these definitions are areas of contention, with people disagreeing over what exactly they mean by artificial general intelligence and, yes, by AI as well. But I'd like to change the topic a sec and go back to doctors and AI. I was wondering if there's a tension between the use of these programs and doctors who might not want to cede authority to AI. I'm thinking about how a doctor might get upset if you tell them you've googled your symptoms, and they tell you not to use Doctor Google.
ERIC TOPOL: Right. Well, Sophie, there's a history of that with Google, and also with people bringing in their data from whatever source, and the reluctance of many physicians to really take that seriously. But that's going to get amped up now, because now that these chatbots are widely available, we're going to see a different dynamic, where you have this extensive conversation and you get outputs.
And you say, I'm going to my doctor to ask about this. So it is a challenge to physicians– it means ceding authority, not having total control, as has been the case for a couple of millennia. This is just another version of that, and perhaps even more of a challenge.
IRA FLATOW: But this is the future, you're saying. Get used to it.
ERIC TOPOL: Yeah. Everything has its benefits and risks. But if you just think about what we've been discussing– the alleviation of being a data clerk, getting a second opinion– eventually we're going to have models that hold the entire corpus of medical literature and knowledge, up to the moment. No human, no doctor, can have that kind of information at their fingertips. So this is where we're headed. It is a one-way path and, I think, a net benefit. But we have been discussing many of the liabilities, too.
IRA FLATOW: Yeah. Well, that's about all the time we have– we've sort of run out this hour. Will you come back, Dr. Topol, so we can get into the other discussion?
ERIC TOPOL: Absolutely. I love the conversation with you and Sophie. It’s been fun.
IRA FLATOW: That's it. And that's about all the time we have this hour. I want to thank our guest, Dr. Eric Topol, founder and director of the Scripps Research Translational Institute, professor of molecular medicine, and executive vice president of Scripps Research, based in La Jolla, California.
Copyright © 2023 Science Friday Initiative. All rights reserved. Science Friday transcripts are produced on a tight deadline by 3Play Media. Fidelity to the original aired/published audio or video file might vary, and text might be updated or amended in the future. For the authoritative record of Science Friday’s programming, please visit the original aired/published recording. For terms of use and more information, visit our policies pages at http://www.sciencefriday.com/about/policies/
Shoshannah Buxbaum is a producer for Science Friday. She’s particularly drawn to stories about health, psychology, and the environment. She’s a proud New Jersey native and will happily share her opinions on why the state is deserving of a little more love.
Ira Flatow is the host and executive producer of Science Friday. His green thumb has revived many an office plant at death’s door.