05/14/2021

Can An Algorithm Explain Your Knee Pain?

17:23 minutes


In an ideal world, every visit to the doctor would go something like this: You’d explain what brought you in that day, like some unexplained knee pain. Your physician would listen carefully, run some tests, and voila—the cause of the issue would be revealed, and appropriate treatment prescribed.

Unfortunately, that’s not always the result. Maybe a doctor doesn’t listen closely to your concerns, or you don’t quite know how to describe your pain. Or, despite feeling certain that something is wrong with your knee, tests turn up nothing.

A new algorithm shows promise in reducing these types of frustrating interactions. In a new paper published in Nature, researchers trained an algorithm to identify factors often missed by X-ray technicians and doctors. They suggest it could lead to more satisfying diagnoses for patients of color.

Dr. Ziad Obermeyer, associate professor of Health Policy and Management at the University of California, Berkeley, joins Ira to describe how the algorithm works, and to explain the research being done at the intersection of machine learning and healthcare.




Segment Guests

Ziad Obermeyer

Ziad Obermeyer is an associate professor of Health Policy and Management at the University of California, Berkeley in Berkeley, California.

Segment Transcript

IRA FLATOW: This is “Science Friday”. I’m Ira Flatow. Much of medical diagnosis now depends on artificial intelligence, intended to help the doctor find out what is wrong with you. For example, research shows that computers can spot potentially deadly skin cancers better than doctors. And now, a new computer algorithm is able to help diagnose knee problems that doctors may have missed, especially in people of color.

Researchers trained an algorithm to see something that X-ray technicians and doctors are not seeing. And it’s leading to more satisfying diagnoses for those patients. Here to explain is Dr. Ziad Obermeyer, Associate Professor of Health Policy and Management at the University of California, Berkeley, whose research was published in “Nature”. Dr. Obermeyer, Welcome to “Science Friday”.

ZIAD OBERMEYER: Thank you, Ira. It’s great to be here.

IRA FLATOW: So nice to have you. Tell us, why did you decide to train an algorithm to read knee X-rays?

ZIAD OBERMEYER: Pain is such a huge problem in our society as we all know from the ravages of the opioid epidemic. And so it seemed like a really interesting use for algorithms was to try to get them to make headway on what might be causing pain and what might be accounting for it so that we could try to develop better solutions for it. So imagine a patient coming into your office as a doctor with knee pain. What you do with that patient and how you think about them is going to really depend on whether or not their knee pain is rooted in causes of pain in the knee or might be, for example, the manifestation of stress, or anxiety, or depression.

And so the fundamental question is, when I do an X-ray of the knee, do I find something in the knee that I need to manage by sending them to an orthopedic surgeon or to a physical therapist? Or do I pursue other lines of reasoning? So the X-ray really helps the doctor hone in on causes of pain inside of the knee.

Now, the problem is that, if we look at the patients who are coming in with knee pain, it’s more likely, if that patient is black, for the X-ray to come back looking normal and for the doctor to then think, well, there’s nothing in the knee. I’m going to pursue some other options for helping this patient with their pain. What our algorithm showed is that some fraction of those patients and a larger fraction of patients, if they’re black, actually have some definable cause of pain in the knee that the doctors aren’t seeing. And as a result, they’re pursuing other directions rather than focusing in on the knee.

IRA FLATOW: And what does the algorithm actually do? What does it look for? How does it work?

ZIAD OBERMEYER: The algorithm learns from the patient’s experience of pain, not necessarily the doctor’s medical knowledge. So our knowledge about the particular problem we’ve studied, which is arthritis in the knee, comes from studies that were done in the 1950s on coal miners in Lancashire, England. And so that was the source of our knowledge of arthritis.

And so if the doctor’s medical knowledge is built on certain populations who are largely white and male and only learns about the causes of pain that affect those populations, they’re not going to apply to patients who are not white and male and living in Lancashire in 1950. And so the algorithm is expanding our repertoire for understanding the kinds of things that cause pain in the knee by listening to the patient and correlating the patient’s pain report to features of the X-ray.
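To make that idea concrete, here is a minimal sketch of the training setup Obermeyer describes, written in Python with PyTorch. The data, backbone, and pain scale are illustrative assumptions rather than details from the paper; the one essential point is the training target, which is the patient’s own pain report rather than a radiologist’s severity grade.

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset
    from torchvision import models

    # Stand-in data so the sketch runs end to end. Real inputs would be
    # knee X-rays paired with each patient's reported pain score
    # (assumed here to be a 0-100 scale).
    images = torch.randn(8, 3, 224, 224)
    pain_scores = torch.rand(8) * 100
    train_loader = DataLoader(TensorDataset(images, pain_scores), batch_size=4)

    # Any image backbone serves for the sketch; its final layer is replaced
    # so the network outputs one number: the predicted pain score.
    model = models.resnet18(weights=None)
    model.fc = nn.Linear(model.fc.in_features, 1)

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()

    # The crucial choice: the label is the patient's pain report, not the
    # radiologist's reading of the image.
    for batch_images, batch_pain in train_loader:
        optimizer.zero_grad()
        predicted = model(batch_images).squeeze(-1)
        loss = loss_fn(predicted, batch_pain)
        loss.backward()
        optimizer.step()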

IRA FLATOW: That’s amazing. So that truly is a very narrow population, is it not?

ZIAD OBERMEYER: It’s a very narrow population on which this huge foundation of medical knowledge is built up. And I think that’s just the way we have historically produced medical knowledge. We need doctors to look at individual patients, look at their X-rays, try to figure out what’s going on. So I think it’s very characteristic of the human way to produce medical knowledge. And one of the things that I’m most excited about is deploying algorithms to do that exact same thing but at much larger scale and, hopefully, in a much more equitable way.

IRA FLATOW: When you say equitable, what do you mean by that?

ZIAD OBERMEYER: One of the key strengths of our study, which was led by my co-author, Emma Pierson, who’s a professor at Cornell, is that we learned from a very diverse population of people. So the algorithm didn’t just look at one study population in one hospital. We had the benefit of a huge study that was sponsored by the National Institutes of Health that enrolled really diverse, large populations of patients from across the US. And so the algorithm was able to learn from the experience and the X-rays of a really, really diverse set of patients, and that was the secret to the algorithm seeing things that radiologists had missed in these earlier studies.

IRA FLATOW: And I understand that you call this a tool for justice. Why do you say that?

ZIAD OBERMEYER: I think, a lot of times when we build up medical knowledge or even when we see patients, we only pay attention to certain things. And in this case, I think it’s not necessarily the doctor’s fault that medical knowledge was built up in this very specific way from this very specific population of patients. And so my hope, from this algorithm but from a lot of other places where we’re starting to see algorithms being used, is that they can learn from that huge cross-section of society and listen to the experiences and the pain of very many different groups of people.

Making that pain and that experience visible is the first step to helping those people. We can’t help people whose experience we don’t see or pay attention to. And I think that algorithms can be really helpful in highlighting exactly those experiences so that we can help.

IRA FLATOW: When you say people whose experiences we don’t pay attention to, you’re talking about people of color, I imagine, because those people were probably not included in the original data set.

ZIAD OBERMEYER: Yeah, I think that’s exactly right. And I think I can just walk through maybe a concrete example, which is that, say, a patient comes into your office– and you’re the doctor– with knee pain. You might examine the patient and then send them for an X-ray. And if that patient is black, that X-ray is more likely to come back as looking normal even though that patient is in severe pain.

But that’s, in part, because what we consider normal doesn’t capture the experience of people of color, of socioeconomically and educationally less privileged people. And so the algorithm, by seeing the causes of pain in those groups, can actually help the doctor see, oh, no, there really is a problem in this person’s knee, and maybe they need to go see an orthopedist, not go see a therapist or some other modality of treatment.

IRA FLATOW: So the algorithm can point to something in the knee, but it can’t tell you what the problem is.

ZIAD OBERMEYER: Yeah, so the algorithm can point to the parts of the X-ray that look like they connect to the patient’s pain. But that’s it. It’s just going to tell you, look there. And that’s why I think a doctor or researcher will see that and take the next step of saying, well, what is there? How can I poke at that further and start to understand it better?
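In practice, “look there” can come from a saliency map over the image. The plain input-gradient method below is a generic illustration, not necessarily the technique the study used, and it reuses the model variable from the earlier training sketch.

    # Illustrative only: pixels whose gradients are largest had the most
    # influence on the predicted pain score. This reuses `model` from the
    # training sketch above.
    xray = torch.randn(1, 3, 224, 224, requires_grad=True)  # placeholder image
    model.eval()
    predicted_pain = model(xray).squeeze()
    predicted_pain.backward()

    # A (1, H, W) heatmap: bright regions are the "look there" areas a
    # doctor or researcher could then examine more closely.
    saliency = xray.grad.abs().max(dim=1).values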

IRA FLATOW: So what are those next steps? How do we get to that next step, knowing what is causing the pain that we can’t see?

ZIAD OBERMEYER: Well, as they always say, further research is needed.

IRA FLATOW: [LAUGHS]

ZIAD OBERMEYER: But let me try to tell you a little bit about the kind of research that I imagine and that I know some colleagues are starting to do. When we see two X-rays that a radiologist would look at in the same way and say, these two knees basically look the same, we can find pairs of X-rays where the algorithm disagrees and says, no, the X-ray of patient B actually looks like it’s going to hurt a lot more.

So then we can look at that patient’s MRI and try to get a sense of, well, how is that different? What are these things that we can see if we take a closer look that might be linked to that pain? We can also ask the doctor: here are two images that you see the same way, but the algorithm disagrees and thinks patient B’s pain is much higher. What’s different about these two knees? And how does that correlate to what you learned in medical school about the knee and its structure, and to what ends up happening to that patient in terms of their long-term outcome? And so plugging that algorithm into the normal process of scientific discovery seems like a really promising avenue for future research.
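That pair-finding step can be sketched in a few lines. The grades, pain values, and disagreement cutoff below are illustrative stand-ins, not numbers from the study.

    import itertools
    import numpy as np

    # Toy example: knees a radiologist grades identically (e.g., the same
    # Kellgren-Lawrence arthritis grade) but where the algorithm's
    # predicted pain differs sharply.
    radiologist_grade = np.array([2, 2, 2, 3, 3])
    predicted_pain = np.array([20.0, 65.0, 25.0, 40.0, 80.0])

    disagreement_pairs = []
    for i, j in itertools.combinations(range(len(radiologist_grade)), 2):
        if radiologist_grade[i] == radiologist_grade[j]:
            gap = abs(predicted_pain[i] - predicted_pain[j])
            if gap > 30:  # arbitrary cutoff for "the algorithm disagrees"
                disagreement_pairs.append((i, j, gap))

    # Each flagged pair is a candidate for closer study: compare the two
    # patients' MRIs, or ask a doctor what differs between the two images.
    print(disagreement_pairs)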

IRA FLATOW: Do you see this algorithm as doing the doctor’s work for them? Or is it a tool?

ZIAD OBERMEYER: I see it more like a tool. I think that, as I mentioned, the usual way that we train these algorithms when we’re doing work in artificial intelligence is to train the algorithm to replicate what the doctor is saying. So there, the implicit target is replacing the doctor, of course. But I don’t think that’s what we want to do because we know that doctors and medical knowledge in general have their limits, and we want to do better. So I think that by training algorithms to listen to patients, to learn from nature, to see what goes on to happen to patients in terms of real outcomes that we care about, we’re developing a new tool that can plug into medical knowledge and add to it, not just replace the doctor.

IRA FLATOW: So is this something that doctors would be introduced to in medical school, rather than later on in their careers?

ZIAD OBERMEYER: I really hope so because I think that one of the things that everyone agrees on about the future of medicine is that it’s going to be all about data. And yet, currently, medical schools are not producing graduates or even selecting people to come into medical school on the basis of any kind of data science or statistical knowledge. So given how foundational I think these tools are going to be for the future, we really have a pipeline problem of people who are going to be building and using and interpreting these tools. And I hope that medical schools, when they select their premed students, will start to catch up.

IRA FLATOW: Your profile says you work at the intersection of machine learning and health care. That sounds like an intersection we should be investing heavily in, and it sounds like you’re in favor of that.

ZIAD OBERMEYER: Absolutely. I think that a lot of really exciting, new fields come from the intersection of two other fields. But I think it’s not as simple as just putting two people together, one from each field. I think both of those people have to share a common language and be, in a sense, bilingual.

So I trained as a doctor, but I spent an enormous amount of time trying to teach myself and learn from other people about the technical side of how to build and use algorithms. And I think that people from that technical side also need to spend a lot of time in health care, understanding what the important questions are, where the data comes from, how doctors make their decisions. But I think that, once we have that cadre of bilingual researchers, those people will be a really, really powerful force for making medicine better in the future.

IRA FLATOW: I remember talking a while back with doctors about using algorithms. And I remember a study where melanoma doctors, skin cancer doctors, were asked to create an algorithm– to tell an algorithm the diagnostic criteria they use for diagnosing skin cancer. And then they gave the doctors and the algorithm the same set of slides to look at. And the algorithm did better than the doctors because the doctors sort of ignored their own advice, but the algorithm went right ahead with what it was supposed to do.

ZIAD OBERMEYER: It’s a very deep point because often we don’t know how we do what we do. I think this is a very robust finding from decades of research in psychology: we have this intuitive, tacit knowledge about everything from walking down the street, which we can’t fully describe how we do, to interpreting a complex medical signal, like a picture of a melanoma or an X-ray. And so one of the miracles of human intelligence is that we’re able to do so many of these things without knowing how, just by virtue of having learned from repetition. And so it doesn’t surprise me at all that an algorithm can beat the doctor at her own game simply by consistently applying these rules in a standardized way.

IRA FLATOW: Well, because we know humans are fallible, right?

ZIAD OBERMEYER: Absolutely.

IRA FLATOW: On the other hand, there are doctors who are great diagnosticians, and maybe AI hasn’t reached their level. So we’re not saying let’s get rid of the doctors and go all AI because we want some kind of combination of both.

ZIAD OBERMEYER: Absolutely. To get rid of the doctors, you’d need to be able to write down, in a rule, exactly what the doctors are doing. And as your example just illustrates, we can’t do that. We can’t say, OK, algorithm, go find all of the patients with a heart attack because we can’t actually even write down in our data set what is a heart attack.

And so that’s why I view these tools as very powerful. When we have a target like the patient’s report of pain to train them on, the algorithms can add a huge amount of value. When we can show them, OK, this patient, 10 years later, went on to have a heart attack, what can we see in their electrocardiogram today? That can be very powerful.

But simply trying to replace the doctor at what she’s currently doing today is not a great target for the algorithm because we can’t write down, do what the doctor does. That’s the magic of seven years of medical training.

IRA FLATOW: This is “Science Friday” from WNYC Studios. Are these algorithms and these tools cheap enough that they can be used widely, across all areas and all socioeconomic levels?

ZIAD OBERMEYER: Well, it depends on how much value you assign to an hour of my and my co-author’s time. But in the grand scheme of things, building these algorithms is very, very cheap. I think that there are some fixed costs to hospitals getting their data online and in a form that can be integrated into those algorithms. So it’s a little bit like all of these big improvements in, for example, the digital revolution in the ’80s and ’90s, or electricity around the turn of the century. There are these big fixed costs that institutions need to pay. But once they’ve paid them, actually running the algorithms is dirt cheap and really has the potential to add a lot of value for very little cost.

IRA FLATOW: And of course, one of the questions about algorithms is, how do you keep them from being biased? Because they are made by people.

ZIAD OBERMEYER: Absolutely. The algorithms learn from data that are biased, and the data are biased because they come from a health system that is also very biased and that denies access to certain people and that treats certain people very differently. And so it’s a huge problem when we’re building these algorithms.

That said, in a lot of our own work, we found that, even though there’s widespread use of biased algorithms, a lot of those algorithms can actually be corrected and improved and turned from tools that are fundamentally unjust into tools that get resources and attention to people who need them. And it all depends on these little technical choices that we make when we’re training the algorithms.

So in our case, about the knees, do we train the algorithm to listen to the doctor and potentially replicate decades of bias built into medical knowledge? Or do we train it to listen to the patient and represent underserved patients’ experience of pain accurately and make their pain visible?

IRA FLATOW: So now that you know this much about your algorithm, where do you go from here? How do you make it better?

ZIAD OBERMEYER: I think a really important part of making any algorithm better is getting better data. So we were very, very lucky to have this study that was done by the National Institutes of Health that made the data public. That’s what let us, as researchers, access the data and build this algorithm to begin with.

But unfortunately, those kinds of data sets are very rare. And so what we’re doing now, in collaboration with some of my co-authors and as part of a nonprofit we started called Nightingale Open Science, is working with a number of health systems in the US and around the world to build up exactly these kinds of data sets: images matched with not just what a doctor said about the image, but what happened to the patient and what the patient says about their experience. By building up those data and making them public and available to researchers free of charge to do nonprofit research, I think we’re going to start building more and better algorithms that do exactly these kinds of things.

IRA FLATOW: I think a key phrase that you said there is collecting the data from around the world, which means more diversity in the data.

ZIAD OBERMEYER: Absolutely, both around the world and also within the United States. There was an article a few months ago that showed that, of all of the algorithms that are being trained, the vast majority come from these small, niche academic medical centers that have the data resources to actually feed into algorithms. So a big part of our mission in this venture is going to under-resourced county health systems and building up their data infrastructure and going outside of the US and building up data infrastructure in lots of different places, exactly as you said, because we need that diversity to train better and more just algorithms.

IRA FLATOW: Well, we’re happy that you took time to be with us today, doctor.

ZIAD OBERMEYER: It was such a pleasure and, as a longstanding fan, a real honor to be on the show.

IRA FLATOW: Thank you very much. Dr. Ziad Obermeyer, Associate Professor of Health Policy and Management, University of California at Berkeley.

Copyright © 2021 Science Friday Initiative. All rights reserved. Science Friday transcripts are produced on a tight deadline by 3Play Media. Fidelity to the original aired/published audio or video file might vary, and text might be updated or amended in the future. For the authoritative record of Science Friday’s programming, please visit the original aired/published recording. For terms of use and more information, visit our policies pages at http://www.sciencefriday.com/about/policies/

Meet the Producers and Host

About Katie Feather

Katie Feather is a former SciFri producer and the proud mother of two cats, Charleigh and Sadie.

About Ira Flatow

Ira Flatow is the founder and host of Science Friday. His green thumb has revived many an office plant at death’s door.
