Why AI Is A Growing Part Of The Criminal Justice System
Facial recognition technology is all around us—it’s at concerts, airports, and apartment buildings. But its use by law enforcement agencies and courtrooms raises particular concerns about privacy, fairness, and bias, according to Jennifer Lynch, the Surveillance Litigation Director at the Electronic Frontier Foundation. Studies have shown that several of the major facial recognition systems are inaccurate: Amazon’s software misidentified 28 members of Congress, matching them with criminal mugshots. These inaccuracies tend to be far worse for people of color and women.
Meanwhile, companies like Amazon, Microsoft, and IBM also develop and sell “emotion recognition” algorithms, which claim to identify a person’s emotions based on their facial expressions and movements. But experts on facial expression, like Lisa Feldman Barrett, a professor of psychology at Northeastern University, warn it’s extremely unlikely these algorithms could detect emotions based on facial expressions and movements alone.
Artificial intelligence shows up in courtrooms too, in the form of “risk assessments”—algorithms that predict whether someone is at high “risk” of not showing up for court or getting re-arrested. Studies have found that these algorithms are often inaccurate and based on flawed data.
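To make concrete what such a score can look like, here is a minimal sketch of a pretrial risk calculation. The features, weights, and cutoff are invented for illustration, not taken from any deployed tool; the point is that the “prediction” is simple arithmetic over past records, so any skew in those records flows straight into the score.

```python
# Hypothetical sketch of a pretrial "risk assessment" score.
# The features, weights, and cutoff are invented for illustration only;
# real tools differ, but they share the basic shape: count things in a
# person's record, weight them, and turn the total into a recommendation.

def pretrial_risk_score(prior_arrests: int, failed_appearances: int, age: int) -> float:
    """Return a score between 0 and 1; higher is treated as 'higher risk'."""
    raw = 0.08 * prior_arrests + 0.15 * failed_appearances + 0.02 * max(0, 30 - age)
    return min(1.0, raw)

def recommend_detention(score: float, cutoff: float = 0.5) -> bool:
    """A single threshold turns the continuous score into a yes/no recommendation."""
    return score >= cutoff

if __name__ == "__main__":
    s = pretrial_risk_score(prior_arrests=3, failed_appearances=1, age=22)
    print(f"score={s:.2f}, detain={recommend_detention(s)}")
```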
And though we tend to see machines and algorithms as “race neutral,” Ruha Benjamin, a professor of African-American Studies at Princeton University, says they are programmed by humans and can end up reinforcing bias rather than removing it from policing and criminal justice. At the same time, Sharad Goel, a professor of Management Science and Engineering at Stanford University, is developing a risk assessment tool that accounts for different sources of bias. He thinks there is a way to use AI as a tool for more equal outcomes in criminal justice.
These guests join Ira to talk about how AI is guiding the decisions of police departments and courtrooms across the country—and whether we should be concerned.
How does facial recognition technology work?
Think of facial recognition as your “faceprint.” A computer program analyzes your unique facial structure, such as the distance between your nose and lips, and maps those key features onto an existing image—or, commonly, against a database of existing images—for comparison. The technology can also be used in “real-time.” For example, certain police departments can scan the faces of passersby using a surveillance camera. However, your faceprint is less accurate than a fingerprint when the technology is used in real-time or on large databases.
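As a rough sketch of the matching step described above, the snippet below compares a probe “faceprint” against a database of enrolled faceprints using cosine similarity. It assumes faces have already been turned into numeric feature vectors by some embedding model (not shown), and the 128-dimensional vectors, names, and threshold are placeholders; commercial systems are far more elaborate, but the trade-off is the same: a looser threshold returns more matches and more false positives.

```python
import numpy as np

# Minimal sketch of one-to-many face matching: compare a probe "faceprint"
# (a numeric vector assumed to come from some face-embedding model) against
# a database of enrolled faceprints. Names and dimensions are placeholders.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(probe: np.ndarray, database: dict, threshold: float = 0.8):
    """Return (name, similarity) of the closest enrolled face, or None if
    nothing clears the threshold. Lowering the threshold yields more matches
    and more false positives, which is the accuracy trade-off at issue."""
    scored = [(name, cosine_similarity(probe, vec)) for name, vec in database.items()]
    name, sim = max(scored, key=lambda pair: pair[1])
    return (name, sim) if sim >= threshold else None

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    db = {f"person_{i}": rng.normal(size=128) for i in range(1000)}
    probe = db["person_42"] + rng.normal(scale=0.1, size=128)  # a noisy capture
    print(best_match(probe, db))
```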
How is it used?
Facial recognition is used to identify you—and that can mean a lot of things. The iPhone X uses the technology to unlock your smartphone when you simply look at the screen. Face recognition has also become a tool for law enforcement. U.S. Immigration and Customs Enforcement has mined the DMV databases of states that grant driver’s licenses to undocumented immigrants, and Chinese authorities managed to locate and arrest a man in a crowd of 60,000 at a concert. Advertisers have also begun to dip into the technology. Dozens of Westfield shopping centers in Australia and New Zealand use digital billboards with embedded cameras that can determine viewers’ age, gender, and even mood—and can conjure tailored advertisements within seconds.
How many people are in a facial recognition database?
The short answer: a lot, though it’s nearly impossible to determine an exact number. We do know that there are over 117 million people in law enforcement facial recognition networks, according to a report from the Georgetown Law Center on Privacy and Technology. In addition, the Government Accountability Office found that in a four-year period the FBI conducted over 118,000 face recognition searches on its database, and Microsoft Celeb—the largest publicly available face recognition dataset in the world—contains over 10 million images of nearly 100,000 individuals.
How can I find out if I’m in one of those databases?
You can’t—and one in every two American adults is in a law enforcement face recognition network. You might have already unknowingly consented to the release of pictures you have uploaded to social media or dating sites. Given the number of encounters everyone has with facial recognition technology on a daily basis, you most probably are in one.
What are my privacy rights? And are any cities or states doing something about this?
When it comes to how law enforcement uses face recognition, several cities have banned the technology, but there’s no national standard. A handful of states have placed limited restrictions on how private companies can collect and use data for face recognition. Illinois became the first state to pass legislation requiring affirmative consent from customers in order for companies to collect and store biometric data. Texas and Washington have developed similar laws, but the Illinois Biometric Information Privacy Act (BIPA) is the only one that gives individuals the right to seek legal action against companies for violations. Recently, a federal court ruled that users can now sue Facebook for unlawful use of facial recognition technology. In addition, the E.U. is planning larger regulations.
Jennifer Lynch is the Surveillance Litigation Director at the Electronic Frontier Foundation.
Lisa Feldman Barrett is a professor of Psychology at Northeastern University.
Ruha Benjamin is author of Race After Technology: Abolitionist Tools for the New Jim Code and a professor of African American Studies at Princeton University in Princeton, New Jersey.
Sharad Goel is a professor of Management Science and Engineering at Stanford University.
IRA FLATOW: This is Science Friday. I’m Ira Flatow. Three US cities– I’m talking about Oakland, San Francisco, and Somerville, Massachusetts– have banned their police departments from using a form of artificial intelligence called facial recognition.
It analyzes a person’s facial features and checks its database of faces and comes up with an identity match. The idea that your face is now being recorded and stored can be upsetting. Presidential candidate Bernie Sanders is demanding a national ban on this technology, and other candidates are calling for greater scrutiny of how it’s being used.
This hour, we’re going to be talking about it ourselves. We’re taking our own look at how police departments, law enforcement agencies, and courtrooms are using facial recognition and other forms of AI, from emotion detection algorithms to risk assessments, and asking whether these technologies are really accurate and fair. And that’s where I’m going to be asking you.
What do you think about using facial recognition and other forms of AI in the criminal justice system? Give us a call– our number, 844-724-8255, 844-724-8255, or you can tweet us. Tweet us at SciFri.
Let me start out with the facial recognition technology first. Jennifer Lynch is the surveillance litigation director at the Electronic Frontier Foundation. Welcome to Science Friday.
JENNIFER LYNCH: Thank you.
IRA FLATOW: Well, just to start off, give me a definition of what facial recognition is.
JENNIFER LYNCH: Sure. Facial recognition or face recognition is a technology that allows you to identify or verify the identity of somebody based on specific features of their face. Usually, that’s performed on digital images, but it can also be performed on video.
IRA FLATOW: When law enforcement is using facial recognition, where are these photos coming from?
JENNIFER LYNCH: Well, for the most part, if law enforcement is using face recognition, the photographs are coming from mug shot databases. So about 14 different states partner with the FBI and share their mugshot photographs with the FBI, and then they have access to the FBI’s photos. But what we also know is that many states also include face recognition in their driver’s license databases. So about 43 states in the United States include face recognition in their driver’s license databases, and of those, 20 to 30 states are sharing that information with the cops as well.
IRA FLATOW: That’s really very interesting. Could I find out if I’m in a database?
JENNIFER LYNCH: Well, I think that if you live in one of the 43 states that has face recognition in their driver’s license database, then you’re likely in that database. If you have a passport, you could also be in the State Department’s face recognition database. And if you’ve ever been arrested for a crime, there’s a good chance that you’re in the FBI’s mugshot database.
IRA FLATOW: But there’s no central place on the internet that I can look up my name and see if I’m in a database someplace?
JENNIFER LYNCH: No, there’s no central place on the internet. And so I think that’s really challenging for Americans right now because based on a study out of Georgetown, we learned a couple years ago that pretty much 50% of Americans are already in a government face recognition database, but it’s hard to figure out which database you’re in, who has access to that information, and whether you can actually get yourself out of that face recognition database.
IRA FLATOW: Does law enforcement have to disclose how they’re using your data, your facial recognition, or whether you’re in the database?
JENNIFER LYNCH: Well, I think there’s a good argument that under public records laws, they should have to disclose that. We have a right to information about what data the government has on us under the Privacy Act, which is a federal law, but we should also be able to contact our local and state police departments and ask them whether they have information on us as well. And that includes face recognition.
IRA FLATOW: Do we have a sense of what could be next and how it’s used?
JENNIFER LYNCH: Well, we do have a sense of that. And so for the most part, what we’re seeing now with face recognition is that law enforcement is trying to use face recognition on static images. So that might be trying to identify somebody in a Facebook post or Instagram post or trying to identify somebody who refuses to identify themselves against the mugshot database. But what we’re seeing on the near horizon is the use of face recognition on the back end of cameras like surveillance cameras and body cameras. So I think we will start to see that very soon in cities across the country unless we see cities start to pass bans or moratoria like we’ve seen in San Francisco, Oakland, and Somerville.
IRA FLATOW: What about facial recognition used in public for commercial purposes? Like if you walk into a mall and go into a store, do they take a picture of your face, put it into a database, and possibly figure out what you’re shopping for?
JENNIFER LYNCH: Yeah, well, we don’t have any federal privacy laws right now that require stores or malls to disclose that information to you. We do know that there are companies that are selling face recognition technology to stores and malls, and these companies are claiming that stores can use it to identify shoplifters or even to identify people who are longtime customers who might be willing to pay a lot of money for that next shoe or piece of jewelry. And sorry, go ahead.
IRA FLATOW: Go ahead. I’m sorry.
JENNIFER LYNCH: Oh, I was going to say, what we don’t know is how people get into these databases. In order to have face recognition identification, you have to match somebody’s image against an existing database of photos. So how are stores deciding whom to put in a database? We don’t know that, and stores could be basing that on discriminatory practices. I think that’s even more of a threat than a government database, because we don’t have access to public records laws that can let us know whether stores have us in a database.
IRA FLATOW: So that’s one of your biggest worries, that if this thing just keeps spreading, sooner or later everybody is in a database.
JENNIFER LYNCH: Yeah, I think we’re on the cusp of that right now, and that’s why it’s so important for communities to have conversations about what they really want to have happen in their communities. We’re seeing this happen in California. I mentioned the two cities that have already banned government use of face recognition, but we also have a bill that just passed the Senate yesterday, our state Senate, that would put a moratorium on face recognition use on mobile cameras for three years.
IRA FLATOW: So how concerned should we be about what I’m going to call universal face recognition coming our way?
JENNIFER LYNCH: I think we should be very concerned. We can look at what’s happening in China right now where there are multiple cameras on every street corner, and those cameras aren’t just using face recognition. But they’re also using other kinds of technologies like gait recognition to identify people as they’re walking away from the camera, object recognition, and character recognition to recognize license plates and cars and all sorts of different technologies like that. And I think that we already have existing networks of cameras in the United States. It wouldn’t take much to add face recognition onto the back end of those cameras.
IRA FLATOW: What about the possibility of a mismatch? How accurate is this facial recognition?
JENNIFER LYNCH: Well, it really depends on several factors. Lighting and angle of view are huge, but we also know, based on some research studies, that face recognition is much less accurate at identifying people of color, women, and children or young people.
IRA FLATOW: That’s a lot of people.
JENNIFER LYNCH: That is a lot of people. And if you consider that our criminal justice system is disproportionately made up of people of color, that means that the use of face recognition in the criminal justice system would have an even more disproportionate impact on people of color.
IRA FLATOW: We got a comment on this from a listener through the Science Friday Vox Pop app, Robert from Holly Springs, Georgia.
ROBERT: Facial recognition is already in use with passports and other security devices. So why should the criminal justice system not use facial recognition?
IRA FLATOW: Yeah, you get your phone opened by facial recognition, right? You’re already being recognized there.
JENNIFER LYNCH: Well, there’s different ways to use face recognition. So if you’re using face recognition on your phone, in general, that biometric is just stored on your phone. And your phone is the only source for that and the only place where there is access to that. But there are also these vast government databases. Now the question is, should the government have access to photographs that were taken not for a criminal purpose, but for a purely civil purpose to be able to drive a car, to be able to travel outside the country? And I think there’s a strong argument that we have never allowed the government to have vast access to those databases, unrestricted access, and we shouldn’t allow that now.
IRA FLATOW: And I guess the other difference would be when you use your phone for facial recognition, you’ve given your phone permission to look at the face recognition.
JENNIFER LYNCH: That’s exactly right, yes, you have.
IRA FLATOW: I want to move on a little bit. And believe it or not, there is also emotion detection AI, technology that claims to assess facial movements and expressions and draw conclusions about whether someone is afraid or nervous or angry, which is just interesting to think about. And if you follow the money, this is a $20 billion industry. Companies like Amazon and Microsoft and IBM are developing and selling this technology to police departments, among others.
So how accurate could these emotion detection systems be? Here to fill us in on this is Lisa Feldman Barrett, a professor of psychology at Northeastern University. She joins us via Skype. Dr. Barrett, welcome to the program.
LISA FELDMAN BARRETT: Thanks for having me on your show.
IRA FLATOW: You’re welcome. Let’s start with this: how good are humans at detecting emotions from facial expressions?
LISA FELDMAN BARRETT: Well, humans don’t detect emotions. Humans infer emotions. So if you and I were in the same room right now, our brains would be processing not only our facial movements but our vocal sounds, our body postures. There’s a whole broad context that your brain takes advantage of to make a guess about what the raise of an eyebrow means, what the curl of a lip means, and so on.
IRA FLATOW: But can we tell with any surety whether someone is happy or angry or not?
LISA FELDMAN BARRETT: Well, I think it depends on how well we know each other and whether or not we come from the same context, the same cultural context. I think the research shows pretty clearly that humans are guessing. To you, it feels like you’re reading someone’s face like you would read words on a page, but that’s actually not what your brain is doing. It’s making an inference.
And if someone comes from the same culture as you and you’ve known them for a long time, you’ve learned a lot about the patterns of their facial movements and what they mean. So you can guess pretty well.
But if you and I come from a different culture, then we’re probably going to have some mistakes in our guesses, because the data are really clear that people in Western cultures like ours scowl more often than chance when they’re angry, but only about 30% of the time, which means that 70% of the time when you scowl, on average, you’re feeling something else. And you scowl a lot of times when you’re not angry, like when you’re concentrating or when you’re confused about something. So we’re using not just the face, but a whole ensemble of signals. And so face reading is really limited, I would say.
IRA FLATOW: Yeah, that’s what my question was. If we’re not good, as humans, at knowing what these expressions mean, how do we teach an AI to recognize it?
LISA FELDMAN BARRETT: Yeah, so first of all, I think it’s really important to understand what AI can do and what it can’t. I have four senior colleagues, and the five of us just published a paper where we reviewed over 1,000 scientific studies, including all the AI studies that we could get our hands on. And it’s really clear that what AI can do pretty well is detect a smile, but not what the smile means. It can detect a frown, not what a frown means, and that’s under perfect recording conditions, so when the face isn’t occluded and when the lighting conditions are good and so on.
It doesn’t make inferences about what a facial movement means. It just detects the facial movements. Humans are, I would say, reasonably good at guessing under some circumstances and really bad at other circumstances.
And scientists study, you know, what makes a good perceiver, when people make mistakes, and so on. So it’s a complicated question that’s not just about the movement of the face. It’s also about what that movement means in a psychological way.
IRA FLATOW: I’m Ira Flatow. This is Science Friday from WNYC Studios, talking about artificial intelligence and facial recognition. So why is this so appealing then? I mean, why do companies like Amazon want to get in on it, even though it doesn’t align with what we know about facial expressions?
LISA FELDMAN BARRETT: Well, I think that there’s a persistent belief that everybody around the world smiles when they’re happy and frowns when they’re sad and scowls when they’re angry. And everyone around the world can recognize smiles and frowns and scowls as expressions of emotion. And so companies think, oh, this is a really great way to be able to read someone’s emotions in an objective way and then capitalize on that for selling products or for determining guilt or whatever people want to use it for. But the fact of the matter is that, as I said, facial movements can mean many different things depending on the context, and they’re not universal. That’s one thing that we know pretty clearly, I think, at this point.
IRA FLATOW: Lisa, do you agree? Do you think these companies are aware of the issues with their technologies not being quite ready?
LISA FELDMAN BARRETT: I don’t think they’re necessarily aware that they’re making sweeping claims that are incorrect. I think a couple of companies are becoming aware based on this paper that we published and the press that it’s getting, but it’s really interesting. Some companies are super interested in trying to figure out how to do what they want to do, which is to guess at what someone’s emotion is in an accurate way. Other companies are maybe being a little more defensive and really want to defend what they have, because they’ve invested a lot of money in it.
IRA FLATOW: Jennifer, what do you think about all of this emotion detection technology?
JENNIFER LYNCH: Well, I’m really worried about it because I think, as your other guest mentioned, in the best of circumstances the technology might be able to identify a frown or a smile, but it can’t identify what that means. However, when we see this technology sold to law enforcement agencies or schools, which is where it’s being sold quite a lot now, the companies really claim that they can tell if somebody is going to do something, that they can predict it with this emotion detection technology. And it’s just not there. It’s just not accurate.
And I think it will be used to target people, especially people who are from different cultures, different races or ethnicities. And it will be used to make assumptions about people. Who is going to be the bad kid in the school who we need to pull out of school? Or who is going to be the person who is lying about whether they committed a crime? That’s how emotion detection will be used in the near future.
IRA FLATOW: Not ready for prime time, but still moving ahead. I think we’ve run out of time. I’d like to thank both of you: Lisa Feldman Barrett, professor of psychology at Northeastern University, and Jennifer Lynch, surveillance litigation director at the Electronic Frontier Foundation. Thank you both for taking time to be with us today.
JENNIFER LYNCH: Pleasure.
LISA FELDMAN BARRETT: Thank you.
IRA FLATOW: We’re going to take a break. And when we come back, we’re going to continue our AI theme here. We’re going to talk about how bias shows up in AI. I’m Ira Flatow. This is Science Friday from WNYC Studios.
This is Science Friday. I’m Ira Flatow. We’re talking about facial recognition in artificial intelligence, and how it is being used to recognize people, and sometimes not recognize them so well. We’ve got some interesting tweets coming in.
Thomas on Twitter says the fact that the world’s tech epicenter of San Francisco has banned the technology should be a major red flag. Somebody else, Ryan, says I think it should be similar to when law enforcement wants to enter your home. Law enforcement should need a warrant to use facial recognition. We’re going to move on now to a broader question: are machines unbiased and neutral? We think that they might be. They’re supposed to be.
Machines are made and programmed by people. AI is trained on whatever data we put into it, so the AI used by police officers and judges may not be as neutral as we think it is. Alex from Madison, Wisconsin weighed in on this through the Science Friday Vox Pop app this week.
ALEX: I’m really uncomfortable with our criminal justice system using facial recognition software because that software was created by humans, and we have a lot of implicit bias. Some of us are, you know, overtly racist. Some of us are accidentally racist. And I can’t imagine that our technology doesn’t reflect that at least somewhat.
IRA FLATOW: So what do you think? Let us know what you think on the Science Friday Vox Pop app or by calling 844-724-8255, 844-SCI-TALK, or of course, you can tweet us at SciFri. My next guest is here to tell us more about how bias shows up in artificial intelligence and what we can do about it. Ruha Benjamin is a professor of African-American studies at Princeton. Dr. Benjamin, welcome to Science Friday.
RUHA BENJAMIN: Thank you for having me.
IRA FLATOW: We just talked about facial recognition. But what other AI is being used in the criminal justice system?
RUHA BENJAMIN: So in addition to facial recognition, we have predictive policing, which identifies particular areas and neighborhoods where more officers should be sent and greater attention paid. We have risk assessment software of all kinds deciding people’s fate in pretrial detention, sentencing, and parole. Basically, in every arena where decisions are being made, there are programs underway to automate those decisions, to reduce the context in which the criminal justice system operates to a single score that can be used to decide people’s fate.
IRA FLATOW: So is our listener, Alex, right? Is bias built into AI?
RUHA BENJAMIN: Absolutely. I mean, as you said at the top of the show, human beings are creating this, and not all human beings but a very small sliver of humanity are coding values, assumptions, and decisions into the software. And the power, and the danger, of the system is that we assume it’s neutral and objective. And it has this great power embedded in it.
IRA FLATOW: And in what ways does the bias show up? How does the bias get into the machine?
RUHA BENJAMIN: So at a number of different levels. For starters, you have to train algorithms and software how to make decisions. And what do you use to train? Typically, you use historic data, the way that we made decisions in the past. Let’s say sentencing decisions or decisions about what neighborhoods to patrol: all of that data is then the input to these systems to train algorithms how to identify patterns and therefore make future decisions.
So if the way that decisions were made about what neighborhoods to patrol was based in part on the socioeconomic class of that neighborhood, the racial composition of that neighborhood, all of that data is the input to these systems. And then what’s spit out is something that on the surface looks objective but, in fact, is a reduction of those past decisions into scores, into projections about where police should go or how judges should make decisions.
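A toy illustration of the feedback loop being described here: the numbers, neighborhoods, and “model” below are fabricated, but they show how arrest counts shaped by past patrol decisions, fed back in as training data, come out looking like an objective measure of where to send police.

```python
# Fabricated example: neighborhoods A and B have the same underlying offense
# rate, but A was patrolled three times as heavily, so it has three times the
# recorded arrests. A naive model trained on arrest counts then "learns" that
# A is riskier and routes even more patrols there.

historical_arrests = {"A": 120, "B": 40}   # recorded arrests (shaped by patrol levels)
historical_patrols = {"A": 3.0, "B": 1.0}  # relative patrol intensity (rarely recorded)

def naive_risk(neighborhood: str) -> float:
    """Risk proxy based on raw arrest counts, conflating enforcement with behavior."""
    return historical_arrests[neighborhood]

def adjusted_rate(neighborhood: str) -> float:
    """Arrests per unit of patrol: with this (usually missing) data, A and B look identical."""
    return historical_arrests[neighborhood] / historical_patrols[neighborhood]

def allocate_patrols(total: int) -> dict:
    risks = {n: naive_risk(n) for n in historical_arrests}
    total_risk = sum(risks.values())
    return {n: round(total * r / total_risk) for n, r in risks.items()}

print(allocate_patrols(10))                    # {'A': 8, 'B': 2}
print(adjusted_rate("A"), adjusted_rate("B"))  # 40.0 40.0
```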
IRA FLATOW: Yeah, but we are only human. I mean, we can’t escape our own history if we’re the ones programming the AI.
RUHA BENJAMIN: Absolutely. And so what all of these systems have in common is this: they are trying to identify, project, and predict risk, the risk of individuals. And so my interest is not in looking at the risk of individuals but in the institutions that produce risk. So, yes, we have this history, but one of the things that we can begin to rethink is, where is the locus of risk? Many of the policies, many of the institutional biases are now being put on the shoulders of individuals, and then we’re giving individuals a score about their relative riskiness. And I think we have to zoom the lens back on the institutions that create a context that makes certain kinds of vulnerabilities possible.
IRA FLATOW: You have called this new technology part of the new Jim code. What do you mean by that?
RUHA BENJAMIN: So here I am trying to get us to really reckon with this history that we’re talking about. Again, we like to think of technology as asocial, apolitical, ahistoric. By calling it the new Jim Code, which builds on Michelle Alexander’s notion of the new Jim Crow, which itself evokes the history of Jim Crow white supremacist institutions in the United States, I’m saying that history of segregation, of hierarchy, of oppression is the input to our new technical systems. And it gets imagined as objective when it’s really coded bias in these systems. So let’s talk about it with this historical context in mind.
IRA FLATOW: So how do we fix it then?
RUHA BENJAMIN: So there’s a number of ways that we can begin to do that, and things are underway. We have legislation, as one of your callers mentioned, in terms of San Francisco and other municipalities that are banning certain forms of facial recognition and other automated carceral tools. We have lawyers who are thinking about how to litigate algorithms.
So when a decision is made about you using an algorithm, should you have the right to take that algorithm to court or the company that produced it if you feel that the decision was wrong? Another area is organizing, even among tech workers working together, to actually challenge the companies that they work for under the hashtag techwontbuildit.
And so you have tech workers saying, no, even if you employ me, if you tell me to build something for ICE or for the military or for police, I’m going to resist. And then we have a broader campaign of education. I mean, everyday people need to start questioning these tech fixes and holding our representatives accountable for adopting them.
IRA FLATOW: I want to continue with this thread by bringing on someone else, Sharad Goel, who is a professor of management science and engineering at Stanford University. He joins us now via Skype. Welcome to Science Friday.
SHARAD GOEL: Hi, thanks for having me.
IRA FLATOW: You are doing work with California on improving risk assessment. How are you going to do that?
SHARAD GOEL: It’s a complicated question. I mean, first, I want to say that I agree with a lot of what’s already been said, that science and technology are human endeavors, and almost certainly everything that we’re doing is affected by this complex history that we’re contending with. At the same time, my feeling is that, if it’s done well, there is a place for achieving more equitable outcomes by using technology to drive those decisions. So let me give you one example of something that we’re doing in San Francisco.
So when somebody is arrested, a district attorney’s office has to make a decision about whether or not to go forward, whether or not to charge that individual with a crime. And traditionally, how it’s been done is by human experts reading the police narratives and then making this decision. And I think there are lots of good reasons to do that. At the same time, there are implicit human biases that can taint those decisions.
And so what we’re doing is we’ve built what we’re calling a blind charging platform that attempts to reduce the effect of race on those decisions: removing explicit mentions of race, removing people’s names from those police reports, removing hairstyle, location, and other indicators that we don’t really think are necessary to make a more objective decision about whom to charge and whom not to. We’re using these technical tools, in a sense, to police justice itself, that is, the human decision makers.
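As a purely illustrative sketch of what this kind of redaction could look like mechanically: the word lists and patterns below are placeholders, not the actual San Francisco platform, which has to catch much subtler race proxies (names, neighborhoods, hairstyles) than a keyword list can.

```python
import re

# Hypothetical sketch of redacting race proxies from a police narrative before
# a charging reviewer reads it. Real systems need far more than keyword lists;
# the names and locations below are placeholders for illustration only.

RACE_PATTERN = r"\b(white|black|hispanic|latino|latina|asian)\b"
NAME_PATTERNS = [r"\bJohn Doe\b", r"\bJane Roe\b"]             # placeholder names
LOCATION_PATTERNS = [r"\bMission District\b", r"\b24th St\b"]  # placeholder locations

def redact_narrative(text: str) -> str:
    """Mask direct mentions of race, known names, and locations."""
    text = re.sub(RACE_PATTERN, "[RACE]", text, flags=re.IGNORECASE)
    for pattern in NAME_PATTERNS:
        text = re.sub(pattern, "[NAME]", text, flags=re.IGNORECASE)
    for pattern in LOCATION_PATTERNS:
        text = re.sub(pattern, "[LOCATION]", text, flags=re.IGNORECASE)
    return text

print(redact_narrative("Officer observed John Doe, a Black male, near 24th St."))
# -> Officer observed [NAME], a [RACE] male, near [LOCATION].
```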
IRA FLATOW: But it’s going to be the judges then, though, who have the final decision on these things.
SHARAD GOEL: Well, in this particular case, it’s the district attorney’s office, not a judge, who’s making that decision. But you’re absolutely right. In some of these circumstances, when someone is trying to decide, when a judge has to decide whether or not an individual is released, whether or not they’re detained on bail, or whether they have other sorts of obligations. And you’re right. There is this interplay between the recommendation that an algorithm can provide and what that human expertise might say.
IRA FLATOW: Yeah, can you actually test these new algorithms you’re talking about to see if they are biased or unbiased?
SHARAD GOEL: Well, in a sense, everything is biased, and so, in essence, we don’t need a test. That’s just the nature not only of algorithms but of human decision-making: these are going to have consequences that we’re not happy with, and I think in some cases they’re just bad, and we have to try to fix them. So in my mind, I don’t try to paint this in a binary of biased or unbiased, or good or bad, but: is it better? Is it moving us in the right direction?
IRA FLATOW: And how do you do that?
SHARAD GOEL: So in some of these cases we can look at, for example, these risk assessment algorithms that are being used now across the country, to see whether or not they’re resulting in more people being released into the community, which I think many people would agree is a good thing, and, at the same time, whether there are public safety costs to that increased release.
IRA FLATOW: Mm-hmm. And you see this as a tool, then, for improving criminal justice?
SHARAD GOEL: Yeah, I think it is a tool in many sectors of society, and criminal justice is one of the areas that I’m particularly interested in. And I do see this as a tool for bringing equity.
And here I again want to emphasize that my view, and I think that of many people in this area, is that this isn’t a panacea, and we can’t treat it that way. We have to say the criminal justice system is broken in so many ways, there are so many systemic problems, that, honestly, algorithms are not going to be the right fix for those. But in some of these narrower instances, I do think that it can act as a tool to improve outcomes.
IRA FLATOW: Let me ask both of you, Ruha and Sharad, what then is a fair algorithm? What does it mean for an algorithm to be fair, in this case?
RUHA BENJAMIN: I’m interested in how to employ technology tools, algorithms, less for fairness and more for justice. And so what that would mean would actually be using these tools to expose the work of institutions and of those who wield power through various kinds of carceral techniques. That would mean looking at the institutions, for example, that create instability, thinking about the way that the criminal justice system requires not more tools and investment and resources but for us to decarcerate, and thinking about how to turn the lens onto those who are making decisions about the most vulnerable among us.
IRA FLATOW: Is it possible to challenge an algorithm in court for being biased? Do you see that happening also?
RUHA BENJAMIN: There are a group of lawyers that are working to build that capacity. And there are a number of cases that I’d be happy to tweet out later of individual lawyers and groups that are working to litigate algorithms in order to make them more transparent and just.
IRA FLATOW: I’m Ira Flatow. This is Science Friday from WNYC Studios, talking about algorithms, artificial intelligence, and their uses in our legal system. Are you positive, are you hopeful, that this kind of better fairness, or more justice, as you say, is actually going to happen?
SHARAD GOEL: No, I think we’re at that point in history where it’s not totally clear. With many emerging technologies, and face recognition is one of the examples we’ve been talking about, I think these can be a force for good, and I think they can also be abused. And we’re at that tipping point where, if we don’t have the right regulation to make sure that these things are accountable, that we have transparency, that we can understand what’s going on and accordingly take action, these can cause more harm than good.
But I’m optimistic that we can have this thoughtful discussion and at least set up a system where we can get some benefit and at least minimize some of the cost. But I don’t think it’s a done deal. And I would very much worry that these will be widely abused if not regulated appropriately.
RUHA BENJAMIN: What I’m really positive about is a growing movement of communities in different locales and nationwide who are creating greater critical consciousness and speaking back to a lot of these automated systems. So we’re talking about the Stop LAPD Spying Coalition in Los Angeles, the Algorithmic Justice League, Tech Won’t Build It, the Our Data Bodies Project, and a number of other organizations and initiatives that are not just paranoid about technology and what’s happening, but are building power in communities to be able to push back and reimagine what a community would look like without these tech fixes.
IRA FLATOW: So do you think this is going to be a grassroots movement? In other words, community by community, state by state, and not a national movement? Or, on the other hand, maybe other states are looking at what you’re doing.
RUHA BENJAMIN: I mean, it’s really both. In almost every locale, you can find some organization that’s really working diligently around tech justice, but it’s also a growing national movement, a national consciousness, thinking about who we are as citizens, not as users of technology, because users get used, and about our obligation, our responsibility, as communities and as people, to take the power back into our own hands.
IRA FLATOW: How should we talk about it? Is our vocabulary the right way? Do we need to change the way we talk about this?
RUHA BENJAMIN: I’m a little fed up with “bias” and “fairness” as watered-down versions of what we’re really talking about: systems of oppression, institutionalized forms of racism and sexism. The bias talk both individualizes it and makes it seem like a level playing field, like we all have bias, when, in fact, power is being monopolized and exercised in very patterned ways. And we don’t all do it to the same extent.
IRA FLATOW: Well, we’ve run out of time. Yes, quickly.
SHARAD GOEL: I think I agree with that. I think that this focus on bias and fairness misses the point that these are complex policy decisions that we’re trying to make, and they can’t be distilled down to these simple binaries.
IRA FLATOW: I’m glad I allowed you to say that. It sounds like a very interesting point. I want to thank both of you for taking time to be with us today. Sharad Goel is professor of management science and engineering at Stanford and Ruha Benjamin, professor of African-American studies at Princeton. Again, thank you both for taking time to be with us today.
SHARAD GOEL: Thank you for having me.
IRA FLATOW: You’re welcome. Before we go, Science Friday is headed to North Carolina next week on Wednesday, September 18 for an evening highlighting science in the Tar Heel State. We’re partnering with WUNC to screen three new films shot and produced by Science Friday in North Carolina, followed by a chance to ask the featured scientists more about their work.
So don’t miss out. Go to sciencefriday.com/carolinafilm to get your tickets. That’s sciencefriday.com/carolinafilm.
Charles Bergquist is our director, senior producer Christopher Intagliata. Our producers are Alexa Lim, Christie Taylor, and Katie Feather. And today, we had help from Danya AbdelHameid– sorry, Danya. Our intern is Camille Petersen, and she produced that fantastic segment you just heard about AI and the criminal justice system.
And we bid her a fond farewell this week. And I know any place she goes, they’ll be happy and lucky to have her there. We had technical engineering help today from Rich Kim and Kevin Wolf. BJ Leiderman composed our theme music.
If you want to hear your voice on Science Friday, download the Science Friday Vox Pop app. This week, we’re asking you, whose responsibility is it to ensure clothing is sustainable– clothing brands, textile manufacturers, or the consumers? I’m Ira Flatow in New York.