Can You Hold An Algorithm Accountable?
11:54 minutes
We rely on computer algorithms for so many things. They help us figure out what music we like and who we decide to date. They can influence who gets accepted into college and who gets fired from their job. And they’re at the center of the future of driverless cars. That’s a lot of power for a computer algorithm to hold. And if something goes wrong, the results can range from mildly annoying to disastrous. But how do you hold an algorithm accountable? Ben Shneiderman, a professor of computer science at the University of Maryland, has proposed a National Algorithms Safety Board, which would provide oversight for high-stakes algorithms. He joins Ira to discuss what this type of regulation would mean.
Ben Shneiderman is a professor of Computer Science at the University of Maryland in College Park, Maryland.
IRA FLATOW: This is Science Friday. I’m Ira Flatow. We rely on computer algorithms, if you think about it, for just about everything– to help us figure out what music we like, who we should date. They can influence who gets accepted into college, who gets fired from their job or even hired, and of course, they’re at the center of the future of driverless cars. Now that’s a lot of power for a computer algorithm to hold, and if something goes wrong? Well, the result can range from mildly annoying to utter disaster.
Take for example, the Air France flight that was brought down by an autopilot failure in 2009. Was an algorithm legally responsible for such a tragedy? Perhaps not, but as my next guest says, that doesn’t mean nothing can be done about it. He’s proposed a regulatory body called a National Algorithms Safety Board, which would provide oversight for high-stakes algorithms, and he joins me today to discuss it. Ben Shneiderman is Professor of Computer Science at the University of Maryland. Welcome to Science Friday.
BEN SHNEIDERMAN: Glad to be here, Ira. Thank you.
IRA FLATOW: Where do things stand right now– if an algorithm fails and does some sort of harm, can the developers be held responsible for that technology the way that machine manufacturers are?
BEN SHNEIDERMAN: That’s what we’d like to see. That’s the right idea, Ira, and yet, most software contracts hold harmless, which means that the developers are not liable or responsible, or the software is delivered as is. And those terms and contracts mean that there is not sufficiently clear liability for those failures, and I think that’s what’s going to have to change.
And I’m looking for those changes in contracts, but also changes in the way algorithms that are widely used and have potential for damage could be reviewed before they’re implemented, like zoning boards review house plans before they’re built. And then when you go build the house, the inspector comes by and gives you a certificate of occupancy. It’s a common notion to have some kind of independent oversight for large projects where safety is involved, where you want to make sure the builder uses safe materials to prevent fire, that it’s strong enough to support the weight that’s necessary.
So there are building codes, and we’ve come over the years to agree on the ways in which buildings should be built. And I think we need to begin to move down that path. It will take a decade or more until those kinds of rules are developed for different industries, but I think that’s where we want to go.
It’s time to move out of the adolescent phase of software engineering. After 50 years of this adolescence, where “hold harmless” remains the tradition, it’s time to move on and take responsibility. So I’m after increasing the levels of automation, but ensuring, as much as possible, that safety rules are adhered to.
IRA FLATOW: When you say take responsibility, are you saying that people should be given more responsibility for anything going wrong, rather than the hardware?
BEN SHNEIDERMAN: Not just more responsibility, but clarify that all responsibility inheres in the humans, whether it’s the developers of the programs and the software and the algorithms, or the company who sold it, or those who operate it. There’s a chain, a complex chain. But we know– as you said at the start about manufacturing or aircraft– we know that there are designers, producers, and operators, and when a disaster happens, we know how to look for that. So existing liability law is probably good enough, my lawyer friends tell me, but we need to put that in place.
And so the first part of it is the planning oversight, but when there are disasters, we want to have the kind of retrospective analysis of these disasters like the NTSB, the National Transportation Safety Board, does. And that way, there would be public and independent reviews that could be read by everyone with recommendations for improvements.
IRA FLATOW: Well, where would you put responsibility for software that gets hacked? I mean, more and more– we hear stories now about even hacking the new driverless car software. We know about hacking electoral systems. If something gets hacked, who’s responsible for that?
BEN SHNEIDERMAN: Pretty terrible. Pretty terrible things. Well, the person who did the hacking might be one candidate, but those who made the software with insufficient protection also may have some liability. So we need to set some standards, so we know what is expected of those who developed the software. And software developers would like this as well, to know that they’re protected if they’ve done the right thing.
I think the insurance companies will be our friends in this process, as they have been for buildings and other major constructions, and they will push for safer conditions for all of us to use these algorithms. I want to ensure human control while increasing the level of automation. That’s how I think we’re going to get to quality.
IRA FLATOW: Are we getting to the point where we’re giving too much credit to artificial intelligence or robots as being our equals and in control of what goes on? I know you–
BEN SHNEIDERMAN: Bravo. [LAUGHS] You bet. You bet. That’s been a long-term pursuit of mine, that I do think the ideas that are suggested, that machines are becoming our partners and our collaborators, and we’re working together with them as equals, is preposterous and, let me say, misleading and dangerous.
And for me, computers are no more intelligent than a wooden pencil. When we pick up the pencil and we use it, we’re responsible for what we do with the pencil. Computers are more powerful, more complex, but still, human responsibility and human creativity is what drives the world forward.
IRA FLATOW: Would that mean maybe, perhaps, going backwards a little, taking a step back from automation? Let me think out loud. For example, take the voting system. We have electronic voting. To make it completely un-hackable, or at least more un-hackable, why not just go back to paper, where everybody uses paper and there’s no way to get into the machine?
BEN SHNEIDERMAN: Sure. The studies about electronic voting machines do favor some kind of paper trail. But the advantages of electronic systems are strong, and that’s why they’ve propagated. But not all electronic systems get designed properly on the first go-around.
Let’s take the case of airbags. They save about 2,500 lives a year, but in the beginning, they inadvertently killed about 100 children and others a year. And the design was insufficient, and so the improved designs made them safer.
And I think that’s what we have to learn– how to make safe technologies that can be used widely. Certainly, for the case you mentioned of driverless cars, that’s going to be a major issue, and we need to ensure that those who promote self-driving cars are also going to take responsibility for the times when they fail.
IRA FLATOW: Wait–
BEN SHNEIDERMAN: I think inevitably, they can be safer than the current systems, because these increased levels of automation are what I’m advocating, while ensuring human control.
IRA FLATOW: Well, your National Algorithms Safety Board, would it have teeth to enforce or make rulings about the safety of algorithms that come before it?
BEN SHNEIDERMAN: Exactly the right question. So there’s a variety of things– I’ve studied independent oversight methods, and you want to have sufficient teeth to carry out investigations of failures– for example, possibly subpoena power. You want to have open and transparent processes by which they conduct their investigations, and then you want public reports to present this in an open way that others can critique or discuss it. The question, then, is whether their recommendations are enforceable– in what ways are they enforceable.
So the NTSB, the National Transportation Safety Board, provides one model. It’s wisely been set up by Congress to be independent of any government agency. It’s funded separately because often the NTSB investigates the work of different government agencies. And so they’ve become a respected source for doing these investigations and presenting the results and then making recommendations about the maintenance of aircraft, their improved design, and so on. So we could learn a lot if these processes were made more open.
IRA FLATOW: So are the tech developers on board with this idea, too?
BEN SHNEIDERMAN: It’s a mixed story. We’re still in the early days of this, but you can see that there are major companies that recognize the problem. Just a few months ago, the five major companies involved in the Partnership on AI started that group to address these issues in a much more substantive way. I’m encouraged by that.
I’m encouraged by the fact that, starting in November, a number of conferences in the artificial intelligence and machine learning world began to address this issue of explainable algorithms, explainable AI. I’m encouraged that DARPA, the Defense Advanced Research Projects Agency, has started a program on explainable AI with 11 projects going forward. So there are good indicators that this issue has risen to public awareness and that people in the technology field, public policy, and elsewhere are putting forward proposals, developing technical solutions.
Just look at the simple idea of the flight data recorder, which we expect to record the details of an aircraft’s flight. And yet, we have no standards or ways in which a computer algorithm would log its usage. And we don’t have the tools yet that would allow you to examine a million or a billion lines of such a log in a way that would help reveal the patterns of what happened and what went wrong.
IRA FLATOW: Let me take a quick question because we’re running out of time, and I want to go to David in South Bend. Quickly, David.
DAVID: My question is instead of a regulatory commission, what about a society that would govern the standards of the industry? Like, I’m a nurse, so I go by the AORN standards of practice. So why wouldn’t that type of thing be useful for them? Like, they would get a certification saying that they meet the qualifications on risk analysis and–
IRA FLATOW: OK, let–
[INTERPOSING VOICES]
Yes, go ahead. I’ve got about a minute.
BEN SHNEIDERMAN: That’s a reasonable proposal as well, but I just want to mention that the NTSB is not a regulatory agency. Its function currently is to investigate failures, and that alone seems like one reasonable route. Now, what I advocated earlier, the planning oversight, that is like the zoning boards that we’re so familiar with. That is usually interpreted as a regulatory approach. So that gets you closer to the idea of regulation, but remember, the NTSB and similar organizations could be valuable just for their investigative and retrospective reviews.
IRA FLATOW: All right, Ben. Thank you. I’m sorry we’ve run out of time. Very interesting. Ben Shneiderman, Professor of Computer Science, University of Maryland.
Copyright © 2017 Science Friday Initiative. All rights reserved. Science Friday transcripts are produced on a tight deadline by 3Play Media. Fidelity to the original aired/published audio or video file might vary, and text might be updated or amended in the future. For the authoritative record of Science Friday’s programming, please visit the original aired/published recording. For terms of use and more information, visit our policies pages at http://www.sciencefriday.com/about/policies/