For Better Or Weirder: How AI Fails
AI may be short for “artificial intelligence,” but in many ways, our automated programs can be surprisingly dumb. For example, you can think you’re training a neural net to recognize sheep, but actually it’s just learning what a green grassy hill looks like. Or teaching it the difference between healthy skin and cancer—but actually just teaching it that tumors always have a ruler next to them. And if you ask a robot to navigate a space without touching the walls, sometimes it just stays still in one place.
AI researcher Janelle Shane is the author of a new book about the quirky, but also serious, errors that riddle AI, which, at the end of the day, can only do what we tell it to.
Plus, she unveils some new, AI-generated show ideas for Science Friday.
Janelle Shane is an artificial intelligence researcher based in Boulder, Colorado, and the author of You Look Like A Thing And I Love You (Voracious, 2019).
IRA FLATOW: This is Science Friday. I’m Ira Flatow. Artificial intelligence, AI– it’s a fact of life, right? Facial recognition algorithms that recognize who’s in your photos, email filters that keep your inbox relatively spam-free, and even your airplane’s autopilot, they’re all powered by forms of AI.
But AI is not omnipotent. In fact, artificial intelligence has severe shortcomings. It can only really do exactly what we tell it to, even if what we’re telling it to do is to learn and evolve. And this has consequences for everything, from how well AI can identify tumors, to whether self-driving cars decide to stop in time to avoid hitting a pedestrian.
Dr. Janelle Shane is an artificial intelligence researcher, blogger, and author of the new book, You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It’s Making the World a Weirder Place.
She spends her time teaching neural nets to write recipes, tell knock-knock jokes, and even flirt. Yes, these are all harder than they sound. And she’s here to talk about how AI works, and why things go wrong, and how AI is making our world weirder. Welcome, Dr. Shane.
JANELLE SHANE: Hey. Great to be on the show.
IRA FLATOW: Yeah, thank you. You know, I read a lot of books, and I come across a lot of titles. But your title, You Look Like a Thing and I Love You, how did that come about?
JANELLE SHANE: Well, this was one of these experiments where I was trying to get one of these artificial intelligence algorithms to imitate pickup lines. You know, these kinds of cheesy one-liners that nobody uses in real life, but that you’re supposed to use to pick up strangers.
And so I trained this computer to try to imitate these. I gave it a whole bunch of examples of existing pickup lines. And I had it try to do its best to produce more of them. And as it turns out, the algorithm wasn’t quite capable of latching on to kind of the gross innuendos or the bad puns or anything like that. And instead, what came out was these sort of sweet, direct, uncomprehending lines. And the one that was my very favorite was, you look like a thing, and I love you.
IRA FLATOW: Well, give me an idea of the process. How do you begin? How do you teach an algorithm to flirt?
JANELLE SHANE: Yeah. So you need to give it examples, because it’s going to start with no idea of what English even is, or what pickup lines are. It has no clue of any of that.
So I have to give it a whole bunch of examples. As far as it knows, this could be a grocery list in Finnish. This could be a cookbook recipe. It could be anything.
And so then it has to look at this text that I gave it, and look at the patterns, analyze the patterns, and try to predict which letters come after which other letters.
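A minimal sketch of that idea in Python: tally which character tends to follow which in the training text, then sample from those tallies to generate new lines in the same style. The three training lines below are invented stand-ins, not Shane’s actual dataset.

```python
# Tally which letter tends to follow which, then sample from those
# counts to generate new text in the same style. Toy training data.
import random
from collections import Counter, defaultdict

lines = [
    "are you a magnet? because i am attracted to you",
    "do you have a map? i keep getting lost in your eyes",
    "is your name wifi? because i am feeling a connection",
]

follows = defaultdict(Counter)
for line in lines:
    for a, b in zip(line, line[1:]):
        follows[a][b] += 1          # how often does b come after a?

def generate(seed="a", length=40):
    out = [seed]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        chars, counts = zip(*options.items())
        out.append(random.choices(chars, weights=counts)[0])
    return "".join(out)

print(generate())
```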
IRA FLATOW: So then do you have to tell it, oop, that’s wrong. Try again. I mean, this is a long learning process?
JANELLE SHANE: It does this process without any direct help from me. Basically, it’s doing a lot of trial and error just by itself, looking at these examples I gave it, making guesses and saying, OK, how close did I get?
And so it’s pretty hands-off for me, actually. I let it start– I give it the data. I let it start learning. And I come back a while later to see what it’s doing.
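That hands-off trial and error is, at bottom, a loop: guess the next letter, score how close the guess was, adjust, and repeat. A toy sketch, assuming a one-sentence dataset and a bare-bones bigram model trained by gradient descent rather than a full neural network:

```python
# A guess-score-adjust loop: the model predicts the next letter,
# measures how wrong it was (the loss), and nudges its weights.
# The one-line "dataset" and all settings are toy values.
import numpy as np

text = "you look like a thing and i love you"
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}
V = len(chars)

W = np.zeros((V, V))  # W[i, j]: score for letter j following letter i

for step in range(500):
    loss = 0.0
    grad = np.zeros_like(W)
    for a, b in zip(text, text[1:]):
        i, j = idx[a], idx[b]
        p = np.exp(W[i] - W[i].max())
        p /= p.sum()                    # softmax: the model's guess
        loss -= np.log(p[j])            # "how close did I get?"
        p[j] -= 1.0
        grad[i] += p                    # cross-entropy gradient
    W -= 0.5 * grad / (len(text) - 1)   # adjust, then try again
    if step % 100 == 0:
        print(f"step {step}: average loss {loss / (len(text) - 1):.3f}")
```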
IRA FLATOW: So it will stop and come up with something it’s figured out that might be correct. And you look at it and say, well, that’s not really what it is?
JANELLE SHANE: Yeah. So I can stop it at any point and look at how it’s doing and say, oh, I better let it learn for a little longer. Or I could say, oh, I made a mistake and gave it the wrong data set. I accidentally gave it the cookie recipe data set again, and it’s doing cookies because it doesn’t know I want pickup lines now.
IRA FLATOW: And you also say in your book that it doesn’t have much of a memory, a short memory.
JANELLE SHANE: Yeah. One real hallmark of computer-generated text, actually, is that it loses its train of thought a lot. So even the most sophisticated text-generating algorithms we have today, they can do full sentences now, which they weren’t able to do a while ago.
But still, if you look at them trying to write a story, it’s almost like looking at a story in a dream. It kind of forgets where you are. Suddenly you’re not in an airplane anymore; you’re in a shop, and you don’t know exactly how. And there’s a new character in the room, and now we’re talking about something different. So yeah, it is kind of a surreal experience reading these stories that these things generate.
IRA FLATOW: Yeah. You had an example of that with writing recipes. That didn’t go so well, did it?
JANELLE SHANE: Yeah. You know, it’ll be asking you to take a pie out of the oven that you never put in there in the first place. And it’ll ask you to use ingredients that it definitely didn’t call for. Or the title of the recipe may say, oh, yes, we’re making cookies. And then by the end, you’re making soup for some reason.
IRA FLATOW: And your big thesis is that AI makes the world a weirder place. So where exactly does the weirdness come from?
JANELLE SHANE: Well, this is a thing that I really like looking at. There are a lot of things that AIs do that really reveal how very different they are from human-level intelligence. And I find that particularly interesting.
We often forget that, just because the AIs in our science fiction tend to be pretty human-like. But what we have today in the real world is a lot simpler; it has maybe the rough computing power of an earthworm, something like that. And because it doesn’t have the kind of understanding and context that we have, it will do weird things.
IRA FLATOW: And that brings up the question in my mind: if it’s doing weird things, why are we putting so much effort into depending on it?
JANELLE SHANE: Well, it is really successful for a lot of problems we’ve had trouble solving before. So one of the first big commercial successes was translation, like language translation.
So Google Translate rolled out this AI-powered machine translation. And it was this huge leap in the ability of algorithms to automatically translate text. And it’s still not perfect. There are definitely glitches. But it’s pretty functional for a lot of basic purposes.
IRA FLATOW: Our number, 844-724-8255 if you’d like to call in. 844-724-8255, talking about artificial intelligence with Janelle Shane, author of You Look Like a Thing and I Love You.
There’s a serious side to it. As you say, it is successful at doing certain things. For example, let’s talk about self-driving cars. An investigation recently revealed that Uber’s self-driving car hit and killed a pedestrian last year because the software wasn’t told that pedestrians could be outside the crosswalks. That’s a pretty big limitation and a mistake.
JANELLE SHANE: Yeah. And this is one of these things. The AIs do exactly what we tell them to do. So if we tell them that you’re not ever going to find pedestrians outside of a crosswalk, well, that’s a heck of a thing to tell a self-driving car. But these things don’t have the common sense that a human might. And they don’t realize that this is a bad directive, that they should question it, or that maybe this thing could be a pedestrian after all.
And then there are other mistakes layered on top of that, too. Like, even if you don’t know what something is, maybe you should brake.
IRA FLATOW: Yeah. Well, that goes back to my earliest days of my computing life, where if you don’t put in good information, you’re not going to get good results. They used to call it garbage in, garbage out. And you talk about that a little bit.
JANELLE SHANE: Oh, yeah. That absolutely still holds true, almost even more so with these kinds of algorithms, because they just imitate what they’re given, or do exactly what they’re told.
IRA FLATOW: Mm-hmm. And where do you see the frontier of artificial intelligence going now? What is the hardest nut to crack here?
JANELLE SHANE: Gosh. These hard nuts to crack end up being in unexpected places. One thing I see is that people are generally really bad at guessing what is a broad project, and what is, therefore, a really tricky one to give to AI.
So things like content moderation. Human language is really, really complex, and sometimes we don’t realize that until we try to get a computer to understand what we write, and see all the different ways in which it trips up. So yeah.
IRA FLATOW: Well, I was thinking, one of the interesting areas where it has been successful has been in medicine, in diagnosing diseases. There’s the famous case of looking at slides of melanoma, and the computer doing so much better than human doctors because it was told how to look for them.
JANELLE SHANE: Yeah. There’s a lot of promise there, and there’s a lot of people working on AI for medicine. There’s a lot of really repetitive tasks that we’d like to be able to automate, or we’d like to be able to at least use AI as a second layer of examination on some of these slides or these images. So yeah. There’s a lot of promise in medicine for sure.
IRA FLATOW: Let me see if I can get a quick call in before we go, to Tom in Milwaukee. Hi, Tom.
TOM: Hi.
IRA FLATOW: Go ahead. Sure.
JANELLE SHANE: Hi.
TOM: Hi. Just from your descriptions of how the computer is programmed, how it’s trying things, it sounds sort of like an analog to biological evolution, just trying things to see if they work. Do you see that? Or is that not applicable?
JANELLE SHANE: Oh, no, you are absolutely right. It is totally applicable. And in fact, a lot of these AI systems, their method of trial and error, and even their structure, draws direct inspiration from biology.
So computer versions of evolution, evolutionary algorithms, genetic programming– this is one big area of artificial intelligence, and one that’s had some really nice successes, too. So absolutely.
And then it also gets really weird, in the same way biology gets weird. In biology you get things that look like bird poop, or that eat hydrogen sulfide, the weirdest things. And with these artificial organisms, really simple ones being trained in simulation, they’ll learn to do things like harvest energy from the simulation’s math errors. It is so interesting, the sources of energy they find.
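A minimal sketch of the evolutionary-algorithm idea, using an invented toy task (evolving a random string toward a target phrase) rather than the simulated organisms Shane describes: random mutation plus survival of the fittest, repeated over generations.

```python
# Toy evolutionary algorithm: mutate, score, keep the fittest, repeat.
# The target phrase and all parameters are invented for illustration.
import random
import string

TARGET = "you look like a thing and i love you"
ALPHABET = string.ascii_lowercase + " "

def fitness(candidate):
    # Higher is better: how many characters already match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    # Each character has a small chance of being replaced at random.
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else c
        for c in candidate
    )

population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(100)]

for generation in range(2000):
    population.sort(key=fitness, reverse=True)
    best = population[0]
    if best == TARGET:
        break
    survivors = population[:20]                        # selection
    population = [best] + [mutate(random.choice(survivors))
                           for _ in range(99)]         # elitism + offspring

print(f"generation {generation}: {best}")
```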
IRA FLATOW: All right. We’re going to take a break, so hang around, Janelle. We’ll talk to you after the break. Janelle Shane, artificial intelligence researcher, author of You Look Like a Thing and I Love You. It’s a really interesting book. I love your little hand-drawn pictures in there, too, Janelle. They’re kind of fun.
Stay with us. We’ll be right back taking your calls. 844-724-8255. You can also tweet us @scifri talking about AI weirdness. Stay with us.
This is Science Friday. I’m Ira Flatow. For better or for weirder, artificial intelligence is changing our world and the way we live in it. And we’ve been talking about some of the strange things that happen when you ask AI to write a recipe or tell jokes, even flirt. Or you just don’t give it good instructions.
With my guest Janelle Shane, author of You Look Like a Thing and I Love You. And while we were preparing for our interview, we asked you, Janelle, to do something for us. You had a neural net generate some new Science Friday show ideas from a list of past topics.
Now, these topics are generally two or three word phrases that briefly summarize what we talked about. For example, we give the topics names like news roundup or volcanoes or insect extinction. And my producer sent you a couple of hundred of these. What did you do with them next?
JANELLE SHANE: Well, since there were only a couple hundred, I decided to use a neural network, one of these AI algorithms, that had already been pretrained. Because I knew a couple hundred short phrases would not be enough for it to really learn much English, or very many ways of doing anything other than copying the existing list and spitting it back at me.
So I turned to a big neural net that had been already trained by a group called OpenAI. And they trained it on a couple billion pages of internet text, like a really huge data set.
So in this training data, in the course of this training, rather than just seeing Science Friday topics, it was seeing everything from recipe blogs to news sites to Harry Potter fan fiction, like anything you can imagine is somewhere in the data that this algorithm learned from.
And so it learned things like which words tend to be used together, and how to copy a phrase. So I figured that if I gave it this list of existing topics and just said, try to add to the end of this list, it would come up with something that it thought went along with the topics Science Friday has done in the past.
And so I used this website, TalktoTransformer.com, to actually do the interfacing with this big neural net. And I got some pretty fun results.
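TalktoTransformer.com was a web front end to OpenAI’s GPT-2 model. A rough sketch of the same exercise using the openly released GPT-2 weights via Hugging Face’s transformers library; the prompt below is a short invented sample, not the producer’s actual list of a couple hundred topics.

```python
# Continue a list of show topics with a pretrained language model.
# "gpt2" is the openly released small GPT-2 model; the topics below
# are a made-up sample prompt.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = (
    "Science Friday show topics:\n"
    "news roundup\n"
    "volcanoes\n"
    "insect extinction\n"
)
out = generator(prompt, max_new_tokens=40, do_sample=True, temperature=0.9)
print(out[0]["generated_text"])
```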
IRA FLATOW: Yeah. Let me list some of the results we got: wolf; bear and pumpkin; trees, doe, and more; California drawers; big cows; spider buster; cats and bones; pancakes; pigeon foes; muscles in the earth; horse cobra; grass to beard ingredients; vaporize shoes; dungeons of lessing crystal.
I don’t know if these are going to make it onto any of our future shows. But that is some of the weirdness that you are talking about. And that brings up an interesting topic I want to get into, from a tweet that’s coming in now.
Matthew wants to know, can we discuss the differences between machine learning and true AI? Seems most discussion has actually been about machine learning.
JANELLE SHANE: Yeah. So this is one of these things where AI is used to mean a whole bunch of different things, depending on who’s talking. And for my book, actually, I had to choose between those different definitions.
So if I’m saying AI, am I going to mean like science fiction AI? Or am I going to mean like remote workers who are typing in the answers from some distant location? So what I ended up going with is the definition that a lot of computer programmers use today, in which I’m using AI to mean the same thing as machine learning– so these kinds of algorithms that learn via trial and error to achieve some kind of goal, rather than being given exact instructions.
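That distinction fits in a small sketch: the first function below is an explicit instruction written by a programmer, while the second has a standard scikit-learn model work out its own rule from labeled examples. The toy dataset (hours studied versus passing an exam) is invented for illustration.

```python
# Exact instructions vs. a rule learned from examples.
from sklearn.tree import DecisionTreeClassifier

def passes_exam(hours):
    # Explicit instruction: a human wrote this rule by hand.
    return hours >= 4

# Machine learning: the algorithm searches for a rule that fits the
# labeled examples, without being told what the rule is.
X = [[1], [2], [3], [5], [6], [8]]      # hours studied
y = [0, 0, 0, 1, 1, 1]                  # 0 = failed, 1 = passed
model = DecisionTreeClassifier().fit(X, y)

print(passes_exam(4), model.predict([[4]])[0])
```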
IRA FLATOW: Are we eventually going to get the weirdness out of AI? Or do these unexpected results have things to offer us?
JANELLE SHANE: Yeah. I think the weirdness that we’re seeing in stuff like the Science Friday topics is really symptomatic of something that we’re going to have as part of AI. It’s what results from a really simple, narrow artificial intelligence.
It’s so much simpler than a human brain, and yet it’s trying to understand a really complex world, the full complexity of what humans do. And what results is often mistakes that come from this lack of context.
So yeah, I think we are going to be stuck with these narrow algorithms for a long time yet. Compared to, say, C-3PO, what we’re working with is a lot closer to a toaster.
IRA FLATOW: That’s interesting. Now, your book is a great primer on how AI actually works. It takes shortcuts. It does literally what we tell it to. It sometimes finds solutions that we’d rather it didn’t. But is it also capable of learning and evolving to a pretty sophisticated degree? It seems like a pretty fun mix of possibilities and limitations.
JANELLE SHANE: Yeah, it is interesting. Basically, the narrower the problem you give one of these AIs, the smarter it seems. So if you choose something that’s really narrow, like chess or Go, AI is really good at these kinds of games. And it will completely astonish human players and come up with new strategies that people have never heard of, really beautiful strategies.
On the other hand, you give it a task like folding laundry, and that’s much harder in many ways. It’s really interesting.
IRA FLATOW: To amplify on that, earlier we were speaking of self-driving cars and how the Uber self-driving car hit a pedestrian. I know in my Tesla now, and they’re upgrading it all the time, I actually can see pedestrians, little figures of them on my screen, no matter where they walk. So you can evolve it and make it a lot smarter.
JANELLE SHANE: Yeah. And I think there’s a lot of incentive that people have to keep working on these algorithms. And we’re going to see them rolled out in a lot of different areas. They are very useful.
There’s also stuff we have to watch out for, of course; we don’t want to place too much trust in these little worm-level intelligences.
IRA FLATOW: All right. And should we be fearful, or not fearful, of the supposed singularity, where some very smart people say that AI is going to be smarter than we are and take over?
JANELLE SHANE: Well, from what I’ve seen, I tend to agree with the researchers who say that we’re not going to see that kind of AI in our lifetimes. Probably not even close.
I think brains in general are a lot more complex than we tend to give them credit for. And the human world and just the world in general is a lot more complex than we give things credit for.
So you always see people trying to build AIs to do something, and then discovering, once they try, that it is actually a lot harder than they thought. And humans are amazing. Humans are doing really broad tasks, like laundry, for example, without realizing what level of sophistication you really need for that.
I mean, maybe someday. I’m a fan of science fiction, and it’s fun to speculate about what might happen someday and what future intelligences might look like. But I don’t think we’ll see that anytime soon.
IRA FLATOW: I take heart in knowing AI can’t do laundry yet. Thank you, Janelle.
JANELLE SHANE: Yeah. Alas, no butler bots.
IRA FLATOW: Janelle Shane, artificial intelligence researcher and author of a really cool little book, You Look Like a Thing and I Love You. And we have an excerpt of the book on our website ScienceFriday.com/weirdness. Thank you for taking time to be with us today.
JANELLE SHANE: Hey. Thank you so much.