Why Do Humans Anthropomorphize AI?
Artificial intelligence has become more sophisticated in a short period of time. Even though we may understand that when ChatGPT spits out a response, there’s no human behind the screen, we can’t help but anthropomorphize—imagining that the AI has a personality, thoughts, or feelings.
How exactly should we understand the bond between humans and artificial intelligence?
Guest host Sophie Bushwick talks to Dr. David Gunkel, professor of media studies at Northern Illinois University, to explore the ways in which humans and artificial intelligence form emotional connections.
Dr. David Gunkel is a professor of Media Studies at Northern Illinois University in DeKalb, Illinois.
KATHLEEN DAVIS: This is Science Friday. I’m Kathleen Davis.
SOPHIE BUSHWICK: And I’m Sophie Bushwick. Artificial intelligence has become increasingly sophisticated in a very short amount of time. We may understand that when ChatGPT spits out a response, there’s no human writing us back. Yet we can’t help but anthropomorphize, maybe imagining that the AI has thoughts or feelings, even a personality.
So how exactly should we understand this bond between human and artificial intelligence? Joining me now to help answer this question and more is my guest Dr. David Gunkel, professor of media studies at Northern Illinois University based in DeKalb, Illinois. Dr. Gunkel, welcome to Science Friday.
DAVID GUNKEL: Hello. Thanks for having me.
SOPHIE BUSHWICK: To start off, why do we as humans have this tendency to anthropomorphize technology, including AI?
DAVID GUNKEL: So anthropomorphism is a human behavior that is manifested in a number of different places and in different ways. We anthropomorphize our animals like our pets. We anthropomorphize our technologies. And there have been decades of research, under the umbrella term the media equation, looking at this phenomenon. And we actually anthropomorphize each other.
Anthropomorphism is the way that we understand another social entity as social. We don’t know what goes on in the head of another person or an animal for that matter, but we project into them intentional states, emotions, thoughts, et cetera. And this is part of what it means to be social. In fact, we could say anthropomorphism is the social glue that allows us to be social animals and to engage with others in this exact way.
SOPHIE BUSHWICK: And does a physical presence like a robot, for example, change the kind of thoughts and feelings we project onto AI versus just a disembodied chatbot?
DAVID GUNKEL: So there are two things that really trigger anthropomorphism. One is speech. And that’s because human beings, ever since the ancient Greeks defined the human being as the animal with speech, have understood speech as being intimately connected to who and what we are as a species. So if something talks, we tend to accord to it certain expectations that go along with what we accord to other human beings, who also possess speech.
The other factor that is really powerful in triggering anthropomorphism is movement and social presence. And so we’ll find that even very rudimentary social presence or just rudimentary movement can cause us to anthropomorphize objects in ways that we normally wouldn’t if they weren’t in motion. A good example of this comes from a study done several years ago by a researcher named Wendy [? Zhou ?], in which she created an ottoman that just moved around the room. And it didn’t have a face. It didn’t have the ability to talk or anything. It just moved.
And when people came into the room, the ottoman would either run away from them or cozy up to them. And as a result, they started to say, well, the ottoman is happy to see me, or the ottoman’s mad at me today. And none of that is actually going on inside the ottoman, but it is part of that social presence and the way that the ottoman is moving within the space that’s occupied by the human beings.
SOPHIE BUSHWICK: First of all, that sounds adorable. And I’ve seen something similar. There were these trash cans that had remote controllers so a human could steer them around, and they would roll around, and people would say, oh, it wants my trash. And talking about robots makes me think about this idea of the uncanny valley, when a robot looks like a human, but it’s just a little off. That has a different effect on the type of relationships we form with it, right?
DAVID GUNKEL: Right. So the uncanny valley hypothesis is this idea that, as robots or any other object starts to look more and more humanlike, we begin to accept it as human or humanlike until a certain point is reached where we get a little creeped out by the fact that it’s just too close to an actual human being for us to feel comfortable.
This is why a lot of the morphology for robots tends to use animals as opposed to human beings as the model. With animals, the uncanny valley effect, that creepiness, doesn’t arise as readily. And so a lot of social robots are designed to emulate animals as opposed to actually trying to be a facsimile of a human being.
SOPHIE BUSHWICK: And it’s not all bad to anthropomorphize robots to some extent. So how can we draw on our desire to make robots seem more human but use it for good? Are there some examples of this?
DAVID GUNKEL: So anthropomorphism, I think, is one of these things that is often addressed as a binary. Either it’s good or bad. And I don’t think that gets us very far in understanding our human interaction with the other things in our world. I think we’ve got to look at this more as a degree of difference and come up with a way of understanding the opportunities and the challenges of anthropomorphism so that we are not just either embracing it wholly or dismissing it completely.
I say this all the time to my students: anthropomorphism is not a bug we’re trying to fix. It’s a feature we’re trying to manage, and manage well, for ourselves as individuals but also as a community. As for examples, there’s a really big one that came out several years ago, and that’s soldiers working with explosive ordnance disposal robots on the field of battle. And these robots are not autonomous. They don’t have friendly looking faces. They don’t talk to us. They’re mainly remote controlled. They’re tank-like objects with a big arm that reaches out and grabs a bomb to defuse it. But because of the way that these robots interact with the unit and the way in which they really protect the unit from the explosive ordnance that is in the field, they are given names.
The soldiers that work with them award them battlefield promotions and in some cases have even risked their own lives to protect that of the robot. Or when the robot is destroyed, they collect the pieces very carefully and ask for the manufacturer to please return their Scooby-Doo to them. They don’t want a new one. They want that robot because that’s their comrade.
SOPHIE BUSHWICK: Wow.
DAVID GUNKEL: These are soldiers. These are not people who are lost in some sort of ideology of robots or science fiction. They’ve got a job to do. But in order for that job to be done effectively, they need to work with the robot in very close quarters, and they need to create connections with the robot that allow them to function as a team. And this is where the anthropomorphism, I think, really works quite well. It allows for the soldiers and the robot to create these very workable scenarios in which the soldiers rely on the robot, and that reliance doesn’t even need to be bidirectional. It can just be the soldiers value the robot, and as a result, they have an emotional connection to the object.
SOPHIE BUSHWICK: That reminds me of the very intense feelings that the people who worked on the Opportunity Mars rover had when they decommissioned it.
DAVID GUNKEL: Yeah, that’s a really good example because I think this is something that surprised a lot of us. When NASA decommissioned the rovers, there was this outpouring of emotion on social media. And it shouldn’t have surprised us, but it did. And the reason it shouldn’t have surprised us is that NASA really invited the anthropomorphism by giving these robots a personality in their social media presence, allowing the rovers to talk about their activities on Mars for themselves, using the first person singular pronoun.
And so this is part of a very concerted effort to build public trust in this project and get people really invested in these rovers on this very distant world. But it also meant that, when these rovers were no longer functioning and they were being decommissioned, people suffered. They went through a kind of grief as they were losing a companion.
They were losing something in which they were really invested, emotionally, personally, in which they would check in daily to see what the news from Mars was from the rovers. So I think we’ve got to recognize that, with all these opportunities, there are also challenges, and there’s also hardship that has to be managed. And that’s why I say this is not about an either/or. This is about really effective management and us, as human beings, knowing how to create the opportunities but also how to address the challenges when they come.
SOPHIE BUSHWICK: And, of course, projecting these humanlike traits onto AI has some serious drawbacks as well. What sticks out to you?
DAVID GUNKEL: There’s a lot of things to be worried about. And as you can tell, there has been a lot of press on what we could call the bad examples or the bad outcomes. So one example is the way in which the large language models are able to generate text that is very readable, very legible, but also very confident sounding. And that has led a number of users to trust that what the large language model is providing is somehow accurate and valid. And obviously, it is not.
And there are a lot of examples on social media and elsewhere about how wrong and how misguided a lot of the outcomes from the large language models can be. So if we just trust that what is coming out of these things is acceptable, we could be in a situation where we are exposed to some danger and some problems with regards to misplaced trust in the algorithm.
Another really big recent example is in Europe, where a man was said to have been talked into suicide by his interactions with ChatGPT. Now, these are very dramatic. They get a lot of press, but they do point to the fact that this emotional investment that we’re making in our technologies also is a vulnerability, and it’s a vulnerability that we have to be aware of and that we have to be ready to confront so that we are adequately protecting ourselves as we engage with these technologies.
SOPHIE BUSHWICK: Misplaced trust, that issue makes me think of a term coined by the data journalist Meredith Broussard, technochauvinism: the assumption that technology can do a task better than humans can. Are we more likely to follow advice generated by AI rather than a human-generated solution?
DAVID GUNKEL: So this is a really old question. In fact, it goes all the way back to Plato and a disruptive technology that really shook up Greek civilization, and that technology was called writing. Socrates was worried that people would trust writing more than the people who know the things they speak about, and that this misplaced trust would lead the youth astray and cause society to crumble and fall apart, with catastrophic outcomes for human beings.
Now, obviously, we figured out writing, and it didn’t do exactly what was predicted by Socrates. So I think we’ve got to use our own history with media and technology as a guide for how we gauge what the real dangers are and how well we are suited to respond to those challenges. And I don’t think it means that every technology repeats the same pattern, but I do think it means that every response that we have to disruptive technology tends to fall in line with these historical patterns. And that gives me some confidence that we’ll also figure out large language models. It’s just going to take some time and a lot of experimentation.
SOPHIE BUSHWICK: I want to talk a little about chatbots. There’s a company called Replika that creates AI chatbots. And it basically offers to provide an emotionally available AI best friend. So does this change the dynamic at all, that promise of a humanlike experience?
DAVID GUNKEL: Yeah. I think it changes it in a very significant way. We have to remember that a lot of these applications, a lot of these products are created by very powerful and very influential multinational corporations. We think of organizations like Google, like OpenAI, like Amazon. So when you’re talking to Siri, you’re not talking to Siri. You’re talking to Apple. When you’re talking to Alexa, you’re not talking to Alexa. You’re talking to the corporation that is taking your data to create a profile, anticipate your needs, and sell you products.
So with something like Replika, I think we have to see this positioning as part of their marketing, a way to get people engaged with using this piece of technology. But again, this is not new. All of our consumer products are sold to us with the promise of something bigger than just the product. Think about why we buy automobiles. It’s because they are pitched and marketed to us as a way of creating influence or sex appeal or whatever else. Think about how we sell cosmetics or how we sell lifestyle. These are not about the product per se but about a marketing framing for the product.
And I think we see the same thing going on with these AI products. And so I think we need to be aware of what powerful influences are behind these products, their development, and their distribution, and how our interacting with these services and products may be serving other needs of the organization that stands behind the individual item we think we are talking to.
SOPHIE BUSHWICK: This is Science Friday from WNYC Studios. I’m Sophie Bushwick. If you’re just joining us, I’m talking with Dr. David Gunkel, media and technology scholar at Northern Illinois University. I recently interviewed the hosts of a podcast who followed several people who had developed long-term relationships with AI chatbots. One person had used her chatbot as a dating coach. And that ended up backfiring, because she said the chatbot was so good that it ruined her for dating real men. So are we losing something by having these intimate and meaningful relationships with AI instead of humans?
DAVID GUNKEL: So this is a worry that we hear from a number of corners. And I think it’s worth thinking about as we engage with these technologies, but I also think we get a lot of hyperbole associated with these kinds of very dramatic outcomes. There are just as many examples of people who have used these things as a way of helping them become more social. You can think back to the debate about video games. Will playing video games make us antisocial, so we never leave the house and only want to interact online?
We can think even further back to the invention of the novel. When the novel was invented, the worry was that women would spend too much time reading romances and not go out and do the family’s tasks and get married and have children. So there’s a worry that is being repeated, I think.
For me, what I try to do in my research and with my students is remember our own history with technology and ask the questions, how much of this is repeatable? How much of it is different? What is changing? And really get a good understanding of the different things that have to be balanced in making these opportunities available to us while also protecting ourselves from the dangers and challenges that may come along with them.
SOPHIE BUSHWICK: It doesn’t seem like AI is going to go away anytime soon. So what’s your vision for the future of how humans interact with AI?
DAVID GUNKEL: The thing that makes me optimistic, I have to say, is when I hear about what artists are doing with these various tools, things like the diffusion models for images or the large language models for text. There’s a lot of experimentation going on in the community of artists. We see people like Mark Amerika experimenting with large language models to write his novels. We see people like Holly Herndon working with an AI voice model trained on her own voice to collaborate on and create new musical compositions.
And I think if we look at how this experimentation is going, we can see some real effort to try to test the limits of what this AI is giving us and also see what the real challenges are. So I look to the artists as leaders in thinking through a lot of the very practical but also very interesting possible futures that we’re confronting as these technologies become more a part of our everyday lives.
SOPHIE BUSHWICK: Thank you so much for coming on the show. What a fascinating conversation!
DAVID GUNKEL: Thank you for inviting me. It’s been really great talking with you about this, and I wish you well.
SOPHIE BUSHWICK: Dr. David Gunkel is a professor of media studies at Northern Illinois University in DeKalb, Illinois. Are you a teacher or caregiver who’s concerned about AI’s impact on your learners? Maybe you want to learn more about how it all works or find ways to prepare your students for the future. Well, Science Friday has you covered.
This May, we’re hosting an entire month of great conversations about AI in STEM education, with free student activities to learn about artificial intelligence, chatbots, and machine learning. So check out sciencefriday.com/aimonth. That’s sciencefriday.com/aimonth.