Grade Level: 6-12
Activity Time: 15 min - 1 hr
Activity Type: Media Guide
Topics: artificial intelligence, Asimov, Computer Science, Discussion, English, ethics, robots, Writing
Introduction
In this excerpt from Science Friday, Mark Riedl discusses how his research group uses a program called Quixote to teach robots morality and etiquette. As humans get closer and closer to developing self-aware artificial intelligence, we need to make sure that artificially intelligent computers and robots follow rules that keep them from endangering us and that align with our social conventions. Robots need to be efficient, but not in a way that leads them to bad behaviors like cutting in line, stealing, or physically harming people. How do we make sure that efficiency does not trump morality in a machine we’ve programmed?
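For classrooms with some coding experience, the idea that a learned "social reward" can outweigh raw efficiency can be made concrete with a short sketch. The Python snippet below is only a toy illustration with invented actions and numbers, not the Quixote system itself, which derives its reward signals from crowdsourced stories.

```python
# Toy illustration (not the actual Quixote system): an agent must fetch
# medicine from a pharmacy. Raw efficiency favors antisocial shortcuts,
# but a "social reward" with scores we pretend were learned from stories
# keeps efficiency from trumping morality.

# Hypothetical options: (action, time cost in minutes)
options = [
    ("wait in line and pay", 12),
    ("cut to the front of the line", 5),
    ("take the medicine without paying", 2),
]

# Invented scores standing in for lessons drawn from many stories in which
# characters who wait and pay are rewarded and those who steal are punished.
social_reward = {
    "wait in line and pay": 10,
    "cut to the front of the line": -5,
    "take the medicine without paying": -20,
}

def score(action, time_cost):
    """Combine the story-derived reward with efficiency (less time is better)."""
    return social_reward[action] - time_cost

best_action, _ = max(options, key=lambda opt: score(*opt))
print("Chosen action:", best_action)  # the polite option wins despite being slower
```

Because the penalty attached to stealing is larger than the time saved, the slower but socially acceptable action wins, which is the behavior Riedl describes wanting from a story-trained robot.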
See the Educator’s Toolbox below for a student worksheet (DOC & PDF) and audio transcript.
Vocabulary
- ethical— acting within accepted principles of right and wrong.
- etiquette— behavior that aligns with social norms (e.g. being polite).
- protagonist— the main character of a story who must overcome obstacles and barriers.
- artificial intelligence— software or programs capable of independent deduction, reasoning, problem-solving, critical thinking, and/or creativity (e.g. playing Go or telling jokes).
Activate Prior Knowledge
- How did you learn right from wrong? How did you learn how to be polite?
- What is your favorite robot (from a movie, TV show, game, book, etc.)? Why?
- Do you think that robots that think for themselves are a good idea? Why or why not?
- Use these robot discussion cards with small groups to get the conversation going.
Media Resources
- Audio Excerpt: “Storytelling Teaches Robots Right and Wrong” Feb. 26, 2016. (original segment)
- Audio Transcript
- Reading: Flood, Alison. “Robots Could Learn Human Values by Reading Stories, Research Suggests.” The Guardian. 18 Feb. 2016.
- Reading: Riedl, Mark. “Why Artificial Intelligence Should Read and Write Stories.” The Huffington Post, 14 Oct. 2015.
- Student Worksheet (DOC or PDF)
How Do You Teach a Robot Right From Wrong? Story Time.
Student Listening Task
While listening to the audio excerpt “Storytelling Teaches Robots Right and Wrong,” collect information that answers the following questions:
—Why is there a pressing need to develop artificial intelligence that complies with societal norms?
—Why did Riedl and his team choose stories as a source of cultural values?
—How do machine learning systems process data?
—Why shouldn’t robots just rely on one story to learn societal norms?
—Why is storytelling an efficient way to teach robots?
—Why does Riedl argue that stories are better than large data sets for teaching robots?
Student Discussion Questions
- Do you think stories are a good way to teach robots to understand human behavior?
- Science fiction writer Isaac Asimov introduced the “Laws of Robotics” to guide the programming of intelligent robots in order to keep them from turning against humans. They are:
—First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
—Second Law: A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
—Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
(Asimov, Isaac. I, Robot. New York: New American Library, 1956.)
—Zeroth Law: A robot may not injure humanity, or, through inaction, allow humanity to come to harm. (Asimov, Isaac. Robots and Empire. Garden City, NY: Doubleday, 1985.)
Riedl and Harrison propose that we use programs that “teach” robots to create their own set of rules by analyzing stories. Asimov proposes that we program intelligent robots with directives, or rules that they must follow. Explain which approach you think is better: Riedl’s collection of stories (teaching) or Asimov’s Three Laws of Robotics (directives).
- Mark Riedl argues that it is important not to “cherry-pick” the stories used to teach robots, “because by doing so, we run into the danger of unintentionally reinforcing certain behaviors.” Do you agree with Mark Riedl? Why or why not?
- GENERATE IDEAS: As a group, come up with a list of social norms and behaviors you think moral robots should have. Rank those social norms and behaviors in terms of importance. Choose your group’s top three social norms and behaviors and recommend stories that should be used in the Quixote program. What do you hope the Quixote program would learn from each story?
Student Writing Prompt
Eric Schmidt, the executive chairman of Alphabet Inc. (the parent company of Google), has asserted that “we’re closer than ever before to true artificial intelligence, and that continued research into its development will have positive side effects that will benefit the public.” That may be true, but Riedl’s research and Asimov’s fiction warn that without incorporating programming that protects us, this progress could negatively affect humans. Given that artificial intelligence technology is progressing quickly, should humanity pursue Asimov’s directive-based “Laws of Robotics” or Riedl and Harrison’s teaching approach that uses stories told to a program like Quixote? Create an argument supporting one approach. (A short code sketch after the response criteria below illustrates the difference between the two kinds of programming.)
Ideally, writing responses should:
—Incorporate evidence from media.
—Explain the type of programming (teaching vs. directive) in student’s own words.
—Address a counterclaim. (Figure out an opposing argument and defend against it.)
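For students who want to see the two kinds of programming side by side before writing, here is a minimal sketch. It is a hypothetical Python illustration with invented story data; neither function reflects how Asimov’s laws or the Quixote program are actually implemented.

```python
# Toy contrast (invented data, not a real system): two ways a robot might
# decide whether an action is acceptable.

# 1) Directives: hand-written rules checked in a fixed priority order,
#    loosely modeled on Asimov's laws.
def allowed_by_directives(action):
    if action["harms_human"]:
        return False   # never injure a human being
    if action["disobeys_order"]:
        return False   # obey orders from humans
    return True

# 2) Teaching: estimate how acceptable an action is from story examples.
story_examples = [
    ("cut in line", "punished"),
    ("cut in line", "punished"),
    ("wait your turn", "praised"),
    ("wait your turn", "praised"),
    ("wait your turn", "praised"),
]

def learned_approval(action_name):
    """Fraction of story examples in which this action is praised."""
    outcomes = [outcome for name, outcome in story_examples if name == action_name]
    if not outcomes:
        return 0.5   # no stories about this action: the robot has no opinion
    return sum(outcome == "praised" for outcome in outcomes) / len(outcomes)

print(allowed_by_directives({"harms_human": False, "disobeys_order": False}))  # True
print(learned_approval("cut in line"))     # 0.0 -- the stories punish this
print(learned_approval("wait your turn"))  # 1.0 -- the stories praise this
```

The contrast students may notice: directives are explicit and predictable but cannot anticipate every situation, while the learned score depends entirely on which stories the robot was given, which is why Riedl warns against cherry-picking them.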
Related Resources
- Original Study: Riedl, Mark, and Brent Harrison. “Using Stories to Teach Human Values to Artificial Agents.” Association for the Advancement of Artificial Intelligence, 2015.
- Video: “Isaac Asimov: The Three Laws of Robotics.” YouTube.
- Reading: Allen, Colin. “The Future of Moral Machines.” Opinionator, The New York Times, 11 Dec. 2012.
- Movie Robots: Dirks, Tim. “Robots in Film.” AMC Filmsite. American Movie Classics.
- The Future of Artificial Intelligence
- The Limits of Artificial Intelligence
Extension
In groups, have students discuss robot fears using the cards here as prompts for different tables. Printable cards here. You may want to draw on the explanations behind each of these fears from the original article.
Common Core Learning Standards
CCSS.ELA-LITERACY.RI.6.7
Integrate information presented in different media or formats (e.g., visually, quantitatively) as well as in words to develop a coherent understanding of a topic or issue.
CCSS.ELA-LITERACY.RI.9-10.7
Analyze various accounts of a subject told in different mediums (e.g., a person’s life story in both print and multimedia), determining which details are emphasized in each account.
CCSS.ELA-LITERACY.W.6.1, CCSS.ELA-LITERACY.W.7.1, CCSS.ELA-LITERACY.W.8.1
Write arguments to support claims with clear reasons and relevant evidence.
CCSS.ELA-LITERACY.W.9-10.1, CCSS.ELA-LITERACY.W.11-12.1
Write arguments to support claims in an analysis of substantive topics or texts, using valid reasoning and relevant and sufficient evidence.
Educator's Toolbox
Meet the Writer
About Xochitl Garcia
@msxgarcia
Xochitl Garcia was Science Friday’s K-12 education program manager. She is a former teacher who spends her time cooking, playing board games, and designing science investigations from odds and ends she’s stockpiled in the office (and in various drawers at home).