A Flaw in Human Judgment: How Making Decisions Isn’t As Objective As You Think
17:16 minutes
If two people are presented with the same set of facts, they will often draw different conclusions. For example, judges often dole out different sentences for the same case, which can lead to an unjust system. This unwanted variability in judgments in which we expect uniformity is what psychologist Daniel Kahneman calls “noise.”
The importance of thoughtful decision-making has come into stark relief during the pandemic and in the events leading up to the January 6th insurrection.
Ira talks with Nobel Prize-winning psychologist Daniel Kahneman about the role of ‘noise’ in human judgment, his long career studying cognitive biases, and how systematic decision-making can result in fewer errors.
Kahneman is the co-author of “Noise: A Flaw in Human Judgment,” along with Olivier Sibony and Cass R. Sunstein, now available in paperback.
Invest in quality science journalism by making a donation to Science Friday.
Daniel Kahneman is professor emeritus at Princeton University and co-author of Noise: A Flaw in Human Judgment.
IRA FLATOW: This is Science Friday. I’m Ira Flatow. I’ve been thinking a lot about what drives powerful people to make, well, how can you say it, bad decisions, decisions that seem shortsighted or ignore key facts. The importance of thoughtful decision making has come into stark relief during the pandemic and the events leading up to the January 6 insurrection.
I was drawn to the research of Nobel Prize-winning psychologist Daniel Kahneman, who has made a career of studying decision making. I was hoping he would help me better understand just what’s going on. His most recent book, which he co-authored with Olivier Sibony and Cass Sunstein, is now available in paperback. It’s called Noise: A Flaw in Human Judgment.
Daniel Kahneman, welcome to Science Friday.
DANIEL KAHNEMAN: My pleasure.
IRA FLATOW: Nice to have you. All right, let’s begin talking about this. The title of your book is called Noise. What is noise? And how is it different from bias?
DANIEL KAHNEMAN: Well, the starting point, really, is that judgment is a form of measurement. We call it a measurement where the instrument is the human mind. And so the theory and the concept of measurement are relevant. Bias, in the theory of measurement, is simply an average error that is not zero. That’s bias.
Noise, in the theory of measurement, is simply variability. So that you could measure a line, and measure it repeatedly. You’re not going to get– if your ruler is fine enough, you’re not going to get the same measurement twice in a row. There’s going to be variability. That variability is noise.
And you can see that noise is a problem for accuracy. Because assume that there is no bias, that is, that the average of your measurements is precisely equal to the length of the line. Still, you’re obviously making mistakes if your judgments or your measurements are scattered around the true value. So that’s noise and that’s bias.
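Kahneman’s measurement analogy can be sketched numerically. In this illustrative example (the numbers are invented, not from the book), repeated measurements of a line of known length are scored two ways: bias is the average error, and noise is the scatter of the measurements around their own average.

```python
import statistics

true_length = 100.0  # the actual length of the line being measured

# Simulated repeated measurements with a fine ruler: scattered around the
# true value, so bias is near zero but noise is not.
measurements = [99.2, 100.9, 98.7, 101.4, 99.8, 100.3]

errors = [m - true_length for m in measurements]

bias = statistics.mean(errors)          # average error: systematic offset
noise = statistics.stdev(measurements)  # variability: scatter of the readings

print(f"bias  = {bias:+.2f}")   # close to zero here
print(f"noise = {noise:.2f}")   # clearly nonzero: unwanted variability
```

Even with zero bias, a large noise value means individual measurements, or judgments, are often far from the truth, which is exactly the point Kahneman makes above.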
IRA FLATOW: So why do people make those mistakes? Why do we have people measuring things and then coming up with different results?
DANIEL KAHNEMAN: Well, there are several reasons. One reason is that people are inherently noisy, so that when you sign your name twice in a row, it doesn’t look exactly the same. We cannot, in fact, exactly repeat ourselves. We’re in a series of states, and those states have an effect on the judgments we make. We call that occasion noise. So a judge passing sentences is not the same in the morning and in the afternoon. The judge is not the same when in a good mood and in a bad mood.
And then there are two other kinds of noise. To understand the next form of noise– well, let’s stay with the judge. So some judges are more severe than others. Some judges are lenient. We call that level noise, because there is an individual bias in the level of their judgments.
But then the most interesting source of noise is that judges do not see the world in the same way. That is, if they had to rank defendants or crimes, they would not rank them alike. Some judges are really more severe with young defendants than with old defendants. For other judges, it’s the opposite. Those differences, which we call pattern noise, are really interesting, and in quite a few situations they are the main source of noise.
IRA FLATOW: Is that where biases come in, because people have different biases, and that makes it noisy?
DANIEL KAHNEMAN: That’s exactly it. Noise, certainly pattern noise, is really produced by the fact that people have different biases.
IRA FLATOW: A lot of us have experienced that when we go to doctors, and we get a second or a third opinion. The doctors are looking at us, conducting the same tests, and yet they come up with a different diagnosis or a different prognosis.
DANIEL KAHNEMAN: There is a lot of noise in medicine. That is really one of the reasons we wrote the book: we found a lot of noise in very important systems in society. So there are easy cases. It’s easy to diagnose a common cold. But the moment that things get more challenging, different physicians make different judgments. And in very difficult cases, of course, there is a lot of noise. So noise in medicine is a big problem.
IRA FLATOW: Speaking about that, when thinking about judgments that have a wide range of decisions, I can’t help but think about the COVID pandemic. How can the concept of noise help us better understand how differently world leaders decide to deal with the virus?
DANIEL KAHNEMAN: Well, it’s one of the best examples of noise that we know. That is, leaders at all levels, from municipalities to countries, were faced with problems that were quite similar, and they made a wide variety of different choices. That’s an example of noise. And each of them did it thinking that they were doing the right thing. But obviously, they couldn’t all be doing the right thing if they were doing different things in the same situation.
IRA FLATOW: So how might leaders then be able to make better decisions and reduce noise around the very complicated decisions that need to be made about COVID?
DANIEL KAHNEMAN: Well, we have a piece of advice that is unlikely to be taken up very soon. But our advice is that, in the case of COVID, it’s a matter of designing how you’re going to make the decision and then making the decision in a disciplined way. When you design the process by which you will reach conclusions, then you are going to have less noise. People are more likely to reach the same conclusions if they all follow a sensible process to get to the decision.
There is one source of noise that is not going to be controlled by that, and this is differences in values. So if people want different things, then they will reach different judgments. But if you’re faced with an objective problem, say you’re trying to control the number of hospitalizations, that’s a problem where the value is pretty obvious. With a systematic process of decision making, people ought to be, and we think would be, less noisy than they were.
IRA FLATOW: When talking about making these decisions, what about using artificial intelligence or machine learning? There was a study that came out last year showing that the AI was better than the dermatologists in detecting melanoma. How does AI reduce noise in decision making?
DANIEL KAHNEMAN: AI does more than reduce noise. Any algorithm, any systematic rule that takes inputs and combines them in a specified way, will have one crucial property– it will be noise-free. Present an algorithm with the same problem twice, and you’re going to get the same answer.
But in general, algorithms are noise-free. And it turns out this is one of their major advantages over humans. That is, when you compare the performance of people to the performance of algorithms and rules, in many situations the algorithms and rules are already superior to people or match people. And the main reason for the lack of accuracy of people compared to algorithms is noise. People are noisy. Algorithms are not.
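The noise-free property Kahneman describes is simply determinism. A toy sketch, with entirely invented weights that stand in for any fixed rule combining objective inputs:

```python
# A toy "sentencing" rule that combines a few objective inputs in a fixed,
# specified way. The weights are purely illustrative assumptions, not drawn
# from any real sentencing guideline.
def sentence_months(prior_convictions: int, offense_severity: int) -> int:
    return 6 * offense_severity + 3 * prior_convictions

case = {"prior_convictions": 2, "offense_severity": 4}

# Presented with the same case twice, the rule gives the same answer:
# zero occasion noise, zero level noise, zero pattern noise.
first = sentence_months(**case)
second = sentence_months(**case)
assert first == second
print(first)
```

The rule may still be biased, if the weights are systematically off, but it cannot be noisy: identical inputs always yield identical outputs, which is the property human judges lack.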
IRA FLATOW: But you’ll get pushback from doctors or other people who say, every patient is different. I have to treat every patient differently, and that takes a human interaction. How do you answer that?
DANIEL KAHNEMAN: Well, I answer that by looking at data and by comparing mistakes, the number of mistakes that are made. And it is true that humans have that tendency of viewing each case as unique. But it’s also true that if you take just a few objective measures in the situation and you combine them appropriately, in many situations an objective combination of scores is going to do better than a human judge, although the human judge has access to a lot of information and has many powerful intuitions.
IRA FLATOW: I hear that same kind of argument about how AI is better than people when I talk to AI people who are designing self-driving cars. They say, we get a lot of pushback that the AI is not smarter, but if you look at the data, you’ll see that a computer will drive a car better than a person, meaning that there’ll be fewer accidents.
DANIEL KAHNEMAN: Well, all of us are biased against algorithms. And the reason we are is that when a self-driving car causes an accident, we look at that accident and we say, oh, I wouldn’t have done it. A human driver would just not have made that mistake. But of course, no one asks the self-driving car about the mistakes that humans made.
And the same is true in all contexts where you measure the performance of people against the performance of algorithms: the question is overall accuracy. But the way people look at it, the mistakes that artificial intelligence makes look stupid to us. They are mistakes we wouldn’t make. And the fact that we make more mistakes overall than the AI, that’s not something we respond to.
IRA FLATOW: One of the ideas that stuck out to me in the book was about overconfident leaders who too heavily trust their own intuition instead of weighing evidence or are too confident in the decision that’s more due to chance than their own judgment. What’s going on here?
DANIEL KAHNEMAN: Well, what’s going on is that most of us are overconfident most of the time. And in a way, it’s a very good thing. By overconfident what I mean is that we look at the world, and we see the world in a particular way. And we feel a sense of validity. We feel that the reason we see the world as we do is because that’s the way it is.
But we cannot imagine that other people looking at exactly the same situation would see it differently, because I see the truth, and if I respect your judgment, I expect you to see exactly the same thing that I do. Now, that’s one aspect of it. Overconfidence is almost built in.
But overconfidence in intuition is, in a way, particularly pernicious when it’s not justified. Now, there are cases where intuitive expertise exists. So chess players can look at a chess situation, and every move that occurs to them is going to be a strong one. But people feel they have intuitions when there is no way that they could have correct valid intuitions.
For example, anybody who makes predictions about what will happen in the stock market to individual stocks, in particular, is just deluding himself. It’s not possible. And yet people feel that it is possible. They have intuitions, and they trust them, and it’s a big problem.
IRA FLATOW: I’m Ira Flatow. This is Science Friday from WNYC Studios. If you’re just joining us, I’m speaking with Nobel Prize winner Daniel Kahneman about some of the flaws in human judgment. One of the things I’ve been batting around a lot lately is what biases lead people to believe something that is patently false, specifically how so many people bought into the big lie that Donald Trump really won the election and then the ensuing insurrection of January 6. What makes people believe in an easily disputable lie so fully?
DANIEL KAHNEMAN: Well, we have the wrong idea about where beliefs come from, our own and those of others. We think we believe whatever we believe because we have evidence for it, because we have reasons for believing. When you ask people why they believe that, they are not going to stay mum. They are going to give you reasons that they’re convinced explain their beliefs.
But actually, the correct way to think about this is to reverse it. People believe in the reasons because they believe the conclusion. The conclusion comes first. And the belief in the conclusion, in many cases, is largely determined by social factors.
You believe what people that you love and trust believe, and then you find reasons for it. And they tell you reasons for believing that, and you accept the reasons. But it’s largely a social phenomenon. It’s not an error of reasoning.
And that, by the way, is true for your beliefs and my beliefs. Your beliefs and my beliefs reflect how we’ve been socialized. They reflect the company we keep. They reflect our belief in certain ways of reaching conclusions, like a belief in the scientific method. Other people just have different beliefs because they’ve been socialized differently. And because they have different beliefs, they accept different kinds of evidence, and the evidence that we think is overwhelming just doesn’t convince them of anything.
IRA FLATOW: Are there cases in which variability in judgment is actually a good thing?
DANIEL KAHNEMAN: Oh, many cases. That is, we define noise– and that’s important– as unwanted variability. So when you have underwriters in an insurance company looking at the same risk, you would want them to reach approximately or exactly the same conclusions. But I want variability in the judgments of my film critics. I want variability in the judgments and opinions of people who are creating or inventing new things. So variability is often very desirable. But in some contexts, variability is noxious.
IRA FLATOW: One last question. I’ve been following your career for a long time, and I’ve always wondered what got you and your long-time research partner, the late psychologist Amos Tversky, so interested in studying human biases. How did you fellas decide this was something you wanted to study?
DANIEL KAHNEMAN: Well, it was really ironic research. We found that we ourselves were prone to mistakes. It was all about statistical thinking when we started. And we noticed that we had wrong intuitions about many statistical problems. We knew the solutions, and yet the wrong intuitions remained attractive.
IRA FLATOW: Can you put a finger on why we have so many flaws in our intuitive judgment?
DANIEL KAHNEMAN: It’s not that we could perform surgery and excise all the sources of biases from human cognition. If you removed all the sources of biases, you would remove a great deal of what makes cognition accurate in most situations. So we are built to reach conclusions, not necessarily in a logical way, but in a heuristic way.
And heuristic ways of thinking necessarily lead to some mistakes, although, on average, they can lead to correct judgments, and faster than reasoning would. It’s not that we’re studying incorrect mechanisms. The mechanisms are very useful. But sometimes a mechanism that is usually useful will lead people to systematic errors.
IRA FLATOW: Well, thank you very much, Dr. Kahneman, for taking time to be with us today.
DANIEL KAHNEMAN: It’s a pleasure talking with you.
IRA FLATOW: Daniel Kahneman, Nobel Prize winner and professor emeritus at Princeton University, is the co-author of the book Noise: A Flaw in Human Judgment. If you want to hear more from Daniel Kahneman and how he approaches his work, go to sciencefriday.com/noise to watch a profile of him from our Desktop Diary video series back in 2013.
Copyright © 2022 Science Friday Initiative. All rights reserved. Science Friday transcripts are produced on a tight deadline by 3Play Media. Fidelity to the original aired/published audio or video file might vary, and text might be updated or amended in the future. For the authoritative record of Science Friday’s programming, please visit the original aired/published recording. For terms of use and more information, visit our policies pages at http://www.sciencefriday.com/about/policies/
Shoshannah Buxbaum is a producer for Science Friday. She’s particularly drawn to stories about health, psychology, and the environment. She’s a proud New Jersey native and will happily share her opinions on why the state is deserving of a little more love.
Ira Flatow is the founder and host of Science Friday. His green thumb has revived many an office plant at death’s door.