Early in my career as a professor, I recognized confirmation bias in my students. In her evaluation of reincarnation in Buddhism, for example, one student wrote that she found it “silly and weird.” This silliness and weirdness, apparently, did not also apply to her own belief in heaven and hell, which she fervently defended for reasons that would have equally applied to a belief in reincarnation. Confirmation bias is the tendency to favor one’s existing beliefs and to reinterpret or ignore evidence that contradicts them.
I had realized years before that confirmation bias was strong in me, too. As I became more aware of the way I disagreed with friends, family and colleagues on a number of issues, I noticed how easy it was to run straight to the evidence that supported my side.
Today, we class confirmation bias as one among many other cognitive biases, including loss aversion, the attribution error and the availability heuristic. Experts disagree over how, and whether, these biases can be reduced. A point that seems not to have been emphasized enough is the pernicious and persistent nature of confirmation bias. Before applying any other bias-reduction techniques, confirmation bias must be addressed. Following the methods of philosophy is the best way to do it.
Raymond S. Nickerson argues:
If one were to attempt to identify a single problematic aspect of human reasoning that deserves attention above all others, the confirmation bias would have to be among the candidates for consideration … it appears to be sufficiently strong and pervasive that one is led to wonder whether the bias, by itself, might account for a significant fraction of the disputes, altercations, and misunderstandings that occur among individuals, groups, and nations.
The apparently fierce intractability of the confirmation bias led me to study it in depth for my doctorate (indeed, it was an irony not lost on my advisers that I was confirming my own pre-existing beliefs about the confirmation bias by studying the confirmation bias).
Naturally, my studies led me to Amos Tversky and Daniel Kahneman’s work on prospect theory, which at the time proposed a radical re-evaluation of how people make decisions under risk. The authors argue that we are less rational than we think when faced with risky decisions. In 2011, Kahneman distilled the wisdom of his life’s work, and the bias research that branched from it, into the book Thinking, Fast and Slow.
In a recent article in the Atlantic, Ben Yagoda discusses the phenomenon of cognitive bias, reminding us that there are now over 100 listed biases. Yagoda interviews Kahneman, as well as Richard E. Nisbett, who was instrumental in popularizing Tversky and Kahneman’s work.
Yagoda’s article is a great primer on cognitive bias, giving many real-world examples of how it plays out. He also agrees that confirmation bias is uniquely detrimental in its effect on human reasoning.
Can biases be reduced? Yagoda highlights the difference of opinion between Kahneman—who is more skeptical—and Nisbett, who believes that they can be.
As his book title suggests, Kahneman adheres to a dual process model of mind, according to which we think in two different ways: fast and slow. Fast thinking refers to immediate processing of information, such as noticing that two structures are basically the same height. Slow thinking refers to more deliberative processing, like counting the number of raised hands in a senate meeting to determine a vote. Kahneman calls fast thinking System 1 and slow thinking System 2.
Cognitive biases, like the availability heuristic, are examples of fast thinking gone awry. In order to reduce bias, we have to find some way to use System 2 on System 1. But Kahneman thinks System 1 is too resilient. After outlining some bias-reduction techniques in his interview with Yagoda, Kahneman says:
My position is that none of these things [bias-reduction techniques] have any effect on System 1. You can’t improve intuition. Perhaps, with very long-term training, lots of talk, and exposure to behavioral economics, what you can do is cue reasoning, so you can engage System 2 to follow rules. Unfortunately, the world doesn’t provide cues. And, for most people, in the heat of argument the rules go out the window.
Nisbett, on the other hand, has gone to great lengths to instruct people on how to be less biased, including the creation of a Coursera course. He seems to have had some success. He reports that, after taking a statistics class, students are more likely to understand regression to the mean (the statistical tendency of extreme measurements to be followed by ones closer to the average, and to avoid the fallacy of attributing a special cause to what is just normal variation).
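The phenomenon Nisbett teaches is easy to demonstrate for yourself. In this minimal simulation (the numbers are purely illustrative), each “student” has a stable underlying ability, and each test score adds independent luck. Selecting the top scorers on the first test guarantees their average falls on the second, with no intervention at all:

```python
import random

random.seed(0)

# Each student has a stable "true ability"; each test score adds
# independent random noise (luck) on top of that ability.
students = [random.gauss(100, 10) for _ in range(10000)]
test1 = [a + random.gauss(0, 10) for a in students]
test2 = [a + random.gauss(0, 10) for a in students]

# Select the students who scored very well on test 1...
top = [i for i, s in enumerate(test1) if s > 120]
avg1 = sum(test1[i] for i in top) / len(top)
avg2 = sum(test2[i] for i in top) / len(top)

# ...and their average drops on test 2: their extreme first scores
# were partly luck, and luck does not repeat.
print(f"top group, test 1: {avg1:.1f}")
print(f"top group, test 2: {avg2:.1f}")
```

The fallacy Nisbett’s students learn to avoid is explaining that drop with a cause (“they got complacent”) when it is just the luck washing out.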
While I agree that some of our biased thinking can be mitigated, Nisbett underestimates the power of the confirmation bias. In theory, one could take his course, study statistics, be made aware of other biases, such as the attribution error, and yet still engage in confirmation bias. I may be fully aware of my own use of these other biases, for example, but not address or accurately characterize any serious counterarguments to my position.
Expert Counterargument to Reduce the Confirmation Bias
To be fair, Nisbett does recommend attempting to disprove your own hypothesis. This is a good strategy. But I would take it a step further. In a study by Hsieh-Hong Huang, Jack Shih-Chieh Hsu and Cheng-Yuan Ku, the authors used computer-mediated counterargument to reduce the confirmation bias, with some success.
In the study, participants (241 people enrolled at seven different universities in on-the-job MBA programs) were randomly assigned to two groups: one with computer-mediated counterargument, and one without. The participants were told they had a certain amount of money to invest in stocks, restricted to two companies in two industries. This design, the researchers suggest, forced the participants to cope with large amounts of data under uncertainty, a situation that more closely mirrors real-world decision-making than the typical laboratory environment. The information given to participants came from actual investment reports.
The researchers measured confirmation bias by first noting participants’ preconceptions (which industry they decided to invest in), then matching those against their subsequent, selective reading behavior. For instance, participants who invested in a particular industry and then spent significantly more time reading only positive messages about that industry, despite having other options, were rated higher on the scale of confirmation bias. After participants had chosen their industry and read either positive or negative information about it, those in the counterargument group were given expert counterarguments to their investment decisions (if they had bought into a positive argument, they were given a negative one, and vice versa). Participants’ confidence in their beliefs was measured before and after, using a seven-point scale.
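The study’s exact scoring formulas aren’t reproduced here, but the general idea of the two measures can be sketched as follows. The function and field names are my own, not the authors’; treat this as an illustration of the logic, not their instrument:

```python
def bias_score(time_consistent, time_inconsistent):
    """Crude proxy for confirmation bias: the share of reading time
    spent on messages that support the participant's chosen industry.
    0.5 means balanced reading; values near 1.0 mean highly selective
    (belief-confirming) reading."""
    total = time_consistent + time_inconsistent
    return time_consistent / total if total else 0.5

def confidence_shift(before, after):
    """Change in self-reported confidence (seven-point scale) after
    seeing the expert counterargument; negative means reduced."""
    return after - before

# A hypothetical high-bias participant: 50 minutes on supporting
# reports, 5 minutes on opposing ones, with confidence falling from
# 6 to 4 after the counterargument.
print(round(bias_score(50, 5), 2))  # → 0.91
print(confidence_shift(6, 4))       # → -2
```

On the study’s finding, it is exactly the participants with scores near 1.0 for whom the counterargument produced the largest negative confidence shift.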
The study found that the computer-mediated counterargument had no effect on people with no or a low level of confirmation bias. On the other hand, for those who had a high degree of confirmation bias, the counterargument was very effective in reducing their confidence in their beliefs.
There are a couple of takeaways here. The first is that some people may just be less biased than others (as Yagoda himself implies when, after correctly completing the Wason Selection Task, he says he may just be an “unbiased guy”). The second is that expert counterargument was effective in reducing the confirmation bias in some people.
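The Wason Selection Task Yagoda mentions is itself confirmation bias in miniature. In the classic version, four cards show E, K, 4 and 7, and the rule under test is “if a card has a vowel on one side, it has an even number on the other.” Most people flip the cards that could confirm the rule (E and 4) rather than the only cards that could falsify it. A short sketch of the correct logic (using the classic card values, which may differ from the version Yagoda took):

```python
def must_flip(visible):
    """A card must be checked only if its hidden side could falsify
    'vowel implies even': visible vowels (the hidden number might be
    odd) and visible odd numbers (the hidden letter might be a vowel).
    K and 4 can never falsify the rule, so flipping them tells us
    nothing."""
    if visible.isalpha():
        return visible.upper() in "AEIOU"
    return int(visible) % 2 == 1

cards = ["E", "K", "4", "7"]
print([c for c in cards if must_flip(c)])  # → ['E', '7']
```

Choosing E and 4 is the confirming strategy; choosing E and 7 is the falsifying one, and it is the only logically correct answer.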
Nisbett’s advice—that is, trying to prove oneself wrong—may not lead someone to expert counterargument. One of my Facebook friends, for example, recently posted about her attempt to follow the advice of her friends and listen to the other side. (She’s a political progressive.) How did she do this? She talked to some redneck who was spouting conservative views at a coffee shop. According to her post, she came away from the incident believing that she had attempted to prove herself wrong, and even more certain of her own worldview, thanks to the weak arguments presented by the conservative coffee shop patron.
But she should have read the National Review or the Wall Street Journal, not listened to some douche at a coffee shop. She should have read George Will, not listened to Sean Hannity.
What Is Philosophy and How Can It Help?
Philosophy is a rational method of addressing deep questions relating to reality, knowledge and values. More importantly, philosophy has a built-in mechanism for dealing with expert counterargument. That’s the essence of the discipline—to investigate deep issues rationally, become seriously interested in the best arguments on every side, and follow those arguments wherever they lead (whether or not we like the final destination). Socrates, of course, is the prime example of this. The Socratic Method is supposed to enable us to rationally synthesize different points of view, leading us to truth.
However, philosophers themselves can be quite biased. There is even experimental evidence, largely driven by the work of Eric Schwitzgebel, that ethicists (ethics is a branch of philosophy) are not very ethical themselves.
My own experience studying philosophy confirms Schwitzgebel’s views in that I would not actually consider some of my philosophy professors true philosophers, given their failure to apply basic principles of rational argument.
And herein lies the problem: it takes a very consistent, perspicacious mind to truly overcome bias, insofar as it’s possible. But philosophy at its best is our best option. The late Hilary Putnam is a good example of what a philosopher should be: he fiercely followed the arguments, and was often so convinced by experts who disagreed with him that he changed his views. But we can’t be sensitive to the other side of the argument unless we deeply engage with the expert views on that side, as Putnam did.
For all Nisbett’s optimism about overcoming bias, and all the professional resources he provides to help others overcome theirs, he clearly engages in the confirmation bias. Indeed, when we consider the following example, Kahneman’s skepticism is understandable.
In a Vox article, Nisbett and two other authors argue against the idea that genes (in particular genes related to race) have an influence on intelligence. I am not a geneticist. I have no idea whether they are right. Nevertheless, Nisbett and his co-authors’ bias is on full display in the article, and it doesn’t take a geneticist to see that.
For one, the article, which references the controversial work of Charles Murray—who does believe that genes have an effect on intelligence—accuses Murray of “peddling junk science” in the very title (and attacks Sam Harris for “falling for it” while interviewing Murray). Strangely, the article then goes on to say that, “Some well-informed scientists hold views closer to Murray’s than to ours.” So is the work of those well-informed scientists also “junk”? Even a little outside research reveals a field of science that is much less settled than Nisbett and his co-authors let on, as they give short shrift to those supposedly well-informed scientists.
To present an article with that title, and to mention well-informed scientists but not present the best interpretation of their arguments, is to engage in the confirmation bias. Even if we look to academic reviews of Nisbett’s solo work, published long before the Vox article, we find claims like this: “Given that much of the critical research so clearly lies ahead, Nisbett’s certainty regarding his own premature conclusions is quite remarkable.” Additionally, we find geneticists like Robert Plomin who focus the question not on whether genes have an influence on intelligence, but which genes are doing the influencing.
Don’t misunderstand me. I do not know who’s right on this question. But when I see respected experts on the other side of the debate being basically ignored and aligned with junk science by the very guy going around telling others not to be biased, I can’t help but be skeptical.
Without a serious effort to deal with all sides of the argument, even bias-reduction techniques are not good enough. Being made aware of all the different types of biases is important, but if we don’t address the confirmation bias, then it’s all for naught. Nisbett and his co-authors didn’t commit the fundamental attribution error and they didn’t misunderstand regression to the mean—yet they are poster children for the confirmation bias.
You might argue that the focus on critical thinking and tackling all sides of the debate is built into academia, whatever the discipline. But this objection fails to appreciate just how deep philosophy truly goes. (Also, I am not denying that there may be sociologists, psychologists, and others who are, at the core, philosophically inclined. My argument isn’t that everyone should get a degree in philosophy, but that everyone should follow the lessons of philosophy.)
Philosophy addresses the deepest of deep assumptions, from politics to the self. And, at its best, it addresses them with rational arguments from every conceivable angle. The sociologist who has analyzed the oppression of women with groundbreaking studies may never have considered the potential immorality of the act of bringing ham (slaughtered pig) sandwiches into her graduate seminars. Unchecked cultural relativism runs rampant in some academic departments, despite departmental guidelines that emphasize critical thinking. Despite their claims to the contrary, even scientists need philosophy to check their assumptions sometimes.
In other words, our assumptions run deep, and it often takes more than just basic critical thinking to spot them. It takes a full, honest analysis of all sides of the argument—something that philosophers have been doing for a couple thousand years now, some better than others.