Confirmation bias in the sciences – a double-edged sword?
Baijayanta Roy
Confirmation bias is a tricky thing to write about. One only needs to let her guard down for a while and the biases about confirmation bias creep in. Confirmation bias is the psychological tendency to seek or interpret evidence in ways that are partial to one's existing beliefs, expectations, or a hypothesis in mind (Nelson & McKenzie; Nickerson, 1998). Think of the difference between a lawyer and a judge. A lawyer collects and interprets evidence only in favour of her client. This is only fair, since the lawyer has taken a side. The judge, however, never takes the side of either the plaintiff or the defendant. She weighs and interprets the evidence and data presented to her irrespective of her own personal preferences. Confirmation bias is a motivation for the lawyer and an evil for the judge.
We like to believe that our own political, social and ideological opinions are backed by hard evidence and logic. In fact, wherever we go the world seems to behave exactly the way we thought it would. So our conclusions must be right, right? Not so fast: take a good look at your confirmation bias meter. Experiments have shown that even when we have nothing at stake, we tend to gather evidence that favours our beliefs and prejudices (Nickerson, 1998). Effectively, we go blind to the rest of the world, the part that doesn't fit into our belief system. This is why confirmation bias is also known as myside bias. Another funny thing about cognitive biases such as this one is that we are very quick to identify confirmation bias in others but fail to acknowledge it in ourselves. So everybody except our own self seems to be afflicted by biases.
But why would such a horrible thing happen to us? There are many theories. The desire to believe in things we would like to be true, the tendency to entertain a single hypothesis at a time, the adoption of test strategies that are likely to confirm our beliefs, the urge to explain a belief and the wish to avoid making errors have all been suggested as the main reasons behind confirmation bias (Nickerson, 1998). In a nutshell, our brains have limited resources. We therefore want to perform cognitive tasks in a way that takes the least effort. This makes us 'cognitive misers' (Allport, 1954). The intuitive part of our brain makes quick and easy judgements which are often inaccurate. Only when we run these intuitive judgements through our logical machinery do we get logically sound judgements. But the logical part of our brain requires more energy and takes much more time and effort to process information (Kahneman, 2011). We therefore need an extra push and motivation to make sure that we are thinking logically. Most of us don't grant this extra effort in our daily chores, and life goes on easily enough, until these little intuitive judgements add up to create a landslide of prejudices, stereotypes and biases in our minds. In 1620, when Francis Bacon published his book Novum Organum, he noted the following about the inertia of our minds:
The human understanding when it has once adopted an opinion …draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects and despises, or else by some distinction sets aside and rejects; in order that by this great and pernicious predetermination the authority of its former conclusions may remain inviolate… And such is the way of all superstitions, whether in astrology, dreams, omens, divine judgments, or the like…
In 1960, the psychologist Peter C. Wason published a series of experiments titled 'On the failure to eliminate hypotheses in a conceptual task'. He observed that "very few intelligent young adults spontaneously test their beliefs in a situation which does not appear to be of a 'scientific' nature" (Wason, 1960). Aware of the philosophical work Karl Popper had published the previous year, Wason noted that rational people do not always show a "willingness to attempt to falsify hypotheses"; instead, they try to confirm them. Hence he coined the term confirmation bias.
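Wason's original experiment used what is now called the 2-4-6 task: participants were shown the triple 2-4-6, asked to discover the experimenter's hidden rule (in fact, simply "any ascending sequence"), and allowed to propose further triples to test their guess. The short Python sketch below is not Wason's procedure, only a toy illustration of the contrast he drew; the hypotheses and probe triples are invented for the example. Probes chosen only to confirm a too-narrow guess never expose it, while probes designed to falsify it do.

```python
# Toy illustration of confirmatory vs. falsifying test strategies,
# loosely modelled on Wason's (1960) 2-4-6 rule-discovery task.

def hidden_rule(triple):
    """The experimenter's actual rule: a strictly ascending sequence."""
    a, b, c = triple
    return a < b < c

def my_hypothesis(triple):
    """A typical participant's narrower guess: ascending in steps of two."""
    a, b, c = triple
    return b - a == 2 and c - b == 2

# Positive-test strategy: only propose triples expected to CONFIRM the guess.
confirming_probes = [(2, 4, 6), (10, 12, 14), (1, 3, 5)]

# Falsifying strategy: also propose triples the guess says should fail.
falsifying_probes = [(1, 2, 3), (2, 4, 10), (5, 4, 3)]

def run(probes):
    for t in probes:
        predicted = my_hypothesis(t)
        actual = hidden_rule(t)
        verdict = "guess survives" if predicted == actual else "guess refuted"
        print(f"{t}: guess predicts {predicted}, rule says {actual} -> {verdict}")

print("Confirming probes only (positive-test strategy):")
run(confirming_probes)   # every probe fits both rule and guess: the wrong guess survives

print("\nProbes chosen to falsify the guess:")
run(falsifying_probes)   # (1, 2, 3) and (2, 4, 10) fit the rule but not the guess: refuted
```

Run with the confirming probes alone, the too-narrow hypothesis is never contradicted and looks perfectly good; only the falsifying probes reveal that it is wrong, which is exactly the testing behaviour Wason found people rarely adopt on their own.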
Confirmation bias among the vanguards of reason (the scientific community)
Systematic observation and collection of empirical data are at the heart of any scientific discipline. If I bring a glass of water to a scientist and ask her to just 'observe', what will she possibly do? Imagine how many different ways there are to observe a glass of water. In utter confusion the scientist will ask: observe what, and why? If you answer the 'what' part of the question, you give the scientist a region to focus on. The scientist will strip away the irrelevant parts of the reality (i.e. the glass of water) and reduce it to only the relevant parts. If you answer the 'why' part of the question, you give her an objective and a specific task to complete. Both of these answers are provided together in what is known as a hypothesis. This is what the scientist observes nature for, and it comes first, before any observation is made.
But where does the hypothesis come from? There is no definite answer, but the usual suspects are intuition, gut feeling and hunch (Bowers, Regehr, Balthazard, & Parker, 1990; Shirley & Langan-Fox, 1996). It is therefore highly likely that a hypothesis is influenced by the scientist's subjective opinions, personal beliefs, biases, prejudices and even stereotypes. There is nothing wrong with that, as long as the scientist doesn't fall so much in love with her hypothesis that she cannot change it in the face of contrary data. This is where confirmation bias creeps in and shows its ugly fangs. Confirmation bias is no less prevalent in science than it is in other disciplines (Mitroff, 1974). Scientists are biased towards their theories (Coolican, 1990) and try to interpret data to best fit them. Ian Mitroff interviewed a group of scientists and observed that the idea of hard-headed, objective science is naïve. He reported the following from his survey (Mitroff, 1974):
…in order to be a good scientist, one had to have biases. The best scientist, they said, not only has points of view but also defends them with gusto. Their concept of a scientist did not imply that he would cheat by making up experimental data or falsifying it; rather he does everything in his power to defend his pet hypotheses against early and perhaps unwarranted death caused by the introduction of fluke data.
Then how does science manage to get scientific things done? Well, the answer lies in the democratic process of scientific culture. Most scientists do not try to disprove their ideas; rivals do it for them (Ridley, 2012). So while the individual scientist is fallible and prone to cognitive biases like any normal human being, science maintains objectivity by 'dispersing the incentives in many different centres'. Confirmation bias is therefore a major concern only for a scientist's own career.
Unfortunately, the scientific community doesn't consist simply of scientists pursuing their passions. There are scientific journals, journal editors, journal reviewers and an entire machinery to ensure that worthy scientific work sees the light of day. This machinery plays a vital role in the scientific process; so vital, in fact, that journal reviewers are sometimes known as the 'gatekeepers of science' (Crane, 1967; DeGrazia, 1963). They let the right ones in and keep the junk out. But studies have found journal reviewers to be strongly biased against results contrary to the commonly accepted theoretical perspective (Mahoney, 1977). This is a serious concern. Though the effect of confirmation bias among peers may average out in the long run, experts believe that confirmation bias in the peer review process seriously slows progress by impeding innovative and path-breaking work (Olson, 1990; Armstrong, 1996; Hojat, Gonnella, & Caelleigh, 2003).
Confirmation bias among us mortals
Hypochondriacs won't listen to reason. It's funny, because they listen only to the unhealthy signs of their bodies and thoroughly rationalize their importance (Pennebaker & Skelton, 1978). People who believe in witchcraft, card reading, fortune telling, astrology and homeopathy systematically avoid taking contrary evidence into account (Nickerson, 1998). They focus only on the cases where these things yielded positive results. Serious racial prejudices and stereotypes are formed and maintained through confirmation bias. People just go blind sometimes. Confirmation bias may be the cause of much social evil. It is usually hard to find a deep-rooted prejudice in one's own mind. Social conditioning puts things in our minds beyond our control and at a very early age. Being aware of our prejudices can certainly help us to be rational about them. But confirmation bias is such an evil that it stops us from changing, or even reconsidering, our existing beliefs. We don't label our beliefs as prejudices, and that is where we mostly go wrong. Confirmation bias also plays its dirty tricks in the judiciary, politics, governance, journalism, education and various other walks of life.
The philosopher Karl Popper, in his book The Logic of Scientific Discovery (1959), prescribed that scientific work must be guided by the spirit of falsifiability. That is, scientists should form theories that can, in principle, be falsified. If a theory cannot be proven wrong, then by Popper's criterion it is not scientific. This is not to suggest that every theory must turn out to be false; it simply means a theory should always keep a window open for its own falsification. A scientist should always ask what kind of data could refute her pet theory (not that such data will necessarily be found). If a theory leaves no room for any possible data that might refute it, then it is a bad theory, according to Popper. In the same spirit, I think we should subject our beliefs and opinions to the same test: have we left any room for scepticism? We should encourage ourselves to think of possible scenarios under which we would be willing to discard even our most fundamental beliefs. If we are unable to find an answer, we may be suffering from confirmation bias. In the end it is all a tug of war between reason and intuition, and our free will makes a huge difference.
BIBLIOGRAPHY
Allport, G. W. (1954). The Nature of Prejudice. Cambridge: Addison-Wesley.
Armstrong, S. (1996). We need to rethink the editorial role of peer reviewers. The Chronicle of Higher Education, 43(9), B3.
Bowers, K. S., Regehr, G., Balthazard, C., & Parker, K. (1990). Intuition in the context of discovery. Cognitive psychology, 22(1), 72-110.
Coolican, H. (1990). Research Methods and Statistics in Psychology. Hodder and Stoughton Educational.
Crane, D. (1967). The gatekeepers of science: Some factors affecting the selection of articles for scientific journals. The American Sociologist, 32, 195–201.
DeGrazia, A. (1963). The scientific reception system and Dr. Velikovsky. American Behavioral Scientist, 38–56.
Hojat, M., Gonnella, J. S., & Caelleigh, A. S. (2003). Impartial Judgment by the “Gatekeepers” of Science: Fallibility and Accountability in the Peer Review Process. Advances in Health Sciences Education, 75–96.
Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
Ladyman, J. (2002). Understanding Philosophy of Science. New York: Routledge.
Mahoney, M. (1977). Publication prejudices: An experimental study of confirmatory bias in the peer review system. Cognitive Therapy and Research, 1, 161–175.
Mitroff, I. (1974). The subjective side of science. Amsterdam: Elsevier.
Nelson, J. D., & McKenzie, R. (n.d.). Confirmation Bias. In M. Kattan, The Encyclopedia of Medical Decision Making (pp. 167-171). London: Sage.
Nickerson, R. S. (1998). Confirmation Bias: A Ubiquitous Phenomenon in Many Guises. Review of General Psychology, 2(2), 175-220.
Olson, C. (1990). Peer review of biomedical literature. American Journal of Emergency Medicine, 356–358.
Pennebaker, J. W., & Skelton, J. A. (1978). Psychological parameters of physical symptoms. Personality and Social Psychology Bulletin, 4, 524-530.
Ridley, M. (2012, July 20). When Bad Theories Happen to Good Scientists. Wall Street Journal.
Shirley, D. A., & Langan-Fox, J. (1996). Intuition: A review of the literature. Psychological Reports, 79(2), 563-584.
Wason, P. C. (1960). On the failure to eliminate hypotheses in a conceptual task. Quarterly Journal of Experimental Psychology, 129-140.
About the author
Baijayanta is an audio analyst and senior developer at GP Robotics. He is also a freelance sound designer, location sound engineer for audiovisual productions and a music producer. He has been working professionally in the independent film circuit for quite some time. He is also interested in AI, cognitive psychology and the philosophy of science. You can reach him on Twitter: @BaijayantaRoy
Confirmation bias in the sciences – a double-edged sword? by Baijayanta Roy is licensed under a Creative Commons Attribution 4.0 International License.