In my experience of being human – and as a member of the species, I feel I have some insight into the subject – we don’t always make the right choices. Our decisions are based on a guess about what the future will look like, and the impossibility of being certain about that foresight is scary. But despite the statistical inevitability of making some bad choices in life, at least the decision-making process itself is based on a critical evaluation of the best information available. Right?
Then again, it’s only been a couple of hours since I critically evaluated my way out of Coles holding a 6-pack of croissants and a tea infuser I didn’t need. And snack goods aside, there are actually many other factors that influence human decision-making.
What’s in a choice?
Research since the 1980s has explored how susceptible we are to ‘framing effects’. That is, the choices we make are constantly influenced by how those choices are packaged, or ‘framed’. In the supermarket, consumers tend to choose products that are described in positive terms, so a cut of beef that is 75% lean sounds better than one that is 25% fat. Even in medical situations where the decision stakes are higher, a patient is more likely to opt for surgery with a 90% survival rate than one with a 10% mortality rate.
Another factor affecting human decision-making is complexity. ‘Overchoice’ describes the effect of paralysis that is often elicited when one enters the pasta sauce section of a supermarket. Being presented with so many brands, sizes and ingredient combinations makes consumers more likely to place value on attributes they wouldn’t otherwise consider important. It might also lead to consumers deferring the decision altogether, having been so overwhelmed by variety that they head instead to the slightly less complicated pizza section.
Framing and overchoice effects demonstrate how our judgements are influenced by human imperfection. So just how do we imperfect humans hope to make sense of a world so tinted by bias?
Science to counter bias
This is where scientific expertise plays such an important role. Human curiosity drives the questions we ask, but science gives us an objective framework within which to answer those questions by generating, testing and evaluating hypotheses. Unlike our own human minds, science doesn’t yield to extraneous factors. It produces facts under explicit conditions. Sometimes those facts even contradict our experiences or common sense. For example, the earth is definitely – though counter-intuitively – round, and that’s an important thing for us to know about our universe.
Bias to counter science
Then again, it isn’t robots who conduct science – it’s human beings. And becoming an expert means investing a lot into learning about a specialty area, so the stakes for being wrong might be higher than for the general public. Could scientists with ‘expert’ status then be more likely to resist ideas that clash with their ingrained beliefs?
This might be the case for some experts. On the whole though, there is evidence to suggest that experts are better able to judge the merits – and importantly, the limits – of their own understanding. Such knowledge likely counteracts the tendency to cling to a theory in the face of contradictory evidence. After all, the more resources someone commits to becoming an expert, the more they realise how much they don’t actually know about that field.
That is not to say that some subjectivity doesn’t seep into research. One concern about the validity of scientific findings is ‘confirmation bias’, or the tendency to find patterns in observations that support preconceived beliefs. Researchers investigating a specific problem or question don’t work from a blank slate – they predict the results based on existing theory, evidence, and knowledge of the variables at play. That’s just good science. And yet, these grounded expectations can bias well-intentioned experts’ interpretations, and even affect the data gathering process itself.
Towards a compromise
The fact that experts are fallible humans shouldn’t make us lose faith in a scientific method of investigation. It should, however, make us think about how best to limit – or at least make explicit – the influence of bias in science. To this end, there has been an increasing push for researchers to follow transparent scientific practices, by openly sharing data, or preregistering plans and predictions for a study before collecting results.
As a way of piecing together the world, science is worth sticking with, despite the flaws of the beautifully messy humans who conduct, consume and communicate it. Each new scrap of research brings us closer to some approximation of the right answer. Without this kind of logical system for explaining our universe, we rely instead on intuitive judgement, and the focus is misdirected towards who said it, how they phrased it, and how loudly they yelled.
References

Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211(4481), 453-458.

Levin, I.P., & Gaeth, G.J. (1988). How consumers are affected by the framing of attribute information before and after consuming the product. The Journal of Consumer Research, 15(3), 374-378.

Veldwijk, J., Essers, B.A., Lambooij, M.S., Dirksen, C.D., Smit, H.A., & de Wit, G.A. (2016). Survival or mortality: Does risk attribute framing influence decision-making behavior in a discrete choice experiment? Value in Health, 19(2), 202-209.

Ariely, D., & Norton, M.I. (2011). From thinking too little to thinking too much: A continuum of decision making. Wiley Interdisciplinary Reviews: Cognitive Science, 2(1), 39-46.

Gourville, J.T., & Soman, D. (2005). Overchoice and assortment type: When and why variety backfires. Marketing Science, 24(3), 382-395.

Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121-1134.

Hergovich, A., Schott, R., & Burger, C. (2010). Biased evaluation of abstracts depending on topic and conclusion: Further evidence of a confirmation bias within scientific psychology. Current Psychology, 29(3), 188-209.

Hróbjartsson, A., Thomsen, A.S.S., Emanuelsson, F., Tendal, B., Hilden, J., Boutron, I., Ravaud, P., & Brorson, S. (2012). Observer bias in randomised clinical trials with binary outcomes: Systematic review of trials with both blinded and non-blinded outcome assessors. BMJ, 344.

Nature editorial (2015). Let’s think about cognitive bias. Nature, 526(7572), 163.

Cunningham, C.A., & Gonzales, J.E. (2014). The golden age of data sharing. Retrieved from http://www.apa.org/science/about/psa/2014/12/data-sharing.aspx