Truthiness of irrelevant detail in explanations from neuroscience to mathematical models

Truthiness is the truth that comes from the gut, not books. Truthiness is preferring propositions that one wishes to be true over those known to be true. Truthiness is a wonderful commentary on the state of politics and media by a fictional character determined to be the best at feeling the news at us. Truthiness is a one word summary of emotivism.

Truthiness is a lot of things, but all of them feel far from the hard objective truths of science.

Right?

Maybe for an ideal, non-existent, non-human, Platonic capital-S Science; but science as practiced — if not all conceivable versions of it — is very much intertwined with politics and media. Both internal to the scientific community: how will I secure the next grant? who should I cite to please my reviewers? how will I sell this to get others reading? And external: how can we secure more funding for science? how can we better incorporate science into schools? how can we influence policy decisions? I do not want to suggest that this tangle is (all) bad, just that it exists and is prevalent. Thus, critiques of politics and media are relevant to a scientific symposium in much the same way as they are relevant to a late-night comedy show.

I want to discuss an aspect of truthiness in science: making an explanation feel more scientific or more convincing through irrelevant detail. The two domains I will touch on are neuroscience and mathematical modeling. The first because I am acquainted with the neuroscience literature on irrelevant detail in explanations, and because neuroscientific explanations have a profound effect on how we perceive mental health. The second because it is the sort of misrepresentation I most fear committing in my own work. I also think the second domain should matter more to the working scientist; while irrelevant neurological detail mostly misleads the neuroscience-naive general public, irrelevant mathematical detail can mislead, I feel, mathematically-naive scientists — a non-negligible demographic.

You might have noticed, dear reader, that for most posts on TheEGG, just like many other blogs and webzines, there is a vaguely related image in the opening slug — for this article: Stephen Colbert on the debut episode of The Colbert Report — and occasionally interspersed elsewhere for longer posts. Unlike the occasional graph of results, these images are often irrelevant to the primary thesis of any given article. I include them not to strengthen the point I want to make, but to lighten the mood and make the post less intimidating and more memorable. However, Newman (2013; also see Newman et al., 2012; Fenn et al., 2013; and, for the effects of this in the courtroom, Newman & Feigenson, 2013) suggests that even these irrelevant images might make my thesis feel stronger and more likely to be true. Unfortunately, my difficult-to-pronounce name might offset all the benefit of these images (Newman et al., 2014). Joking aside, this effect of irrelevant images increasing confidence can be an even bigger problem in articles on psychology or neuroscience, where a potentially irrelevant brain-scan image in the opening slug might make the explanation seem more convincing given its ‘science’-y appearance.

McCabe & Castel (2008) started this research direction with a splash by showing that brain images have a stronger effect than other types of media (or no media at all) in increasing our confidence in an explanation of cognitive mechanisms. They suggested that this effectiveness stems from an appeal to our urge for reductionist and physicalist justifications. Unfortunately, as is surprisingly common in psychology, the effect might not replicate, as Michael et al. (2013) suggested after attempting to recreate the result in 10 experiments with over 2000 participants, and as Hook & Farah (2013; also see Farah & Hook, 2013) suggested after running 3 more detailed experiments with nearly 1000 participants. Thankfully, the failure to replicate the original study has not stopped research: Schweitzer et al. (2013) showed that even though the original experiments do not replicate, there is a more subtle bias among lay audiences in favor of neuroscientific images in explanations.

In a parallel line of research, Weisberg et al. (2008) considered the effect of irrelevant neuroscientific text in explanations of psychological phenomena. They showed that among students of neuroscience and naive adults, this irrelevant detail increased confidence in the result, while having no effect on neuroscience experts. As with the imaging studies, further work has shown this effect to be more subtle. For example, Scurich & Shniderman (2014) have argued that part of the difficulty in replication comes from selective deployment of this truthiness effect: those who already agree with the conclusion find that the irrelevant neuroscience explanation makes it more convincing. Among those who disagree with the conclusion, however, the inclusion of an irrelevant neuroscientific explanation made participants even more skeptical. Fernandez-Duque et al. (2014) separated the conceptual and pictorial aspects of the results by showing that a conceptual effect similar to Weisberg et al. (2008) was possible, while the purely brain-image-related aspect of McCabe & Castel (2008) was not present. Plunkett et al. (2014) noted a further subtlety: the increase in confidence was present when the irrelevant explanation involved ‘typical’ neurological functioning, but absent when the fictional explanation involved ‘atypical’ or ‘abnormal’ neurological functioning of the sort associated with mental illness.

To me, these two lines of research suggest that there is something to the idea that irrelevant neuroscientific reasoning increases confidence in a conclusion, at least among non-experts. This might have led to an entrenchment in the popular psyche of often inaccurate folk-neuroscience. Further, I fear that the displacement of folk-psychology by this new folk-neuroscience can lead to adverse effects on social cohesion and cooperation.

Less speculatively, this makes me worry about a similar effect from irrelevant mathematical detail when it supports a story we want to be true — as often seems to be the case in, say, economics. I fear that this might be a particularly big problem for fields like mathematical oncology, where most researchers in the parent disciplines of clinical oncology and experimental biology do not have significant mathematical training (or respect for the field). Although, based on Fawcett & Higginson (2012), I would expect that mathematical detail, relevant or irrelevant, also leads to fewer citations, which presumably means less of an impact, positive or negative. Still, my fear is that mathematical models will be cherry-picked not for their explanatory power, but simply because they are presented as (potentially irrelevant) support for conclusions that we prefer on non-mathematical grounds. I’ve picked on Michor et al.’s (2005) well-regarded result as a potential example of this, at least under one interpretation — not because I think the mathematical aspect is particularly unnecessary to the conclusion, but because the researchers are well enough established and the paper often enough cited that my criticism won’t hurt anybody.

Cherry-picking, especially when it is inadvertent, is most frightening when one specializes — as I do — in heuristic models that aim to clarify our ideas rather than to be tested against empirical data (data that is, hopefully, not itself biased by how the model is presented). In such cases, I think it is especially important for me to take responsibility for my model, and not to over-sell it or advertise the result as extra certain because of the mathematical justification. Do you think that mathematical models are particularly susceptible to being used to find truthiness instead of truth? Or do you think that I am fretting over a tempest in a teapot?
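
Before you answer, it might help to make concrete what I mean by a heuristic model and by mathematical detail that invites over-interpretation. Below is a minimal sketch in Python (not Michor et al.’s actual model, and with every number and the synthetic ‘data’ invented purely for illustration) of the kind of biexponential decline in treatment response that such heuristic models are often fit to.

```python
# A toy biexponential fit: two compartments of leukaemic cells, each
# declining exponentially under treatment, produce a biphasic curve.
# Every number below is made up for illustration -- this is not the
# model or the data from Michor et al. (2005).
import numpy as np
from scipy.optimize import curve_fit


def biexponential(t, a, r_fast, b, r_slow):
    """Fraction of leukaemic transcript remaining at time t (in days)."""
    return a * np.exp(-r_fast * t) + b * np.exp(-r_slow * t)


# Hypothetical noisy 'measurements' standing in for transcript levels.
rng = np.random.default_rng(seed=0)
t = np.linspace(0, 360, 25)
truth = biexponential(t, a=0.9, r_fast=0.05, b=0.1, r_slow=0.005)
data = truth * rng.lognormal(sigma=0.1, size=t.size)

# The fit recovers two decay rates, but by itself it says nothing about
# *which* cell populations those rates belong to -- that interpretive
# leap is where irrelevant mathematical detail can sneak in.
params, _ = curve_fit(biexponential, t, data, p0=[1.0, 0.1, 0.1, 0.01])
print("fitted (a, r_fast, b, r_slow):", params)
```

The fit is easy to produce; whether the two recovered rates get sold as decisive evidence about specific cell populations, rather than as a convenient summary of the curve, is exactly the kind of over-selling I am trying to guard against.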

References

Farah, M. J., & Hook, C. J. (2013). The seductive allure of “seductive allure”. Perspectives on Psychological Science, 8(1): 88-90. http://pps.sagepub.com/content/8/1/88.short

Fawcett, T. W., & Higginson, A. D. (2012). Heavy use of equations impedes communication among biologists. Proceedings of the National Academy of Sciences, 109(29): 11735-11739.

Fenn, E., Newman, E. J., Pezdek, K., & Garry, M. (2013). The effect of nonprobative photographs on truthiness persists over time. Acta Psychologica, 144(1): 207-211. http://www.cgu.edu/PDFFiles/sbos/pezdek%202014/Fenn_Newman_Pezdek_Garry_2013.pdf

Fernandez-Duque, D., Evans, J., Christian, C., & Hodges, S. D. (2014). Superfluous Neuroscience Information Makes Explanations of Psychological Phenomena More Appealing. J Cogn Neurosci. 12: 1-19. http://www.ncbi.nlm.nih.gov/pubmed/25390208

Hook, C. J., & Farah, M. J. (2013). Look again: effects of brain images and mind–brain dualism on lay evaluations of research. Journal of Cognitive Neuroscience, 25(9): 1397-1405. http://www.mitpressjournals.org/doi/abs/10.1162/jocn_a_00407#.VLbhNyvF-z4

McCabe, D. P., & Castel, A. D. (2008). Seeing is believing: The effect of brain images on judgments of scientific reasoning. Cognition, 107(1): 343-352. http://www.sciencedirect.com/science/article/pii/S0010027707002053

Michael, R. B., Newman, E. J., Vuorre, M., Cumming, G., & Garry, M. (2013). On the (non) persuasive power of a brain image. Psychonomic Bulletin & Review, 20(4): 720-725. http://link.springer.com/article/10.3758/s13423-013-0391-6

Michor, F., Hughes, T., Iwasa, Y., Branford, S., Shah, N., Sawyers, C., & Nowak, M.A. (2005). Dynamics of chronic myeloid leukaemia. Nature, 435(7046): 1267-1270.

Newman, E. J., Garry, M., Bernstein, D. M., Kantner, J., & Lindsay, D. S. (2012). Nonprobative photographs (or words) inflate truthiness. Psychonomic Bulletin & Review, 19(5): 969-974. http://link.springer.com/article/10.3758/s13423-012-0292-0

Newman, E. J. (2013). Nonprobative Photos Inflate the Truthiness and Falsiness of Claims. PhD thesis, Victoria University of Wellington. http://researcharchive.vuw.ac.nz/handle/10063/2648

Newman, E. J., & Feigenson, N. (2013). The Truthiness of Visual Evidence. The Jury Expert, 25(5): 1-6. http://www.thejuryexpert.com/wp-content/uploads/1311/JuryExpert_1311_TruthinessVisuals.pdf

Newman, E. J., Sanson, M., Miller, E. K., Quigley-McBride, A., Foster, J. L., Bernstein, D. M., & Garry, M. (2014). People with Easier to Pronounce Names Promote Truthiness of Claims. PLoS ONE, 9(2): e88671. http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0088671

Plunkett, D., Lombrozo, T., & Buchak, L. (2014). Because the Brain Agrees: The Impact of Neuroscientific Explanations for Belief. Proceedings of the 36th Annual Conference of the Cognitive Science Society. https://mindmodeling.org/cogsci2014/papers/209/paper209.pdf

Schweitzer, N. J., Baker, D. A., & Risko, E. F. (2013). Fooled by the brain: re-examining the influence of neuroimages. Cognition, 129(3): 501-511. http://www.sciencedirect.com/science/article/pii/S0010027713001662

Scurich, N., & Shniderman, A. (2014). The Selective Allure of Neuroscientific Explanations. PLoS ONE, 9(9): e107529. http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0107529

Weisberg, D. S., Keil, F. C., Goodstein, J., Rawson, E., & Gray, J. R. (2008). The seductive allure of neuroscience explanations. Journal of Cognitive Neuroscience, 20(3): 470-477. PMID: 18004955

About Artem Kaznatcheev
From the Department of Computer Science at Oxford University and Department of Translational Hematology & Oncology Research at Cleveland Clinic, I marvel at the world through algorithmic lenses. My mind is drawn to evolutionary dynamics, theoretical computer science, mathematical oncology, computational learning theory, and philosophy of science. Previously I was at the Department of Integrated Mathematical Oncology at Moffitt Cancer Center, and the School of Computer Science and Department of Psychology at McGill University. In a past life, I worried about quantum queries at the Institute for Quantum Computing and Department of Combinatorics & Optimization at the University of Waterloo, and as a visitor to the Centre for Quantum Technologies at the National University of Singapore. Meander with me on Google+ and Twitter.

4 Responses to Truthiness of irrelevant detail in explanations from neuroscience to mathematical models

  1. ejwinner says:

    You have touched on many of the issues that post-modernists frequently fret over without lapsing into their relativist despair. A brief, well-handled introduction to an important problem I think many of us are noticing these days. I think there are solid grounds for concern worthy of greater inquiry and discussion.

  2. Pingback: Cataloging a year of blogging | Theory, Evolution, and Games Group

  3. Pingback: Methods and morals for mathematical modeling | Theory, Evolution, and Games Group

  4. Pingback: Plato and the working mathematician on Truth and discourse | Theory, Evolution, and Games Group
