Misbeliefs, evolution and games: a positive case

A recurrent theme here in TheEGG is the limits and reliability of knowledge. These get explored from many directions: on epistemological grounds, from the philosophy of science angle, but also formally, through game theory and simulations. In this post, I will explore the topic of misbeliefs as adaptations. By misbeliefs I mean ideas about reality that a given subject accepts as true, despite their being wrong, inaccurate or otherwise mistaken. The notion that evolution might not systematically and exclusively support true beliefs isn’t new to TheEGG, and it has been tackled by many other people, by means of different methodologies, including my own personal philosophising. The overarching question is whether misbeliefs can be systematically adaptive, a prospect that tickles my devious instincts: if they can, it flies in the face of naïve rationalists, who frequently assume that evolution consistently favours the emergence of truthful ways to perceive the world.

Given our common interests, Artem and I have had plenty of long discussions over the past couple of years, mostly sparked by his work on Useful Delusions (see Kaznatcheev et al., 2014); for some more details on our exchanges, as well as a little background on myself, please see the notes[1]. A while ago, I found an article by McKay and Dennett (M&D) entitled “The evolution of misbelief” (2009)[2]; Artem offered me the chance to write a guest post on it, and I was very happy to accept.

What follows will mix philosophical, clinical and mathematical approaches, in the hope of producing a multidisciplinary synthesis.

The original question is whether misbeliefs can be systematically adaptive: if that were the case, we would have to scrap the typical default assumption that evolution consistently favours the emergence of truthful ways to perceive the world. To this end, M&D wrote a very long, thoroughly documented, and somewhat exhausting essay; since they wish to challenge a prevalent view, they also set their own bar very high. Their aim is to see if there is any instance of misbelief that fulfils the following criteria:

  • Adaptive Misbeliefs (AMs) need to be the result of the normal operation of some evolved belief-formation system. If a misbelief is formed because the belief-formation system happens to be working under exceptional circumstances that were not present while it was evolving, it would not count as a genuine AM.
  • AMs need to be systematically produced by a belief-formation system. If they occasionally happen as a by-product of a normally functioning system, they could be seen as a side effect of the normal constraints that limit the design solutions available to evolutionary exploration.
  • They need to be actual beliefs: for example, many risk-averse behaviours, such as avoiding all snakes or not eating unrecognised mushrooms, could be explained by an illusory belief (all snakes are poisonous), but can also be explained in terms of risk management. One doesn’t need to believe that a mushroom is poisonous to avoid it; one could just (correctly) believe that it might be harmful, and avoid taking the risk.
  • Finally, misbeliefs need to be systematically favoured by natural selection. In order to declare that evolution favours the occurrence of a given misbelief, it must be possible to identify provable causal relations between holding such a belief and overall fitness. For example, they argue that our tendency to produce religious belief-systems may be the side effect of a risk-management bias: we readily presume that there is agency where there isn’t (a risk-management heuristic). However, since they could not find definite proof that religious belief systems consistently increase fitness (they could only identify probable causes), they ultimately didn’t list religion as a systematically adaptive misbelief.

Still, they did find some examples of AMs: they call them Positive Illusions. In this class they place overestimations of our worth, our abilities, or our chances of overcoming a given disease. Even more strongly, they include similarly over-optimistic evaluations of our partners, offspring and close associates. All these misbeliefs can systematically encourage us to invest more resources in an endeavour (climbing the social ladder, looking after our partner and/or caring for our children) and thus increase our probability of success in a rather direct way. Their conclusion is that some very specific misbeliefs can be systematically adaptive, and thus that natural selection does not always favour adherence to reality.

To me, this is satisfying, but not completely so. First, M&D offer only a post-hoc theoretical interpretation of existing data: it could well contain a mistake, even if I could spot none. Furthermore, I would have preferred a more precise definition of such AMs. The reason is simple: I’d like to know when I can trust my own reasoning, and to do so I need easy-to-apply, unequivocal rules of thumb. Thus, I kept looking for research in the field.

Leaving aside Artem and colleagues’ efforts for the moment, I found the work of Lisa Bortolotti: she is busy promoting the concept of Epistemic Innocence. This is the idea that some misbeliefs might, despite being wrong, have an overall positive effect on the knowledge of whoever believes them. She applies the concept to mental-health clinical settings, where patients under considerable duress occasionally develop actual delusions. Her exploration of actual clinical cases leaves little room for doubt: in some cases, such delusions allow the patient to function and engage with the world, thus permitting them to collect new knowledge and at least try to overcome the difficult circumstances. This is one of the cases that M&D classified as “shear pin” breakage: sometimes a belief-system breaks down (starts producing delusions) in order to preserve some other, more important ability (engaging with the world). Importantly, although this case didn’t make it into M&D’s list of AMs, it shares with the official AMs an important aspect: the cases listed by Bortolotti are in fact optimistic delusions. Their wrong content makes the overall situation more bearable to the misbeliever; their lives become less worrying than they should be. Thus, the patient finds the courage, stamina or motivation to actually tackle the difficulties that she is facing. The common mechanism with proper AMs is that all such misbeliefs are able to promote potentially positive action.

This reminds me of my own crude heuristic: the concept of cognitive attraction. When a belief produces behaviours that tend to generate evidence supporting the original belief, the belief will be reinforced. In this context, a belief that promotes actions with a distinct potential for positive outcomes will end up being reinforced every time this potential is somewhat fulfilled, and may thus produce a positive feedback loop. Positive outcomes produce more optimistic actions, which reinforce the over-optimistic initial misconceptions. This circle may break, of course, but even if it works just by chance and for a limited time, it will self-reinforce, and will therefore occur more frequently than one would otherwise predict. Certain beliefs contain a self-fulfilling seed.
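To make the loop concrete, here is a minimal sketch in Python (my own toy construction for this post, not a model from M&D or from the Useful Delusions paper; all parameter names and values are invented). A single number stands for the strength of a belief that acting boldly pays off; acting generates evidence, and the world is mildly self-fulfilling because acting itself improves the odds of success.

```python
import random

# Minimal sketch of the "cognitive attraction" loop: belief -> action ->
# outcome -> belief update. All parameters are arbitrary illustration values.

def run_loop(belief=0.4, steps=200, base_success=0.45, action_bonus=0.15,
             learning_rate=0.05, seed=1):
    """Return the belief strength after iterating the feedback loop."""
    rng = random.Random(seed)
    for _ in range(steps):
        acts = rng.random() < belief              # stronger belief, more action
        if not acts:
            continue                              # no action, no new evidence
        p_success = base_success + action_bonus   # acting itself helps a little
        success = rng.random() < p_success
        # Confirmation-style update: successes push the belief up, failures down.
        belief += learning_rate * ((1.0 if success else 0.0) - belief)
        belief = min(max(belief, 0.0), 1.0)
    return belief

if __name__ == "__main__":
    print(run_loop())             # tends to drift above the initial 0.4
    print(run_loop(belief=0.2))   # a weaker prior gathers evidence more slowly
```

Nothing deep is going on: because the belief controls how much evidence gets collected, and the evidence feeds back into the belief, the loop settles wherever action and outcome jointly push it, which with these made-up numbers is above the starting point.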

In a sense, this is Bayesian logic: one starts with priors, in this case wrong priors, but the consequence is that, because of such wrong assumptions, the individual finds itself in a situation where encountering evidence that confirms the assumptions is a little easier. Note that Bayesian logic can work within a single subject or across generations (in the latter case, we call it evolution)[3]. Thus, we reach Artem and colleagues’ evolutionary model, where, because of the game rules, a positive misbelief can become predominant: the more it is present, the more convenient it is to hold it. For me, this starts to suggest some sort of (discursive) answer to the following question: why do AMs need to be positive/optimistic?
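As a rough illustration of that within-subject/across-generations parallel (just a sketch of the analogy, not anything taken from M&D or from Kaznatcheev et al.), compare Bayes’ rule for competing hypotheses with the discrete-time replicator update for competing belief frequencies:

$$P(H_i \mid E) = \frac{P(E \mid H_i)\,P(H_i)}{\sum_j P(E \mid H_j)\,P(H_j)}, \qquad x_i' = \frac{f_i\,x_i}{\sum_j f_j\,x_j}.$$

If the prior $P(H_i)$ is read as the current frequency $x_i$ of a belief in a population, and the likelihood $P(E \mid H_i)$ as that belief’s fitness $f_i$, the two updates have the same form: re-weight the current distribution by how well each alternative performs, then renormalise. This is the sense in which the same maths can describe updating within a single subject and selection across generations (see also note [3]).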

Consider a negative case: a wrong assumption that is self-fulfilling in the cognitive-attraction sense, but concerns something negative, something expected to decrease our success or survival chances. Despite its self-fulfilling properties, this kind of belief may have little chance of being retained across generations. I’ll make my point with a single example (with apologies for the informal way of exploring the issue[4]): you move to a new part of town and believe that the neighbourhood is extremely dangerous. Consequently, you stay at home a lot more, and indeed remain alive and unharmed (thus reinforcing, or at least not disproving, your belief). However, by not strolling around, you do make crime easier, so this particular misbelief is also marginally self-fulfilling (holding it makes it somewhat truer). In this way the misbelief is reinforced, but it also gives you fewer chances to find a partner, learn about your neighbourhood, and so on. Consequently, your fitness is reduced, and even if the belief is somewhat self-sustaining within the individual, it will not spread very effectively. However, the spreading potential is reversed in the symmetric situation, where you misjudge the new neighbourhood and believe it is safer than it really is. In this case you’ll spend more time out and about, giving other people reasons to feel safer (look: s/he is not worried!), and also factually make the area safer simply by being around. If you are lucky enough to remain unharmed, the self-fulfilling element of your misbelief will make it more likely to spread. In other words, over-optimistic illusions are frequently (maybe even usually?) more likely to self-sustain and replicate, while the self-limiting consequences of all misbeliefs tend to matter more when the misbelief is negative.
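To make the asymmetry explicit, here is a back-of-the-envelope calculation (all constants and names are invented for this sketch; it is not a reconstruction of the Useful Delusions model or of any published result). An agent goes out more or less often depending on how dangerous it believes the neighbourhood to be, its own behaviour slightly shifts the real danger (the self-fulfilling part), and only going out yields fitness-relevant opportunities.

```python
# Toy expected-fitness calculation for the neighbourhood example above.
# All constants are made up for illustration.

TRUE_DANGER = 0.15        # baseline probability that an outing goes badly
SELF_FULFILMENT = 0.05    # how much your own behaviour shifts that danger
OPPORTUNITY_GAIN = 1.0    # fitness gained per safe outing (partners, information, ...)
HARM_COST = 3.0           # fitness lost when an outing goes badly

def expected_fitness(believed_danger: float) -> float:
    """Expected fitness per time step for an agent acting on its belief."""
    # The more dangerous you think the area is, the less you go out.
    p_go_out = 1.0 - believed_danger
    # Staying in makes the street slightly worse; being around makes it safer.
    actual_danger = TRUE_DANGER + SELF_FULFILMENT * (1.0 - 2.0 * p_go_out)
    # Go out, then either suffer the harm or collect the opportunity.
    return p_go_out * (actual_danger * -HARM_COST
                       + (1.0 - actual_danger) * OPPORTUNITY_GAIN)

if __name__ == "__main__":
    print("pessimistic misbelief:", expected_fitness(0.80))
    print("accurate belief:      ", expected_fitness(0.15))
    print("optimistic misbelief: ", expected_fitness(0.05))
```

With these (arbitrary) numbers the mildly optimistic misbeliever ends up with the highest expected fitness and the pessimistic one with the lowest: both misbeliefs are self-fulfilling, but only the optimistic one also buys more opportunities, while the pessimistic one pays an opportunity cost on top of making its own fear slightly truer. Change the constants and the numbers change, but that asymmetry is precisely the point made above.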

In conclusion, I have briefly linked together four alternative, but tightly interconnected, approaches that all support the original M&D hypothesis: evolution does sometimes systematically promote misbeliefs. Such misbeliefs, however, all seem to belong to a very special kind of illusion: those that contain some element of self-fulfilling, over-optimistic prophecy. With some luck, this exploration might ignite the curiosity of those who are more inclined to produce formal mathematical models: it would be satisfying to see my hunch confirmed or dismissed via a more rigorous approach!

Notes and References:

  1. I’m a former molecular neurobiologist, with an interest in evolution, cognition, consciousness, science and whatnot. As my mind is restless, I decided that the only way to put ideas to the test (and see if some coherence is indeed there) was to write them down publicly. While I was exploring Science Epistemology and the Demarcation Problem, Artem found me and commented in exactly the way I would have hoped: by providing new food for thought. A few months later his post on the Interface Theory of Perception sparked a long conversation, which eventually informed another post of mine: Sources of error: we are all biased, but why?
    This post is the direct result of the fortuitous encounter between Artem and me, something that could only happen because of the Internet, and is another good example of why it is important to share our thoughts and to seek the input of the widest range of opinions. [Back]
  2. The article itself is published in the Behavioral and Brain Sciences journal: it is a very peculiar publication, particularly suited to what I’ll be exploring today. It captures — in a more traditional form — the sort of dialectic exploration that Artem and I seek in our blogs. Its format starts with a target article, followed by 10-25 peer commentaries, and closes with a response from the original authors. It might be worthwhile to direct further questions about this format to Artem; he wrote a response in an issue on quantum models of decision-making (Kaznatcheev & Shultz, 2013). The full issue on misbelief is available here and is a fantastic example of how a multitude of views is vastly superior to any individual effort. I will not try to comment on the whole debate, and will instead use M&D’s initial contribution to kick off a multidisciplinary exploration. [Back]
  3. This is the whole idea behind Memetics: ideas/concepts/knowledge compete for replication across minds more or less as genes compete for replication across organisms. It’s important to note that this analogy allows us to model both (standard genetic) evolution and cultural evolution using the same maths, and that in both cases Bayesian equations can be employed. However, as Artem noted, this also highlights how mathematical models can sometimes hide differences: taken at face value, the mathematical equivalence could be taken to mean that both types of evolution are equivalent. This is obviously not the case at the intuitive level: learning, evolution and Bayesian logic may be closely related, but they can’t be the same thing! Formally, however, it’s an interesting challenge to find a way to represent and distil the difference. Artem proposes to use the distinction between objective and subjective “selection forces”, a very reasonable approach, but how do we make sure that it is indeed capturing the distinction in a meaningful way? [Back]
  4. In my original Cognitive Attraction post, I used the negative case of a dog that assumes all other dogs are aggressive: it exemplifies the self-reinforcing dynamics, but doesn’t capture the difference between negative and positive misconceptions. Formally, I’m finding it difficult to pin down a strict definition that could isolate this difference. Furthermore, whenever I try to put my finger on it I stumble on variations of risk-management strategies: risking not getting the expected advantage (because of an overestimation of your chances) is usually better than taking failure for granted. Also, the tricky distinction between genetic evolution and social learning gets somewhat in the way: in the Useful Delusions model, the effect I’m trying to pin down is already present, but I’d bet that it would be even more visible if the model/simulation included some form of Social Learning. [Back]

Bortolotti, L. (2014). The epistemic innocence of motivated delusions. Consciousness and Cognition.

Kaznatcheev, A., Montrey, M., & Shultz, T.R. (2014). Evolving useful delusions: Subjectively rational selfishness leads to objectively irrational cooperation. Proceedings of the 36th annual conference of the Cognitive Science Society. arXiv: 1405.0041v1

Kaznatcheev, A., & Shultz, T. R. (2013). Limitations of the Dirac formalism as a descriptive framework for cognition. Behavioral and Brain Sciences, 36(03), 292-293.

McKay, R. T., & Dennett, D. C. (2009). The evolution of misbelief. Behavioral and Brain Sciences, 32(06), 493-510.


4 Responses to Misbeliefs, evolution and games: a positive case

  1. Hi, Sergio!

    Great thinking! I just want to share with you this imaging study, which I stumbled upon a couple of weeks ago, concerning positive illusions. It lies very much in line with what is considered in your text. The researchers were unable to differentiate between normal activity and active deception towards a positive self-image.

    http://www.sciencedirect.com/science/article/pii/S002839321400476X

    • Sergio Graziosi says:

      Thanks!
      I am really glad to appear on TheEGG, and am now busy thinking about what an appropriate follow-up may look like.
      Thanks also for the link: judging from the abstract alone, it does seem very relevant. It’s always good to find some corroborating evidence!

  2. Pingback: ICYMI: my contributions elsewhere | Writing my own user manual

  3. Pingback: Cataloging a year of blogging | Theory, Evolution, and Games Group
