Useful delusions, interface theory of perception, and religion
May 4, 2014
As you can guess from the name, evolutionary game theory (EGT) traces its roots to economics and evolutionary biology. Both progenitor fields assume that it is impossible, or unreasonably difficult, to observe the internal representations, beliefs, and preferences of the agents they model, and thus adopt a largely behaviorist view. My colleagues and I, however, are interested in looking at learning from the cognitive science tradition. In particular, we are interested in the interaction of evolution and learning. This interaction is not in and of itself innovative; it has been a concern for biologists since Baldwin (1896, 1902), and Smead & Zollman (2009; Smead, 2012) even brought it into an EGT framework, showing that rational learning is not necessarily a ‘fixed-point of Darwinian evolution’. But all the previous work I’ve encountered at this interface makes a simple implicit assumption, and I wanted to question it.
It is relatively clear that evolution acts objectively, without regard for individual agents’ subjective experience except insofar as that experience determines behavior. Learning, on the other hand, at least from the cognitive science perspective, acts on the subjective experiences of the agent. There is an inherent tension between the objective and subjective perspectives that becomes most obvious in the social learning setting, but is still present for individual learners. Most previous work has sidestepped this issue either by not delving into the internal mechanisms of how agents decide to act (something incompatible with the cognitive science perspective) or by assuming that subjective representations are true to objective reality (something for which we have no a priori justification).
A couple of years ago, I decided to look at this question directly by developing the objective-subjective rationality model. Marcel and I fleshed out the model by adding a mechanism for simple Bayesian learning; this came with the extra perk of allowing us to adopt Masel’s (2007) approach of treating quasi-magical thinking as an inferential bias. To round out the team with some cognitive science expertise, we asked Tom to join. A few days ago, at an unhurried pace and after over 15 relevant blog posts, we released our first paper on the topic (Kaznatcheev, Montrey & Shultz, 2014), along with its MATLAB code.
Consider Alice, an arbitrary agent in our simulations. She lives in a minimally spatially structured world: a 3-regular random graph. She interacts with neighbours like Bob via a Prisoner’s Dilemma (PD) game, the payoffs from which determine their objective fitness and drive evolution. Alice, however, does not know the objective payoffs of the game; instead, she relies on a heritable subjective representation that could be any cooperate-defect game. Alice’s mind holds two beliefs: her estimate of the probability that Bob cooperates when she cooperates, and her estimate of the probability that Bob cooperates when she defects. Since the interaction is one-shot, Bob cannot actually condition his behavior on Alice’s choice (the objective probabilities are equal), but we do not hand-code this into Alice’s mind, allowing her to learn it for herself. If she is susceptible to the heritable inferential bias of quasi-magical thinking (Masel, 2007), then her two subjective beliefs might be unequal. Alice acts rationally on these heritable subjective payoffs and learned beliefs, deciding to cooperate or defect, but her decision could easily differ from Bob’s equally rational decision because Bob might have a different subjective representation of the world.
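To make Alice’s decision step concrete, here is a minimal sketch, assuming the subjective representation is normalized so that mutual cooperation pays 1 and mutual defection pays 0 (the (U,V) parameterization below); the function names, the toy payoff values, and the unbiased Beta-Bernoulli update are our illustration, not the paper’s exact implementation:

```python
import random

def choose(U, V, p_c, p_d):
    """Subjectively rational choice for payoffs (CC=1, CD=U, DC=V, DD=0).
    p_c: Alice's belief that Bob cooperates given that she cooperates.
    p_d: Alice's belief that Bob cooperates given that she defects."""
    expected_c = p_c * 1 + (1 - p_c) * U   # subjective value of cooperating
    expected_d = p_d * V + (1 - p_d) * 0   # subjective value of defecting
    return 'C' if expected_c > expected_d else 'D'

def belief(coops, defects):
    """Mean of a Beta(1 + coops, 1 + defects) posterior on Bob cooperating."""
    return (1 + coops) / (2 + coops + defects)

# Toy run: with a Harmony-like misrepresentation (1 > V and U > 0),
# cooperation stays subjectively rational no matter what Alice observes.
U, V = 0.5, 0.8                       # hypothetical values, not from the paper
counts = {'C': [0, 0], 'D': [0, 0]}   # [Bob cooperated, Bob defected], per own move
for _ in range(100):
    move = choose(U, V, belief(*counts['C']), belief(*counts['D']))
    bob_cooperates = random.random() < 0.5   # Bob is unconditional here
    counts[move][0 if bob_cooperates else 1] += 1
```

A quasi-magical learner in the sense of Masel (2007) would additionally count her own cooperation as weak evidence that Bob cooperates, letting p_c drift above p_d even though the objective conditional probabilities are equal.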
Of course, we are not the first to worry about the mapping of external stimuli to internal experiences, i.e. perception. For philosophers, this has been a deep concern since Plato’s allegory of the cave, and one of the first experimental developments in psychology was Gustav Fechner’s founding of psychophysics. From Plato’s shadows and Fechner’s psychophysics developed the orthodox view of perception (Yuille & Bulthoff, 1996; Palmer, 1999): critical realism, the view that perception resembles reality, although it does not capture all of it. In our model, critical realism would correspond to Alice and Bob evolving representations of the game that are qualitatively similar to the Prisoner’s Dilemma (i.e. V > 1 > 0 > U), although maybe not exactly the same.
Where the dynamics of our simulations actually led depended on the competitiveness of the environment. If c is the objective cost of giving and b is the objective benefit of receiving, then we say that an environment is friendly when the benefit is sufficiently large relative to the cost, and competitive otherwise. In competitive environments, we recovered critical realism: agents evolved toward objectively correct conceptions of the underlying game. The majority of the 5000 agents we studied had internal representations corresponding to the Prisoner’s Dilemma, with a minority evolving toward other defection-promoting games like Hawk-Dove and Leader. The (U,V)-values of their internal representations are given in red below.
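If we read b and c as a standard donation game (a common parameterization, and our assumption here: cooperating means paying a cost c to confer a benefit b on the partner), the objective payoffs work out to CC = b − c, CD = −c, DC = b, DD = 0, which is a Prisoner’s Dilemma whenever b > c > 0:

```python
def donation_payoff(my_move, their_move, b, c):
    # Donation game: cooperating pays cost c to give the partner benefit b.
    benefit = b if their_move == 'C' else 0
    cost = c if my_move == 'C' else 0
    return benefit - cost
```

For example, with b = 2 and c = 1, mutual cooperation pays 1 to each, a unilateral defector gets the temptation payoff 2, and the sucker gets −1.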
In friendly environments, however, agents evolved misrepresentations of objective reality. Most agents evolved internal representations where the only rational action is cooperation, and no agent in the friendly environments held the objectively correct PD representation at the end of 2000 cycles (about 200 generations). The (U,V)-values of the agents’ internal representations are given in blue below. As can be seen from the inset above, where the blue lines correspond to friendly environments, this produced much higher levels of cooperation than the red lines of the highly competitive environments. Thus, through their misrepresentations, the agents produced higher social welfare for the whole population. In other words, it was not always the case that the subjective representations shadowed, even approximately, objective reality.
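For reference, the named regions of the (U,V)-plane follow from the standard ordering of the four payoffs. A minimal sketch using the usual game-theoretic taxonomy (only the games mentioned above, plus the cooperation-dominant Harmony region, are labelled; everything else is lumped together):

```python
def classify(U, V):
    """Name the cooperate-defect game with payoffs (CC=1, CD=U, DC=V, DD=0)."""
    if V > 1 and U < 0:
        return "Prisoner's Dilemma"   # V > 1 > 0 > U: defection dominates
    if V > 1 and 0 < U < 1:
        return "Hawk-Dove"            # V > 1 > U > 0: anti-coordination
    if V > U > 1:
        return "Leader"               # V > U > 1 > 0
    if V < 1 and U > 0:
        return "Harmony"              # cooperation dominates
    return "other"
```

Under this taxonomy, the red points cluster in the defection-promoting regions (PD, Hawk-Dove, Leader), while the blue points fall where cooperation dominates.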
A better philosophical grounding for this can be found in Hoffman’s (1998, 2009) interface theory of perception. For Hoffman, perception is an interface that hides complexity irrelevant to Alice’s goals. In the case of evolution, this means that perception need not be truthful, but instead must serve as a simple interface through which Alice can maximize her fitness. Mark et al. (2010) confirmed this insight with an evolutionary model showing that, if perception is expensive, Alice will tune it to reflect the fitness distribution: something that depends not only on objective reality, but also on reality’s interaction with the agent.
Our results extend beyond this in two important ways. First, unlike Mark et al. (2010), we do not impose an exogenous penalty for accurate depictions of objective reality. In our case, the interface isn’t simply hiding some irreducible environmental complexity; instead, it is focusing on inclusive fitness (a complicated function of the objective payoffs, the distribution of other agents, and the interaction network) and encoding it as the subjective payoff. In fact, our agents evolve misrepresentations in the face of an implicit penalty associated with them: in the friendly environments, if Alice were to suddenly switch to accurate perceptions of payoffs, then she could exploit Bob to get strictly higher fitness in the short term. Second, our perceptive tuning incorporates not just the individual agent’s, but the whole population’s interaction with the objective world. We thereby strengthen the case for the interface theory of perception, and also show that an individual’s interface can serve not only the goals of that individual, but those of society as a whole.
A great example of such a social interface is religion. Typically, commentators see a tension between praising religion for promoting cooperative and moral behavior and criticizing it for delusional beliefs. Our model resolves this tension by showing how both factors can arise from evolutionary dynamics. Previous evolutionary explanations for religion co-opt complicated processes like image scoring, third-party punishment, or group selection (Roes & Raymond, 2003; Johnson & Bering, 2006). In other words, the usual explanations are that you think (maybe unconsciously) that God is watching you and will thus judge you (image scoring, reputation effects) and/or punish you (third-party punishment). These are pretty robust ways to get cooperation, but they are also pretty complicated. Even simpler models like tag-based cooperation — kind of like saying: “Hey, I saw you in church, let’s be friends!” — are not very satisfying, because ethnocentrism is not very robust to cognitive costs (Kaznatcheev, 2010).
All of the above can be part of an explanation, since nothing ever has a simple story, but I prefer more “minimalistic” explanations. Our model relies only on the simplest of spatial structures, lets the cooperation-from-religion argument kick in much earlier, and applies to both moralizing and non-moralizing gods, thus reaching more and older cultures. We can start to describe some of the earliest misrepresentations of reality present in ancient cultures as ways to promote cooperation, and then build later versions (such as ethnocentrism, image scoring, punishment) on top of them to further refine and tighten cooperation, or to let it generalize to less structured environments as tribes grow larger.
Of course, we are far from being able to make concrete connections to the cognitive science of religion, but at least we can show that agents who lack an a priori understanding of the world can evolve misrepresentations of reality. These misrepresentations can incorporate inclusive fitness effects and encourage more cooperation than objective rationality would permit, resulting in higher social welfare.
Baldwin, J.M. (1896). A new factor in evolution. American Naturalist, 30: 441-451, 536-553.
Baldwin, J.M. (1902). Development and evolution. Macmillan, New York.
Hoffman, D.D. (1998). Visual intelligence: How we create what we see. W.W. Norton, New York.
Hoffman, D.D. (2009). The interface theory of perception. In: Dickinson, S., Tarr, M., Leonardis, A., & Schiele, B. (Eds.), Object categorization: Computer and human vision perspectives. Cambridge University Press, Cambridge.
Johnson, D. & Bering, J. (2006). Hand of god, mind of man: Punishment and cognition in the evolution of cooperation. Evolutionary Psychology, 4.
Kaznatcheev, Artem (2010). The cognitive cost of ethnocentrism. Proceedings of the 32nd Annual Conference of the Cognitive Science Society, 967-971.
Kaznatcheev, A., Montrey, M., & Shultz, T.R. (2014). Evolving useful delusions: Subjectively rational selfishness leads to objectively irrational cooperation. Proceedings of the 36th Annual Conference of the Cognitive Science Society. arXiv: 1405.0041v1
Mark, J.T., Marion, B.B., & Hoffman, D.D. (2010). Natural selection and veridical perceptions. Journal of Theoretical Biology, 266(4): 504-15.
Masel, J. (2007). A Bayesian model of quasi-magical thinking can explain observed cooperation in the public good game. Journal of Economic Behavior & Organization, 64(2): 216-231.
Palmer, S. (1999). Vision science: Photons to phenomenology. The MIT Press.
Roes, F.L., & Raymond, M. (2003). Belief in moralizing gods. Evolution and Human behavior, 24(2): 126-135.
Smead, R., & Zollman, K. J. (2009). The stability of strategic plasticity. Carnegie Mellon University, Department of Philosophy, Technical Report 182.
Smead, R. (2012). Game theoretic equilibria and the evolution of learning. Journal of Experimental & Theoretical Artificial Intelligence, 24(3): 301-313.
Yuille, A., & Bulthoff, H. (1996). Bayesian decision theory and psychophysics. In D.C. Knill & W. Richards (Eds.), Perception as Bayesian inference. Cambridge University Press.