Enriching evolutionary games with trust and trustworthiness

Fairly early in my course on Computational Psychology, I like to discuss Box’s (1979) famous aphorism about models: “All models are wrong, but some are useful.” Although Box was referring to statistical models, his comment on truth and utility applies equally well to computational models that attempt to simulate complex empirical phenomena. I want my students to appreciate this disclaimer from the start because it forestalls endless debate about whether a model is true. Once we agree to focus on utility, we can take a more relaxed and objective view of modeling, with appropriate humility in discussing our own models. The history of models, and of theories as well, provides a strong clue that replacement by better and more useful models (or theories) is inevitable, and is indeed a standard way for science to progress. Given the rapid turnover in computational modeling, the best one can hope for is to have the best (most useful) model for a while, before it is pushed aside or absorbed by a more comprehensive, and often more abstract, model. In his recent post on three types of mathematical models, Artem characterized such models as heuristic. It is worth adding that the most useful models are often those that best cover (simulate) the empirical phenomena of interest, bringing a model closer to what Artem called insilications.

Models in evolutionary game theory (EGT) have tended toward abstractness and simplicity, with much attention directed at a few key variables such as payoff matrices, cost-to-benefit ratios, network structures, mutation rates, and update rules. A new paper by McNamara (2013) argues that it is now time to enrich EGT models with features like psychological mechanisms, decision making, personality variation, and novel traits. McNamara’s inclusion of Box’s aphorism at the top of his paper caught my attention and suggested that I might find the paper congenial. As the paper progresses, McNamara reviews several such richer models of his own to illustrate how adding these extra variables changes simulation outcomes. Such outcome changes are important: if new features did not matter for outcomes, we could continue to ignore them as irrelevant detail. We may not completely agree with adding such richness, because it can complicate results and preclude an analytic approach, but McNamara nonetheless makes a strong case for some increase in model richness.

One example is his simulation of the role of trust in increasing cooperation among agents (McNamara, Stephens, Dall, & Houston, 2009). In a modified trust game, pairwise interactions between agents occur in two phases. A randomly chosen agent is assigned the role of player 1 (P1), and another agent is assigned the trustee role (P2). In the first phase, P1 decides whether to trust P2. If P1 does not trust P2, both agents receive the defector’s payoff d. If P1 trusts P2, the game enters a second phase in which P2 decides whether to cooperate or defect. If P2 cooperates, both agents receive the cooperator’s payoff r. If P2 defects, P2 receives a payoff of 1 while P1 gets nothing. The payoffs satisfy 0 < d < r < 1. When P1 has no information about P2, the game has the usual evolutionarily stable outcome of mutual defection (the Nash equilibrium), in which both players receive d and forgo the higher payoff r that they would have received if P1 had trusted P2 and P2 had cooperated (the Pareto optimum).
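As a concrete reference point, the two-phase payoff structure just described can be sketched in a few lines of Python. The specific values d = 0.3 and r = 0.8 are my own illustrative assumptions; the model only requires 0 < d < r < 1:

```python
def trust_game(p1_trusts, p2_cooperates, d=0.3, r=0.8):
    """Return (P1 payoff, P2 payoff) for one pairwise interaction
    in the modified trust game, with 0 < d < r < 1."""
    if not p1_trusts:
        return (d, d)        # no trust: both get the defector's payoff
    if p2_cooperates:
        return (r, r)        # trust honored: both get the cooperator's payoff
    return (0.0, 1.0)        # trust betrayed: P2 takes 1, P1 gets nothing
```

Note that the Nash equilibrium (mutual defection, payoff d each) is strictly worse for both players than the Pareto optimum (trust plus cooperation, payoff r each), which is what makes the evolution of trust interesting here.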

Extra richness comes from allowing P1 to sample, at a cost, n previous decisions by P2 players. Trust is a heritable trait of P1 players: they always trust, never trust, or trust by sampling, subject to an integer threshold k with 1 ≤ k ≤ n. Samplers trust P2 if and only if the sampled P2s were trustworthy on at least k of the n sampled episodes. Every trust strategy can thus be expressed as a value of k: those who always trust have k = 0, while those who never trust have k = n + 1. Samplers pay a cost c, where 0 ≤ c < d; the unconditional strategies of always or never trusting incur no sampling cost. Players in the P2 role also have a heritable trait – the probability of cooperating with a P1.
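The threshold convention above (k = 0 always trusts, k = n + 1 never trusts, intermediate k samples) can be sketched as a single decision function. The function name and the representation of past P2 decisions as a pool of booleans are my own assumptions for illustration:

```python
import random

def p1_trusts(k, n, p2_history, rng=random):
    """Decide whether a P1 with heritable threshold k trusts, given a pool
    of previous P2 decisions (True = cooperated). Sampling-cost bookkeeping
    is left to the caller, since only the 1 <= k <= n strategies pay it."""
    if k == 0:
        return True                       # unconditional truster, no sampling
    if k == n + 1:
        return False                      # unconditional mistruster, no sampling
    sample = rng.sample(p2_history, n)    # inspect n past P2 decisions, at cost c
    return sum(sample) >= k               # trust iff at least k were trustworthy
```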

The reproductive fitness of an agent is the sum of its payoffs across the two roles. The population is modeled as infinite, with mathematical equations rather than discrete agents. Results show that variability in social awareness (trusting on the basis of prior evidence of trustworthy behavior) encourages variability in trustworthiness. Such variability in trustworthiness, in turn, favors variability in social awareness, even when such awareness is costly. In other words, the two personality traits (trust and trustworthiness) coevolve, which can perhaps account for the extensive human variation found in empirical studies, across both people and cultures (Fehr & Fischbacher, 2003; Henrich et al., 2004).
Hearing about this study from me, critics remarked that trustworthiness seems no different from reputation, which has been the focus of several EGT models and human experiments (Leimar & Hammerstein, 2001; Nowak & Sigmund, 1998). I believe that what is new in McNamara’s model is the coevolution, maintenance, and mutual influence of two personality traits: trust and trustworthiness. If there were insufficient variation in trust, there would be little variation in trustworthiness, and vice versa. If agents vary in trustworthiness, there is a good evolutionary reason for costly social monitoring. And if there is sufficient variation in trust, there is a sound evolutionary reason to calibrate one’s tendency to cooperate (trustworthiness). So, by introducing these two new, initially randomly valued traits, McNamara and colleagues document a newish (for EGT) evolutionary mechanism for the emergence of cooperation over the Nash-equilibrium baseline of mutual defection. In empirical studies with humans, trust had already been proposed as an important mechanism for sustainable management of public goods (Ostrom, 1998, 1999). McNamara and colleagues complement this work by exploring how the evolution of trust depends on personality variation, both in trust and in the complementary dimension of trustworthiness.
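To make the coevolution idea concrete, here is a toy discrete-agent version of the model. This is heavily hedged: the paper analyses an infinite population with equations, and all parameter values, the mutation scheme, and the shared pool of recent P2 decisions below are my own simplifications, not the authors'. Each agent carries the two heritable traits: a trust threshold k for the P1 role and a cooperation probability p for the P2 role.

```python
import random

def simulate(pop_size=200, n=3, generations=50, d=0.3, r=0.8, c=0.05,
             mut=0.05, seed=0):
    """Toy agent-based sketch of the trust/trustworthiness model."""
    rng = random.Random(seed)
    # Each agent: trust threshold k in {0, ..., n+1}, cooperation prob p.
    pop = [{"k": rng.randint(0, n + 1), "p": rng.random(), "fit": 0.0}
           for _ in range(pop_size)]
    history = [rng.random() < 0.5 for _ in range(10 * n)]  # recent P2 decisions
    for _ in range(generations):
        for agent in pop:
            agent["fit"] = 0.0
        for _ in range(5 * pop_size):          # random pairwise interactions
            p1, p2 = rng.sample(pop, 2)
            k = p1["k"]
            if k == 0:
                trusts, cost = True, 0.0       # always trust, free
            elif k == n + 1:
                trusts, cost = False, 0.0      # never trust, free
            else:                              # sampler pays cost c
                trusts = sum(rng.sample(history, n)) >= k
                cost = c
            if not trusts:
                p1["fit"] += d - cost
                p2["fit"] += d
            else:
                coop = rng.random() < p2["p"]
                history.append(coop)           # record P2's decision
                history.pop(0)
                if coop:
                    p1["fit"] += r - cost
                    p2["fit"] += r
                else:
                    p1["fit"] += -cost         # betrayed: P1 gets nothing
                    p2["fit"] += 1.0
        # Fitness-proportional reproduction with a little mutation.
        weights = [max(a["fit"], 1e-9) for a in pop]
        parents = rng.choices(pop, weights=weights, k=pop_size)
        pop = [{"k": rng.randint(0, n + 1) if rng.random() < mut else par["k"],
                "p": min(1.0, max(0.0, par["p"] + rng.gauss(0.0, 0.1)))
                     if rng.random() < mut else par["p"],
                "fit": 0.0}
               for par in parents]
    return pop
```

This sketch is for intuition only; it does not replicate the paper’s quantitative results, and following the distributions of k and p across generations is left to the reader.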

My own view is that trust is likely important in the emergence of novel cooperation between humans of different groups and apparent distinct interests. The remarkable new dialog between the US and Iran may be cited as a current example. Trust will make it, or mistrust will break it. Stay tuned.

McNamara (2013) reviews several other examples of how adding new, realistic variables changes evolutionary outcomes in games of mate selection, divorce, parental investment, and hawk-dove. Moreover, the extra richness need not compromise an analytical approach, as McNamara’s models tend to be mathematical rather than computational.

Some of the newer EGT models in our lab can fit rather comfortably with McNamara’s emphasis on variation between agents. Artem’s blog post on the possible discrepancy between objective and subjective rationality raises the possibility that agents may differ in their impression of what game is being played, and that apparent irrationality in game playing could result from rational processes being applied to subjectively perceived payoffs. Marcel’s post on quasi-magical thinking is also relevant, in which he points out that rational decision making with self-biased learning can result in irrational cooperation in the public goods game. Finally, my post on the need for social connections identifies some ways in which agents’ perceived payoffs differ from experimenter-designed payoffs.

So maybe we should all try to get a bit richer.


Box, G. E. P. (1979). Robustness in the strategy of scientific model building. In R. L. Launer & G. N. Wilkinson (Eds.), Robustness in statistics.

Fehr, E., & Fischbacher, U. (2003). The nature of human altruism. Nature, 425: 785-791.

Henrich, J., Boyd, R., Bowles, S., Camerer, C., Fehr, E., & Gintis, H. (2004). Foundations of human sociality: economic experiments and ethnographic evidence from fifteen small-scale societies. Oxford: Oxford University Press.

Leimar, O., & Hammerstein, P. (2001). Evolution of cooperation through indirect reciprocity. Proceedings of the Royal Society B, 268: 745-753.

McNamara, J. M. (2013). Towards a richer evolutionary game theory. Journal of the Royal Society Interface, 10(88). PMID: 23966616.

McNamara, J. M., Stephens, P. A., Dall, S. R. X., & Houston, A. I. (2009). Evolution of trust and trustworthiness: social awareness favours personality differences. Proceedings of the Royal Society B, 276: 605-613.

Nowak, M. A., & Sigmund, K. (1998). Evolution of indirect reciprocity by image scoring. Nature, 393: 573-577.

Ostrom, E. (1998). A behavioral approach to the rational choice theory of collective action. American Political Science Review, 92(1): 1-22.

Ostrom, E. (1999). Coping with tragedies of the commons. Annual Review of Political Science, 2: 493-535.


About Thomas Shultz
Professor, Department of Psychology; Associate Member, School of Computer Science, McGill University
