Egalitarians’ dilemma and the cognitive cost of ethnocentrism

Ethnocentrism (or contingent altruism) can be viewed as one of many mechanisms for enabling cooperation. The agents are augmented with a hereditary tag, and the strategy space is extended from just cooperation/defection to behaviour that can be contingent on whether the dyad shares or differs in tag. Tags and strategies are not inherently correlated, but local correlations can develop through the system dynamics. This can expand the range of environments in which cooperation can be maintained, but an assortment-biasing mechanism is needed to fuel the initial emergence of cooperation (Kaznatcheev & Shultz, 2011). The resulting cooperation is extended only towards the in-group, while the out-group continues to be treated with the cold rationality of defection.

Suppose that circles are the in-group and squares the out-group. The four possible strategies and their minimal representations as finite state machines are given.

The four possible strategies are depicted above, from top to bottom: humanitarian, ethnocentric, traitorous, and selfish. Humanitarians and selfish agents do not condition their behavior on the tag of their partner, and so do not require the cognitive ability to categorize. Although this ability is simple, it can still merit a rich analysis (see Beer, 2003) by students of minimal cognition. By associating a small fitness cost k with categorization, we can study how much ethnocentric (and traitorous) agents are willing to pay for their greater cognitive abilities. This cost directly changes the default probability to reproduce (ptr), with humanitarians and selfish agents having ptr = 0.11 and ethnocentric and traitorous agents having ptr = 0.11 - k. During each cycle, the ptr is further modified by the game interactions, with each cooperative action costing c = 0.01 and providing a benefit b (which varies depending on the simulation parameters) to the partner. For a more detailed presentation of the simulation and default parameters, or just to follow along on your computer, I have made my code publicly available on GitHub. Pardon its roughness; the brunt of it is legacy code from when I first built this model in 2009 for Kaznatcheev (2010).
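To make the payoff structure concrete, here is a minimal Python sketch of a single pairwise interaction (my own illustrative stand-in, not the actual MATLAB code on GitHub; the strategy names follow the text, but the function names and the default values of k and b are my choices):

```python
# Sketch of one pairwise interaction in the tag-based model described above.
# Parameters from the text: base ptr = 0.11, cognition cost k,
# cooperation cost c = 0.01, benefit b (varies by simulation).

# Each strategy specifies whether to cooperate with (in-group, out-group) partners.
STRATEGIES = {
    "humanitarian": (True, True),    # cooperates with everyone; no tag perception
    "ethnocentric": (True, False),   # cooperates only with same-tag partners; pays k
    "traitorous":   (False, True),   # cooperates only with other-tag partners; pays k
    "selfish":      (False, False),  # defects against everyone; no tag perception
}

def base_ptr(strategy, k=0.002):
    """Default probability to reproduce, reduced by k for tag-perceiving agents."""
    return 0.11 - (k if strategy in ("ethnocentric", "traitorous") else 0.0)

def interact(ptr_a, strat_a, tag_a, ptr_b, strat_b, tag_b, b=0.025, c=0.01):
    """Modify both agents' ptr after one game interaction."""
    same = (tag_a == tag_b)
    a_cooperates = STRATEGIES[strat_a][0 if same else 1]
    b_cooperates = STRATEGIES[strat_b][0 if same else 1]
    if a_cooperates:   # a pays c, b receives b
        ptr_a -= c
        ptr_b += b
    if b_cooperates:   # b pays c, a receives b
        ptr_b -= c
        ptr_a += b
    return ptr_a, ptr_b
```

For example, when a humanitarian circle meets an ethnocentric square, only the humanitarian cooperates: it pays c while the ethnocentric collects b.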

Number of agents by strategy versus evolutionary cycle. The lines represent the number of agents of each strategy: blue, humanitarian; green, ethnocentric; yellow, traitorous; red, selfish. The width of the line corresponds to the standard error from averaging 30 independent runs. The two figures correspond to different costs of cognition. The left is k = 0.002 and is typical of runs before the cognitive cost phase transition. The right is k = 0.007 and is typical of runs after the cognitive cost phase transition. Figure adapted from Kaznatcheev (2010).


The dynamics for low k are about the same as in the standard model without a cognitive cost, as can be seen in the left figure above. However, as k increases there is a transition to a regime where humanitarians start to dominate the population, as in the right figure above. To study this, I ran simulations with a fixed b/c ratio and k increasing from 0.001 to 0.02 in steps of 0.001. You can run your own with the command bcRun(2.5,0.001*(1:20)); some results are presented below. Your results might differ slightly due to the stochastic nature of the simulation.

Proportion of humanitarians (blue), ethnocentrics (red), and cooperative interactions (black) versus cognitive cost for b/c = 2.5. Dots are averages from evolutionary cycles 9000 to 10000 of 10 independent runs. The lines are best-fit sigmoids, and the dotted lines mark the steepest point, which I take as the point of the cognitive cost phase transition. Data generated with bcRun(2.5,0.001*(1:20)) and visualized with bcPlot(2.5,0.001*(1:20),[],1).

Each data point is the average from the last 1000 cycles of 10 independent simulations. The points suggest a phase transition from a regime of few humanitarians (blue), many ethnocentrics (red), and very high cooperation (black) to one of many humanitarians, few ethnocentrics, and slightly less cooperation. To get a better handle on exactly where the phase transition is, I fit sigmoids to the data using fitSigmoid.m. The best-fit curves are shown as solid lines; I defined the point of phase transition as the steepest (or inflection) point on the curve and plotted these points with dashed lines for reference. I am not sure if this is the best approach to quantifying the point of phase transition, since the choice of sigmoid function is arbitrary and based only on the qualitative feel of the data. It might be better to fit a simpler function like a step function, or a more complicated function from which a critical exponent can be estimated. Do you know a better way to identify the phase transition? At the very least, I have to properly measure the error on the averaged data points and propagate it through the fit to get error bounds on the sigmoid parameters, and make sure that, within statistical certainty, all three curves have their phase transition at the same point.
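For readers without fitSigmoid.m, the fitting step can be approximated with a short pure-Python sketch (a brute-force least-squares stand-in of my own devising, not the actual fitting code): fit a logistic sigmoid and report its midpoint k0, which for this functional form is exactly the steepest (inflection) point.

```python
import math

def sigmoid(k, lo, hi, s, k0):
    """Logistic sigmoid from lo to hi with steepness s and inflection at k0."""
    return lo + (hi - lo) / (1.0 + math.exp(-s * (k - k0)))

def transition_point(ks, ys):
    """Brute-force least-squares sigmoid fit; returns the inflection point k0."""
    lo, hi = min(ys), max(ys)  # pin the asymptotes to the data extremes
    s_grid = [100.0 * i for i in range(1, 51)]
    k0_grid = [min(ks) + (max(ks) - min(ks)) * i / 200.0 for i in range(201)]
    best_err, best_k0 = float("inf"), None
    for s in s_grid:
        for k0 in k0_grid:
            err = sum((sigmoid(k, lo, hi, s, k0) - y) ** 2
                      for k, y in zip(ks, ys))
            if err < best_err:
                best_err, best_k0 = err, k0
    return best_k0
```

A proper treatment would also propagate the measurement error on each averaged point into confidence bounds on k0, for instance by bootstrapping over the independent runs.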

The most interesting feature of the phase transition is its effect on cooperation. The world becomes more equitable; agents that treat the out-group differently from the in-group (ethnocentrics) are replaced by agents that treat everyone with equal good-will and cooperation (humanitarians). However, the overall proportion of cooperative interactions decreases; it seems that humanitarians are less effective at suppressing selfish agents. This is consistent with the free-rider suppression hypothesis that Shultz et al. (2009) believed to be implausible. The result is the egalitarians' dilemma: by promoting equality among agents, the world becomes less cooperative. Should one favour equality, and thus individual fairness, over the good of the whole population? If we expand our moral circle to eliminate out-groups, will that lead to less cooperation?

In the prisoners' dilemma, we are inclined to favour the social good over the individual. Even though it is rational for the individual to defect (securing a higher payoff for themselves than cooperating), we believe it is better for both parties to cooperate (securing a better social payoff than mutual defection). But in the egalitarians' dilemma we are inclined to favour the individualistic strategy (fairness for each) over the social good (higher average levels of cooperative interactions). We see a similar effect in the ultimatum game: humans reject unfair offers even though that results in neither player receiving a payoff (worse for both). In some ways, we can think of the egalitarians' dilemma as the population analogue of the ultimatum game; should humanity favour fairness over higher total cooperation?

I hinted at some of these questions in Kaznatcheev (2010), but I restricted myself to just b/c = 2.5. From this limited data, I concluded that since the phase transition happens for k less than any other parameter in the model, it must be the case that agents are not willing to invest many resources into developing larger brains capable of categorical perception just to benefit from an ethnocentric strategy. Ethnocentrism and categorical perception would not have co-evolved; the basic cognitive abilities would have to be in place by some other means (or be incredibly cheap) before tag-based strategies could emerge.

Points of phase transition

Value of k at the phase transition versus b/c ratio. In blue is the transition in the proportion of humanitarians; in red, the proportion of ethnocentrics; and in black, the proportion of cooperative interactions. Each data point comes from a parameter estimate made by a sigmoid best fit to 200 independent simulations over 20 values of k at a resolution of 0.001.

Here, I explored the parameter space further by repeating the above procedure while varying the b/c ratio: b ranged from 0.02 to 0.035 in increments of 0.0025, with c fixed at 0.01. Unsurprisingly, the transitions for the proportions of ethnocentrics and humanitarians are indistinguishable, but without a proper analysis it is not clear if the transition from high to low cooperation always coincides with them. For b/c > 2.75, agents are willing to invest more than c before the phase transition to all humanitarians; this invalidates my earlier reasoning. Agents are unwilling to invest many resources in larger brains capable of categorical perception only in competitive environments (low b/c). As b increases, the agents are willing to invest more in their perception to avoid giving this large benefit to the out-group. This seems consistent with the explicit out-group hostility that Kaznatcheev (2010b) observed in the harmony game. However, apart from simply presenting the data, I can't make much more sense of this figure. Do you have any interpretations? Can we learn something from the seemingly linear relationship? Does the slope (if we plot k versus b then it is about 0.5) tell us anything? Would you still conclude that co-evolution of tag-based cooperation and categorical perception is unlikely?
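As a quick sanity check on that slope, one could fit an ordinary least-squares line to the (b, transition k) points. The sketch below is my own illustration; the data values are made-up placeholders lying on an exact line of slope 0.5, not the simulation output:

```python
def ols_slope(xs, ys):
    """Slope of the ordinary least-squares line through (xs, ys)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Placeholder points: benefit b versus transition cost k (NOT the real data).
bs = [0.02, 0.025, 0.03, 0.035]
ks = [0.010, 0.0125, 0.015, 0.0175]
slope = ols_slope(bs, ks)  # 0.5 for these placeholder points
```

On the real data one would also want the standard error of the slope, to decide whether the apparent linearity is statistically meaningful.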


Beer, R. D. (2003). The dynamics of active categorical perception in an evolved model agent. Adaptive Behavior, 11(4): 209-243.

Kaznatcheev, A. (2010). The cognitive cost of ethnocentrism. Proceedings of the 32nd Annual Conference of the Cognitive Science Society.

Kaznatcheev, A. (2010b). Robustness of ethnocentrism to changes in inter-personal interactions. Complex Adaptive Systems – AAAI Fall Symposium.

Kaznatcheev, A., & Shultz, T. R. (2011). Ethnocentrism maintains cooperation, but keeping one’s children close fuels it. Proceedings of the 33rd Annual Conference of the Cognitive Science Society. 3174-3179.

Shultz, T. R., Hartshorn, M., & Kaznatcheev, A. (2009). Why is ethnocentrism more common than humanitarianism? Proceedings of the 31st Annual Conference of the Cognitive Science Society. 2100-2105.



12 Responses to Egalitarians’ dilemma and the cognitive cost of ethnocentrism


  7. Epiphyte says:

    Maybe it helps to think of Noah’s ark. Noah engaged in individualistic behavior. The group certainly did not perceive any collective benefit from Noah’s behavior. Yet, they didn’t prevent him from building and boarding his boat. If they had prevented him from going his own direction…then we wouldn’t be here today. Well…it’s just a story…but it helps illustrate collective vs individual behavior.

    If somebody has a crystal ball…then there’s no debate regarding how to define the common good. But the more you embrace the reality of uncertainty…I think the more inclined you are to tolerate individualistic behavior. It allows us to hedge our bets.

    Personally…I do think the free-rider problem is a legitimate concern. So it’s not unreasonable to coerce people to contribute to the collective. But because nobody has a crystal ball…I think taxpayers should be able to choose which collective activities their contributions support.

    This means that taxpayers can’t defect from contributing to the collective…but they can defect from collective activities which they perceive to be nonsensical.

    Linus’s Law is that, given enough eyeballs, all bugs are shallow. Uncertainty means that you really don’t want members of your group being blindfolded. You want everybody to be on the lookout for hidden benefits/threats. If people can’t choose where their taxes go…then you effectively blindfold members of your group.


  10. Your model description implies that it takes more cognitive ability to be “ethnocentric” (and treacherous) than to be either humanitarian or selfish. What if we look at the value of a cognitive system not in terms of the ability to recognize who is likely to be in-group and who out-group, but as a function of greater information about the fairness or unfairness of behaviour over time, rather than of increasing competition (higher b/c)? I suggest that cooperative information sharing would have evolved because it was a cheap way of tagging defectors. In that way, you could favour fairness in individual interactions, and this serves as a kind of placeholder for occasional large-scale cooperative events.

    A cognitive ability to hold detailed information on a certain number of other individuals need not be limited to the number suggested by Dunbar’s number (150), since if information sharing between individuals is possible, this permits every one of those 150 contacts to transmit information to their own contacts. The assumption is often made that social networks are limited to this size, but that is only because most people assume that everyone in a network will have the same 150 individuals in their network. But reality is not like that. There is almost always some overlap (we might each know four or five people in common, and that is how we met, but our other contacts will not necessarily overlap). Look at the recent work on social networks here:

    If you look at the illustration of the kind of groups he finds most effective in spreading information around social networks, you will see that a totally uniform social group is less effective than a lumpy situation, with some correlation causing group cohesion within a larger pool of information sharing. “There’s a belief that the more that people interact with strangers, the more that new ideas and beliefs will spread,” Centola said. “What this study shows is that preserving group boundaries is actually necessary for complex ideas to become accepted across diverse populations… This is especially true for adopting new solutions to hard problems.”

    This corresponds very well to the way human cultures are actually organized. Human cognitive systems developed in a particular context – and it was not one of constant competition between in-groups and out-groups.

    I studied hunter-gatherers and I found three things that most other researchers also did: local groups were very fluid in composition, consisting of about 28 persons of various ages, comprising 3-5 households. The individuals were not free actors; their behaviour was contingent on their preferential (trusting) cooperation with members of their own household (husband, wife, children, and any associated grandparent) and cooperative sharing of certain resources with the other households in the camp. There was a division of labour which resulted in two tiers of sharing: meat resulting from hunting efforts by men was shared camp-wide, while vegetables, nuts, and fruits were generally shared within the household.

    Camp group (band) composition was in flux over the course of the year, and the location of camping sites changed every few weeks or months. Since the social networks of individual adults extended far beyond any current camp, to friends and relatives at other camping sites, social reasons (to visit a friend or relative, or to escape a conflict with another person in the present camp) were the main reason why households moved among various groups over the course of a year.

    The social networks thus tended to extend to the limits of a language or dialect grouping of between 800 and 3000 people, or hundreds of miles. This corresponds to the ethnocentric boundaries between groups, and the particular dialect or language spoken effectively tags members of different groups.

    In research among hunter-gatherers, however, we do not generally find hostile or competitive relationships dominating the interactions and communications between these larger groups. Nor do they engage in tit-for-tat trading. Instead there is an exchange of non-material things like stories, songs, and jokes, and mutual participation in ritual dances. Multilingualism is generalized, so information flow does happen. Some of the information given and received is about environmental conditions and game movements, but much of it is in the form of stories about human behaviour; gossip and scandal feature here. Specific instances of misconduct (selfishness, theft, violence, and murder) by individuals are exchanged, so both groups benefit from warnings about particular defectors from social norms. Since there is some intermarriage between these different groups, it is possible that a selfish or treacherous individual might move between them. This exchange of information makes it less likely that such a person would be able to receive permission to join a camping party in any of the neighbouring language groupings.

    I don’t know if this is useful to you in developing models that relate the development of greater cognitive ability to cooperative/altruistic strategies among humans during their evolution, but I would be interested to know your thoughts on it.

    • Thank you for this detailed comment and I am sorry that it took me so long to reply.

      Your primary point seems to be a discussion of information flow within and between groups. This has been studied in the EGT literature under indirect reciprocity through reputation sharing or image scoring. The classic papers on this are:

      [1] Nowak, M. A., & Sigmund, K. (1998). Evolution of indirect reciprocity by image scoring. Nature, 393(6685), 573-577.

      [2] Nowak, M. A., & Sigmund, K. (2005). Evolution of indirect reciprocity. Nature, 437(7063), 1291-1298.

      I think this is a fun approach, and it would be interesting to see it extended to tag-based models. I am not sure to what extent this has been done, but a quick search suggests that looking at Nathan Griffiths’ work might be worthwhile. I have not come across these papers before, though, and am going off the abstracts:

      [3] Griffiths, N. (2008). Tags and image scoring for robust cooperation. In Proceedings of the 7th international joint conference on Autonomous agents and multiagent systems-Volume 2 (pp. 575-582). International Foundation for Autonomous Agents and Multiagent Systems.

      [4] Griffiths, N., & Luck, M. (2010). Norm emergence in tag-based cooperation.

      [5] Griffiths, N., & Luck, M. (2010). Changing neighbours: Improving tag-based cooperation. In Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems: volume 1-Volume 1 (pp. 249-256). International Foundation for Autonomous Agents and Multiagent Systems.

      Since I treat my model as a heuristic, and not as an insilication of human evolution or behavior, I prefer to take the time to understand the simplest mechanisms first, realizing that their effects can be negligible, or even reversed, under more complex structures. That is why I start with the simplest tag-based models. These are not meant to represent humans, and I am sometimes very uncomfortable with the choice of language my colleagues and I use in describing these models.

      Paper [5], in particular, although not dealing with image scoring directly, does explore the interaction between tag-based models and movement through or adjustment of a social network. The fluidity of this network in hunter-gatherers seems to be your second main point here. For this, the classic EGT work is:

      [6] Aktipis, C. A. (2004). Know when to walk away: contingent movement and the evolution of cooperation. Journal of Theoretical Biology, 231(2), 249-260.

      I find your second point to be the more exciting one. When I first started this sort of work, I was very focused on spatial structure represented by the static networks familiar to graph theorists. The representation was comfortable given my background in graph theory, combinatorics, and discrete math and my experience with the artificial networks that structure (too) much of my life. I was also excited about some of the tools promising analytic tractability.

      However, in recent years, I realized that these tools can be generalized away from the explicit graph-like representation to one in terms of distributions. Further, such non-explicitly graph-like representations are also more likely to be useful if we want to connect to experiments in natural settings, versus the artificiality of online social networks, web sites, and power grids.

      Your description of hunter-gatherers, which is consistent with what I’ve heard from other anthropologists, suggests to me that the graph-theory representation might not be the best; a more natural model would focus on nested ‘islands’ corresponding to the camps and the extended families that make them up. In the near future, I will have a blog post out on measuring these classic social networks in hunter-gatherers, based on:

      [7] Apicella, C. L., Marlowe, F. W., Fowler, J. H., & Christakis, N. A. (2012). Social networks and cooperation in hunter-gatherers. Nature, 481(7382): 497-501.

      However, since I am trying to move away from this representation, and since I am not an anthropologist, it would be great to have a post from an expert on the structure of social interactions among hunter-gatherers. Would you be interested in digging up some old papers and writing a guest post on this topic? Basically, expanding the description in your comment of how hunter-gatherer societies are structured into a piece of about 1-1.5k words. If you are interested then let me know, either here on the blog or by email at (lastname) (dot) (firstname) (at) gmail (dot) com.

      Again, thank you for the insightful comment.
