# Quasi-magical thinking and the public good

Cooperation is a puzzle: behavior that benefits the group is remarkably common, despite the fact that defection is often best for the individual. Though we tend to view this issue through the lens of the prisoner’s dilemma, Artem recently pointed me to a paper by Joanna Masel, a mathematical biologist at the University of Arizona, that focuses on the public goods game [1]. In this game, each player is given 20 tokens and chooses how many of them to contribute to a common pool. Once all players have decided, the pool is multiplied by some factor m (where 1 < m < n, with n the number of players) and distributed equally among all players. To maximize the group’s payoff, players should take advantage of the pool’s multiplicative effect by contributing all of their tokens. However, because a player’s share does not depend on the size of their own contribution, contributing is never individually optimal: by contributing nothing, a player keeps all of the tokens they initially received and still collects an equal share of the pool. Universal defection is therefore the Nash equilibrium. This conflict captures the puzzle of cooperation, which in this case is: if never contributing is individually optimal, why do human participants routinely contribute about half of their tokens?
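To make the incentive structure concrete, here is a minimal sketch in Python (the multiplier of 1.5 and the group size of four are illustrative choices, not values from the paper):

```python
# Toy public goods game: each of n players starts with 20 tokens.
# Contributions are pooled, multiplied by m (with 1 < m < n),
# and the multiplied pool is split equally among all players.
def payoff(contributions, m=1.5):
    n = len(contributions)
    share = m * sum(contributions) / n
    # Each player keeps what they didn't contribute, plus an equal share.
    return [20 - c + share for c in contributions]

full_cooperation = payoff([20, 20, 20, 20])   # everyone ends up with 30
one_defector = payoff([0, 20, 20, 20])        # defector: 42.5, others: 22.5
```

The defector earns more than anyone under full cooperation, even though full cooperation beats full defection for the group as a whole, which is exactly the tension described above.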

As Masel points out, various attempts at explaining human cooperation in this context have failed. The proposal that players simply fail to understand the game is contradicted by the finding that cooperation persists when subjects play for extended periods, and even resets to high levels when a new round is started [2]. The argument that players cooperate in an attempt to evoke reciprocity also falls flat: subjects who play anonymously, with no knowledge of their partners’ contributions, not only continue to cooperate, but do so at equal [3, 4] or even higher levels [5]. A final proposal, of particular interest to us, is the suggestion that players are using a utility function (i.e., considering a payoff matrix) that deviates from objective reality. In other words, they are weighing subjective factors such as fairness, the group’s payoff, the rewarding nature of contributing, and so on.

To explain the data while straying as little as possible from the assumption of rationality, Masel proposes that human reasoning may be captured by the idea “what if everyone else thought like me?” Specifically, even though players understand there is no causal link between their own behavior and that of others, they may nevertheless recognize that a correlation exists, and this realization may be sufficient motivation to contribute. Famously proposed by Shafir and Tversky [6], this phenomenon is known as quasi-magical thinking: acting as if one believed (without actually believing) that one’s actions affect the behavior of others. The principle may be best captured by a sentiment often expressed by voters, who individually have very little influence on the outcome of any given election: “if I don’t vote, then who will?” In this case, players contribute because they are acting as if they believe that contributing makes others more likely to contribute.

Let’s just hope they think like me and cooperate.

(As an aside, Artem points out that this idea resembles Douglas Hofstadter’s concept of superrationality, a type of decision making where individuals assume that, in a symmetric game, both parties will arrive at the same answer. Because unilateral actions are off limits, this results in cooperation instead of defection, since cooperation is the best mutual strategy. The difference, in this case, is that players do not assume that others will mirror their actions; rather, they are simply sensitive to the fact that their behavior is likely to be correlated to some degree with the behavior of others.)

To test this idea, Masel considers agents using a Bayesian update scheme to estimate how much others contribute and how much these contributions vary. This would ordinarily result in a race to the bottom, with agents converging to the Nash equilibrium (no one contributing any of their tokens). Masel avoids this by having agents treat their own expected contribution as a data point akin to other players’ contributions. This expected contribution is weighted more heavily initially, while an agent’s confidence in its estimate of the average contribution is low, and becomes weighted less heavily relative to external data as time goes on and confidence grows. As a result, agents can increase their estimate of the average contribution simply by expecting to contribute more themselves, particularly when not enough reliable data has been collected to disagree.
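A minimal sketch of this self-weighting idea (a hypothetical simplification of my own, not Masel's actual Bayesian model): treat the agent's own expected contribution as pseudo-data worth a fixed number of observations, so that its influence shrinks as genuine observations of other players accumulate.

```python
# Hypothetical simplification: the agent's own expected contribution enters
# its estimate of the group average as pseudo-data counting for
# `self_weight` observations; real data gradually swamps it.
def estimated_mean(own_expected, observed, self_weight=5.0):
    total = self_weight * own_expected + sum(observed)
    count = self_weight + len(observed)
    return total / count
```

With no external data, the estimate simply equals the agent's own expectation; as observations accumulate, it converges to the empirical mean of what others actually do, mirroring the declining weight Masel describes.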

In principle, such a bias seems reasonable. It would encourage cooperation, despite cooperation not being individually optimal, and avoids strongly violating the assumption of rationality by explaining the tendency to cooperate as a consequence of what data is used to predict others’ behavior. This broadly agrees with the finding that making choices influences expectations [7] and, conversely, that estimating others’ actions prior to making a choice leads to reduced contributions [8]. In short, leveraging the knowledge that “I am like them” may explain, in rational terms, seemingly irrational cooperation in the public goods game.

### References

1. Masel, J. (2007). A Bayesian model of quasi-magical thinking can explain observed cooperation in the public good game. Journal of Economic Behavior & Organization, 64(2), 216-231. doi: 10.1016/j.jebo.2005.07.003
2. Isaac, R. M., Walker, J. M., & Thomas, S. H. (1984). Divergent evidence on free riding: An experimental examination of possible explanations. Public Choice, 43(2), 113-149. doi: 10.1007/bf00140829
3. Brandts, J., & Schram, A. (2001). Cooperation and noise in public goods experiments: applying the contribution function approach. Journal of Public Economics, 79(2), 399-427. doi: 10.1016/s0047-2727(99)00120-6
4. Weimann, J. (1994). Individual behaviour in a free riding experiment. Journal of Public Economics, 54(2), 185-200. doi: 10.1016/0047-2727(94)90059-0
5. Andreoni, J. (1988). Why free ride? Journal of Public Economics, 37(3), 291-304. doi: 10.1016/0047-2727(88)90043-6
6. Shafir, E., & Tversky, A. (1992). Thinking through uncertainty: Nonconsequential reasoning and choice. Cognitive Psychology, 24(4), 449-474. doi: 10.1016/0010-0285(92)90015-t
7. Dawes, R. M., McTavish, J., & Shaklee, H. (1977). Behavior, communication, and assumptions about other people’s behavior in a commons dilemma situation. Journal of Personality and Social Psychology, 35(1), 1-11.
8. Croson, R. T. A. (2000). Thinking like a game theorist: factors affecting the frequency of equilibrium play. Journal of Economic Behavior & Organization, 41(3), 299-314. doi: 10.1016/s0167-2681(99)00078-5

Marcel Montrey is a graduate student at McGill University, working in Tom Shultz's Laboratory for Natural and Simulated Cognition (LNSC). His primary interests lie in computational models of learning and evolution.

### 12 Responses to Quasi-magical thinking and the public good

1. Thanks for the great post Marcel! As we discussed on Friday, it seems that in our case, the appropriate way to modify our agents’ updating is for each agent to simply “observe themselves” during each interaction; the rules for calculating $p$ and $q$ then become:

$p = \frac{2 n_{CC} + n_{CD} + 1}{ 2(n_{CC} + n_{CD}) + 2}$, and $q = \frac{ n_{DC} + 1}{ 2(n_{DC} + n_{DD}) + 2}$

The above amounts to the agent counting what they would have done once. However, we could weigh these behaviors differently by a factor $\alpha$ (self-importance?), with the above case corresponding to $\alpha = 1/2$; our update rules would then become:

$p = \frac{n_{CC} + \alpha n_{CD} + 1}{ n_{CC} + n_{CD} + 2}$, and $q = \frac{(1 - \alpha) n_{DC} + 1}{n_{DC} + n_{DD} + 2}$

The limit $\alpha = 1$ corresponds to Hofstadter’s superrationality.
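For concreteness, the generalized update rules can be sketched in Python (this is just my reading of the formulas above, with the counts $n_{CC}$, $n_{CD}$, $n_{DC}$, $n_{DD}$ and $\alpha$ as defined in the comment; it has not been checked against any actual simulation):

```python
# Hypothetical helper: Laplace-smoothed (+1 / +2) estimates of the
# cooperation probabilities p and q, with the agent's own hypothetical
# action weighted by the "self-importance" factor alpha.
def p_q(n_CC, n_CD, n_DC, n_DD, alpha=0.5):
    p = (n_CC + alpha * n_CD + 1) / (n_CC + n_CD + 2)
    q = ((1 - alpha) * n_DC + 1) / (n_DC + n_DD + 2)
    return p, q
```

With no observations at all, both estimates start at 1/2, as expected from the +1/+2 smoothing terms.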

2. Another explanation for the evolution of cooperation in the public goods game is communication [1]. Game theorists often forget that agents communicate in the real world, and thus don’t include communication in their models. These papers point out this flaw and show that communication (e.g., punishment [2]) is sufficient to explain the evolution of cooperation in an environment that would typically favor defection.

[1] Critical dynamics in the evolution of stochastic strategies for the iterated Prisoner’s Dilemma. http://arxiv.org/pdf/1004.2020.pdf

[2] Punishment catalyzes the evolution of cooperation. http://arxiv.org/pdf/1210.5233v1.pdf

• Thank you for sharing your group’s papers, Randy! I agree that there are countless explanations for cooperation in public good (and PD) games. My favorite introductory survey cataloging some of these approaches is:

West, S. A., Griffin, A. S., & Gardner, A. (2007). Evolutionary explanations for cooperation. Current Biology, 17(16), R661-R672.

Figure 2 in the above paper is an amazing classification of mechanisms. Of course, the paper is not comprehensive. For instance, it does not mention spatial or density-dependent/ecological effects that are extremely popular (except in the simpler island models of limited dispersal).
