# Cooperation through useful delusions: quasi-magical thinking and subjective utility

Economists who take bounded rationality seriously treat their research like a chess game and follow the reductive approach: start with all the pieces — a fully rational agent — and kill/capture/remove pieces until the game ends, i.e. see what sort of restrictions can be placed on the agent so that it deviates from rationality and better reflects human behavior. Sometimes these restrictions can be linked to evolution, but usually the models are independent of evolutionary arguments. In contrast, evolutionary game theory has traditionally played Go and concerned itself with the simplest agents, capable only of behaving according to a fixed strategy specified by their genes — no learning, no reasoning, no built-in rationality. If evolutionary game theorists want to approximate human behavior then they have to play new stones and take a constructive approach: start with genetically predetermined agents and build them up to better reflect the richness and variety of human (or even other animal) behaviors (McNamara, 2013). I’ve always preferred Go over chess, and so I am partial to the constructive approach toward rationality. I like to start with replicator dynamics and work my way up, adding agency, perception and deception, ethnocentrism, or emotional profiles and general conditional behavior.

Most recently, my colleagues and I have been interested in the relationship between evolution and learning, both individual and social. A key realization has been that evolution takes its cues from an external reality, while learning is guided by a subjective utility, and there is no a priori reason for those two incentives to align. As such, we can have agents acting rationally on their genetically specified subjective perception of the objective game. To avoid making assumptions about how agents might deal with risk, we want them to know the probability that others will cooperate with them. However, this probability depends on the agent’s history and local environment, so each agent should learn it for itself. In our previous presentation of results we concentrated on the case where the agents were rational Bayesian learners, but we know that this is an assumption not justified by evolutionary models or observations of human behavior. Hence, in this post we will explore the possibility that agents have learning peculiarities like quasi-magical thinking, and how these peculiarities can co-evolve with subjective utilities.
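To make the learning step concrete, here is a minimal sketch of one way such a learner could work. The Beta-style pseudo-counts and the exact way the self-absorption parameter $\alpha$ folds the agent’s own action into its belief are my assumptions for illustration, not necessarily the implementation we used:

```python
# Sketch of a Bayesian learner with quasi-magical thinking.
# The agent tracks a Beta-like belief over the probability that a
# partner will cooperate. A standard Bayesian (alpha = 0) updates
# only on the partner's action; a quasi-magical thinker (alpha > 0)
# also counts its own cooperation as partial evidence that others
# cooperate.

class Learner:
    def __init__(self, alpha=0.0):
        self.alpha = alpha   # self-absorption, in [0, 1]
        self.coop = 1.0      # pseudo-count of cooperations seen (Beta(1,1) prior)
        self.total = 2.0     # pseudo-count of interactions seen

    def belief(self):
        """Current estimate of P(partner cooperates)."""
        return self.coop / self.total

    def observe(self, partner_cooperated, self_cooperated):
        # Partner's action is full evidence; own action counts with weight alpha.
        self.coop += partner_cooperated + self.alpha * self_cooperated
        self.total += 1 + self.alpha
```

With $\alpha = 0$ this reduces to standard Bayesian updating on the partner’s actions; with $\alpha > 0$ the agent’s own cooperation inflates its estimate that others cooperate, which is the quasi-magical flavour.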

We are interested in the evolution of cooperation: will agents give a benefit b to other agents at a cost c to themselves? In this context, nothing interesting happens in inviscid populations, so we have to introduce spatial structure. We chose random k-regular graphs (instead of arbitrary alternatives like grids or other kinds of lattices) because they allow us to use the Ohtsuki-Nowak (2006) transform to generate an analytic prediction for where the transition from cooperation to defection should fall in a population of fixed-strategy agents. In particular, we expect cooperation whenever $\frac{c}{b - c} < \frac{1}{k - 1}$. We concentrated on the most cooperation-enticing case of 3-regular random graphs, which means that we expect to find cooperation when the inverse of the objective Prisoner’s dilemma specialization coefficient is between 0 and 1/2. At $\frac{c}{b - c} = \frac{1}{2}$, we should see a rapid phase transition from an all-cooperative to an all-defective regime.
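As a sanity check on the numbers, the threshold can be sketched in a few lines. The function names are mine, and the critical value $u^* = 1/(k - 1)$ is simply the one implied by the transition at $\frac{c}{b - c} = \frac{1}{2}$ for $k = 3$:

```python
# Cooperation threshold on a random k-regular graph: fixed-strategy
# cooperators are expected to win when the inverse specialization
# coefficient u = c/(b - c) stays below u* = 1/(k - 1).

def u_star(k):
    """Critical value of c/(b - c) for a k-regular graph (k > 1)."""
    return 1.0 / (k - 1)

def expect_cooperation(b, c, k):
    """True when the analytic prediction favours cooperation."""
    u = c / (b - c)
    return u < u_star(k)
```

For example, on a 3-regular graph a game with b = 4, c = 1 gives u = 1/3 < 1/2 and sits in the cooperative regime, while b = 2, c = 1 gives u = 1 and should collapse to defection.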

The figures above plot the levels of quasi-magical thinking versus the inverse of the PD specialization coefficient:

• The left figure plots the proportion of quasi-magical thinkers in a setting where agent genotypes allow either standard Bayesian inference ($\alpha = 0$) or quasi-magical thinking ($\alpha = 1/2$).
• The right figure plots the average self-absorption value ($\alpha$) of the population, where the possible genotypes allow all levels of self-absorption between 0 and 1.
• Note that the left and right figure have different scales on their x-axes. In particular, to translate from the left to the right, you have to divide the x-value by two. The horizontal black lines at $\alpha = 0.25$ and $\alpha = 0.15$ (right figure), corresponding to a proportion of quasi-magical thinkers of 0.5 and 0.3 (left figure), are plotted for easy comparison.
• The red lines correspond to cases where every agent’s perceived game is fixed at its true value and only quasi-magical thinking evolves, and
• the green lines represent where perceived game co-evolves with quasi-magical thinking.
• Line thickness represents standard error from averaging 10 runs on random 3-regular graphs with 500 agents.

When we allow all values of $\alpha$ and initialize our simulations with genotypes selected uniformly at random, the expected value under no selection is 0.5. This is what we see in the highly specialized environments ($\frac{c}{b - c} < \frac{1}{2}$) of the green line in the right figure. Together with the high variance, this suggests that there is no selective pressure on self-absorption. Once we move into the low specialization regime ($\frac{c}{b - c} > \frac{1}{2}$), we see in the right figure a selective pressure for low self-absorption that is almost as strong in the co-evolutionary (green) as in the non-co-evolutionary (red) regime. However, if we have only two discrete $\alpha$ values corresponding to standard Bayesian inference and Masel’s (2007) quasi-magical thinking, then the selective pressures seem to be absent for both low and high specialization if subjective utilities are allowed to evolve (green in the left figure). This could be a coincidence based on the low specialization regime pushing toward an optimal average of $\alpha = 0.25$, but it is uncommon to have a U-shaped selective pressure for the PD game. We could test this theory by changing the mutation rates from Bayesian to quasi-magical to alter the neutral distribution, and seeing if the low specialization average follows the neutral distribution or stays around $\alpha = 0.25$. A second explanation (which is just a variant of the first) is that the region $0 \leq \alpha \leq 0.5$ is relatively neutral, but super-rationality ($0.5 \leq \alpha \leq 1$) is selected against. We could test this by looking at the distribution (instead of just the average value) of $\alpha$ in the right figure and seeing if the lower half looks like it is under no selection, and the upper half under selection. It would be nice to have a non-eyeball test for this.
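One candidate for a non-eyeball test: compare the lower half of the final $\alpha$ distribution against the uniform distribution (the neutral expectation) with a Kolmogorov-Smirnov test, and check how much mass survives in the super-rational range. The sketch below uses synthetic $\alpha$ values as a stand-in for actual simulation output:

```python
# Sketch of a non-eyeball test for "neutral below 0.5, selected
# against above 0.5". The alpha values here are synthetic stand-ins
# (uniform on [0, 0.5], thinned above 0.5), not simulation data.

import random
from scipy import stats

random.seed(1)
alphas = [random.uniform(0.0, 0.5) for _ in range(450)] + \
         [random.uniform(0.5, 1.0) for _ in range(50)]

lower = [a for a in alphas if a <= 0.5]
upper = [a for a in alphas if a > 0.5]

# Kolmogorov-Smirnov test of the lower half against Uniform(0, 0.5):
# a large p-value is consistent with no selection below alpha = 0.5.
ks_stat, p_value = stats.kstest(lower, 'uniform', args=(0.0, 0.5))

# Under full neutrality we would expect roughly half the mass above 0.5;
# a much smaller fraction suggests selection against super-rationality.
upper_fraction = len(upper) / len(alphas)
```

The same two-part check could be run per value of the specialization coefficient to see where the neutrality of the lower half breaks down.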

It is also important to understand how much quasi-magical thinking and subjective utilities contribute to cooperation. The best way to see this is by turning features off. In the figure at right, we have the proportion of cooperative interactions versus the inverse of the PD specialization coefficient. In blue we consider just the evolution of subjective utilities, in red we have just the evolution of standard Bayesian and quasi-magical thinkers, in green we have the co-evolution of both, and in black (actually yellow, but the standard error is too small) we have completely rational Bayesian agents that don’t undergo evolution. Line thickness represents standard error from averaging 10 runs. If we plot the general self-absorption case instead of the discrete standard Bayesian vs. quasi-magical thinking case, then the results are qualitatively the same, but quantitatively less pronounced.

Unsurprisingly, the evolutionary flexibility of having genetic access to both subjective utilities and quasi-magical thinking produces a curve that best approximates the ideal transition from all cooperation (maxing out at 0.9 because of the shaky hand) to all defection (bottoming out at 0.1 because of the shaky hand) at $\frac{c}{b - c} = \frac{1}{2}$. However, it is interesting to see that subjective utility and quasi-magical thinking on their own relax the transition in different directions. In particular, quasi-magical thinking on its own cannot achieve the expected levels of cooperation in the highly specialized regime, and subjective utility on its own takes longer to transition to all defection in the unspecialized regime. Further, the distribution of subjective utilities in the co-evolutionary and purely Bayesian cases is surprisingly similar, suggesting that subjective utilities are responding to an evolutionary pressure independent of (or maybe just significantly dominant over) the pressures on self-absorption.

To get a quantitative grasp of these transitions, it would be worthwhile to fit a sigmoid to them, as I’ve done before for understanding the cognitive cost of ethnocentrism. Concretely, this means for each of the four cases in the figure above finding parameters p and s such that:

$f_{p,s}(x) = \dfrac{0.8}{1 + \exp\Big(\dfrac{x - p}{s}\Big)} + 0.1$

minimizes the least squares error against the collected data. The parameter p will tell us where the phase transition is happening (so we expect values near 0.5) and s will tell us how sudden it is, with $s \rightarrow 0$ meaning very sudden and $s \rightarrow \infty$ meaning very gradual. For example, from eyeball inspection it looks like the phase transitions in the green and blue lines start slightly after the expected 0.5 point, hinting that the quasi-delusional agents might sustain cooperation for slightly longer than evolution of fixed pure strategies. However, I will save these explorations for next week, maybe with tests on random graphs of higher degree.
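A sketch of that fit with `scipy.optimize.curve_fit`; the data here is a synthetic stand-in generated from known parameters p = 0.5, s = 0.05, just to show that the procedure recovers them:

```python
# Fit the sigmoid f_{p,s}(x) = 0.8 / (1 + exp((x - p)/s)) + 0.1
# to (x, cooperation) data by least squares. The measurements below
# are a noiseless synthetic stand-in, not simulation output.

import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, p, s):
    return 0.8 / (1.0 + np.exp((x - p) / s)) + 0.1

# Hypothetical measurements along the inverse specialization axis.
x = np.linspace(0.0, 1.0, 41)
y = sigmoid(x, 0.5, 0.05)

# p locates the phase transition; s measures how sudden it is.
(p_hat, s_hat), _ = curve_fit(sigmoid, x, y, p0=(0.4, 0.1))
```

On real simulation averages the reported standard errors could also be passed to `curve_fit` via its `sigma` argument to weight the fit.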

### References

McNamara, J.M. (2013). Towards a richer evolutionary game theory. Journal of the Royal Society Interface, 10(88).

Masel, J. (2007). A Bayesian model of quasi-magical thinking can explain observed cooperation in the public good game. Journal of Economic Behavior and Organization, 64(2), 216-231.

Ohtsuki, H., & Nowak, M.A. (2006). The replicator equation on graphs. Journal of Theoretical Biology, 243(1), 86-97.

From the Department of Computer Science at Oxford University and Department of Translational Hematology & Oncology Research at Cleveland Clinic, I marvel at the world through algorithmic lenses. My mind is drawn to evolutionary dynamics, theoretical computer science, mathematical oncology, computational learning theory, and philosophy of science. Previously I was at the Department of Integrated Mathematical Oncology at Moffitt Cancer Center, and the School of Computer Science and Department of Psychology at McGill University. In a past life, I worried about quantum queries at the Institute for Quantum Computing and Department of Combinatorics & Optimization at University of Waterloo and as a visitor to the Centre for Quantum Technologies at National University of Singapore.

### 9 Responses to Cooperation through useful delusions: quasi-magical thinking and subjective utility

1. OK, but if this can’t be found in other animals – what’s the point? Just more semantic ideology. Assume subjective utility is self-reports. They are useless. The best science currently shows behavior as “determined” unconsciously in 140 ms, by all animals, of course.

This just seems another effort at top-down fitting local ideologies and words to verbal behavior – with ignorance of the biological facts. ho hum….just more curve fitting….

• I’m not sure what your comment has to do with the post. The results above are theoretical, and they explore how to reconcile behavior that might seem objectively irrational from a reductionist perspective with an agent acting rationally on a subjective utility function that has evolved to account for inclusive fitness effects.

There is no self-reporting of anything; there are only actions that are used to interact with the world. Although you could try to infer the most succinct utility function consistent with the actions, as Livnat & Pippenger do.

If you are worried about seemingly irrational (again, from a reductionist perspective) behavior in animals, then I am not sure what your worry is based on, because irrational behavior has been observed in everything from ants and bees, to rats, to apes and humans.

2. you should work with Paul Cisek at your school to come up with something useful.

• Thanks for mentioning Paul Cisek of UdeM (note that he’s not at McGill); he does some cool stuff, although it is much more neural and experimental than what I am typically interested in.

• So do your ideas need to conform to experimental and biological evidence? “Reductionist” is just cheap rhetorical name-calling. Popular but intellectually dishonest.

Dude, by definition, there can be no “irrational” behavior, how would it pass the reproduction test of evolution? Again a clever, but dishonest semantic trick, of name calling behavior that contradicts 18th century econ ideology. Without biology and animal ethology, these kinds of ideas are simply empty, but popular – a form of magical thinking (Mind over matter.)

If a behavioral pattern can be confined in old-fashioned ideas like “utility,” it will be universal to all living things, again by definition. Not just life forms that speak English, of course.

• Reductionist in this context is very clearly defined: it is the view that takes the pairwise game as the only thing that matters to the organism. If you don’t like the name “reductionist view” then you can replace it with “local view” if you prefer. It is not meant to correspond to any popular media use of the word (since that use is far too varied), but it does reflect how researchers often think about evolutionary games, with the local interaction being the “game” and the population structure being the “external factors”. Even though the non-reductionist game actually involves both.

Dude, you don’t know your definitions. Rationality is defined formally as doing the action that maximizes expected utility. This definition is, of course, dependent on the underlying utility function (a well understood property of the definition). So in the model I am describing, objectively rational means rational with regard to the utility function given by the local pairwise interaction, and subjectively rational means rational with regard to the subjective utility function encoded by the agent’s genome. Now, in this framework you can start to test your blind assertion that “there can be no ‘irrational’ behavior” by seeing if evolution can incorporate inclusive fitness effects (i.e. the effects from the population structure) into the subjective utility.

there can be no “irrational” behavior, …

Although having the freedom to specify an arbitrary subjective utility does give you a lot of leeway, it doesn’t let you call any arbitrary behavior as rational. For example, it is mathematically impossible to define a subjective utility on three states A, B, C such that an agent strongly prefers A over B, B over C, but C over A because it violates transitivity. However, such violations are regularly observed in human and animal subjects.
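For the curious, the impossibility is easy to verify mechanically: since only the rank order of utilities matters for strict preferences, checking every rank order of three states covers all possible utility functions (this little script is my own illustration, not part of the model):

```python
# No real-valued utility can represent the cyclic strict preferences
# A > B, B > C, C > A, because > on the reals is transitive. Any
# utility satisfying strict preferences over {A, B, C} must assign
# distinct values, so it is order-isomorphic to a permutation of
# {0, 1, 2}; checking all six rank orders therefore covers every case.

from itertools import permutations

def represents_cycle(u):
    """Does utility assignment u realize A > B, B > C, and C > A?"""
    return u['A'] > u['B'] and u['B'] > u['C'] and u['C'] > u['A']

found = [perm for perm in permutations(range(3))
         if represents_cycle(dict(zip('ABC', perm)))]
# found stays empty: no utility function realizes the cycle
```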

… how would it pass the reproductive test of evolution?

This is exactly the sort of naive misunderstandings of evolutionary theory that I try to address in my research and on this blog. To begin with, to even start making that comment, you have to assume that evolutionary dynamics can reach equilibrium in a reasonable amount of time, which evolution can’t in general do even for static fitness landscapes. However, even if I grant you equilibrium, you can still show that rational behavior isn’t always an equilibrium solution.

I appreciate your feedback BMM, but I would encourage you to defend your position with evidence or interesting arguments, especially if you want to use a mean contrarian tone.

• Good, let’s unpack. Too many specific assumptions so let’s start with the basics.

We are talking about evolved behaviors in the common life problem of giving and getting resources. So anything must be universal for all species even bacteria and social insects. The “economic” problems of life have been worked on for a loooong time. Any primate/great ape/human solutions was descended from prior life forms.

But you know this right? Maybe not.

No, labeling evidence based facts as an ideology is just a rhetorical trick – but popular. There is no such thing as any “-ism” in experimental evidence work. “-isms” cannot be contradicted with data, evidence always can be. Experimental evidence is never local, by definition. However, theology, magical beliefs and econ always are.

OK, how is utility defined/measured, by whom over what time frame? If any phenotype, including behavior, did not result in reproductive advantage it would not exist, duh.

So you are making the fundamental economic man error. It is economic animal, not just humans — unless, you can prove human exceptionalism. Good luck.

Of course, there is no static fitness, which is why random variation has proven the best approach. The environment is always changing. There is only equilibrium in the dreams (and profession) of economists. A great idea to sell but contrary to reality.

LOL, “mean” tone – the last refuge of ideology – idea discussions being framed as personal ones. A time-worn tactic. Reality is “mean” and “Information is expensive.”
Econ is warm and cuddly – but dead wrong and just locally normed self-talk.

Econ arguments make no internally-referenced logical sense, since there is no outside, empirical referent. That’s fine but that is the domain of literature and theology. Nice work, if you can get it.

Again, if you are assuming word-based ideas like consciousness, free will, decision making, feelings, etc., best to abandon them now. The evidence is that there is no way our subjective and cultural assumptions have anything to do with biology and physiology. They do, possibly, have to do with adaptation to the past local ecosystem.

In fact, the predictive value of individual subjective experiences – only available in words – is zilch. You are presuming otherwise. Prove it.

But you know that, right?