## Individual versus systemic risk in asset allocation

Proponents of free markets often believe in an “invisible hand” that guides an economic system without external controls such as government regulation: if all market participants act purely out of self-interest, a highly efficient economic equilibrium will emerge. In “Individual versus systemic risk and the Regulator’s Dilemma”, Beale et al. (2011) used agent-based simulations to show that a system of financial institutions, each attempting to minimize its own risk of failure, may not minimize the risk of failure of the system as a whole. The authors also suggested several ways to constrain financial institutions in order to lower the risk of failure for the financial system. Their suggestions respond directly to the regulatory challenges of the recent financial crisis, in which the failures of a few institutions endangered the financial system and even the global economy.

It’s easy to get tangled up trying to regulate banks.

To illustrate individual versus system optimality, the paper makes simple assumptions about the financial system and its participants. In a world of $N$ independent banks and $M$ assets, each bank invests its resources in the $M$ assets from time 0 to time 1. The asset returns are assumed to be independently and identically distributed, following a Student’s t-distribution with 1.5 degrees of freedom. If a bank’s loss exceeds a certain threshold, it fails. Under this assumption, each bank’s optimal allocation (to minimize its own chance of failure) is to invest equally in each asset.
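The individual optimum is easy to probe by Monte Carlo. Below is a minimal sketch using only Python’s standard library; the failure threshold, the number of assets, and the trial count are my illustrative choices, not the paper’s calibration:

```python
import random

def t_sample(df, rng):
    """Student's t sample via a standard normal over sqrt(chi-square / df)."""
    z = rng.gauss(0.0, 1.0)
    chi2 = rng.gammavariate(df / 2.0, 2.0)   # chi-square(df) as Gamma(df/2, 2)
    return z / (chi2 / df) ** 0.5

def failure_prob(weights, threshold=-1.0, df=1.5, trials=20000, seed=1):
    """Estimate P(portfolio return < threshold) for i.i.d. t-distributed assets."""
    rng = random.Random(seed)
    failures = sum(
        1 for _ in range(trials)
        if sum(w * t_sample(df, rng) for w in weights) < threshold)
    return failures / trials

M = 10
p_equal = failure_prob([1.0 / M] * M)             # spread evenly over all assets
p_single = failure_prob([1.0] + [0.0] * (M - 1))  # bet everything on one asset
```

For a tail index of 1.5, the tail probability of an equally weighted portfolio scales roughly like $M^{-1/2}$ relative to a single asset, so `p_equal` comes out well below `p_single`, matching the claim that equal investment minimizes an individual bank’s failure risk.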

However, a regulator is concerned with the failure of the financial system rather than the failure of any individual bank. To capture this, the paper introduces a cost function for the regulator: $c = k^s$, where $k$ is the number of failed banks. This cost function is the only coupling between banks in the model. If $s > 1$, each additional bank failure “costs” the system more than the last (the marginal cost $s k^{s-1}$ is increasing in $k$ when $s > 1$). As $s$ increases from 1, the systemically optimal allocation deviates further and further from the individually optimal one. When $s = 2$, the systemically optimal allocation is for each bank to invest entirely in a single asset, a drastic contrast to the individually optimal allocation of investing equally in every asset. In this situation, the safest allocation for the system is the riskiest allocation for the individual bank.
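A caricature makes the regulator’s preference concrete. Suppose each bank’s fate is tied to a single asset that fails independently with some probability; the parameters below are my assumptions for illustration, not the paper’s model:

```python
import random

def expected_cost(allocation, s=2.0, p_asset=0.1, n_assets=10,
                  trials=20000, seed=2):
    """Monte Carlo estimate of E[k^s], where bank i fails iff the single
    asset it holds fails; allocation[i] is that asset's index."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        asset_failed = [rng.random() < p_asset for _ in range(n_assets)]
        k = sum(asset_failed[a] for a in allocation)   # number of failed banks
        total += k ** s
    return total / trials

herding = [0] * 10          # every bank piles into the same asset
niches = list(range(10))    # each bank takes its own niche
```

With $s = 1$ the two allocations give the same expected cost ($E[k] = 1$ here), but with $s = 2$ the herding allocation’s perfectly correlated failures make $E[k^s]$ several times larger (roughly 10 versus 1.9 for these parameters), which is why convex system costs favor banks that each concentrate in their own asset.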

While the idea demonstrated above is interesting, the procedure is unnecessarily complex. A Student’s t-distribution with 1.5 degrees of freedom is a poor model for the returns of many financial assets, yet it also lacks the simplicity of Bernoulli or Gaussian distributions, which permit analytical solutions (see Artem’s question on toy models of asset returns for more discussion). Consider bonds: their principal and coupon payments are either paid to the bondholder in full or only partially recovered in the event of a default, a payoff profile nothing like a t-distribution. Other common assets such as mortgages and consumer loans are similarly far from t-distributed. The t-distribution therefore fails to capture the probabilistic nature of many major financial assets, while offering no gain in accuracy over simpler assumptions. Gaussian or Bernoulli assumptions, on the other hand, would at least permit analytical solutions without tedious simulations.

The authors define two parameters, $D$ and $G$, in an attempt to constrain the banks toward systemically optimal allocations. $D$ denotes the average distance between the asset allocations of each pair of banks, and $G$ the distance between the average allocation across banks and the individually optimal allocation. As $s$ increases from 1, they found that bank allocations with a higher $D$ and a near-zero $G$ are best for the system. To show the robustness of these two parameters, the authors varied the number of assets, the number of banks, the distributions of the assets, the correlations between assets, and the form of the regulator’s cost function; in each case, enforcing a near-zero $G$ and a higher $D$ lowered the systemic risk of the banking system. This result implies that each bank should concentrate in its own niche of financial assets, while the aggregate system retains the optimal allocation.
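In code, the two statistics might look as follows (I use an L1 distance for concreteness; the paper’s exact metric may differ):

```python
def l1(u, v):
    """L1 distance between two allocation vectors."""
    return sum(abs(a - b) for a, b in zip(u, v))

def D(allocations):
    """Average pairwise distance between the banks' allocations."""
    n = len(allocations)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(l1(allocations[i], allocations[j]) for i, j in pairs) / len(pairs)

def G(allocations, individual_opt):
    """Distance between the average allocation and the individual optimum."""
    n, m = len(allocations), len(allocations[0])
    mean = [sum(a[j] for a in allocations) / n for j in range(m)]
    return l1(mean, individual_opt)

M = 4
uniform = [1.0 / M] * M                    # the individually optimal portfolio
herding = [uniform[:] for _ in range(M)]   # everyone diversifies identically
niches = [[1.0 if j == i else 0.0 for j in range(M)] for i in range(M)]

print(D(herding), G(herding, uniform))   # 0.0 0.0: identical diversified banks
print(D(niches), G(niches, uniform))     # 2.0 0.0: high D, near-zero G
```

The niche allocation is exactly the regime the authors recommend: banks far apart from each other ($D$ high) whose average is still the individually optimal portfolio ($G \approx 0$).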

Based on the paper, it may appear that systemic risk can be reduced by controlling for the parameters $D$ and $G$ (though without analytical solutions). Such controls, however, would have to be enforced by an omniscient “regulator” with perfect information about the exact probabilistic nature of the financial products and the individually optimal allocations, and with unlimited political power to impose its envisioned allocations. This is far from reality. Financial products differ in size and riskiness, and new ones are continuously created. Regulators such as the Department of the Treasury, the SEC, and the Federal Reserve also have very limited political power; for example, they were not legally allowed to rescue Lehman Brothers, whose failure led to the subsequent global credit and economic crisis. The entire paper boils down to one simple idea: actions that are optimal for individuals might not be optimal for the system, but an all-powerful regulator who forces individuals to act in the system’s interest will make the system more stable. This should come as no surprise to regular readers of this blog, since evolutionary game theory deals with exactly this dilemma when looking for cooperation. The main result is rather trivial, but it opens the door to more realistic simulations. One idea would be to remove or weaken the “regulator” and instead give banks incentives to act in a more systemically optimal way; it would be interesting to see how banks behave under such circumstances and whether their actions can lead to a systemically optimal equilibrium.

A key aspect of systemic failure is not the simultaneous failure of many assets but the interconnection of banks, which the model ignores. Two aspects of bank operations matter here. First, banks take funding from clients and invest it in riskier assets; this requires strong confidence in the bank’s strength, to avoid unexpected withdrawals, or else the ability to sell those assets to pay back the clients. Second, banks obtain short-term loans from each other by putting up assets as collateral. These links couple the banks far more strongly than a simple regulator cost function, creating a banking ecosystem.

The strength of these inter-bank connections depends on the value of the collateral. When some banks suffer catastrophic losses on subprime loans, confidence in those banks is shaken and asset values start to fall. A bank’s clients may begin to withdraw their money, and the bank sells its assets to meet their demands, further depressing prices and sometimes leading to a fire sale. Other banks then demand more, and higher-quality, collateral because of the depressed prices from the sell-off. The bank’s high-quality assets and cash may become strained, leading to further worry about its health and to more client withdrawals and collateral demands. Lack of confidence in one bank’s survival spreads to the banks that have lent to it, triggering a fresh wave of withdrawals and collateral demands. Even healthy banks can be ruined in a matter of days by a widespread panic.

As a result, inter-bank dealings are instrumental in systemic failure. Beale et al. (2011) intentionally sidestepped this inter-bank link to derive their result purely from the perspective of asset failure. But the inter-bank link was the most important factor in the mass failure of the financial system: it is because of this link that the failure of one asset class (subprime mortgages) nearly brought down the financial systems of the entire developed world. Not addressing the inter-bank link is simply not addressing the financial crisis at all.

Beale, N., Rand, D. G., Battey, H., Croxson, K., May, R. M., & Nowak, M. A. (2011). Individual versus systemic risk and the Regulator’s Dilemma. Proceedings of the National Academy of Sciences, 108(31), 12647-12652.

## Where did the love come from? Inclusive fitness vs. group selection

Altruism is widespread in the animal world, yet it seems to conflict with the picture of nature “red in tooth and claw” often associated with Evolution. One solution to this apparent paradox is to remember that the unit of selection is never the individual itself but the genes it carries. Thus, altruism may be explained if the altruist shares genes with the individual it helps in such a way that, while harming itself as an individual, it favors the spread of its genes. This idea of analyzing selection at the level of genes rather than the individual dates back to the 1930s, when Darwin’s theory and Mendelian genetics were first combined to form a unified framework now known as the neo-Darwinian synthesis.

Altruism is a common feature of animal behaviour. In this picture, a chimp mother helping another down a tree. Source: The Selfishness of Giving by Frans de Waal on Huffington Post.


## Environmental austerity and the anarchist Prince of Mutual Aid

Prince Pyotr Alexeyevich Kropotkin

Any good story starts with a colourful character, a complicated character, and, in keeping with modern leftist literature, an anarchist intellectual well-versed in (but critical and questioning of) Marxism; enter Pyotr Alexeyevich Kropotkin. Today he is best known as one of the founders and leading theorists of anarcho-communism, but in his time he was better known as an anti-Tsarist revolutionary, zoologist, geographer, and explorer. Kropotkin was born to the Prince of Smolensk, a descendant of the Rurik dynasty that ruled and eventually unified many of the Principalities and Duchies of Rus into the Tsardom of Russia. By the 9 December 1842 birth of our protagonist, Russia had been under Romanov rule for over 200 years, but the house of Rurik still held great importance. Even though the young boy renounced his Princely title at age 12, he was well-off and educated in the prestigious Corps of Pages. There he rose to the highest ranks and became the personal page of Tsar Alexander II. Upon graduation this entitled Kropotkin to his choice of post, and our first plot twist.

Analogous to the irresistible pull of critical theory on modern liberal-arts students, the young Kropotkin was seduced by the leftist thought of his day: the French encyclopédistes, the rise of Russian liberal-revolutionary literature, and his personal disenchantment with, and doubt of, the Tsar’s “liberal” reputation. Instead of choosing a comfortable position in European Russia, the recent graduate requested to be sent to the newly annexed Siberian provinces, and in 1862 was off to Chita. This city has a personal significance to me: it is where my grandfather was stationed over 100 years later, and most of my mother’s childhood was spent there. Chita has become a minor place of pilgrimage for modern anarchists, but it (and the other Siberian administrative centre at Irkutsk) did not hold Kropotkin’s attention for long.

Unable to enact substantial change as an administrator, he followed his passion as a naturalist. In 1864, he took command of a geographic survey expedition into Manchuria. Having read Darwin’s On the Origin of Species when it was published 5 years earlier, Kropotkin embarked on a distinctly Siberian variant of the voyage of the HMS Beagle, with sled dogs instead of wind to power his way. His hope was to observe the same ‘tooth and claw’ competition as Darwin, but instead he saw primarily cooperation. In the harsh environment of Siberia, it wasn’t a struggle of beast versus beast, but animal against environment.

From 1890 to 1896, Kropotkin published his Siberian observations as a series of essays in the British monthly literary magazine, Nineteenth Century. Motivated as a response to Huxley’s “The Struggle for Existence”, the essays highlighted cooperation among nonhuman animals, in primitive societies and medieval cities, and in contemporary times. He concluded that cooperation, not competition, was the most important factor in survival and the evolution of species. Kropotkin assembled the essays into book form, and in 1902 published Mutual Aid: A Factor of Evolution. In this magnum opus on cooperation, much like E.O. Wilson’s Sociobiology nearly 75 years later, Kropotkin started from the social insects and traced a common thread to the human society around him; he was the first student of cooperation.

Unfortunately, his mechanism for cooperation did not extend beyond group selection, and Kropotkin left it to modern researchers to find more basic engines of altruism. Only now are we starting to build mathematical, computational, and living models. To study cooperation in the laboratory, especially when looking at the effect of environmental austerity, Strassmann & Queller (2011) have proposed the social microbe Dictyostelium discoideum, or slime mold, as the perfect model. These single-celled, soil-dwelling amoebae are capable of working together under austere conditions, and even display rudimentary swarm intelligence. A long-time expert on slime molds, John Bonner of Princeton University, made a video of them during his undergraduate years at Harvard:

Under plentiful conditions, D. discoideum are solitary predators of bacteria, which they consume by engulfment. If the environment deteriorates and the amoebae begin to starve, then they enter a social stage. Using their quorum-sensing mechanism they check if enough other amoebae are present in the area and then aggregate into a mound. They coat themselves with a slime (that gives them their name) and move together as a unit, until they find a good location to fruit. The slime mold then extends a stalk up from the soil with most cells forming a spore at the top. At a certain height, the spore is released, allowing the amoebae at the top to disperse to greener pastures; the cells in the stalk die. Since all the cells are free-living independent organisms during the non-social stage, this shows the clearest form of altruism: fellow D. discoideum sacrificing their own lives in order to give their brethren a chance at a future.

Sadly, most evolutionary game theory models assume constant population size and no resource variability. In these models, it is difficult to introduce a parameter analogous to environmental austerity. To allow for resource limitations, we need to introduce variable population sizes and thus create an ecological game. I explored this modification for a Hammond & Axelrod-like model back in the summer of 2009 and thought I would share some results here.

The agents inhabit a toroidal lattice, and each round every agent interacts with its 4 adjacent neighbours via the Prisoner’s dilemma. The payoffs are added to a default birth rate; reproduction is asexual and into adjacent empty sites. At each time step, each agent has a fixed probability (0.25) of expiring and vacating its site. The worlds start empty and are gradually filled with agents.
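A minimal sketch of such an ecological lattice model, with my own choices for payoff scaling, seeding, and update order (this is not the exact Hammond & Axelrod implementation):

```python
import random

def run(def_birth, rounds=200, L=20, death=0.25, b=1.0, c=0.5, seed=0):
    """Ecological spatial PD on an L x L torus. A cell holds None (empty),
    'C' (cooperator), or 'D' (defector). Each round, an agent's payoff from
    its 4 neighbours adjusts its chance of reproducing into an adjacent
    empty site; afterwards every agent dies with a fixed probability."""
    rng = random.Random(seed)
    grid = [[None] * L for _ in range(L)]
    grid[0][0], grid[L // 2][L // 2] = 'C', 'D'   # seed two agents
    nbrs = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    for _ in range(rounds):
        for x in range(L):        # reproduction sweep (newborns may act
            for y in range(L):    # within the same sweep; fine for a sketch)
                a = grid[x][y]
                if a is None:
                    continue
                payoff = 0.0
                for dx, dy in nbrs:
                    other = grid[(x + dx) % L][(y + dy) % L]
                    if other == 'C':
                        payoff += b   # benefit from a cooperating neighbour
                    if other is not None and a == 'C':
                        payoff -= c   # cost of cooperating with a neighbour
                birth = min(1.0, max(0.0, def_birth + payoff / 10))
                if rng.random() < birth:
                    empty = [(dx, dy) for dx, dy in nbrs
                             if grid[(x + dx) % L][(y + dy) % L] is None]
                    if empty:
                        dx, dy = rng.choice(empty)
                        grid[(x + dx) % L][(y + dy) % L] = a  # asexual copy
        for x in range(L):        # death sweep
            for y in range(L):
                if grid[x][y] is not None and rng.random() < death:
                    grid[x][y] = None
    flat = [cell for row in grid for cell in row]
    return flat.count('C'), flat.count('D')
```

Calling `run(0.24)` versus `run(0.28)` contrasts the harsh and plentiful regimes; a faithful replication would average many independent runs and tune the payoff-to-birth-rate mapping, which I have chosen arbitrarily here.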

This figure has three graphs; in each, line thickness represents the standard error from averaging 30 independent runs. The leftmost graph is the proportion of cooperation versus cycle, with two conditions for the default birth rate: 0.24 (high austerity; top line) and 0.28 (low austerity; bottom line). The two graphs on the right show the total number of cooperators (blue) and defectors (red); note that the rightmost graph has time flowing from right to left. The left panel is high austerity (def ptr = 0.24) and the right panel is low austerity (def ptr = 0.28).

Above are the results for a Prisoner’s dilemma interaction with $c/b = 0.5$, a rather competitive environment. Matching Shultz, Hartshorn, & Kaznatcheev (2009) and consistent with Melbinger, Cremer, & Frey (2010), we see an early spike in the number of cooperators as the world reaches its carrying capacity. After this transient period the dynamics shift, defection becomes more competitive, and the populations settle into a stable distribution of cooperators and defectors. The proportion of cooperation depends heavily on environmental austerity: in a harsh environment with a low default birth rate of 0.24 the agents band together and cooperate, while in a plentiful environment with a high default birth rate of 0.28 defection dominates. As Kropotkin observed, cooperation is essential to surviving environmental austerity.

Analogous to the results from the Hauert, Holmes, & Doebeli (2006) ecological public-goods game, the proportion of cooperation tends to bifurcate around a default birth rate equal to the death rate (0.25), although I don’t present the visuals here. The increase in default birth rate results in a slight increase in the world population at saturation, but even in raw numbers there are more cooperators in the high-austerity than in the low-austerity setting. Thus, it is not simply that defectors benefit more from the decrease in austerity (defectors go from a regime where their clusters are not self-sustaining (def ptr = 0.24) to one where they are (def ptr = 0.28)); defectors also out-compete the cooperators, disproportionately exploiting and crowding them out.

Each graph shows the proportion of cooperation versus evolutionary cycles; line thickness is the standard error from averaging 30 independent runs. Environmental austerity decreases from the left graph (where the default birth rate equals the death rate) to the right (where their ratio is 1.1). The blue line is the model where agents can discriminate based on an arbitrary tag unrelated to strategy (so the green-beard effect/ethnocentrism are possible), and the green line is simulations where no conditional strategy is possible.

If agents are allowed to condition their behavior on an arbitrary tag then the ethnocentric population is better able to maintain higher levels of cooperation as environmental austerity decreases. In the tag-based model, it would be interesting to know if there is a parameter range where varying environmental austerity can take us from a regime of humanitarian (unconditional cooperator) dominance, to ethnocentric dominance (cooperate with in-group, defect from out-group), to a selfish (unconditional defection) world. I am also curious to know how the irrational hostility I observed in the tag-based harmony game (Kaznatcheev, 2010) would fare as the environment turns hostile. Will groups overcome their biases against each other, or will they compete even more for the more limited resource? Nearly 150 years after Peter Kropotkin’s Siberian expedition, the curtain is still up and basic questions on mutual aid in austere environments remain!

### References

Hauert, C., Holmes, M., & Doebeli, M. (2006). Evolutionary games and population dynamics: maintenance of cooperation in public goods games. Proceedings of the Royal Society B: Biological Sciences, 273(1600): 2565-2571

Kaznatcheev, A. (2010). Robustness of ethnocentrism to changes in inter-personal interactions. Complex Adaptive Systems – AAAI Fall Symposium. [pdf]

Melbinger, A., Cremer, J., & Frey, E. (2010). Evolutionary game theory in growing populations. Physical Review Letters, 105(17): 178101. [arXiv pdf]

Shultz, T. R., Hartshorn, M., & Kaznatcheev, A. (2009). Why is ethnocentrism more common than humanitarianism? In N. A. Taatgen & H. van Rijn (Eds.), Proceedings of the 31st Annual Conference of the Cognitive Science Society (pp. 2100-2105). Austin, TX: Cognitive Science Society. [pdf]

Strassmann, J., & Queller, D. (2011). Evolution of cooperation and control of cheating in a social microbe. Proceedings of the National Academy of Sciences, 108(2), 10855-10862. DOI: 10.1073/pnas.1102451108

## Mathematical models in finance and ecology

Theoretical physicists have the reputation of an invasive species: penetrating other fields and imposing their methods. Usually these efforts simply irritate the local researchers, building a general ambivalence towards field-hopping physicists. With my undergraduate training primarily in computer science and physics, I’ve experienced this skepticism firsthand. During my time in Waterloo, I tried to supplement my quantum computing work by engaging with ecologists. My advances were met with a very dismissive response:

> But at the risk of sounding curmudgeonly, it is my experience that many folks working in physics and comp sci are more or less uninformed regarding the theoretical ecology, and tend to reinvent the wheel.

On rare occasion though, a theorist will move into a field of sledges & rollers, and help introduce the first wheel. This was the case 40 years before my ill-fated courtship of Waterloo ecologists, when Robert May published “Stability in multispecies community models” (1971) and transitioned from theoretical physics (PhD 1959, University of Sydney) to ecology. He helped transform the field from shunning equations to a vibrant community of observation, experiments, and mathematical models.

Lord Robert May of Oxford. Photo is from the donor’s page of Sydney High School Old Boys Union where he attended secondary school.

Robert M. May, Lord May of Oxford, is a professor in the Department of Zoology at the University of Oxford. I usually associate him with two accomplishments inspired by (but independent of) ecology. First, he explored the logistic map $x_{t + 1} = r x_t(1 - x_t)$ and its chaotic behavior (May, 1976), becoming one of the co-founders of modern chaos theory. Although the origins of chaos theory can be traced back to another great cross-disciplinary scholar, Henri Poincaré, it wasn’t until the efforts of May and colleagues in the 1970s that the field gained significant traction outside of mathematics and gripped the popular psyche. Second, he worked with his post-doc Martin A. Nowak to popularize the spatial Prisoner’s Dilemma and computer simulation as an approach to the evolution of cooperation (Nowak & May, 1992). This launched the sub-field that I find myself most comfortable in and stressed the importance of spatial structure in EGT. Now May is pivoting yet again: he is harnessing his knowledge of ecology and epidemiology to study the financial ecosystem (May, Levin, & Sugihara, 2008).
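The logistic map’s range of behaviors is easy to reproduce in a few lines; the starting point and parameter values below are illustrative:

```python
def logistic_orbit(r, x0=0.2, skip=500, keep=8):
    """Iterate x_{t+1} = r * x_t * (1 - x_t), discarding a transient of
    `skip` steps and returning the next `keep` values (rounded)."""
    x = x0
    for _ in range(skip):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(keep):
        x = r * x * (1 - x)
        orbit.append(round(x, 6))
    return orbit

print(logistic_orbit(2.5))   # settles on the fixed point 1 - 1/r = 0.6
print(logistic_orbit(3.2))   # a period-2 cycle: the map has bifurcated
print(logistic_orbit(4.0))   # chaos: no settling, sensitive to x0
```

Sweeping $r$ from below 3 up to 4 walks through May’s period-doubling route to chaos: a stable fixed point, then cycles of period 2, 4, 8, …, and finally aperiodic orbits.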

After the 2008 crisis, finance became a hot topic for academics, and May, Levin, & Sugihara (2008) suggested mathematical ecology as a source of inspiration. Questions of systemic risk, the failure of the whole banking system (as opposed to a single constituent bank), grabbed researchers’ attention. In many ways, these questions were analogous to the failure of ecosystems, and the history of fisheries research parallels that of finance. Early research on fisheries fixated on single species, the equivalent of a bank worrying only about its own risk-management strategy. However, the fish were intertwined in an ecological network, much as banks are connected through an inter-bank loan network, and the external stresses fish species experienced were not independent: a change in local currents or temperature would affect many species at once, just as the devaluation of an external asset class like the housing market affects many banks at once. As over-consumption depleted fisheries in spite of ecologists’ predictions, researchers realized that they had to take a holistic view; they switched their attention to the whole ecological network and examined how the structure of species’ interactions could aid or hamper the survival of the ecosystem. Regulators have to view systemic risk in financial systems through the same lens, taking a holistic approach to managing risk.

Once a shock is underway, ideas from epidemiology can help to contain it. As one individual becomes sick, he risks passing the illness on to his social contacts. In finance, if a bank fails, then the loans it defaulted on can cause its lenders to fail, and the failure propagates through the inter-bank loan network. Unlike engineered networks such as electrical grids, an epidemiologist has no control over how humans interact with each other; she can’t design our social network. Instead, she has to deter the spread of disease through selective immunization or by encouraging behavior that individuals in the population might or might not adopt. Similarly, central bankers cannot simply tell other banks whom to loan to; instead, they must target specific banks for intervention (say, through bail-outs) or implement policies that individual banks might or might not follow (depending on how these align with their interests). The financial regulator can view bank failure as a contagion (Gai & Kapadia, 2010) and adapt ideas from public health.
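Gai & Kapadia’s analysis is far richer, but a toy cascade conveys the mechanism; the unit-loan network, the capital buffer, and all parameters below are my simplifying assumptions:

```python
import random

def cascade(n_banks=50, n_lenders=3, buffer_frac=0.3, seed=4):
    """Toy inter-bank contagion: every bank borrows a unit loan from
    `n_lenders` random other banks; a lender defaults once its losses on
    defaulted borrowers exceed `buffer_frac` of everything it lent out.
    Returns the number of failed banks after seeding one default."""
    rng = random.Random(seed)
    lenders_of = {b: rng.sample([x for x in range(n_banks) if x != b], n_lenders)
                  for b in range(n_banks)}
    lent_out = {b: 0 for b in range(n_banks)}   # total loans made by each bank
    for lenders in lenders_of.values():
        for l in lenders:
            lent_out[l] += 1
    failed = {0}                      # the initial shock: bank 0 fails
    changed = True
    while changed:                    # propagate defaults to convergence
        changed = False
        for b in range(n_banks):
            if b in failed or lent_out[b] == 0:
                continue
            losses = sum(1 for borrower in failed if b in lenders_of[borrower])
            if losses / lent_out[b] > buffer_frac:
                failed.add(b)
                changed = True
    return len(failed)
```

Varying `buffer_frac` and `n_lenders` shows the epidemiological flavor of the problem: thin capital buffers or densely connected loan books let a single default infect a large fraction of the network, while generous buffers quarantine it.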

The best part of mathematical models is that the preceding commonalities are not restricted to analogy and metaphor. May and colleagues make these connections precise by building analytic models of toy financial systems and then using their experience and tools from theoretical ecology to solve them. Further, the cross-fertilization is not one-sided: in exchange for mathematical tools, finance provides ecology with a wealth of data. Studies like the one commissioned by the Federal Reserve Bank of New York (Soramäki et al., 2007) can look at the interactions of 9,500 banks with a total of 700,000 transfers to reveal the topology of inter-bank payment flows. Ecologists can only dream of such detailed data on which to test their theories. For entertainment and more information, watch Robert May’s hour-long snarky presentation of his work with Arinaminpathy, Haldane, and Kapadia (May & Arinaminpathy, 2010; Haldane & May, 2011; Arinaminpathy, Kapadia, & May, 2012) during the 2012 Stanislaw Ulam Memorial Lectures at the Santa Fe Institute:

### References

Arinaminpathy, N., Kapadia, S., & May, R. M. (2012). Size and complexity in model financial systems. Proceedings of the National Academy of Sciences, 109(45), 18338-18343.

Gai, P., & Kapadia, S. (2010). Contagion in financial networks. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Science, 466(2120), 2401-2423.

Haldane, A. G., & May, R. M. (2011). Systemic risk in banking ecosystems. Nature, 469(7330), 351-355.

May, R. M. (1971). Stability in multispecies community models. Mathematical Biosciences, 12(1), 59-79.

May, R. M. (1976). Simple mathematical models with very complicated dynamics. Nature, 261(5560), 459-467.

May, R. M., Levin, S. A., & Sugihara, G. (2008). Ecology for bankers. Nature, 451(7181), 893-895. PMID: 18288170

May, R. M., & Arinaminpathy, N. (2010). Systemic risk: the dynamics of model banking systems. Journal of the Royal Society Interface, 7(46), 823-838.

Nowak, M. A., & May, R. M. (1992). Evolutionary games and spatial chaos. Nature, 359(6398), 826-829.

Soramäki, K., Bech, M. L., Arnold, J., Glass, R. J., & Beyeler, W. E. (2007). The topology of interbank payment flows. Physica A: Statistical Mechanics and its Applications, 379(1), 317-333.

## Games, culture, and the Turing test (Part II)

This post is a continuation of Part 1 from last week that introduced and motivated the economic Turing test.

Joseph Henrich

When discussing culture, the first person who springs to mind is Joseph Henrich. He is the Canada Research Chair in Culture, Cognition and Coevolution, and Professor in the Departments of Psychology and Economics at the University of British Columbia. My most salient association with him is the cultural brain hypothesis (CBH), which suggests that the human brain developed its size and complexity in order to better transmit cultural information. This idea seems like a natural continuation of Dunbar’s (1998) social brain hypothesis (SBH; see Dunbar & Shultz (2007) for a recent review or this EvoAnth blog post for an overview), although I am still unaware of strong evidence for the importance of gene-culture co-evolution, a prerequisite for CBH. Both hypotheses are also essential to studying intelligence: in animals, intelligence is usually associated with (properly normalized) brain size and complexity, and social and cultural structure is usually associated with higher intellect.

To most evolutionary game theorists, Henrich is known not for how culture shapes brain development, but for how behavior in games and concepts of fairness vary across cultures. Henrich et al. (2001) studied the behavior of people from 15 small-scale societies in the prototypical test of fairness: the ultimatum game. They showed great variability, across the societies they studied, in how fairness is conceived and in the behavior those conceptions produce.

In general, the ‘universals’ that researchers had learnt from studying western university students turned out not to be so universal. The groups studied fell into four categories:

• Three foraging societies,
• Six practicing slash-and-burn horticulture,
• Four nomadic herding groups, and
• Three small-scale farming societies.

These add up to sixteen because the Sangu of Tanzania were split into farmers and herders. In fact, in the full analysis presented in Table 1, the authors consider a total of 18 groups, additionally splitting the Hadza of Tanzania into big and small camps, and the villagers of Zimbabwe into unsettled and resettled. Henrich et al. (2001) conclude that neither the Homo economicus model nor the western university student (WEIRD; see Henrich, Heine, & Norenzayan (2010) for a definition and discussion) accurately describes any of these groups. I am not sure why I should trust this result, given the complete lack of statistical analysis, the small sample sizes, and what appear to be arithmetic mistakes in the table (for instance, the resettled villagers rejected 12 out of 86 offers, but the authors list the rate as 7%). However, even without a detailed statistical analysis, it is clear that there is large variance across societies, and at least some of the societies match neither economically rational behavior nor the behavior of WEIRD participants.
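The arithmetic behind that suspicion is easy to check:

```python
rejected, offers = 12, 86
print(f"{rejected / offers:.1%}")   # 14.0%, twice the 7% listed in the table
```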

The ultimatum game is an interaction between two participants; one is randomly assigned to be Alice and the other Bob. Alice is given a couple of days’ wages in money (either the local currency or another common unit of exchange like tobacco) and can decide what proportion of it to offer to Bob. She can choose to offer as little or as much as she wants. Bob is then told what proportion Alice offered and can decide to accept or reject. If Bob accepts, the game ends and each party receives their fraction of the goods. If Bob declines, both Alice and Bob receive nothing and the game terminates. The interaction is completely anonymous and happens only once to avoid effects of reputation or direct reciprocity. In this setting, Homo economicus would make the lowest possible offer as Alice and accept any non-zero offer as Bob (any money is better than no money).
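The payoff logic, and the gap between Homo economicus and a fairness-minded responder, fits in a few lines; the 20% rejection threshold below is an illustrative stand-in for typical WEIRD behavior:

```python
def ultimatum(offer, accepts, pot=100):
    """Payoffs (Alice, Bob) when Alice offers `offer` out of a pot of `pot`."""
    return (pot - offer, offer) if accepts(offer, pot) else (0, 0)

rational_bob = lambda offer, pot: offer > 0       # any money beats no money
fair_bob = lambda offer, pot: offer >= 0.2 * pot  # rejects "unfair" splits

print(ultimatum(1, rational_bob))   # (99, 1): the homo economicus outcome
print(ultimatum(1, fair_bob))       # (0, 0): both walk away with nothing
print(ultimatum(50, fair_bob))      # (50, 50): an even split is accepted
```

Against `fair_bob`, Alice’s best response is no longer the minimal offer, which is why observed offers track each culture’s rejection norms rather than the subgame-perfect prediction.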

The groups that most closely match the economists’ model are the Machiguenga of Peru, the Quichua of Ecuador, and the small-camp Hadza, who made the lowest average offers of 26%-27% and rejected offers 5%, 15%, and 28% of the time, respectively. Only the Tsimane of Bolivia (70 interactions), the Achuar of Ecuador (16 interactions), and the Ache of Paraguay (51 interactions) had zero rejection rates; yet members of all three societies made sizeable initial offers, averaging 37%, 42%, and 51%, respectively. A particularly surprising group is the Lamelara of Indonesia, who offered on average 58% of their goods and still rejected 3 out of 8 offers (they also rejected 4 out of 20 experimenter-generated low offers, since no low offers were made by group members). This behavior is drastically different from rational play, and not very close to that of WEIRD participants, who tend to offer around 50% and reject offers below 20% about 40% to 60% of the time. If we narrow our lens of human behavior to WEIRD participants or economic theorizing, then it is easy to miss the big picture: the drastic variability of behavior across human cultures.


What does this mean for the economic Turing test? We cannot assume that the judge is able to distinguish man from machine without also mistaking people of different cultures for machines. Without very careful selection of games, a judge can only distinguish members of her own culture from members of others. Thus, it is not a test of rationality but of conformity to social norms. I expect this flaw to extend to the traditional Turing test as well. Even if we eliminate the obvious cultural barrier of language by introducing a universal translator, I suspect that there will still be cultural norms that might force the judge to classify members of other cultures as machines. We have to carefully study how any operationalization of the Turing test interacts with different cultures. More importantly, we need to question whether a universal definition of intelligence is possible, or whether it is inherently dependent on the culture that defines it.

What does this mean for evolutionary game theory? As an evolutionary game theorist, I often take an engineering perspective: pick a departure from objective rationality observed by psychologists and design a simple model that reproduces this effect. The dependence of game behavior on culture means that I need to introduce a “culture knob” (either as a free or structural parameter) that can be used to tune my model to capture the variance in behavior observed across cultures. This also means that modelers must remain agnostic to the method of inheritance, to allow for both genetic and cultural transmission (see Lansing & Cox (2011) for further considerations on how to use EGT when studying culture). Any conclusions or arguments for biological plausibility made from simulations must be examined carefully and compared to existing cross-cultural data. For example, it doesn’t make sense to conclude that fairness is a biologically evolved universal (Nowak, Page, & Sigmund, 2000) if we see such great variance in the concepts of fairness across different cultures of genetically similar humans.
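As a toy illustration of such a culture knob, here is a sketch where the responder's rejection threshold is a single tunable parameter. The Gaussian offer noise and all numeric values are assumptions for illustration, not fits to the cross-cultural data:

```python
# A "culture knob" sketch for the ultimatum game: the responder's
# rejection threshold is a culture-level parameter, and we measure the
# resulting rejection rate. All numbers here are illustrative assumptions.
import random

def rejection_rate(mean_offer, threshold, trials=10000, seed=1):
    """Fraction of interactions rejected when offers are noisy around
    mean_offer and the responder rejects anything below threshold."""
    rng = random.Random(seed)
    rejected = 0
    for _ in range(trials):
        offer = min(1.0, max(0.0, rng.gauss(mean_offer, 0.1)))
        if offer < threshold:
            rejected += 1
    return rejected / trials

# a culture with a low rejection threshold rejects fewer offers
assert rejection_rate(0.4, threshold=0.05) < rejection_rate(0.4, threshold=0.3)
```

Tuning `threshold` (and the offer distribution) per culture is one crude way to capture the variance in the Henrich et al. (2001) data within a single model.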

### References

Dunbar, R.I.M. (1998). The social brain hypothesis. Evolutionary Anthropology, 6(5), 179-190.

Dunbar, R.I.M., & Shultz, S. (2007). Evolution in the Social Brain. Science, 317.

Henrich, J., Boyd, R., Bowles, S., Camerer, C., Fehr, E., Gintis, H., & McElreath, R. (2001). In Search of Homo Economicus: Behavioral Experiments in 15 Small-Scale Societies. American Economic Review, 91(2), 73-78. DOI: 10.1257/aer.91.2.73

Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world. Behavioral and Brain Sciences, 33(2-3), 61-83.

Lansing, J. S., & Cox, M. P. (2011). The Domain of the Replicators. Current Anthropology, 52(1), 105-125.

Nowak, M. A., Page, K. M., & Sigmund, K. (2000). Fairness versus reason in the ultimatum game. Science, 289(5485), 1773-1775.

## EGT Reading Group 36 – 40, G+ and StackExchange

Around a month and a half ago, I founded an evolutionary game theory Google+ community. Due to my confusion about how G+ works, the community is private and you won’t be able to see any posts until you join. If you have a G+ account then request to join and I will add you to the group. We have several active members that mostly share and comment on new (and sometimes classic) articles. If you don’t have a G+ account then you should make one right away. G+ is much more professional than Facebook or Twitter and makes it much easier to control privacy settings. It is more of an interest-sharing site than a general social network.

Another community I recommend is StackExchange, which I mentioned previously. Since last year, the Cognitive Sciences SE has been live in beta. I’ve been a very active participant (3rd by reputation and voting, and 1st by number of edits) with “The effects of bilingualism on colour perception” as my most popular question (still unanswered) and an explanation of why people subscribe to pseudoscientific theories as my most popular answer. Unfortunately, the site is not research-level like I had hoped, but it is still a great way to learn and share your knowledge.

If you are a member of SE, or want to make an account, then please follow the up-and-coming Systems Science and Game Theory proposals. It will take a while for those sites to reach an active status, but you can help out by up-voting good questions with fewer than 10 votes or suggesting new ones. Both proposals are extremely relevant to agent-based modeling and game theory.

The final purpose of this post is to celebrate the 40th EGT reading group; I will continue the trend of posting on groups 31-35 and update you on the last five meetings:

2013

- March 12: Hauert, C., Holmes, M., & Doebeli, M. [2006] “Evolutionary games and population dynamics: maintenance of cooperation in public goods games.” Proc. R. Soc. B. 273(1600): 2565-2571; and Hauert, C., Wakano, J.Y., & Doebeli, M. [2008] “Ecological public goods games: cooperation and bifurcation.” Theor. Popul. Biol. 73(2): 257-263.
- February 12: Beale, N., Rand, D.G., Battey, H., Croxson, K., May, R.M., & Nowak, M.A. [2011] “Individual versus systemic risk and the Regulator’s Dilemma.” Proceedings of the National Academy of Sciences, 108(31), 12647-12652.

2012

- November 28: Davies, A. P., Watson, R. A., Mills, R., Buckley, C. L., & Noble, J. [2011] “‘If You Can’t Be With the One You Love, Love the One You’re With’: How Individual Habituation of Agent Interactions Improves Global Utility.” Artificial Life, 17(3), 167-181.
- November 21: Klug, H., & Bonsall, M. B. [2009] “Life history and the evolution of parental care.” Evolution, 64(3), 823-835.
- October 24: Antal, T., Traulsen, A., Ohtsuki, H., Tarnita, C.E., & Nowak, M.A. [2009] “Mutation-selection equilibrium in games with multiple strategies.” Journal of Theoretical Biology, 258(4): 614-622; and Antal, T., Nowak, M.A., & Traulsen, A. [2009] “Strategy abundance in games for arbitrary mutation rates.” Journal of Theoretical Biology, 257(2), 340-344.

The meetings have been sporadic, but very informative. For three of them I brought guest presenters: Kyler Brown (University of Chicago) for EGT37, Peter Helfer for EGT38, and Yunjun Yang (University of Waterloo) for EGT39. I owe a big thanks to them for presenting! If you would like to receive email updates whenever we read a new paper then please contact me (by email or in the comments of this post) and I will add you to the list. If you are in Montreal and want to attend or present then you are also welcome to!

## Ecological public goods game

As an evolutionary game theorist working on cooperation, I sometimes feel like a minimalist engineer. I spend my time thinking about ways to design the simplest mechanisms possible to promote cooperation. One such mechanism that I accidentally noticed (see bottom left graph of results from summer 2009) is the importance of free space, or — more formally — population dynamics. Of course, I was inadvertently reinventing a wheel that Hauert, Holmes, & Doebeli (2006) started building years earlier, except my version was too crooked to drive places.

One of the standard assumptions in analytic treatments of EGT is fixed population size. We either assume that for every birth there is a death (or vice versa) when working with finite populations, or that all relevant effects are captured by the strategy frequency and independent of actual population size (when working with replicator dynamics). This approach only considers evolutionary effects and ignores population dynamics. Hauert, Holmes, & Doebeli (2006) overcome this by building the ecological public goods game.

The authors track three proportions: cooperators ($x$), defectors ($y$), and free space ($z$). Since these are proportions, they must add up to one: $x + y + z = 1$. Reproduction is modified from standard replicator dynamics by restricting it to occur only if free space is found to reproduce into. An agent's fitness is discounted by a multiplicative factor of $z$, the probability of finding a free space for child placement in an inviscid population. This defines the dynamical system:

\begin{aligned} \dot{x} & = x(zf_C - d) \\ \dot{y} & = y(zf_D - d) \end{aligned}

where $f_C$ is the average fitness of cooperators, $f_D$ the average fitness of defectors, and $d$ is a shared death rate. We don’t need to include $\dot{z}$ since we know that $z = 1 - x - y$. Note that if we pick a death rate such that the population density remains constant (by setting $d = z\frac{xf_C + yf_D}{1 - z}$) then we recover standard replicator dynamics with the constant $z$ as a time-scale parameter.

For interactions, the authors use the public goods game, with max group size $N$ and benefit multiplier $r$. A group is formed to play the game by sampling $N$ times from the distribution $(x,y,z)$: with probability $x$ a spot is filled by a cooperator, with probability $y$ by a defector, and with probability $z = 1 - x - y$ it is left empty. This means that the number of agents per group follows a binomial distribution $B(N, 1 - z)$, so the average group size is $S = N(1 - z)$.
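The sampling step can be sanity-checked in a few lines; the values $N = 8$, $z = 0.25$, and the sample count are arbitrary choices for illustration:

```python
# Sanity check of the group-sampling step: each of the N spots in a group
# is an agent with probability 1 - z, so group size is Binomial(N, 1 - z)
# with mean S = N(1 - z). N = 8 and z = 0.25 are illustrative choices.
import random

def sample_group_size(N, z, rng):
    """Number of filled spots when each of N spots is left empty w.p. z."""
    return sum(1 for _ in range(N) if rng.random() >= z)

rng = random.Random(0)
N, z = 8, 0.25
sizes = [sample_group_size(N, z, rng) for _ in range(20000)]
mean_size = sum(sizes) / len(sizes)
# average group size should be close to S = N(1 - z) = 6
assert abs(mean_size - N * (1 - z)) < 0.1
```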

Each cooperator invests 1 unit of fitness in the public good, and all units invested are multiplied by a constant factor $r$ and uniformly distributed among the agents playing. Defectors invest nothing, but still receive their fraction of the split. Thus, for an agent interacting with $S - 1$ other agents, the expected fitnesses of a defector and a cooperator are:

\begin{aligned} f_D & = b + \frac{rp(S - 1)}{S} \\ f_C & = b + \frac{r(p(S - 1) + 1)}{S} - 1 \\ & = b + \frac{rp(S - 1)}{S} + \frac{r}{S} - 1 \\ & = f_D + \frac{r}{S} - 1 \end{aligned}

where $b$ is the default birth rate and $p = \frac{x}{x + y}$ is the proportion of cooperators among the agents. For the fitness to make sense, we need $f_C > 0$ and so $b > 1 - \frac{r}{S}$; the strength of selection is given by $\frac{1}{b}$. Note that, unlike in the Prisoner’s dilemma (which is dynamically equivalent to the public goods game in the limit of $S \rightarrow \infty$), it becomes rational to cooperate (and irrational to defect) when $r > S$; this is the regime of weak altruism.

This results in an interesting feedback between the population size ($1 - z$) and the proportion of cooperators. As there are more cooperators in the population, the average fitness becomes higher and the population grows ($\dot{z} < 0$). As the population increases we get $S = (1 - z)N > r$, and then defectors fare better than cooperators, causing the proportion of cooperators (relative to defectors) to decrease ($\dot{p} < 0$). The lower proportion of cooperators lowers the average fitness, and the population starts to shrink. When the population is small enough ($1 - z < \frac{r}{N}$), it becomes rational to cooperate ($f_C > f_D$) and the proportion of cooperators starts to grow ($\dot{p} > 0$), restarting the cycle.
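The feedback cycle above can be explored numerically with a simple Euler integration of the $\dot{x}$, $\dot{y}$ system defined earlier; the parameter values ($N = 8$, $r = 3$, $b = 1$, $d = 0.5$), the step size, and the initial condition are illustrative assumptions, not the paper's exact settings:

```python
# Euler-integration sketch of the ecological public goods dynamics
# described above. All parameter values are illustrative assumptions.

def fitnesses(x, y, N=8, r=3.0, b=1.0):
    """Average cooperator and defector fitness at state (x, y)."""
    S = N * (x + y)          # average group size S = N(1 - z)
    p = x / (x + y)          # proportion of cooperators among agents
    f_D = b + r * p * (S - 1) / S
    f_C = f_D + r / S - 1.0  # cooperation pays exactly when r > S
    return f_C, f_D

def step(x, y, d=0.5, dt=0.01):
    """One Euler step of xdot = x(z f_C - d), ydot = y(z f_D - d)."""
    z = 1.0 - x - y
    f_C, f_D = fitnesses(x, y)
    return x + dt * x * (z * f_C - d), y + dt * y * (z * f_D - d)

x, y = 0.45, 0.45
for _ in range(2000):  # integrate to time 20
    x, y = step(x, y)
# proportions stay valid throughout the dynamics
assert 0.0 < x and 0.0 < y and x + y < 1.0
```

Plotting $p = x/(x+y)$ against $x + y$ along such a trajectory reproduces the kind of phase portraits discussed next.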

Hauert, Holmes, & Doebeli (2006) and later Hauert, Wakano, & Doebeli (2008) carefully analyzed these dynamics. They focus on four qualitatively distinct regimes (depending on parameter values). In all settings, if the initial population is too small or has too few cooperators then it will go extinct; however, in two of the regimes it is possible to maintain the population, leading either to the co-existence of cooperators and defectors or even cooperator dominance.

Four possible phase profiles for the ecological public goods game in groups of at most N = 8 individuals. The figure plots the proportion of cooperators $p = \frac{x}{x + y}$ versus total population density $x + y$. The left-hand side is extinction and the top is cooperator dominance. Stable fixed points are solid black, while unstable fixed points are not coloured in. The four panels differ in their values of (r, d): (a) 3, 0.5; (b) 5, 1.6; (c) 2.7, 0.5; (d) 2.1, 0.5. Figure 2 in Hauert, Holmes, & Doebeli (2006).

As $r$ is increased, the interaction becomes less competitive and easier for cooperators; Hauert, Wakano, & Doebeli (2008) discuss this dependence on $r$ in detail. The population goes from

1. regime of total extinction (since we assume $b < d$; figure d), to
2. extinction from an unstable fixed point, to
3. oscillations around an unstable focus that lead to extinction (figure c), to
4. a Hopf bifurcation producing a stable focus, resulting in co-existence of cooperators and defectors through oscillations, to
5. a stable fixed point with a static distribution of co-existing cooperators and defectors (figure a), to
6. cooperator dominance (figure b).

This means that cooperation in the public goods game is possible with just free space added to the model. Unfortunately, the results do not hold for the inviscid Prisoner’s dilemma. However, Zhang & Hui (2011) showed that in a viscous population, similar dynamics are possible for the Prisoner’s dilemma. The ecological public goods game has also been extended to the spatial setting (Wakano, Nowak, & Hauert, 2009; Wakano & Hauert, 2011), but we will discuss that extension in a future post.

Although cooperation in the ecological public goods game emerges for $N > r$, I don’t think the cooperation can be called strong altruism. The emergence depends on the population density occasionally being low enough that the effective group size $S < r$, which puts us in the weak altruism range. The authors showed the evolution of cooperation driven by weak altruism. The underlying mechanism is similar to the Killingback, Bieri, & Flatt (2006) result that we read in EGT Reading Group 6, except the ecological variant uses free space where Killingback et al. use group structure.

### References

Hauert, C., Holmes, M., & Doebeli, M. (2006). Evolutionary games and population dynamics: maintenance of cooperation in public goods games Proceedings of the Royal Society B: Biological Sciences, 273 (1600), 2565-2571 DOI: 10.1098/rspb.2006.3600

Hauert, C., Wakano, J. Y., & Doebeli, M. (2008). Ecological public goods games: cooperation and bifurcation. Theoretical Population Biology, 73(2), 257.

Killingback, T., Bieri, J., & Flatt, T. (2006). Evolution in group-structured populations can resolve the tragedy of the commons. Proceedings of the Royal Society B: Biological Sciences, 273(1593), 1477-1481.

Wakano, J. Y., Nowak, M. A., & Hauert, C. (2009). Spatial dynamics of ecological public goods. Proceedings of the National Academy of Sciences, 106(19), 7910-7914.

Wakano, J. Y., & Hauert, C. (2011). Pattern formation and chaos in spatial ecological public goods games. Journal of Theoretical Biology, 268(1), 30-38.

Zhang, F., & Hui, C. (2011) Eco-evolutionary feedback and the invasion of cooperation in the prisoner’s dilemma games. PLoS One, 6(11): e27523.

## Games, culture, and the Turing test (Part I)

Intelligence is one of the most loaded terms that I encounter. A common association is the popular psychometric definition — IQ. For many psychologists, this definition is too restrictive and the g factor is preferred for getting at the ‘core’ of intelligence tests. Even geneticists have latched on to g for looking at the heritability of intelligence, inadvertently helping us see that g might be too general a measure. Still, for some, these tests are not general enough since they miss the emotional aspects of being human, and tests of emotional intelligence have been developed. Unfortunately, the bar for intelligence is a moving one, whether it is the Flynn effect in IQ or more commonly: constant redefinitions of ‘intelligence’.

Does being good at memorizing make one intelligent? Maybe in the 1800s, but not when my laptop can load Google. Does being good at chess make one intelligent? Maybe before Deep Blue beat Kasparov, but not when my laptop can run a chess program that beats grand-masters. Does being good at Jeopardy make one intelligent? Maybe before IBM Watson easily defeated Jennings and Rutter. The common trend here seems to be that as soon as computers outperform humans on a given act, that act and associated skills are no longer considered central to intelligence. As such, if you believe that talking about an intelligent machine is reasonable then you want to agree on an operational benchmark of intelligence that won’t change as you develop your artificial intelligence. Alan Turing did exactly this and launched the field of AI.

I’ve stressed Turing’s greatest achievement as assembling an algorithmic lens and turning it on the world around him, and previously highlighted its application to biology. In popular culture, he is probably best known for the application of the algorithmic lens to the mind: the Turing test (Turing, 1950). The test has three participants: a judge, a human, and a machine. The judge uses an instant messaging program to chat with the human and the machine, without knowing which is which. At the end of a discussion (which can be about anything the judge desires), she has to determine which is man and which is machine. If judges cannot distinguish the machine more than 50% of the time then it is said to pass the test. For Turing, this meant that the machine could “think”, and for many AI researchers this is equated with intelligence.

You might have noticed a certain arbitrariness in the chosen mode of communication between judge and candidates. Text-based chat seems to be a very general mode, but is general always better? Instead, we could just as easily define a psychometric Turing test by restricting the judge to only giving IQ tests. Strannegård and co-authors did this by designing a program that could be tested on the mathematical sequences part of IQ tests (Strannegård, Amirghasemi, & Ulfsbäcker, 2012) and Raven’s progressive matrices (Strannegård, Cirillo, & Ström, 2012). The authors’ anthropomorphic method could match humans on either task (IQ of 100) and, on the mathematical sequences, greatly outperform most humans if desired (IQ of 140+). In other words, a machine can pass the psychometric Turing test, and if IQ is a valid measure of intelligence then your laptop is probably smarter than you.

Of course, there is no reason to stop restricting our mode of communication. A natural continuation is to switch to the domain of game theory. The judge sets a two-player game for the human and computer to play. To decide which player is human, the judge only has access to the history of actions the players chose. This is the economic Turing test suggested by Boris Bukh and shared by Ariel Procaccia. The test can be viewed as part of the program of linking intelligence and rationality.

Procaccia raises the good point that in this game it is not clear if it is more difficult to program the computer or to be the judge. Before the work of Tversky & Kahneman (1974), a judge would not even know how to distinguish a human from a rational player. Forty years later, I still don’t know of a reliable survey or meta-analysis of well-controlled experiments on human behavior in the restricted case of one-shot perfect-information games. But we do know that judge-designed payoffs are not the only source of variation in human strategies, and I even suggest the subjective-rationality framework as a way to use evolutionary game theory to study these deviations from objective rationality. Understanding these departures is far from a settled question for psychologists and behavioral economists. In many ways, the programmer in the economic Turing test has the job description of a researcher in computational behavioral economics, and the judge that of an experimental psychologist. Both tasks are incredibly difficult.

For me, the key limitation of the economic (and similarly, standard) Turing test is not the difficulty of judging. The fundamental flaw is the assumption that game behavior is a human universal. Much like the unreasonable assumption of objective rationality, we cannot simply assume uniformity in the heuristics and biases that shape human decision making. Before we take anything as general or universal, we have to show its consistency not only across the participants we chose, but also across different demographics and cultures. Unfortunately, much of game behavior (for instance, the irrational concept of fairness) is not consistent across cultures, even if it is largely consistent within a single culture. What a typical Western university student considers a reasonable offer in the ultimatum game is not typical for a member of the Hadza of Tanzania or the Lamelara of Indonesia (Henrich et al., 2001). Game behavior is not a human universal, but is highly dependent on culture. We will discuss this dependence in part II of this series, and explore what it means for the Turing test and evolutionary game theory.

Until next time, I leave you with some questions that I wish I knew the answer to: Can we ever define intelligence? Can intelligence be operationalized? Do universals that are central to intelligence exist? Is intelligence a cultural construct? If there are intelligence universals, then how should we modify the mode of interface used by the Turing test to focus only on them?

This post continues with a review of Henrich et al. (2001) in Part II.

### References

Henrich, J., Boyd, R., Bowles, S., Camerer, C., Fehr, E., Gintis, H., & McElreath, R. (2001). In Search of Homo Economicus: Behavioral Experiments in 15 Small-Scale Societies. American Economic Review, 91 (2), 73-78

Strannegård, C., Amirghasemi, M., & Ulfsbäcker, S. (2013). An anthropomorphic method for number sequence problems. Cognitive Systems Research, 22-23, 27-34. DOI: 10.1016/j.cogsys.2012.05.003

Strannegård, C., Cirillo, S., & Ström, V. (2012). An anthropomorphic method for progressive matrix problems. Cognitive Systems Research.

Turing, A. M. (1950). Computing Machinery and Intelligence. Mind, 59(236), 433-460.

Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124-1131.

## Asking Amanda Palmer about cooperation in the public goods game

In the late summer of 2010 I was homeless — living in hostels, dorms, and on the couches of friends as I toured academic events: a total of 2 summer schools and 4 conferences over a two-and-a-half-month period. By early September I was ready to return to a sedentary life of research. I had just settled into my new office in the Department of Combinatorics & Optimization at the University of Waterloo and made myself comfortable with a manic 60-hour research spree. This meant no food or sleep — just sunflower seeds, Arizona iced tea, and leaving my desk only to use the washroom. I was committing all the inspiration of the summer to paper, finishing up old articles, and launching new projects.

A key ingredient to inducing insomnia and hushing hunger was the steady rhythm of music. In this case, it was a song that a burlesque dancer (also, good fencer and friend) had just introduced me to: “Runs in the Family” by Amanda Palmer. The computer pumped out the consistent staccato rhythm on loop as it ran my stochastic models in the background.

After finishing my research spree, I hunted down more of Palmer’s music and realized that I enjoyed all her work and the story behind her art. For two and a half years, I thought that the connection between the artist and my research would be confined to the motivational power of her music. Today, I watched her TED talk and realized the connection is much deeper.

As Amanda Palmer tells her story, she stresses the importance of human connection, intimacy, trust, fairness, and cooperation. All are key questions to an evolutionary game theorist. We study cooperation by looking at the prisoner’s dilemma and public goods game (Nowak, 2006). We look at fairness through the ultimatum and dictator game (Henrich et al., 2001). We explore trust with direct and indirect reciprocity (Axelrod, 1981; Nowak & Sigmund, 1998). We look at human connections and intimacy through games on graphs and social networks (Szabo & Fath, 2007).

As a musician who promotes music ‘piracy’ and crowdfunding, she raises a question that is a perfect candidate for being modeled as a variant of the public goods game. A musician that I enjoy is an amplifier of utility: if I give the musician ten dollars then I receive back a performance or record that provides me more than ten dollars worth of enjoyment. It used to be that you could force me to always pay before receiving music; this is equivalent to not allowing your agent to defect. However, with the ease of free access to music, the record industry cannot continue to forbid defection. I can choose to pay or not pay for my music, and the industry fears that people will always tend to the Nash equilibrium: defecting by not paying for music.

From the population level, this is a public goods game. Every fan of Amanda Palmer has a choice to either pay (cooperate) or not (defect) for her music. If we all pay then she can turn that money into music that all the fans can enjoy. However, if not enough of us pay then she has to go back to her day job as a human statue, which will decrease the time she can devote to music and result in less enjoyable songs, or at least less frequent releases of new songs. If none of us pay her then it becomes impossible for Palmer and her band to record and distribute their music, and none of the fans gain utility.
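This fan's-eye view of the dilemma can be sketched as a one-shot public goods game; the group of 10 fans, the multiplier r = 3, and the unit cost are illustrative assumptions:

```python
# The fan-funding scenario as a one-shot public goods game: each fan
# either pays cost = 1 (cooperates) or free-rides, contributions are
# amplified by r and the resulting music is enjoyed by all fans equally.
# The numbers (10 fans, r = 3) are illustrative assumptions.

def fan_payoffs(payers, free_riders, r=3.0, cost=1.0):
    """Per-fan payoff as (payer, free rider) when the pot is shared."""
    n = payers + free_riders
    share = r * cost * payers / n  # everyone enjoys the music equally
    return share - cost, share

# universal cooperation beats universal defection...
all_pay, _ = fan_payoffs(10, 0)
_, none_pay = fan_payoffs(0, 10)
assert all_pay > none_pay
# ...but any single fan still does better by not paying
payer, rider = fan_payoffs(9, 1)
assert rider > payer
```

The second assertion is exactly the record industry's fear: defection strictly dominates, so homo economicus never pays.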

The record industry believes in homo economicus and concludes that the population will converge to all defection. The industry fears that if left to their own devices, no fans will choose to pay for music. For the highly inviscid environment of detached mass-produced pop music, I would not be surprised if this was true.

The record industry has come up with only one mechanism to overcome this: punishment. If I do not pay (defect) then an external agent will punish me, reducing my net utility to lower than if I had simply paid for the music. Fehr & Gächter (2000) showed that this is one way to establish cooperation. If the industry can produce a proper punishment scheme then it can make people pay for music. However, as evolutionary game theorists, we know that there are many other mechanisms with which to promote cooperation in the public goods game. Amanda Palmer realizes this, too, and closes her talk with:

> I think people have been obsessed with the wrong question, which is: “how do we make people pay for music?” What if we started asking: “how do we let people pay for music?”

As a modeler of cooperation, in some ways my work is that of an engineer. In order to publish, I need to design novel mechanisms that allow cooperation to emerge in a population. In this way, there is a much deeper connection between my research and one of the questions asked by Amanda Palmer. So I ask you: what are your favorite non-punishment mechanisms for allowing cooperation in the public goods game?

### References

Axelrod, R. (1981). The emergence of cooperation among egoists. The American Political Science Review, 306-318.

Fehr, E., & Gächter, S. (2000). Cooperation and Punishment in Public Goods Experiments. American Economic Review, 90(4), 980-994. DOI: 10.1257/aer.90.4.980

Henrich, J., Boyd, R., Bowles, S., Camerer, C., Fehr, E., Gintis, H., & McElreath, R. (2001). In Search of Homo Economicus: Behavioral Experiments in 15 Small-Scale Societies. American Economic Review, 91(2), 73-78.

Nowak, M. A., & Sigmund, K. (1998). Evolution of indirect reciprocity by image scoring. Nature, 393(6685), 573-577.

Nowak, M. A. (2006). Five rules for the evolution of cooperation. Science, 314(5805), 1560-1563.

Szabo, G., & Fath, G. (2007). Evolutionary games on graphs. Physics Reports, 446(4-6), 97-216.