Hadza hunter-gatherers, social networks, and models of cooperation

At the heart of the Great Lakes region of East Africa is Tanzania — a republic composed of 30 mikoa, or regions. Its border is marked by the giant lakes Victoria, Tanganyika, and Malawi. But the lake that interests me the most is an internal one: 200 km from the border with Kenya, at the junction of the mikoa Arusha, Manyara, Simiyu and Singida, is Lake Eyasi. It is a temperamental lake that can dry up almost entirely — becoming crossable on foot — in some years, and in others — like the El Niño years — flood its banks enough to attract hippos from the Serengeti.

For the Hadza, it is home.

The Hadza number around a thousand people, with around 300 living as traditional nomadic hunter-gatherers (Marlowe, 2002; 2010). This lifestyle is believed to be a useful model of societies in our own evolutionary heritage, and an empirical model of particular interest for the evolution of cooperation. But it is a model that requires much more effort to explore than running a few parameter settings on your computer. In the summer of 2010, Coren Apicella explored this model by traveling between Hadza camps throughout the Lake Eyasi region to gain insights into their social networks and cooperative behavior.

Here is a video abstract where Coren describes her work:

The data she collected with her colleagues (Apicella et al., 2012) provides our best proxy for the social organization of early humans. In this post, I want to talk about the Hadza, the data set of their social network, and how it can inform other models of cooperation. In other words, I want to freeride on Apicella et al. (2012) and allow myself and other theorists to explore computational models informed by the empirical Hadza model without having to hike around Lake Eyasi for ourselves.

Enriching evolutionary games with trust and trustworthiness

Fairly early in my course on Computational Psychology, I like to discuss Box’s (1979) famous aphorism about models: “All models are wrong, but some are useful.” Although Box was referring to statistical models, his comment on truth and utility applies equally well to computational models attempting to simulate complex empirical phenomena. I want my students to appreciate this disclaimer from the start because it avoids endless debate about whether a model is true. Once we agree to focus on utility, we can take a more relaxed and objective view of modeling, with appropriate humility in discussing our own models. Historical consideration of models, and theories as well, should provide a strong clue that replacement by better and more useful models (or theories) is inevitable, and indeed is a standard way for science to progress. In the rapid turnover of computational modeling, this means that the best one could hope for is to have the best (most useful) model for a while, before it is pushed aside or incorporated by a more comprehensive and often more abstract model. In his recent post on three types of mathematical models, Artem characterized such models as heuristic. It is worth adding that the most useful models are often those that best cover (simulate) the empirical phenomena of interest, bringing a model closer to what Artem called insilications.

Games, culture, and the Turing test (Part II)

This post is a continuation of Part 1 from last week that introduced and motivated the economic Turing test.

Joseph Henrich

When discussing culture, the first person who springs to mind is Joseph Henrich. He is the Canada Research Chair in Culture, Cognition and Coevolution, and Professor in the Departments of Psychology and Economics at the University of British Columbia. My most salient association with him is the cultural brain hypothesis (CBH), which suggests that the human brain developed its size and complexity in order to better transmit cultural information. This idea seems like a nice continuation of Dunbar’s (1998) social brain hypothesis (SBH; see Dunbar & Shultz (2007) for a recent review or this EvoAnth blog post for an overview), although I am still unaware of strong evidence for the importance of gene-culture co-evolution — a requisite for CBH. Both hypotheses are also essential to studying intelligence; in animals, intelligence is usually associated with (properly normalized) brain size and complexity, and social and cultural structure is usually associated with higher intellect.

To most evolutionary game theorists, Henrich is known not for how culture shapes brain development, but for how behavior in games and concepts of fairness vary across cultures. Henrich et al. (2001) studied the behavior of people from 15 small-scale societies in the prototypical test of fairness: the ultimatum game. They showed great variability in how fairness is conceived, and in the results these conceptions produce when operationalized, across the societies they studied.

In general, the ‘universals’ that researchers learnt from studying western university students were not very universal. The groups studied fell into four categories:

  • Three foraging societies,
  • Six practicing slash-and-burn horticulture,
  • Four nomadic herding groups, and
  • Three small-scale farming societies.

These add up to sixteen, since the Sangu of Tanzania were split into farmers and herders. In fact, in the full analysis presented in table 1, the authors consider a total of 18 groups, splitting the Hadza of Tanzania into big and small camps, and the villagers of Zimbabwe into unsettled and resettled. Henrich et al. (2001) conclude that neither the Homo economicus model nor the western university student (WEIRD; see Henrich, Heine, & Norenzayan (2010) for a definition and discussion) model accurately describes any of these groups. I am not sure why I should trust this result given a complete lack of statistical analysis, the small sample sizes, and what seem like arithmetic mistakes in the table (for instance, the resettled villagers rejected 12 out of 86 offers, but the authors list the rate as 7%). However, even without a detailed statistical analysis, it is clear that there is large variance across societies, and at least some of the societies don’t match economically rational behavior or the behavior of WEIRD participants.

The ultimatum game is an interaction between two participants; one is randomly assigned to be Alice and the other Bob. Alice is given a couple of days’ wages in money (either the local currency or another common unit of exchange like tobacco) and can decide what proportion of it to offer to Bob. She can choose to offer as little or as much as she wants. Bob is then told what proportion Alice offered and can decide to accept or reject. If Bob accepts, the game ends and each party receives their fraction of the goods. If Bob declines, both Alice and Bob receive nothing and the game terminates. The interaction is completely anonymous and happens only once to avoid effects of reputation or direct reciprocity. In this setting, Homo economicus would give the lowest possible offer when playing as Alice and accept any non-zero offer as Bob (any money is better than no money).
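To make the payoffs concrete, here is a minimal sketch of a single anonymous ultimatum game; the function names, the $100 stake, and the stylized strategies are my illustrative assumptions, not part of Henrich et al.’s protocol:

```python
def ultimatum_game(offer_fraction, accepts, stake=100):
    """Play one anonymous, one-shot ultimatum game.

    offer_fraction: proportion of the stake Alice offers to Bob.
    accepts: Bob's decision rule, a function of the offered fraction.
    Returns (alice_payoff, bob_payoff).
    """
    offer = offer_fraction * stake
    if accepts(offer_fraction):
        return stake - offer, offer
    return 0.0, 0.0

# Homo economicus: Alice offers the smallest positive amount,
# and Bob accepts any non-zero offer (any money beats no money).
rational_offer = 0.01
rational_accept = lambda f: f > 0

# A stylized WEIRD participant: offers ~50% and rejects offers below 20%.
weird_offer = 0.5
weird_accept = lambda f: f >= 0.2

print(ultimatum_game(rational_offer, rational_accept))  # (99.0, 1.0)
print(ultimatum_game(rational_offer, weird_accept))     # (0.0, 0.0): offer rejected
```

Pairing the two stylized strategies against each other already shows the cultural mismatch: a rational Alice’s minimal offer is accepted by a rational Bob but rejected by the WEIRD one, leaving both with nothing.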

The groups that most closely match the economists’ model are the Machiguenga of Peru, the Quichua of Ecuador, and the small camp Hadza. They provide the lowest average offers of 26%-27%, and reject offers 5%, 15%, and 28% of the time, respectively. Only the Tsimane of Bolivia (70 interactions), Achuar of Ecuador (16 interactions), and Ache of Paraguay (51 interactions) have zero offer rejection rates. However, members of all three societies make sizeable initial offers, averaging 37%, 42%, and 51%, respectively. A particularly surprising group is the Lamelara of Indonesia, who offered on average 58% of their goods, and still rejected 3 out of 8 offers (they also rejected 4 out of 20 experimenter-generated low offers, since no low offers were given by group members). This behavior is drastically different from rational, and not very close to that of WEIRD participants, who tend to offer around 50% and reject offers below 20% about 40% to 60% of the time. If we narrow our lens of human behavior to that of WEIRD participants or economic theorizing, then it is easy to miss the big picture of the drastic variability of behavior across human cultures.

It’s easy to see what we want instead of the truth when we focus too narrowly.

What does this mean for the economic Turing test? We cannot assume that the judge is able to distinguish man from machine without also mistaking people of different cultures for machines. Without very careful selection of games, a judge can only distinguish members of their own culture from members of others. Thus, it is not a test of rationality but of conformity to social norms. I expect this flaw to extend to the traditional Turing test as well. Even if we eliminate the obvious cultural barrier of language by introducing a universal translator, I suspect that there will still be cultural norms that might force the judge to classify members of other cultures as machines. The operationalization of the Turing test has to be carefully studied for how it interacts with different cultures. More importantly, we need to question whether a universal definition of intelligence is possible, or if it is inherently dependent on the culture that defines it.

What does this mean for evolutionary game theory? As an evolutionary game theorist, I often take an engineering perspective: pick a departure from objective rationality observed by the psychologists and design a simple model that reproduces this effect. The dependence of game behavior on culture means that I need to introduce a “culture knob” (either as a free or structural parameter) that can be used to tune my model to capture the variance in behavior observed across cultures. This also means that modelers must remain agnostic to the method of inheritance to allow for both genetic and cultural transmission (see Lansing & Cox (2011) for further considerations on how to use EGT when studying culture). Any conclusions or arguments for biological plausibility made from simulations must be examined carefully and compared to existing cross-cultural data. For example, it doesn’t make sense to conclude that fairness is a biologically evolved universal (Nowak, Page, & Sigmund, 2000) if we see such great variance in the concepts of fairness across different cultures of genetically similar humans.


Dunbar, R.I.M. (1998) The social brain hypothesis. Evolutionary Anthropology 6(5): 179-190. [pdf]

Dunbar, R.I.M., & Shultz, S. (2007) Evolution in the Social Brain. Science 317. [pdf]

Henrich, J., Boyd, R., Bowles, S., Camerer, C., Fehr, E., Gintis, H., & McElreath, R. (2001). In search of Homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review, 91(2), 73-78. DOI: 10.1257/aer.91.2.73

Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world. Behavioral and Brain Sciences, 33(2-3), 61-83.

Lansing, J. S., & Cox, M. P. (2011). The Domain of the Replicators. Current Anthropology, 52(1), 105-125.

Nowak, M. A., Page, K. M., & Sigmund, K. (2000). Fairness versus reason in the ultimatum game. Science, 289(5485), 1773-1775.


Games, culture, and the Turing test (Part I)

Intelligence is one of the most loaded terms that I encounter. A common association is the popular psychometric definition — IQ. For many psychologists, this definition is too restrictive and the g factor is preferred for getting at the ‘core’ of intelligence tests. Even geneticists have latched on to g for looking at heritability of intelligence, and inadvertently helping us see that g might be too general a measure. Still, for some, these tests are not general enough since they miss the emotional aspects of being human, and tests of emotional intelligence have been developed. Unfortunately, the bar for intelligence is a moving one, whether it is the Flynn effect in IQ or more commonly: constant redefinitions of ‘intelligence’.

Does being good at memorizing make one intelligent? Maybe in the 1800s, but not when my laptop can load Google. Does being good at chess make one intelligent? Maybe before Deep Blue beat Kasparov, but not when my laptop can run a chess program that beats grand-masters. Does being good at Jeopardy make one intelligent? Maybe before IBM Watson easily defeated Jennings and Rutter. The common trend here seems to be that as soon as computers outperform humans on a given act, that act and associated skills are no longer considered central to intelligence. As such, if you believe that talking about an intelligent machine is reasonable then you want to agree on an operational benchmark of intelligence that won’t change as you develop your artificial intelligence. Alan Turing did exactly this and launched the field of AI.

I’ve stressed Turing’s greatest achievement as assembling an algorithmic lens and turning it on the world around him, and previously highlighted its application to biology. In popular culture, he is probably best known for the application of the algorithmic lens to the mind — the Turing test (Turing, 1950). The test has three participants: a judge, a human, and a machine. The judge uses an instant messaging program to chat with the human and the machine, without knowing which is which. At the end of a discussion (which can be about anything the judge desires), she has to determine which is man and which is machine. If judges cannot distinguish the machine more than 50% of the time, then it is said to pass the test. For Turing, this meant that the machine could “think”, and for many AI researchers this is equated with intelligence.

Hit Turing right in the test-ees

You might have noticed a certain arbitrariness in the chosen mode of communication between judge and candidates. Text-based chat seems to be a very general mode, but is general always better? Instead, we could just as easily define a psychometric Turing test by restricting the judge to only giving IQ tests. Strannegård and co-authors did this by designing a program that could be tested on the mathematical sequences part of IQ tests (Strannegård, Amirghasemi, & Ulfsbäcker, 2013) and Raven’s progressive matrices (Strannegård, Cirillo, & Ström, 2012). The authors’ anthropomorphic method could match humans on either task (IQ of 100) and on the mathematical sequences could greatly outperform most humans if desired (IQ of 140+). In other words, a machine can pass the psychometric Turing test, and if IQ is a valid measure of intelligence, then your laptop is probably smarter than you.

Of course, there is no reason to stop restricting our mode of communication. A natural continuation is to switch to the domain of game theory. The judge sets a two-player game for the human and computer to play. To decide which player is human, the judge only has access to the history of actions the players chose. This is the economic Turing test suggested by Boris Bukh and shared by Ariel Procaccia. The test can be viewed as part of the program of linking intelligence and rationality.

Procaccia raises the good point that in this game it is not clear if it is more difficult to program the computer or to be the judge. Before the work of Tversky & Kahneman (1974), a judge would not even know how to distinguish a human from a rational player. Forty years later, I still don’t know of a reliable survey or meta-analysis of well-controlled experiments on human behavior in the restricted case of one-shot perfect-information games. But we do know that judge-designed payoffs are not the only source of variation in human strategies, and I even suggest the subjective-rationality framework as a way to use evolutionary game theory to study these deviations from objective rationality. Understanding these departures is far from a settled question for psychologists and behavioral economists. In many ways, the programmer in the economic Turing test is a job description for a researcher in computational behavioral economics, and the judge for an experimental psychologist. Both tasks are incredibly difficult.

For me, the key limitation of the economic (and, similarly, the standard) Turing test is not the difficulty of judging. The fundamental flaw is the assumption that game behavior is a human universal. Much like the unreasonable assumption of objective rationality, we cannot simply assume uniformity in the heuristics and biases that shape human decision making. Before we take anything as general or universal, we have to show its consistency not only across the participants we chose, but also across different demographics and cultures. Unfortunately, much of game behavior (for instance, the irrational concept of fairness) is not consistent across cultures, even if it has a large consistency within a single culture. What a typical Western university student considers a reasonable offer in the ultimatum game is not typical for a member of the Hadza of Tanzania or the Lamelara of Indonesia (Henrich et al., 2001). Game behavior is not a human universal, but is highly dependent on culture. We will discuss this dependence in part II of this series, and explore what it means for the Turing test and evolutionary game theory.

Until next time, I leave you with some questions that I wish I knew the answer to: Can we ever define intelligence? Can intelligence be operationalized? Do universals that are central to intelligence exist? Is intelligence a cultural construct? If there are intelligence universals, then how should we modify the mode of interface used by the Turing test to focus only on them?

This post continues with a review of Henrich et al. (2001) in Part 2


Henrich, J., Boyd, R., Bowles, S., Camerer, C., Fehr, E., Gintis, H., & McElreath, R. (2001). In Search of Homo Economicus: Behavioral Experiments in 15 Small-Scale Societies. American Economic Review, 91 (2), 73-78

Strannegård, C., Amirghasemi, M., & Ulfsbäcker, S. (2013). An anthropomorphic method for number sequence problems. Cognitive Systems Research, 22-23, 27-34. DOI: 10.1016/j.cogsys.2012.05.003

Strannegård, C., Cirillo, S., & Ström, V. (2012). An anthropomorphic method for progressive matrix problems. Cognitive Systems Research.

Turing, A. M. (1950) Computing Machinery and Intelligence. Mind.

Tversky, A.; Kahneman, D. (1974) Judgment under uncertainty: Heuristics and biases. Science 185 (4157): 1124–1131.

Asking Amanda Palmer about cooperation in the public goods game

In the late summer of 2010 I was homeless — living in hostels, dorms, and on the couches of friends as I toured academic events: a total of two summer schools and four conferences over a two-and-a-half-month period. By early September I was ready to return to a sedentary life of research. I had just settled into my new office in the Department of Combinatorics & Optimization at the University of Waterloo and made myself comfortable with a manic 60-hour research spree. This meant no food or sleep — just sunflower seeds, Arizona iced tea, and leaving my desk only to use the washroom. I was committing all the inspiration of the summer to paper, finishing up old articles, and launching new projects.

A key ingredient to inducing insomnia and hushing hunger was the steady rhythm of music. In this case, it was a song that a burlesque dancer (also, good fencer and friend) had just introduced me to: “Runs in the Family” by Amanda Palmer. The computer pumped out the consistent staccato rhythm on loop as it ran my stochastic models in the background.

After finishing my research spree, I hunted down more of Palmer’s music and realized that I enjoyed all her work and the story behind her art. For two and a half years, I thought that the connection between the artist and my research would be confined to the motivational power of her music. Today, I watched her TED talk and realized the connection is much deeper:

As Amanda Palmer tells her story, she stresses the importance of human connection, intimacy, trust, fairness, and cooperation. All are key questions to an evolutionary game theorist. We study cooperation by looking at the prisoner’s dilemma and public goods game (Nowak, 2006). We look at fairness through the ultimatum and dictator game (Henrich et al., 2001). We explore trust with direct and indirect reciprocity (Axelrod, 1981; Nowak & Sigmund, 1998). We look at human connections and intimacy through games on graphs and social networks (Szabo & Fath, 2007).

As a musician who promotes music ‘piracy’ and crowdfunding, she raises a question that is a perfect candidate for being modeled as a variant of the public goods game. A musician I enjoy is an amplifier of utility: if I give the musician ten dollars, then I receive back a performance or record that provides me more than ten dollars’ worth of enjoyment. It used to be that you could force me to always pay before receiving music; this is equivalent to not allowing your agent to defect. However, with the ease of free access to music, the record industry cannot continue to forbid defection. I can choose to pay or not pay for my music, and the industry fears that people will always tend to the Nash equilibrium: defecting by not paying for music.

From the population level this is a public goods game. Every fan of Amanda Palmer has a choice to either pay (cooperate) or not (defect) for her music. If we all pay then she can turn that money into music that all the fans can enjoy. However, if not enough of us pay then she has to go back to her day job as a human statue which will decrease the time she can devote to music and result in less enjoyable songs or at least less frequent releases of new songs. If none of us pay her then it becomes impossible for Palmer and her band to record and distribute their music, and none of the fans gain utility.
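The fan-funding story above maps onto the standard linear public goods game. Here is a minimal sketch; the multiplication factor r = 3 and the $10 endowments are arbitrary illustrative values, not taken from any study:

```python
def public_goods_payoffs(contributions, r=3.0):
    """Linear public goods game: contributions are pooled, multiplied by r
    (the music's 'amplified' value), and shared equally among all fans.
    Returns each fan's net payoff: share of the pool minus own contribution."""
    pool = r * sum(contributions)
    share = pool / len(contributions)
    return [share - c for c in contributions]

# Four fans with $10 each: three pay (cooperate), one free-rides (defects).
print(public_goods_payoffs([10, 10, 10, 0]))   # [12.5, 12.5, 12.5, 22.5]
# Everyone paying beats everyone defecting, yet the free-rider always does best:
print(public_goods_payoffs([10, 10, 10, 10]))  # [20.0, 20.0, 20.0, 20.0]
print(public_goods_payoffs([0, 0, 0, 0]))      # [0.0, 0.0, 0.0, 0.0]
```

The numbers capture the dilemma: universal cooperation (20 each) beats universal defection (0 each), but any individual fan earns more by withholding their contribution, which is exactly the pull toward the all-defect Nash equilibrium.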

The record industry believes in Homo economicus and concludes that the population will converge to all defection. The industry fears that if left to their own devices, no fans will choose to pay for music. For the highly inviscid environment of detached mass-produced pop music, I would not be surprised if this were true.

The record industry has come up with only one mechanism to overcome this: punishment. If I do not pay (cooperate) then an external agent will punish me, and reduce my net utility to lower than if I had simply paid for the music. Fehr & Gächter (2000) showed that this is one way to establish cooperation. If the industry can produce a proper punishment scheme then they can make people pay for music. However, as evolutionary game theorists, we know that there are many other mechanisms with which to promote cooperation in the public goods game. Amanda Palmer realizes this, too, and closes her talk with:

I think people have been obsessed with the wrong question, which is: “how do we make people pay for music?” What if we started asking: “how do we let people pay for music?”

As a modeler of cooperation, in some ways my work is that of an engineer: in order to publish, I need to design novel mechanisms that allow cooperation to emerge in a population. In this way, there is a much deeper connection between my research and one of the questions asked by Amanda Palmer. So I ask you: what are your favorite non-punishment mechanisms for allowing cooperation in the public goods game?


Axelrod, R. (1981). The emergence of cooperation among egoists. The American Political Science Review, 306-318.

Fehr, E., & Gächter, S. (2000). Cooperation and punishment in public goods experiments. American Economic Review, 90(4), 980-994. DOI: 10.1257/aer.90.4.980

Henrich, J., Boyd, R., Bowles, S., Camerer, C., Fehr, E., Gintis, H., & McElreath, R. (2001). In search of Homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review, 91(2), 73-78.

Nowak, M. A., & Sigmund, K. (1998). Evolution of indirect reciprocity by image scoring. Nature, 393(6685), 573-577.

Nowak, M. A. (2006). Five rules for the evolution of cooperation. Science, 314(5805), 1560-1563.

Szabo, G., & Fath, G. (2007). Evolutionary games on graphs. Physics Reports, 446(4-6), 97-216.

Spatial structure

In evolutionary game theory, the spatial structure of a game can be as important in determining the evolutionary success of a given strategy as is the strategy itself [1]. This is intuitive if we consider that strategies do not work in a vacuum: An agent’s payoff is a function of both its strategy and the context in which that strategy was executed. In light of this, a brief discussion concerning the role of spatial structure in simulations is warranted. Additionally, several common network types (random graphs, small-world and scale-free networks), as well as network properties (degree, clustering and path length) are considered. Readers are also encouraged to consult Albert and Barabási’s review, as it forms the basis for the latter summary [2].

To understand what is meant by spatial structure, something should first be said about what it means to lack one. It is interesting to note that the earliest applications of evolutionary game theory often favored a non-spatial approach [3], and many continue to do so to this day. In such settings, populations are treated as inviscid (free-mixing), meaning that any agent can potentially pair with any other agent in an interaction; agents are, in other words, inherently unconstrained in their pairings. This extends to reproduction as well: because new agents are not “placed” anywhere in particular (e.g., next to their parents), reproducing successfully has a quantitative rather than qualitative effect on the population (e.g., there are more agents of your type, but you are not helping to generate a homogeneous neighborhood).

Reasons for preferring a non-spatial approach may be numerous, but one of the most apparent is the relative simplicity of analysis. Often, only the strategy proportions and the payoff matrix, which describes how fitness changes when these strategies interact, need be considered in order to predict population trends [4]. Nevertheless, spatial structure remains interesting both due to its potentially strong effect, as well as for theoretical reasons. Psychological research, for instance, has shown evidence for adaptive biases that lead to the formation of particular social networks, and it is often argued that such biases are a product of evolution [5]. Therefore, if we wish to use evolutionary game theory to reason about behavior, understanding spatial structure may, in certain cases, be a necessary first step.

At its simplest, spatial structure is a set of constraints on how agents interact. These constraints are typically described in terms of a graph, where agents form the vertices (i.e., nodes) of the graph, and potential interactions are represented by the edges connecting them. The lack of an edge indicates that the given agents cannot pair up for an interaction, and thus a simulation which does not impose a spatial structure may be described as a complete (i.e., fully-connected) graph. When spatial position matters, reproduction is also affected: The researcher must decide where an offspring is placed (e.g., next to the parent, far from the parent, at random?) – and even this decision may have a significant effect. Typically, the same spatial structure is applied to both interactions and reproduction.
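The graph view of spatial structure is easy to make concrete with adjacency sets; in this hypothetical sketch, the absence of spatial structure is literally the complete graph (the function and helper names are mine, for illustration):

```python
def complete_graph(n):
    """No spatial structure: every agent can pair with every other agent,
    i.e. the interaction graph is complete (fully connected)."""
    return {v: set(range(n)) - {v} for v in range(n)}

def can_interact(graph, a, b):
    """An edge is exactly the permission for two agents to interact."""
    return b in graph[a]

g = complete_graph(5)
print(can_interact(g, 0, 3))  # True: an inviscid population has no constraints
```

Any spatial structure discussed below is then just a different dictionary of neighbor sets, with missing edges encoding forbidden pairings.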

To describe how graphs vary, several key terms must be defined. The first of these is degree, represented by k. The degree of a vertex is the number of connections it forms, and as such represents an agent’s connectivity. The higher the degree, the more potential partners the agent has for interaction, and the more spaces it has to potentially place an offspring. In most cases, it is not the degree of a given vertex, but the average degree or the degree distribution of the whole graph that is of interest. Next, clustering, as captured by a clustering coefficient, reflects the extent to which local, highly interconnected groups form. Another way to characterize this is as a measure of agent “cliquishness”. Finally, path length is the number of edges that must be traversed to move from one particular vertex to another. A low average path length suggests that long distance connections are common, whereas a high average path length implies that connections tend to be strongly localized.
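All three measures are straightforward to compute on the adjacency-set representation; below is a stdlib-only sketch (the toy graph and function names are illustrative, not from any particular library):

```python
from collections import deque

def degree(graph, v):
    """Number of connections a vertex forms (its number of potential partners)."""
    return len(graph[v])

def clustering(graph, v):
    """Local clustering coefficient: the fraction of a vertex's neighbor
    pairs that are themselves connected ('cliquishness')."""
    nbrs = list(graph[v])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in graph[nbrs[i]])
    return 2.0 * links / (k * (k - 1))

def path_length(graph, a, b):
    """Number of edges on a shortest path from a to b (breadth-first search)."""
    dist = {a: 0}
    queue = deque([a])
    while queue:
        v = queue.popleft()
        if v == b:
            return dist[v]
        for w in graph[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return None  # b is unreachable from a

# A triangle {0, 1, 2} with a pendant vertex 3 hanging off vertex 2.
g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(degree(g, 2), clustering(g, 2), path_length(g, 0, 3))  # 3, 1/3, 2
```

On the toy graph, vertex 2 has degree 3, but only one of its three neighbor pairs is connected (the triangle edge), giving a clustering coefficient of 1/3.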

Unfortunately, real-world networks often have sophisticated topologies that lack clear design principles. As a result, it is commonly thought that the simplest way to represent such networks is to treat them as random graphs. The most straightforward approach is described by the Erdős–Rényi model: here, we start with an empty graph, then consider every pair of vertices, connecting each with some probability p. The result is a graph of n vertices with approximately pn(n – 1)/2 randomly distributed edges and average degree k = p(n – 1). Though the underlying principle is simple, in practice such graphs are more complicated than they first appear. For instance, there are threshold phenomena whereby, as p varies, various properties (e.g., every pair of vertices being connected at least indirectly) emerge very suddenly, yet with remarkable consistency. Variations on this approach are abundant and frequently used [2].
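The Erdős–Rényi procedure translates almost line-for-line into code; this is a minimal stdlib sketch, with n, p, and the seed chosen purely for illustration:

```python
import random

def erdos_renyi(n, p, seed=None):
    """G(n, p): start with an empty graph on n vertices, consider every
    pair once, and connect it with independent probability p."""
    rng = random.Random(seed)
    graph = {v: set() for v in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                graph[i].add(j)
                graph[j].add(i)
    return graph

g = erdos_renyi(1000, 0.01, seed=42)
edges = sum(len(nbrs) for nbrs in g.values()) // 2
print(edges)  # close to the expected p * n * (n - 1) / 2 = 4995
```

Note the O(n²) pair loop: for large sparse graphs one would instead sample roughly pn(n – 1)/2 edges directly, but the naive version matches the model's definition most transparently.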

Though the random graph paradigm remains common, others have emerged as well, such as a cross between random graphs and highly clustered regular lattices: the small-world network. These networks are defined by a low average path length between any pair of vertices, a concept that was famously captured by Stanley Milgram’s “six degrees of separation” experiment, where he argued that, on average, any two individuals in the United States are only separated by approximately six social relationships. The implications of this property are that, while local, highly connected groupings are possible, long distance connections make it easy to move anywhere on the graph. This is worth noting, since virtually all real-world networks have a higher clustering coefficient than random graphs do, yet small-world properties abound; examples include various social networks and the Internet. Nevertheless, these properties in and of themselves are not sufficient to specify any particular organizational structure, and, in fact, even random graphs may be regarded as a type of small-world network [2].
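The usual way to build such a "cross between random graphs and highly clustered regular lattices" is the Watts–Strogatz rewiring procedure, which I am bringing in here as an illustration (it is not cited in the text above); parameters are arbitrary:

```python
import random

def watts_strogatz(n, k, beta, seed=None):
    """Small-world network: start from a ring lattice in which every vertex
    links to its k nearest neighbors (k even), then rewire each lattice edge
    with probability beta. Small beta preserves the lattice's high clustering
    while the few random shortcuts collapse the average path length."""
    rng = random.Random(seed)
    graph = {v: set() for v in range(n)}
    for v in range(n):                      # build the ring lattice
        for offset in range(1, k // 2 + 1):
            w = (v + offset) % n
            graph[v].add(w)
            graph[w].add(v)
    for v in range(n):                      # visit each lattice edge once
        for offset in range(1, k // 2 + 1):
            w = (v + offset) % n
            if rng.random() < beta and w in graph[v]:
                candidates = [u for u in range(n) if u != v and u not in graph[v]]
                if candidates:
                    u = rng.choice(candidates)
                    graph[v].discard(w)     # drop the local edge...
                    graph[w].discard(v)
                    graph[v].add(u)         # ...and replace it with a shortcut
                    graph[u].add(v)
    return graph

g = watts_strogatz(100, 4, 0.1, seed=1)
edges = sum(len(nbrs) for nbrs in g.values()) // 2
print(edges)  # 200: rewiring moves edges around but never changes their number
```

Sweeping beta from 0 to 1 interpolates between the regular lattice and something close to a random graph, which is exactly the sense in which random graphs can themselves be regarded as (degenerate) small-world networks.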

The third and final paradigm is the scale-free network. Whereas the degree distribution of a random network is a Poisson distribution, scale-free networks are defined by the fact that their degree distribution has a power-law tail. In other words, vertices with very high numbers of edges are uncommon, and such vertices tend to form “hubs” around which less-connected vertices then gather. Unsurprisingly, scale-free networks have small-world properties, since traversing from any vertex to another is often as simple as connecting to the nearest hub and then using this to access a hub near the target vertex. Like small-world networks, scale-free networks are also quite prevalent across a variety of contexts, ranging from the Internet to metabolic networks [2].
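The standard generative story behind the power-law tail is Barabási and Albert's preferential attachment, sketched below with arbitrary parameters (again, my illustration rather than anything from the sources above):

```python
import random

def barabasi_albert(n, m, seed=None):
    """Preferential attachment: each new vertex attaches m edges to existing
    vertices with probability proportional to their degree. The rich get
    richer, producing a power-law degree tail dominated by a few hubs."""
    rng = random.Random(seed)
    graph = {v: set() for v in range(m + 1)}
    for i in range(m + 1):                  # small complete seed graph
        for j in range(i + 1, m + 1):
            graph[i].add(j)
            graph[j].add(i)
    # 'targets' lists each vertex once per unit of degree, so a uniform
    # draw from it is a degree-proportional draw over vertices.
    targets = [v for v in graph for _ in graph[v]]
    for v in range(m + 1, n):
        graph[v] = set()
        chosen = set()
        while len(chosen) < m:              # m distinct attachment targets
            chosen.add(rng.choice(targets))
        for w in chosen:
            graph[v].add(w)
            graph[w].add(v)
            targets += [v, w]
    return graph

g = barabasi_albert(2000, 2, seed=7)
print(max(len(nbrs) for nbrs in g.values()))  # hubs dwarf the average degree of ~4
```

The trick of representing degrees as repeated entries in a flat list keeps the degree-proportional sampling O(1) per draw, which is why this construction scales to large n.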

One spatial structure that has proven particularly useful from the point of view of evolutionary game theory is a variation on the random graph: the random k-regular graph. Although random, in that the vertices are randomly connected, these are also regular, in that each vertex has the same degree k. This offers several advantages: First, the random nature of the graph means that no deliberate assumptions about properties such as clustering or average path length are made, which makes them easier to justify at a theoretical level (e.g., if we don’t know what a particular network should look like, this is a relatively neutral place to start). Second, the graph’s regularity means that agents always have an equal number of neighbors, and thus an equal number of interactions and consistent structural constraints on reproductive potential. This precludes scenarios where a more highly connected agent receives higher or lower fitness payoffs, or reproduces more or less, simply as a result of its level of connectivity, rather than the effectiveness of its strategy. One final advantage is that Ohtsuki and Nowak have proposed a means by which more traditional ways of reasoning about non-spatial models may be applied to random k-regular graphs [6]. This has the potential to make games on such graphs more mathematically tractable than other models.

Creating a random k-regular graph is theoretically quite straightforward: We simply consider the entire possible set of graphs with n vertices and k degree, and then pick a graph at random. The question of how this is done in practice is more difficult however, since computational constraints almost always necessitate the use of a faster algorithm. The practical considerations of generating such graphs will be discussed in a future entry.
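One simple (if not perfectly uniform) way to approximate "pick a graph at random" in practice is stub matching with rejection, a configuration-model sketch; this is my own illustration, not the algorithm promised for the future entry:

```python
import random

def random_k_regular(n, k, seed=None, max_tries=1000):
    """Stub matching with rejection: give every vertex k 'stubs', shuffle
    them, and pair them off; restart whenever a self-loop or duplicate edge
    appears, so only simple k-regular graphs are returned. Note that n * k
    must be even for a k-regular graph on n vertices to exist."""
    assert (n * k) % 2 == 0
    rng = random.Random(seed)
    for _ in range(max_tries):
        stubs = [v for v in range(n) for _ in range(k)]
        rng.shuffle(stubs)
        graph = {v: set() for v in range(n)}
        ok = True
        for i in range(0, len(stubs), 2):
            a, b = stubs[i], stubs[i + 1]
            if a == b or b in graph[a]:
                ok = False                  # self-loop or multi-edge: retry
                break
            graph[a].add(b)
            graph[b].add(a)
        if ok:
            return graph
    raise RuntimeError("failed to generate a simple k-regular graph")

g = random_k_regular(100, 4, seed=3)
print(all(len(nbrs) == 4 for nbrs in g.values()))  # True: everyone has k neighbors
```

The rejection rate grows quickly with k, which is one reason practical generators use cleverer edge-swapping algorithms; for the small k typical of evolutionary game simulations, restarting is usually good enough.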


  1. Killingback, T., & Doebeli, M. (1996). Spatial evolutionary game theory: Hawks and Doves revisited. Proceedings of the Royal Society of London. Series B: Biological Sciences, 263(1374), 1135–1144.
  2. Albert, R., & Barabási, A.-L. (2002). Statistical mechanics of complex networks. Reviews of Modern Physics, 74(1), 47–97. arXiv: cond-mat/0106096v1
  3. Smith, J. M. (1982). Evolution and the theory of games. Cambridge University Press.
  4. Nowak, M. A. (2006). Evolutionary dynamics: Exploring the equations of life. Harvard University Press.
  5. Henrich, J., & Broesch, J. (2011). On the nature of cultural transmission networks: Evidence from Fijian villages for adaptive learning biases. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 366(1567), 1139–1148.
  6. Ohtsuki, H., & Nowak, M. A. (2006). The replicator equation on graphs. Journal of Theoretical Biology, 243(1), 86–97.