Deadlock & Leader as deformations of Prisoner’s dilemma & Hawk-Dove games

Recently, I’ve been working on revisions for our paper on measuring the games that cancer plays. One of the concerns raised by the editor is that we don’t spend enough time introducing game theory and, in particular, the Deadlock and Leader games that we observed. This is in large part because these are not the most exciting games and not much theoretical effort has been spent on them in the past. In fact, none that I know of in mathematical oncology.

With that said, I think it is possible to relate the Deadlock and Leader games to more famous games like the Prisoner’s dilemma and Hawk-Dove games; both of which I’ve discussed at length on TheEGG. Given that I am currently at the Lorentz Center in Leiden for a workshop on Understanding Cancer Through Evolutionary Game Theory (follow along on Twitter via #cancerEGT), I thought it’d be a good time to give this description here. Maybe it’ll inspire some mathematical oncologists to play with these games.



PSYC 532 (2012): Evolutionary Game Theory and Cognition

This past Thursday was my fourth time guest lecturing for Tom’s Cognitive Science course. I owe Tom and the students a big thank you. I had a great time presenting, and hope I was able to share some of my enthusiasm for evolutionary games.

I modified the presentation (pdf slides) by combining lecture and discussion. Before the lecture the students read my evolving cooperation post and watched Robert Wright’s “Evolution of compassion”. Based on this, they prepared discussion points and answers to:

  1. What is kin selection? What is the green-beard effect or ethnocentrism? How do you think kin selection could be related to the green-beard effect or ethnocentrism?
  2. What does Wright say compassion is from a biological point of view? Do you think this is a reasonable definition?
  3. Can a rational agent be compassionate? Is understanding the indirect benefits (to yourself or your genes) that your actions produce essential for compassion?
  4. What simplifying assumptions does evolutionary game theory make when modeling agents? Are these assumptions reasonable?
  5. Can compassion or cooperation evolve in an inviscid environment? What about a spatially structured one?
  6. What are reciprocal altruism, direct reciprocity, and indirect reciprocity?
  7. What is a zero-sum game? Does a non-zero-sum relationship guarantee that compassion will emerge?
  8. Is the Prisoner’s dilemma a zero-sum game? Can you have a competitive environment that is non-zero sum?

During the lecture, we would pause to discuss these questions. As always, the class was enthusiastic and shared many unique viewpoints on the topics. Unfortunately, I did not sufficiently reduce the material from last year, and with the discussion we ran out of time. This means that we did not get to the ethnocentrism section of the slides. For students that want to understand that section, I recommend: Evolution of ethnocentrism in the Hammond and Axelrod model.

To the students: thank you for being a great audience and I encourage you to continue discussing the questions above in the comments of this post.

Risk-dominance and a general evolutionary rule in finite populations

In coordination games, the players have to choose between two possible strategies, and their goal is to coordinate their choice without communication. In a classic game theory setting, coordination games are the prototypical setting for discussing the problem of equilibrium selection: if a game has more than one equilibrium, then how do the players know which one to equilibrate on? The standard solution concept for this is risk dominance. If you were playing a symmetric coordination game:

\begin{pmatrix}R & S \\ T & P \end{pmatrix}

where R > T and P > S, then how would you choose your strategy without knowing what your opponent is going to do? Since the two pure strategy Nash equilibria are the top left and bottom right corners, you know that you want to end up coordinating with your partner. However, given no means to do so, you could assume that your partner will pick one of the two strategies at random. In this case, you would want to maximize your expected payoff. Assuming that each of your partner’s strategies is equally probable, simple arithmetic leads you to conclude that you should choose the first strategy (first row, call it C) given the condition:

R + S > T + P

Congratulations, through your reasoning you have arrived at the idea of a risk dominant strategy. If the above equation is satisfied then C is risk dominant over D (the second strategy choice), more likely to be selected, and the ‘better’ Nash equilibrium.
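If you prefer to let a few lines of code do the arithmetic, here is a minimal sketch of this reasoning in Python; the payoff values are placeholders that I made up for a coordination game, not numbers from any particular model:

```python
# A quick check of risk dominance against a partner who picks uniformly at random.
# Rows of the matrix above are your choice (C then D), columns your partner's.
# Placeholder payoffs satisfying R > T and P > S (a coordination game).
R, S, T, P = 5, 0, 3, 1

expected_C = 0.5 * (R + S)  # expected payoff of the first row against a 50/50 partner
expected_D = 0.5 * (T + P)  # expected payoff of the second row

if expected_C > expected_D:      # equivalent to R + S > T + P
    print("C is risk dominant")
elif expected_C < expected_D:
    print("D is risk dominant")
else:
    print("neither strategy is risk dominant")
```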

Since many view evolutionary game theory as a study of equilibrium selection, it is not surprising to see risk-dominance appear in evolution. In particular, if the risk dominance condition is met, then (for a coordination game under replicator dynamics) C will have a larger basin of attraction than D. If we pick the initial level of cooperators at random, then in a well-mixed and extremely large population, the risk-dominant strategy will dominate the population more often. If you are feeling adventurous, then I recommend as an exercise calculating the exact probability of C dominating in this setting.
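As a hint for that exercise, here is a small Python sketch (with the same made-up payoffs as above) of the basin boundary under replicator dynamics; for a coordination game, the interior unstable equilibrium sits at x* = (P - S)/(R - S - T + P):

```python
# Basin of attraction of C under replicator dynamics for a coordination game
# (R > T, P > S). With x the fraction of C players, the interior unstable
# equilibrium is x* = (P - S) / (R - S - T + P): starting above x* the
# population converges to all-C, starting below it to all-D.
R, S, T, P = 5, 0, 3, 1   # same placeholder game as above

x_star = (P - S) / (R - S - T + P)
prob_C_dominates = 1 - x_star   # if the initial fraction of C is uniform on [0, 1]

print(f"unstable equilibrium at x* = {x_star:.3f}")
print(f"C dominates from a random start with probability {prob_C_dominates:.3f}")
```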

From our experience with ethnocentrism and discrete populations, we know that replicator dynamics is not the end of the story. The second step is to consider finite inviscid populations where we can’t ignore dynamical stochasticity. Kandori et al. (1993) studied this setting and, for a population of size N, concluded that C would be more likely than D if:

R(N - 2) + SN > TN + P(N - 2)

Nowak et al. (2004) looked at this problem from the biological perspective of Moran processes. In a Moran process, there is no mutation, and thus the dynamics end in one of the absorbing states: all C or all D. The quantity of interest becomes the fixation probability: the fixation probability of C is the probability that a single C mutant invades (leads to an all-C absorbing state) a population of all D (and vice versa for the fixation of D). Nowak et al. (2004) found that the fixation probability of C (in the weak selection limit) is higher than that of D in a population of N agents if and only if the above equation is satisfied.
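To make this concrete, here is a sketch of the exact fixation probabilities in a frequency-dependent Moran process in the spirit of Nowak et al. (2004); the fitness form 1 - w + w*payoff and all the numbers below are my illustrative assumptions, not values taken from the paper:

```python
# Fixation probabilities in a frequency-dependent Moran process.
# With i players of type C out of N, average payoffs exclude self-interaction,
# and fitness is 1 - w + w * payoff with selection strength w.
def fixation_probabilities(R, S, T, P, N, w):
    """rho_C: a single C invading all-D; rho_D: a single D invading all-C."""
    product, total = 1.0, 0.0
    for i in range(1, N):
        f_i = (R * (i - 1) + S * (N - i)) / (N - 1)   # payoff to a C player among i C's
        g_i = (T * i + P * (N - i - 1)) / (N - 1)     # payoff to a D player among i C's
        F_i = 1 - w + w * f_i                         # fitness of C
        G_i = 1 - w + w * g_i                         # fitness of D
        product *= G_i / F_i                          # ratio of backward to forward transitions
        total += product
    rho_C = 1 / (1 + total)        # one C mutant taking over an all-D population
    rho_D = rho_C * product        # one D mutant taking over an all-C population
    return rho_C, rho_D

# Placeholder coordination game, weak selection, population of 20.
R, S, T, P, N, w = 5, 0, 3, 1, 20, 0.01
rho_C, rho_D = fixation_probabilities(R, S, T, P, N, w)
print(rho_C > rho_D)                                   # is C favoured over D?
print(R * (N - 2) + S * N > T * N + P * (N - 2))       # should agree with the condition above
```

Under weak selection, the comparison of the two fixation probabilities should agree with the condition above; for stronger selection the two can come apart.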

Antal et al. (2009) concluded this research program. They showed that the above condition continues to determine when C is favoured over D for arbitrary mutation rates, a wide range of evolutionary processes, and any two-player, two-strategy game. It holds for pair-comparison (Fermi rule), exponential Moran processes, and weak-selection Moran processes with arbitrary mutation rates. In general, it holds for any update process that satisfies two requirements: (i) payoffs are additive, and (ii) the evolutionary dynamics depend only on payoff differences.

Let us visualize this result. As we learnt in a previous post, a two-strategy cooperate-defect game does not need 4 parameters to specify and can be rewritten with just two. The symmetry arguments we applied before preserve the authors’ result, so let’s apply the transformation:

\begin{pmatrix}R & S \\ T & P \end{pmatrix} \Rightarrow \begin{pmatrix}1 & U \\ V & 0 \end{pmatrix}

This lets us simplify the risk-dominance and finite population rules to:

1 > V - U \quad \text{and} \quad \frac{N - 2}{N} > V - U

Now it is clear why we discussed risk-dominance before diving into finite populations. As the population size gets arbitrarily large (N \rightarrow \infty), our finite population rule reduces to the risk-dominance condition from replicator dynamics. At the other extreme of N = 2 (we can’t have a game with a smaller population), the rule becomes U > V.
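To double-check that the reduction does not change anything, here is a quick sketch comparing both rules in the original and reduced coordinates; the particular reduction U = (S - P)/(R - P), V = (T - P)/(R - P) is one choice consistent with the transformation above (it assumes R > P):

```python
# Check that the risk-dominance and finite-population rules agree in the
# original (R, S, T, P) coordinates and in the reduced (1, U, V, 0) coordinates.
def reduced(R, S, T, P):
    # One affine reduction consistent with the matrix transformation above (R > P).
    return (S - P) / (R - P), (T - P) / (R - P)

R, S, T, P, N = 5, 0, 3, 1, 20   # placeholder game and population size
U, V = reduced(R, S, T, P)

print(R + S > T + P, 1 > V - U)                                         # risk dominance
print(R * (N - 2) + S * N > T * N + P * (N - 2), (N - 2) / N > V - U)   # finite N rule
```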

In the above picture of U-V space, we can see the two extreme conditions. In the green region, C is more likely than D for any population size, and in the blue it is true in the limit of infinite population. For a particular N > 2, you get a different dividing line in the blue region, parallel to the two extreme ones. Given a specific game in the blue region, you can calculate the threshold:

N^* = \frac{2}{1 - (V - U)}

For games in the blue region, if your population size exceeds the N^* threshold, then C will be more likely than D.
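For example, the placeholder game from the sketches above reduces to roughly U = -0.25 and V = 0.5, which puts it in the blue region:

```python
# Population threshold for a game in the blue region (0 < V - U < 1).
U, V = -0.25, 0.5            # reduced coordinates of the placeholder game above

N_star = 2 / (1 - (V - U))
print(N_star)                # 8.0: populations larger than 8 favour C over D
```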

For those interested in the mathematical details, I recommend sections 2.3 and 4 of Antal et al. (2009). In particular, I enjoy their approach in section 2.3 of showing that when the game is on the dividing line we have a symmetric distribution around N/2, and that due to the well-behaved nature of deformations of the game matrix we can extend to the non-knife-edge case. The only thing missing from Antal et al. (2009) is a study of the second moment of the population distribution. In regions 5, 9, and 10 we expect a bimodal distribution, and in 2-4 and 6-8 a unimodal one. Can we use the probability of mutation to bound the distance between the peaks in the former, and the variance of the peak in the latter? Another exercise for the highly enthusiastic reader.

References

Antal, T., Nowak, M.A., & Traulsen, A. (2009). Strategy abundance in games for arbitrary mutation rates. Journal of Theoretical Biology, 257(2): 340-344. DOI: 10.1016/j.jtbi.2008.11.023

Kandori, M., Mailath, G.J., & Rob, R. (1993). Learning, mutation, and long run equilibria in games. Econometrica, 61(1): 29-56.

Nowak, M.A., Sasaki, A., Taylor, C., & Fudenberg, D. (2004). Emergence of cooperation and evolutionary stability in finite populations. Nature, 428: 646-650.