Ohtsuki-Nowak transform for replicator dynamics on random graphs

We have seen that the replicator equation can be a useful tool in understanding evolutionary games. We’ve already used it for a first look at perception and deception, and the cognitive cost of agency. However, the replicator equation comes with a number of inherent assumptions and limitations. The limitation Hisashi Ohtsuki and Martin Nowak wanted to address in their 2006 paper “The replicator equation on graphs” is the assumption of a totally inviscid population. The goal is to introduce a morsel of spatial structure while still retaining the analytic tractability and ease of use of replicator dynamics.

Replicator dynamics assume that every individual in the population is equally likely to interact with every other individual. However, in most settings the possible interactions are not that simple: usually there is some sort of structure to the population. The way we usually model structure is by representing individuals as vertices in a graph; if two vertices are adjacent (connected by an edge) then the two individuals can interact. For replicator dynamics the model would be a complete graph. Ohtsuki and Nowak consider a structure that is a bit more complicated: a random k-regular graph.

There are several ways to think about random k-regular graphs. We call a graph k-regular if every vertex has exactly k edges incident on it (and thus exactly k neighbors). If we sample from the uniform distribution over all k-regular graphs then we get a random k-regular graph. Of course, this is an inefficient way to generate one, but it captures the main statistical qualities of these graphs. To see how to efficiently generate such a graph, take a look at Marcel’s post on generating random k-regular graphs.
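If you want to play with these graphs yourself, here is a minimal sketch (assuming Python with networkx available; the size and degree choices are mine, purely for illustration):

```python
import networkx as nx

n, k = 1000, 3  # population size and degree; n * k must be even

# networkx samples from a pairing-style model that, for fixed k,
# approximates the uniform distribution over k-regular graphs on n vertices
G = nx.random_regular_graph(k, n, seed=42)

# sanity check: every vertex has exactly k neighbours
assert all(deg == k for _, deg in G.degree())
```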

With the spatial structure in place, the next important decision is the update or evolutionary dynamics to use. Ohtsuki & Nowak (2006) consider four different rules: birth-death (BD), death-birth (DB), imitation (IM), and pairwise comparison (PC). For BD, a player is chosen for reproduction in proportion to fitness and the offspring replaces a neighbor chosen uniformly at random. In DB, a focal agent is chosen uniformly at random and is replaced by one of its neighbors, sampled in proportion to fitness. IM is the same as DB, except the focal agent counts as one of its own neighbors. For PC, both the focal individual and its partner are chosen uniformly at random, and the focal agent replaces its partner with a probability that depends (according to the Fermi rule) on their payoff difference; otherwise the partner replaces the focal individual.
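To make these rules concrete, here is a rough sketch of a single death-birth step (the strategy encoding, the fitness form 1 - w + w \cdot \text{payoff}, and the parameter values are my own illustrative assumptions, not the paper's exact formulation):

```python
import random
import networkx as nx

def payoff(G, strategy, game, v):
    """Average payoff of vertex v from playing the game with each neighbour."""
    return sum(game[strategy[v]][strategy[u]] for u in G[v]) / G.degree(v)

def death_birth_step(G, strategy, game, w=0.01):
    """One DB update: a uniformly random vertex dies and its neighbours
    compete for the empty site in proportion to fitness 1 - w + w * payoff."""
    focal = random.choice(list(G.nodes))
    neighbours = list(G[focal])
    fitness = [1 - w + w * payoff(G, strategy, game, u) for u in neighbours]
    winner = random.choices(neighbours, weights=fitness, k=1)[0]
    strategy[focal] = strategy[winner]

# illustrative run: cooperate (0) vs defect (1) in a Prisoner's Dilemma [[1, U], [V, 0]]
game = [[1.0, -0.3], [1.3, 0.0]]
G = nx.random_regular_graph(3, 100, seed=1)
strategy = {v: random.choice([0, 1]) for v in G.nodes}
for _ in range(10000):
    death_birth_step(G, strategy, game)
```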

Throughout the paper, fitness is calculated in the limit of weak selection. To account for the spatial structure, the authors use pair approximation. This method is exact on the Bethe lattice and accurate for any graph that is locally tree-like. Thankfully, random regular graphs are locally tree-like (out to distances on the order of \log n) and hence the analysis is reasonable for large n. However, most real-world networks tend to have high clustering coefficients, and thus are not tree-like. For such networks, the Ohtsuki-Nowak transform would be a poor approximation, but still a step closer to reality than the completely inviscid replicator dynamics.
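You can check the locally tree-like claim directly: a large random regular graph has almost no short loops, so its average clustering coefficient is close to zero (a quick sketch, again assuming networkx):

```python
import networkx as nx

# a big random 3-regular graph has almost no triangles, so pair approximation
# (which ignores loops) is a reasonable description of its local structure
G = nx.random_regular_graph(3, 10000, seed=0)
print(nx.average_clustering(G))  # close to 0
```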

The stunning aspect of the paper is that in the limit of very large populations they get an exceptionally simple rule. Given a game matrix A, you just have to compute the Ohtsuki-Nowak transform A' = \text{ON}_{k,\nu}(A) and then you recover the dynamics of your game by simply looking at the replicator equation with A'. I will present the authors' result in slightly different notation than in their paper, in order to stress the most important qualitative aspects of the transform. For that purpose, I need to define the column vector \Delta which is the diagonal of the game matrix A = [a_{ij}], i.e. \Delta_i = a_{ii}. The transform becomes:

\text{ON}_{k,\nu}(A) = A + \frac{1}{k-2}(\Delta\vec{1}^T - \vec{1}\Delta^T) + \frac{\nu}{k - 2}(A - A^T)

where \vec{1} is the all-ones vector; thus \Delta\vec{1}^T (\vec{1}\Delta^T) is a matrix with the diagonal elements repeated to fill each row (column). I call the last parameter (\nu) the neighborhood update parameter. Ohtsuki & Nowak show that for birth-death and pairwise comparison \nu = 1, for imitation \nu = \frac{3}{k + 3}, and for death-birth \nu = \frac{1}{k + 1}. I used this suggestive notation because I think the transform can be refined to allow all values of \nu between \frac{1}{k + 1} and 1. In particular, I think the easiest way to achieve this is by selecting between BD and DB with some probability at each update step.
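Since the transform is just matrix arithmetic, it is easy to compute. Here is a small numpy helper (my own sketch, following the equation above; note that the k - 2 denominator means the transform only makes sense for k \geq 3), with death-birth updating as the example:

```python
import numpy as np

def ohtsuki_nowak(A, k, nu):
    """Return ON_{k, nu}(A) = A + (Delta 1^T - 1 Delta^T)/(k - 2) + nu (A - A^T)/(k - 2)."""
    A = np.asarray(A, dtype=float)
    ones = np.ones(len(A))
    delta = np.diag(A)  # vector of diagonal entries a_ii
    return A + (np.outer(delta, ones) - np.outer(ones, delta) + nu * (A - A.T)) / (k - 2)

# example: a Prisoner's Dilemma in [[1, U], [V, 0]] form with death-birth updating (nu = 1/(k + 1))
k = 3
A = [[1.0, -0.3], [1.3, 0.0]]
print(ohtsuki_nowak(A, k, nu=1 / (k + 1)))
```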

My other reason for rewriting in this condensed form is to gain an intuition for each term of the sum. The first term is the original game, and it is expected to appear as the leading term: since we are making a small perturbation in the limit of weak selection, a linear correction to A is not surprising. The second term contains only differences of diagonal elements between the focal strategy and the competing strategy. Thus, the second term is a local-aggregation term, and it is not surprising to see it lose strength as k increases and the focal agent has a smaller and smaller effect on its neighborhood. Ohtsuki & Nowak think of the third term as an assortment factor, but this intuition is lost on me. Hilbe (2011) studies the replicator equation with finite sampling, and arrives at the same modification (with some massaging) as the third term of \text{ON}_{k,\nu}(A); we will discuss this in a future post. Thus, I find it better to think of the third term as a finite-sampling term. It is then clear why the term decreases from BD to DB: the sampling neighborhood for calculating fitness is quadratically larger in the latter update rule.

As case studies, Ohtsuki & Nowak (2006) apply their transform to the prisoner’s dilemma, snowdrift, coordination, and rock-paper-scissors games. I refer the interested reader to the original paper for the last example, and I will deal with the former three at once by looking at all cooperate-defect games:

\begin{aligned}  \text{ON}_{k,\nu}( \begin{pmatrix} 1 & U \\ V & 0 \end{pmatrix} )  & = \begin{pmatrix} 1 & U \\ V & 0 \end{pmatrix} + \frac{1}{k - 2} \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} + \frac{\nu}{k - 2} \begin{pmatrix} 0 & U - V \\ V - U & 0 \end{pmatrix} \\  & = \begin{pmatrix} 1 & X \\ Y & 0 \end{pmatrix}  \end{aligned}

where X and Y form a new coordinate system given by the linear transform:

\begin{pmatrix}X \\ Y \end{pmatrix} = \frac{1}{k - 2}\begin{pmatrix}k - 2 + \nu & - \nu \\ - \nu & k - 2 + \nu \end{pmatrix} \begin{pmatrix} U \\ V \end{pmatrix} + \frac{1}{k - 2}\begin{pmatrix}1 \\ - 1\end{pmatrix}
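As a quick sanity check, we can evaluate this map at a sample point (the specific Hawk-Dove payoffs below are my own choice for illustration):

```python
import numpy as np

def xy_coordinates(U, V, k, nu):
    """Map the off-diagonal payoffs (U, V) to their transformed values (X, Y)."""
    M = np.array([[k - 2 + nu, -nu], [-nu, k - 2 + nu]]) / (k - 2)
    return M @ np.array([U, V]) + np.array([1.0, -1.0]) / (k - 2)

# Hawk-Dove example (0 < U, 1 < V) under death-birth updating on a 3-regular graph
k = 3
X, Y = xy_coordinates(0.5, 1.5, k, nu=1 / (k + 1))
print(X, Y)  # X > 0 and Y < 1, so in the transformed game cooperation is the only ESS
```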

The effects of this coordinate transform on UV-space can be seen in the figure above for k = 3. The green region is where cooperation is the only evolutionarily stable strategy, and the red is where defection is the only ESS. The uncoloured regions have an internal fixed point of C-D dynamics: in the top-right region of game space the point is an attractor, while in the bottom-left it is a point of bifurcation. We can see that BD cannot create cooperation in the PD game, while DB creates a small sliver of white where cooperation can co-exist with defection. In both conditions, we can see large effects for the Hawk-Dove game. In the non-transformed game, the only type of equilibrium in HD is an attractive fixed point where cooperators and defectors co-exist. However, under both the BD and DB dynamics, regions of full cooperator and full defector dominance appear in the HD game. Similarly for coordination games: usually there is only bifurcation around a fixed point, but under both BD and DB, regions of all-cooperator and all-defector dominance appear. Thus, spatial structure can resolve the equilibrium selection dilemma in coordination games for some game parameters.


Hilbe, C. (2011). Local replicator dynamics: a simple link between deterministic and stochastic models of evolutionary game theory. Bulletin of Mathematical Biology, 73: 2068-2097.

Ohtsuki, H., & Nowak, M.A. (2006). The replicator equation on graphs. Journal of Theoretical Biology, 243(1): 86-97. PMID: 16860343


21 Responses to Ohtsuki-Nowak transform for replicator dynamics on random graphs

  1. kylerjbrown says:

    Would other update rules fall in between those two in UV space, or might there be even wilder changes due to update rule?

    • IM falls in between them, as I suspect would the probabilistic rule I suggest. However, I don’t think you can reasonably take \nu < \frac{1}{k + 1} without making odd update rules.

      As far as I can tell, that term is related to the number of agents you sample to get the most important info for an update. It is unusual to look more than two steps away from a focal individual to calculate where his child goes, and the number of agents within two steps is k(k - 1), and the finite-sampling discount factor according to Hilbe would go as \frac{1}{N - 2}, so having more than \frac{1}{(k - 2)(k + 1)} of a discount seems unlikely to me, and DB achieves this.

      However, I will think about it more carefully and try to expand on the ideas when I post about Hilbe's paper. Thanks for the suggestion!

  2. Pingback: Social learning dilemma | Theory, Evolution, and Games Group

  3. Pingback: Evolutionary games in set structured populations | Theory, Evolution, and Games Group

  4. Pingback: Natural algorithms and the sciences | Theory, Evolution, and Games Group

  5. Pingback: Game theoretic analysis of motility in cancer metastasis | Theory, Evolution, and Games Group

  6. Pingback: Microenvironmental effects in prostate cancer dynamics | Theory, Evolution, and Games Group

  7. Pingback: Edge effects on the invasiveness of solid tumours | Theory, Evolution, and Games Group

  8. Pingback: Computational complexity of evolutionary stable strategies | Theory, Evolution, and Games Group

  9. Pingback: Cooperation through useful delusions: quasi-magical thinking and subjective utility | Theory, Evolution, and Games Group

  10. Pingback: Approximating spatial structure with the Ohtsuki-Nowak transform | Theory, Evolution, and Games Group

  11. Pingback: Cataloging a year of blogging: cancer and biology | Theory, Evolution, and Games Group

  12. Pingback: Space and stochasticity in evolutionary games | Theory, Evolution, and Games Group

  13. Pingback: Seeing edge effects in tumour histology | Theory, Evolution, and Games Group

  14. Pingback: Making model assumptions clear | CancerEvo

  15. Pingback: Measuring games in the Petri dish | Theory, Evolution, and Games Group

  16. Pingback: Spatializing the Go-vs-Grow game with the Ohtsuki-Nowak transform | Theory, Evolution, and Games Group

  17. Pingback: Token vs type fitness and abstraction in evolutionary biology | Theory, Evolution, and Games Group

  18. Pingback: Abstract is not the opposite of empirical: case of the game assay | Theory, Evolution, and Games Group

  19. Pingback: Heuristic models as inspiration-for and falsifiers-of abstractions | Theory, Evolution, and Games Group

  20. Pingback: Effective games from spatial structure | Theory, Evolution, and Games Group
