## Computational complexity of evolutionary stable strategies

Yesterday, I shared a video of John Maynard Smith introducing evolutionary game theory (EGT) to the London Mathematical Society. I suggested that at its foundation, EGT was just like classical game theory, and based on equilibrium analysis — the evolutionary stable strategy (Maynard Smith & Price, 1973). Given a utility function $U(\sigma,\sigma')$ that gives the expected payoff for mixed strategy $\sigma$ interacting with mixed strategy $\sigma'$, we say that $\sigma^*$ is an evolutionary stable strategy (ESS) if:

1. For all $\sigma' \neq \sigma^*$ we have $\quad U(\sigma^*,\sigma^*) \geq U(\sigma',\sigma^*)$, and
2. If $U(\sigma^*,\sigma^*) = U(\sigma',\sigma^*)$ then $U(\sigma^*,\sigma') > U(\sigma',\sigma')$.
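To make the two conditions concrete, here is a small numerical sketch (assuming Python with NumPy, and hypothetical Hawk-Dove payoffs with resource $V = 2$ and fight cost $C = 4$) that checks both clauses for the known mixed ESS of the Hawk-Dove game, which plays Hawk with probability $V/C$:

```python
import numpy as np

# Hawk-Dove payoff matrix (hypothetical values: resource V = 2, fight cost C = 4)
V, C = 2.0, 4.0
A = np.array([[(V - C) / 2, V],       # Hawk vs (Hawk, Dove)
              [0.0,         V / 2]])  # Dove vs (Hawk, Dove)

def U(s, t):
    """Expected payoff of mixed strategy s against mixed strategy t."""
    return s @ A @ t

# Known mixed ESS of Hawk-Dove: play Hawk with probability V/C
ess = np.array([V / C, 1 - V / C])

rng = np.random.default_rng(0)
for _ in range(1000):
    p = rng.random()              # a random mutant's Hawk probability
    if abs(p - V / C) < 1e-6:     # skip mutants numerically equal to the ESS
        continue
    mutant = np.array([p, 1 - p])
    # Condition 1: the ESS is a best response to itself
    assert U(ess, ess) >= U(mutant, ess) - 1e-12
    # Condition 2: against a neutral mutant, the ESS does strictly better
    if np.isclose(U(ess, ess), U(mutant, ess)):
        assert U(ess, mutant) > U(mutant, mutant)
print("no sampled mutant can invade")
```

Since the ESS here is interior, every mutant earns exactly $U(\sigma^*,\sigma^*)$ against the incumbent, so the second condition is doing all the work in this example.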

Or we can look at it from the invader's perspective: a strategy $\sigma'$ can invade a host strategy $\sigma$ if (1) it has a better payoff in a population of $\sigma$s than $\sigma$s have against themselves, or (2) if they are neutral, then a small population of $\sigma'$s benefit each other at least as much as they benefit the host population. If some strategy $\sigma^*$ has no invading strategy $\sigma'$ then it is an evolutionary stable strategy.

If you are an observant reader and familiar with classical game theory then you probably noticed that the first condition of the direct definition is just the definition of a symmetric Nash equilibrium (NE). Since the second clause is joined by an “and”, the ESS conditions are strictly more demanding: every ESS is an NE, but not every NE is an ESS. However, that second condition seems innocuous enough, surely it can’t change the qualitative nature of the equilibrium concept?
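To preview why it can, here is a minimal sketch (again assuming Python with NumPy) of the classic counterexample, rock-paper-scissors: the uniform mixed strategy is a symmetric NE, but it fails the second ESS condition, so neutral mutants can drift in unopposed:

```python
import numpy as np

# Rock-Paper-Scissors payoffs: win = +1, loss = -1, tie = 0
A = np.array([[ 0, -1,  1],   # Rock vs (Rock, Paper, Scissors)
              [ 1,  0, -1],   # Paper
              [-1,  1,  0]])  # Scissors

def U(s, t):
    """Expected payoff of mixed strategy s against mixed strategy t."""
    return s @ A @ t

nash = np.ones(3) / 3               # uniform mixing: the symmetric NE

rng = np.random.default_rng(0)
mutant = rng.dirichlet(np.ones(3))  # an arbitrary mixed mutant

# Condition 1 holds, but only with equality: everyone scores 0 against uniform
assert np.isclose(U(mutant, nash), U(nash, nash))
# Condition 2 fails: U(nash, mutant) == U(mutant, mutant) == 0 for every mutant,
# so the incumbent is never strictly better against the invaders
assert np.isclose(U(nash, mutant), U(mutant, mutant))
print("uniform RPS is a NE but not an ESS")
```

The equalities here are not numerical accidents: the RPS matrix is antisymmetric, so $U(\sigma,\sigma) = 0$ for every mixed $\sigma$, and the uniform strategy earns $0$ against everything.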