Multiplicative versus additive fitness and the limit of weak selection

Previously, I have discussed the importance of understanding how fitness is defined in a given model. So far, I’ve focused on how mathematically equivalent formulations can have different ontological commitments. In this post, I want to touch briefly on another concern: two different types of mathematical definitions of fitness. In particular, I will discuss additive fitness versus multiplicative fitness.[1] You often see the former in continuous time replicator dynamics and the latter in discrete time models.

In some ways, these versions are equivalent: there is a natural bijection between them through the exponential map or by taking the limit of infinitesimally small time-steps. A special case of more general Lie theory. But in practice, they are used differently in models. Implicitly changing which definition one uses throughout a model — without running back and forth through the isomorphism — can lead to silly mistakes. Thankfully, there is usually a quick fix for this in the limit of weak selection.

I suspect that the content of this post is common knowledge. However, I didn’t have a quick reference to give to Pranav Warman, so I am writing this.

Fitness is the currency of evolution. A summary statistic and useful simplification. A single quantity that we invented to describe the aspects of an organism essential to evolutionary dynamics. We usually want this quantity to be totally-ordered — so we can compare any two fitnesses — and continuous. Thus, we pick the real numbers as the meter-stick of fitness. We usually want our models to be both backward-looking — to infer common ancestors, prior populations, etc. — and forward-looking — to describe where evolution will go; so the fitnesses need to be composable and invertible. We imagine that equilibria are at least conceptually possible — although an equilibrium might not exist in any particular model — so we need a static element of fitness. So we usually have composition of fitness, a static element of fitness, and inverses. Abstractly, we use a group over the reals to represent fitness.

But there are two popular groups over the reals: the additive group \mathbb{R}_+ = (\mathbb{R}, 0, +) and the multiplicative group \mathbb{R}_\times = (\mathbb{R}_{>0}, 1, \times). For the former, a population of type X can have any real number w_X \in \mathbb{R} as a fitness; at stasis the population has fitness w_X = 0; if it is growing then w_X > 0; and if shrinking then w_X < 0. For the latter, a population can have any positive real number W_X \in \mathbb{R}_{>0} as a fitness; at stasis the population has fitness W_X = 1; if it is growing then W_X > 1; and if shrinking then 1 > W_X > 0. For consistency, I will use lower case letters for the former, and upper case for the latter.
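
To make the correspondence between the two groups concrete, here is a minimal Python sketch (the specific values W_1 = 1.2 and W_2 = 0.8 are just illustrative) checking that the natural bijection mentioned above — the logarithm — maps the multiplicative structure onto the additive one:

import math

W_1, W_2 = 1.2, 0.8                      # two multiplicative fitnesses (illustrative values)
w_1, w_2 = math.log(W_1), math.log(W_2)  # the corresponding additive fitnesses

print(math.isclose(math.log(W_1 * W_2), w_1 + w_2))  # composition maps to composition
print(math.log(1.0) == 0.0)                          # stasis maps to stasis
print(math.isclose(math.log(1 / W_1), -w_1))         # inverses map to inverses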

Let’s consider the simplest population dynamics typically associated with these two representations of fitness: continuous time dynamics for additive fitness, and discrete time dynamics for multiplicative fitness.[2]

\frac{dN_X}{dt} = w_X N_X \quad \text{and} \quad N_X(t + 1) = W_X N_X(t)
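
As a sanity check on these two equations, here is a small Python sketch (the value w_X = 0.05 and the starting population of 100 are my choices) comparing the exact solution of the continuous model, N_X(t) = N_X(0) e^{w_X t}, with repeated applications of the discrete update when we set W_X = e^{w_X}:

import math

w_X = 0.05            # additive fitness (illustrative value)
W_X = math.exp(w_X)   # matching multiplicative fitness via the exponential map

N_discrete = 100.0
for t in range(1, 6):
    N_discrete *= W_X                          # N_X(t + 1) = W_X N_X(t)
    N_continuous = 100.0 * math.exp(w_X * t)   # solution of dN_X/dt = w_X N_X
    print(t, N_discrete, N_continuous)         # the two columns agree at integer times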

The issue with these equations is that we defined time and fitness together. This matters more for the discrete time equation. If we still want our fitness to be per unit time, but want to use a time-step other than a single unit, then we need to modify the discrete time equation as:

N_X(t + dt) = W_X^{dt} N_X(t)

Note that since the fitness is multiplicative, time rescaling is done by exponentiation instead of the multiplication we’d use for additive fitness. To get an intuition for why this is the case, suppose that we wanted to look 2dt timesteps into the future; then we would have:

\begin{aligned}  N_X(t + 2dt) & = N_X((t + dt) + dt) \\  & = W_X^{dt} N_X(t + dt) \\  & = W_X^{2dt} N_X(t)  \end{aligned}

And in general, if you had an integer n = 1/dt, then you’d have:

\begin{aligned}  N_X(t + 1) & = N_X(t + ndt) \\  & = W_X^{dt} N_X(t + (n - 1)dt) \\  & ... \\  & = W_X^{ndt} N_X(t) \\  & = W_X N_X(t)  \end{aligned}
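
A quick numerical check of this rescaling (the choice of W_X = 1.2 and n = 10 sub-steps is arbitrary): applying the rescaled update n = 1/dt times recovers a single application of W_X.

W_X = 1.2        # multiplicative fitness per unit time (illustrative value)
n = 10           # number of sub-steps per unit of time
dt = 1.0 / n

N = 100.0
for _ in range(n):
    N *= W_X ** dt        # n applications of N_X(t + dt) = W_X^{dt} N_X(t)

print(N, 100.0 * W_X)     # both are 120.0, up to floating-point error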

With this more robust discrete time equation, we can now use the definition of the derivative to show how additive and multiplicative fitness are related:

\begin{aligned}  \frac{dN_X}{dt} & = \lim_{dt \rightarrow 0} \frac{N_X(t + dt) - N_X(t)}{dt} \\  & = N_X(t) \lim_{dt \rightarrow 0} \frac{W_X^{dt} - 1}{dt}  \end{aligned}

Combining this with the definition of the continuous model, we get:

w_X = \lim_{dt \rightarrow 0} \frac{W_X^{dt} - 1}{dt}

Rearranging the above equation for W_X and using the limit definition of the exponential map (i.e. that e^{x} = \lim_{n \rightarrow \infty} (1 + \frac{x}{n})^n), we get:

\begin{aligned}  W_X & = \lim_{dt \rightarrow 0} (1 + w_X dt)^{1/dt} \\  & = e^{w_X}  \end{aligned}
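
We can watch both of these limits converge numerically. In the following sketch (with an arbitrary w_X = 0.3), the first column approaches w_X and the second approaches W_X = e^{w_X} as dt shrinks:

import math

w_X = 0.3
W_X = math.exp(w_X)

for dt in (1.0, 0.1, 0.01, 0.001):
    additive_from_mult = (W_X ** dt - 1) / dt        # converges to w_X
    mult_from_additive = (1 + w_X * dt) ** (1 / dt)  # converges to W_X = e^{w_X}
    print(dt, additive_from_mult, mult_from_additive)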

Given that the exponential map is analytic, any physicists in the audience are at this point probably itching to Taylor expand it:

\begin{aligned}  W_X & = e^{w_X} \\  & = 1 + w_X + O(w_X^2)  \end{aligned}

where O(\cdot) is big-O notation that covers the quadratic and higher order terms in w_X. When w_X is very small (i.e. close to zero in absolute value), we can ignore the terms that are quadratic or higher in w_X, giving us just W_X \approx 1 + w_X.[2] This is the limit of weak selection.
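
To see how quickly the weak selection approximation kicks in, here is a small sketch (the particular values of w_X are arbitrary); the error between e^{w_X} and 1 + w_X shrinks roughly like w_X^2/2:

import math

for w_X in (0.5, 0.1, 0.01, 0.001):
    exact = math.exp(w_X)
    weak = 1 + w_X
    print(w_X, exact, weak, exact - weak)  # error is approximately w_X**2 / 2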

If there is no natural time-scale in the model then this limit can always be obtained by rescaling time (and thus redefining w_X, W_X as smaller and smaller). However, in evolutionary game theory models there is often a natural time-scale: a single game interaction. In evolutionary game theory, it is customary to assume that the effects of the game interaction are very small compared to the base fitness, and thus to work in the weak selection limit. This often allows a simple analysis of the dynamic regimes by solving an (often linear) system of equations for when w_X = 0. However, if we are interested in stochastic models of finite populations, then Wu et al. (2010, 2013) caution us that even the qualitative rank-ordering of strategy success might not carry over from weak to strong selection when the payoffs aren’t small compared to the base fitness. The higher order terms of the Taylor expansion can start to matter.
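
As a toy illustration of how the choice of fitness mapping can matter outside the weak selection limit (this is not a reconstruction of the Wu et al. analysis, and the population size, payoff, and selection strengths below are my choices), consider the fixation probability of a single mutant with a constant payoff advantage \pi in a Moran process, under the linear mapping 1 + \beta \pi versus the exponential mapping e^{\beta \pi}:

import math

def fixation_probability(r, N):
    # Fixation probability of a single mutant with constant relative fitness r
    # in a Moran process with population size N.
    if math.isclose(r, 1.0):
        return 1.0 / N
    return (1 - 1 / r) / (1 - 1 / r ** N)

N = 50
payoff = 1.0                            # payoff advantage of the mutant (illustrative)
for beta in (0.01, 0.1, 1.0):           # selection strength
    r_linear = 1 + beta * payoff        # weak-selection style mapping
    r_exp = math.exp(beta * payoff)     # exponential mapping
    print(beta, fixation_probability(r_linear, N), fixation_probability(r_exp, N))

At beta = 0.01 the two mappings give nearly identical answers; at beta = 1 they visibly disagree, which is the quantitative shadow of those higher order terms.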

Mathematically, this discussion is just a trivial case of the more general Lie theory. In particular, we can think of additive fitness as the Lie algebra corresponding to the Lie group that is multiplicative fitness. Bringing in such heavy mathematical machinery is not particularly useful in this case — which is why I leave it as an afterthought — but it can help us see the way when we are considering more complicated models like Moran processes.

Notes and References

  1. I don’t know what the standard terminology for this is, if there is any. I am calling the two fitnesses additive versus multiplicative because of which group operation they use on the reals. However, I realize that there might be some confusion with additive versus multiplicative fitness effects, terms that are used in the context of gene epistasis. Although these two topics are not completely independent, I will avoid talking about how exactly fitness is calculated from a genotype in this post, and focus exclusively on the first topic of how fitnesses define dynamics.
  2. If we want to, we can now rewrite our weak-selection discrete time equation as:

    N_X(t + 1) = (1 + w_X)N_X(t)

    Of course, we could have also arrived at the same process not from weak selection but from a different set of micro-dynamical assumptions. For example, an overlapping generations model.
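
    As a small sketch of how this weak-selection update compares to the exact multiplicative one over many generations (the value w_X = 0.05 and the 100 generations are arbitrary):

    import math

    w_X = 0.05
    N_weak, N_exact = 100.0, 100.0
    for t in range(100):
        N_weak *= (1 + w_X)          # weak-selection update
        N_exact *= math.exp(w_X)     # exact update with W_X = e^{w_X}

    print(N_weak, N_exact)           # the small per-step error compounds over generations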

Wu, B., Altrock, P. M., Wang, L., & Traulsen, A. (2010). Universality of weak selection. Physical Review E, 82(4): 046106.

Wu, B., García, J., Hauert, C., & Traulsen, A. (2013). Extrapolating weak selection in evolutionary games. PLoS Computational Biology, 9(12). PMID: 24339769.
