# Evolution as a risk-averse investor

I don’t know about you, but most of my money is in my savings account and not in more volatile assets like property, bonds, or stocks. This is a consequence of either laziness to explore my options or — the more comforting alternative — extreme risk-aversion. Although it would be nice to have a few thousand dollars more to my name, it would be devastating to have a few thousand dollars less. As such, if I were offered a lottery with a 50% chance of losing $990 and a 50% chance of winning $1000, then I would probably choose not to play, even though there is an expected gain of $10; I am risk averse, and the extra variance of the bet versus the certainty of maintaining my current holdings is not worth $10 to me. In most cases, so are most investors, although the trade-off between expected profit and variance differs between agents.

Daniel Bernoulli (8 February 1700 – 17 March 1782) was one of the mathematicians in the famous Bernoulli family of Basel, Switzerland, and a contemporary and friend of Euler and Goldbach. He is probably most famous for Bernoulli’s principle in hydrodynamics, which his hyper-competitive father Johann published in a book that he pre-dated by ten years to try to claim credit. One of Daniel’s most productive periods was spent working alongside Euler and Goldbach during the golden days (1724-1732) of the St. Petersburg Academy. It was in Russia that he developed his solution to the St. Petersburg paradox by introducing risk-aversion, and made his contribution to probability, finance, and — as we will see — evolution.

To add to the family confusion, the St. Petersburg paradox was introduced by Nicolaus Bernoulli in a 1713 letter to de Montmort. It consists of a lottery that starts with a pot of $2; at each step, a fair coin is flipped and if it comes up heads then the pot is doubled, and if tails then the player wins the whole pot. The question is: how much should somebody pay for a chance to play this lottery? The expected payoff of this lottery is infinite, so if you were maximizing expected payoff then you should pay all the money you have and can manage to borrow for a chance to play. Yet I probably wouldn’t pay more than $8 to play, and few would pay more than $100 — never mind the supposedly rational choice of all your life savings. To rectify this discrepancy with actual decision making, Daniel Bernoulli suggested the concept of a utility function. What matters to you is not the expected payoff, but the expected utility — a measure of how the payoff would make you feel. In particular, he suggested the logarithmic utility function: if your current wealth is w then it provides you with an amount ln w of utility, and if you win $p then your utility increases to ln(w + p). Under this utility function, if you are a millionaire then you should be willing to pay up to about $11 to play. Unfortunately, this solution is only a band-aid. If instead of making the payoff for n heads be $2^n$, we chose $e^{2^n}$, then the paradox would come right back. In general, for any unbounded utility function we can always choose a payoff function so that the St. Petersburg paradox yields an infinite expected utility — given a utility function f, simply choose the payoff for n heads to be any x such that $f(x) \geq 2^n$; this is always possible when f is unbounded. However, the key point is that concave utility functions produce risk aversion, regardless of whether the St. Petersburg paradox can be tweaked to be profitable enough to overcome most of them.
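To make the logarithmic-utility solution concrete, here is a small sketch (under the convention used later in this post that a run of n heads pays $2^n$, so that payoff occurs with probability $2^{-(n+1)}$) that finds the most a player with wealth w should pay: the price at which the expected log utility of playing equals the log utility of standing pat, found by bisection.

```python
import math

def expected_log_utility(wealth, price, terms=200):
    """E[ln(wealth - price + payoff)], where a run of n heads pays 2^n
    with probability 2^-(n+1); the infinite sum is truncated at `terms`."""
    return sum(0.5 ** (n + 1) * math.log(wealth - price + 2 ** n)
               for n in range(terms))

def fair_price(wealth):
    """Largest price at which playing is at least as good as not playing.
    Expected log utility decreases in price, so bisection works."""
    lo, hi = 0.0, float(wealth)
    for _ in range(100):
        mid = (lo + hi) / 2
        if expected_log_utility(wealth, mid) >= math.log(wealth):
            lo = mid
        else:
            hi = mid
    return lo

print(round(fair_price(10 ** 6), 2))  # roughly $11 for a millionaire
```

Note that the fair price grows only logarithmically with wealth, which is exactly the risk-averse behavior that defuses the paradox for this payoff schedule.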

What does this have to do with evolution? Evolution provides us with a very convenient bounded concave utility function. The equivalent of money in evolution is fecundity or absolute fitness (not exactly the same, but for convenience we will assume simple life histories; the mathematical details are identical for more complicated life histories, we just have to change the descriptive words). Suppose you are a seasonal organism that lives for one breeding season; your fecundity is the number of offspring that you produce. If you could choose a reproductive strategy that produces either 2 children for sure, or 1 child with 55% probability and 4 children with 45% probability, then which would you choose? Well, a lot of people think that evolution tries to maximize fitness, and the sure bet has an expected fitness of 2 while the randomized one has 2.35; surely the latter is the better bet. Unfortunately, this intuition would mislead you if you are competing against other organisms facing the same evolutionary choices.

In particular, as the world approaches carrying capacity, the only thing you really care about is your proportion of the population. Suppose an agent type makes up a proportion p of the population and receives a payoff $W_1$, while the average payoff across the rest of the population is $W_2$. At the next time step, their proportion is given by $p' = \frac{pW_1}{pW_1 + (1 - p)W_2}$. Since we care about which agent type comes to dominate the population, this is the relevant utility function, and for all initial proportions 0 < p < 1 it is concave and bounded as a function of $W_1$. Hence, evolution will choose to minimize variance when all else is equal (this makes it natural to find things like risk-dominance in evolutionary game theory). In the case I gave before, however, the non-variable strategy's mean was strictly lower, so all else wasn't equal; yet the sure bet still wins, which means that the arithmetic mean is not a good way of averaging.
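This claim is easy to check with a small simulation (a hypothetical sketch using the fitness values from the example above): a sure-bet type with $W_1 = 2$ every season competes against a lottery type whose fitness each season is 1 with probability 55% and 4 with probability 45%, iterating the update rule $p' = \frac{pW_1}{pW_1 + (1 - p)W_2}$.

```python
import random

def simulate(generations=5000, p=0.5, seed=0):
    """Proportion of the sure-bet type (W1 = 2 every season) after
    competing against a lottery type (W2 = 1 w.p. 0.55, else 4)."""
    rng = random.Random(seed)
    for _ in range(generations):
        W1 = 2.0
        # the lottery outcome is shared by the whole lottery type each season
        W2 = 1.0 if rng.random() < 0.55 else 4.0
        p = p * W1 / (p * W1 + (1 - p) * W2)
    return p

print(simulate())  # the sure-bet type takes over despite its lower arithmetic mean
```

The drift in log odds per season is $\log 2 - 0.45 \log 4 \approx 0.069 > 0$, so the lower-mean but zero-variance type reliably dominates.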

Instead, Orr (2007; following Gillespie, 1977) advocated using the geometric mean for calculating average fitness. The advantage of the geometric mean is that it takes the variance into account, since $G \approx \mu - \frac{\sigma^2}{2\mu}$, where $G$ is the geometric mean, $\mu$ is the arithmetic mean, and $\sigma^2$ is the variance. Another way to think of the geometric mean is that its logarithm is the arithmetic mean of the log of fitness. Unfortunately, the logarithm of the update rule is still not linear in the log of the proportion; instead, we should look at the log of the odds ratio (or logit):
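Plugging the earlier offspring lottery into these quantities (a quick sketch, using the second-order approximation $G \approx \mu - \sigma^2/(2\mu)$) shows why the sure bet wins: the lottery's arithmetic mean beats 2, but its geometric mean does not.

```python
import math

# Lottery from the example: 1 child w.p. 0.55, 4 children w.p. 0.45.
lottery = [(0.55, 1.0), (0.45, 4.0)]

mu = sum(p * w for p, w in lottery)                     # arithmetic mean: 2.35
G = math.exp(sum(p * math.log(w) for p, w in lottery))  # geometric mean: ~1.87
var = sum(p * (w - mu) ** 2 for p, w in lottery)        # variance: ~2.23
approx = mu - var / (2 * mu)                            # ~1.88, close to G

print(mu, G, approx)  # the sure bet's geometric mean is 2, which beats G
```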

$$\begin{aligned} \mathrm{logit}(p') & = \log \frac{p'}{1 - p'} \\ & = \log\left(\frac{pW_1}{pW_1 + (1 - p)W_2} \cdot \frac{pW_1 + (1 - p)W_2}{(1 - p)W_2}\right) \\ & = \mathrm{logit}(p) + \left(\log(W_1) - \log(W_2)\right) \end{aligned}$$

Thus the logit of p’ is a linear function of the logit of p and the log of the fitnesses. This means we can define the logistic average (the logistic function is the inverse of the logit) as the logistic of the arithmetic mean of the logits of the variables. In that case, we can say that two organisms with the same geometric mean fitness will have the same logistic average proportion in the population after selection. Just like in economics, it is important to figure out the relevant utility function in order to predict evolutionary outcomes. Even though selection is risk averse if we look at fitness and the relative proportion of organisms, it is risk neutral if we look at the logarithm of fitness and the log odds ratio of sampling the organism. Thus, I would argue that the latter units are the more natural choice for thinking about evolution.
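The linearity of the update rule in logit coordinates is easy to verify numerically (a minimal sketch with arbitrary example values for p, $W_1$, and $W_2$):

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def update(p, W1, W2):
    """One step of the replicator-style update p' = pW1 / (pW1 + (1-p)W2)."""
    return p * W1 / (p * W1 + (1 - p) * W2)

p, W1, W2 = 0.3, 2.0, 1.5
lhs = logit(update(p, W1, W2))
rhs = logit(p) + math.log(W1) - math.log(W2)
print(abs(lhs - rhs))  # agrees up to floating-point error
```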

Gillespie, J.H. (1977). Natural selection for variance in offspring number: a new evolutionary principle. Am. Nat. 111:1010-1014.

Orr, H.A. (2007). Absolute fitness, relative fitness, and utility. Evolution, 61(12): 2997-3000. DOI: 10.1111/j.1558-5646.2007.00237.x