# Evolution as a risk-averse investor

December 22, 2013

I don’t know about you, but most of my money is in my savings account and not in more volatile assets like property, bonds, or stocks. This is a consequence of either laziness to explore my options, or — the more comforting alternative — extreme risk-aversion. Although it would be nice to have a few thousand dollars more to my name, it would be devastating to have a few thousand dollars less. As such, if I were offered a lottery with a 50% chance of losing $990 and a 50% chance of winning $1000, then I would probably choose not to play, even though there is an expected gain of $10; I am risk averse, and the extra variance of the bet versus the certainty of maintaining my current holdings is not worth $10 to me. In most cases, most investors are too, although the degree of expected-profit-to-variance trade-off each is willing to make differs between agents.

Daniel Bernoulli (8 February 1700 – 17 March 1782) was one of the mathematicians in the famous Bernoulli family of Basel, Switzerland, and a contemporary and friend of Euler and Goldbach. He is probably most famous for Bernoulli’s principle in hydrodynamics, which his hyper-competitive father Johann published in his own book, pre-dated by ten years, to try to claim credit. One of Daniel’s most productive periods was spent working alongside Euler and Goldbach during the golden days (1724-1732) of the St. Petersburg Academy. It was in Russia that he developed his solution to the St. Petersburg paradox by introducing risk-aversion, and made his contribution to probability, finance, and — as we will see — evolution.

To add to the family confusion, the St. Petersburg paradox was introduced by Nicolaus Bernoulli in a 1713 letter to de Montmort. It consists of a lottery that starts with a pot of $2; at each step, a fair coin is flipped: if it comes up heads then the pot is doubled, and if tails then the player wins the whole pot. The question is: how much should somebody pay for a chance to play this lottery? The expected payoff for this lottery is infinite, so if you were maximizing expected payoff then you should pay all of the money you have and can manage to borrow for a chance to play. Yet I probably wouldn’t pay more than $8 to play, and few would pay more than $100 — never mind the “rational” choice of all your life savings.
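To make the divergence concrete, here is a quick sketch (in Python, my choice for illustration) of the lottery as described: every possible number of heads contributes exactly $1 to the expectation, so the truncated expected value grows without bound, while any single play almost always pays out something modest.

```python
import random
import statistics

def truncated_expected_payoff(max_flips):
    # Payoff 2^k occurs with probability 2^-k, so each term contributes
    # exactly 1 and the truncated expectation equals max_flips.
    return sum(2**-k * 2**k for k in range(1, max_flips + 1))

def play(rng):
    # One round: the pot starts at $2 and doubles on each head; tails ends the game.
    pot = 2
    while rng.random() < 0.5:  # heads
        pot *= 2
    return pot

rng = random.Random(42)
payoffs = [play(rng) for _ in range(100_000)]
print(truncated_expected_payoff(30))  # 30.0: expectation grows linearly in the cap
print(statistics.mean(payoffs))       # sample mean, dominated by rare jackpots
```

Note how the sample mean of even 100,000 plays stays far below the "infinite" expectation — the expectation is carried by jackpots too rare to show up in any realistic number of plays.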

To rectify this discrepancy with actual decision making, Daniel Bernoulli suggested the concept of a utility function. What matters to you is not the expected payoff, but the expected utility — a measure of how the payoff would make you feel. In particular, he suggested the logarithmic utility function: if you have a current wealth *w* then this provides you with an amount *ln w* of utility, and if you win $*p* then your utility will increase to *ln(w + p)*. Under this utility function, if you are a millionaire then you should be willing to pay only about $21 to play. Unfortunately, this solution is only a band-aid. If instead of making the payoff for *n* heads be *2^n*, we chose *e^(2^n)*, then the paradox would come right back. In general, it is easy to see that for any unbounded utility function we can always choose a payoff function so that the St. Petersburg paradox yields an infinite expected utility — given a utility function *f*, simply choose the payoff for *n* heads to be any *x* such that *f(w + x) − f(w) ≥ 2^n*; this is always possible when *f* is unbounded. However, the key point is that concave utility functions produce risk aversion, regardless of whether the St. Petersburg paradox can be tweaked to be profitable enough to overcome most of them.
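As a sanity check on Bernoulli's numbers, we can solve numerically for the fee that leaves a log-utility agent indifferent between playing and not playing. This is only a sketch: the truncation depth and bisection bounds are arbitrary choices of mine, made for convenience.

```python
import math

def expected_utility(wealth, fee, max_flips=60):
    # Expected log-utility after paying `fee` to play: with probability 2^-k
    # the player wins 2^k. Truncating at max_flips loses only ~2^-60 of the mass.
    return sum(2**-k * math.log(wealth - fee + 2**k)
               for k in range(1, max_flips + 1))

def fair_price(wealth):
    # Largest fee for which playing still beats keeping ln(wealth), by bisection.
    lo, hi = 0.0, wealth / 2
    for _ in range(60):
        mid = (lo + hi) / 2
        if expected_utility(wealth, mid) >= math.log(wealth):
            lo = mid
        else:
            hi = mid
    return lo

print(round(fair_price(10**6), 2))  # roughly $21 for a millionaire
```

The same function shows the fee scaling roughly logarithmically in wealth — someone with only $1000 should pay around half as much as the millionaire.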

What does this have to do with evolution? Evolution provides us with a very convenient bounded concave utility function. The equivalent of money in evolution is fecundity or absolute fitness (not exactly the same, but for convenience we will assume simple life histories; the mathematical details are identical for more complicated life histories, we just have to change the descriptive words). Suppose you are a seasonal organism that lives for one breeding season; your fecundity is the number of offspring that you produce. If you could choose your reproductive strategy between producing 2 children for sure, or 1 child with 55% probability and 4 children with 45% probability, then which would you choose? Well, a lot of people think that evolution tries to maximize fitness, and the sure bet has an expected fitness of 2 while the randomized one has 2.35; surely the latter is the better bet. Unfortunately, this intuition would mislead you if you are competing against other organisms facing the same evolutionary choices.

In particular, as the world approaches carrying capacity, the only thing you really care about is your proportion of the population. Suppose an agent type makes up a proportion *p* of the population and receives a payoff *w*, while the average payoff across the rest of the population is *v*. At the next time step, their proportion is given by *p' = pw/(pw + (1 − p)v)*. Since we care about which agent type comes to dominate the population, this is the relevant utility function, and for all initial proportions *0 < p < 1* it is a concave and bounded function of the payoff *w*. Hence, evolution will choose to minimize variance when all else is equal (this makes it natural to find things like risk-dominance in evolutionary game theory). In the case I gave before, however, the non-variable strategy's mean was strictly lower, so all else wasn’t equal. This means that the arithmetic mean was not a good way of averaging.
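To see the update rule in action, here is a sketch simulating the earlier example: a "sure" type with fitness 2 every season against a "gambler" type drawing fitness 1 or 4 (one draw per season for the whole type). The arithmetic means favor the gambler (2.35 vs. 2), but the proportion update compounds multiplicatively, so the sure type takes over.

```python
import random

def update(p, w_focal, w_rest):
    # Proportion update: p' = p*w / (p*w + (1 - p)*v).
    return p * w_focal / (p * w_focal + (1 - p) * w_rest)

rng = random.Random(0)
p = 0.5  # initial proportion of the sure type
for _ in range(5000):
    w_sure = 2.0
    w_gamble = 1.0 if rng.random() < 0.55 else 4.0  # one draw for the whole type
    p = update(p, w_sure, w_gamble)

print(p)  # the sure type approaches fixation despite its lower arithmetic mean
```

The per-season drift in favor of the sure type is ln 2 − (0.55 ln 1 + 0.45 ln 4) ≈ 0.07 in log-odds, which compounds into near-certain fixation over thousands of seasons.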

Instead, Orr (2007; following Gillespie, 1977) advocated using the geometric mean for calculating average fitness. The advantage of the geometric mean is that it takes into account the variance, since *G ≈ μ − σ²/(2μ)*, where *G* is the geometric mean, *μ* is the arithmetic mean, and *σ²* is the variance. Another way to think of the geometric mean is as the exponential of the arithmetic mean of the log of fitness. Unfortunately, the logarithm of the update rule is still not linear in the log of the proportion; instead, we should look at the log of the odds ratio (or logit):

logit(*p'*) = logit(*p*) + ln *w* − ln *v*, where logit(*p*) = ln(*p*/(1 − *p*)).
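For the offspring gamble above, the different ways of averaging can be checked directly; the variance-corrected approximation is close to the exact geometric mean, and both fall below the sure strategy's fitness of 2.

```python
import math

# Fitness distribution of the gambling strategy: 1 w.p. 0.55, 4 w.p. 0.45.
values, probs = [1.0, 4.0], [0.55, 0.45]

mu = sum(p * v for p, v in zip(probs, values))                        # arithmetic mean
var = sum(p * (v - mu)**2 for p, v in zip(probs, values))             # variance
geo = math.exp(sum(p * math.log(v) for p, v in zip(probs, values)))   # geometric mean
approx = mu - var / (2 * mu)                                          # variance correction

print(mu, geo, approx)  # 2.35, ~1.87, ~1.88 — both below the sure strategy's 2
```

So the geometric mean correctly predicts the simulation outcome: the gambler loses even though it wins on the arithmetic mean.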

Thus, the logit of *p'* is a linear function of the logit of *p* and the log of the fitnesses. This means we can define the logistic average (the logistic function is the inverse of the logit) as the logistic of the arithmetic mean of the logits of the variables. In that case, we can say that two organisms with the same geometric mean fitness will have the same logistic average proportion in the population after selection. Just like in economics, it is important to figure out what the relevant utility function is in order to predict evolutionary outcomes. Even though selection is risk averse if we look at fitness and the relative proportion of organisms, it is risk neutral if we look at the logarithm of fitness and the log odds ratio of sampling the organism. Thus, I would argue that the latter units are the more natural choice for thinking about evolution.
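The claimed linearity is easy to verify numerically: applying the proportion update directly and applying it in logit space give the same answer. This sketch uses the same *w* (focal payoff) and *v* (average payoff of the rest) notation as above.

```python
import math

def replicator(p, w, v):
    # Direct proportion update: p' = p*w / (p*w + (1 - p)*v).
    return p * w / (p * w + (1 - p) * v)

def logit(p):
    return math.log(p / (1 - p))

def logistic(x):
    return 1 / (1 + math.exp(-x))

p, w, v = 0.3, 2.0, 1.5
direct = replicator(p, w, v)
via_logit = logistic(logit(p) + math.log(w) - math.log(v))
print(direct, via_logit)  # both ~0.3636: selection is linear in logit space
```

In logit coordinates each round of selection just adds ln *w* − ln *v*, which is why geometric mean fitness (arithmetic mean of log fitness) is the quantity that selection is risk neutral about.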

Gillespie, J.H. (1977). Natural selection for variance in offspring number: a new evolutionary principle. *Am. Nat.* 111:1010-1014.

Orr, H.A. (2007). Absolute fitness, relative fitness, and utility. *Evolution* 61(12):2997-3000. DOI: 10.1111/j.1558-5646.2007.00237.x


Thanks for yet another brilliant post Artem. Regarding the Saint-Petersburg lottery, I argue on my blog (http://blog.thegrandlocus.com/2012/03/Drunk-man-walking) that the infinite expected gain is often mis-interpreted. In particular, it is not clear that rational players should borrow all the money they can to play the lottery. Your initial fortune is always finite, which means that you can be bankrupt before you hit the jackpot. Depending on this fortune, it may be rational to play the Saint-Petersburg lottery… or not.

In the context of iterated games (which covers a lot of practical applications) risk aversion has an effect on the average gain, not only on the variance. For instance, if you are the best poker player at the table, you should not bet “all-in” because your chances of losing everything are too high. The longer you stay in the game, the more your higher skills will make a difference and the more you are sure of winning the game. Reciprocally, if you are dominated, you should bet high amounts. This is somewhat analogous to the effect of population size on the probability of fixation. In a huge population only the allele with highest fitness can be fixed because the changes in allele frequency are tiny, while in a small population the probability of fixation is smaller (because the stakes are higher, relatively speaking).

Quite a lengthy introduction to come to the question: is utility the best way to look at risk aversion? Does not the theory of stochastic processes and random walks give a more rational readout?

Wow! Nice blog, I wasn’t familiar with it before, but now your feed has an extra follower and I have another blog to catch up on reading (I am way behind).

I liked your post, but I don’t think you are really addressing the St. Petersburg paradox fairly. It really is meant as a one-time lottery, but I am willing to entertain repeated lotteries. Further, if I read your code correctly (I only skimmed it briefly), you are kind of throwing away the main part of the paradox by saying that the only outcome is that the player either doubles their investment in an iteration or loses it. But why did you pick doubling as your threshold? Why not quadrupling? Or just adding the payoff to the net worth instead of calculating the average number of doublings?

However, I do agree with the general idea that iterated processes are much more interesting, especially if we are looking at evolution. Unfortunately, this still doesn’t get us away from issues of utility, or even offer us anything new, since you basically defined a utility function where having a net worth < 0 is equal to negative infinite utility. Also, even with a simple iterated process, you still have to argue what you want to optimize: time to extinction? Integral of all money before extinction? Max money achieved before extinction? etc… things only get more complicated, but definitely worth thinking about.

