Proximal vs ultimate constraints on evolution

For a mathematician — like John D. Cook, for example — objectives and constraints are duals of each other. But sometimes the objectives are easier to see than the constraints. This is certainly the case for evolution. Here, most students would point you to fitness as the objective to be maximized. And at least at a heuristic level — under a sufficiently nuanced definition of fitness — biologists would agree. So let’s take the objective as known.

This leaves us with the harder-to-see constraints.

Ever since the microscope, biologists have been expert at studying the hard to see. So, of course — as an editor at Proceedings of the Royal Society B reminded me — they have looked at constraints on evolution. In particular, it is in departures from an expected evolutionary equilibrium that biologists see constraints on evolution. An evolutionary constraint is anything that prevents a population from being at a fitness peak.

Winding path in a hard semi-smooth landscape

In this post, I want to follow a bit of a winding path. First, I’ll appeal to Mayr’s ultimate-proximate distinction as a motivation for why biologists care about evolutionary constraints. Second, I’ll introduce the constraints on evolution that have been already studied, and argue that these are mostly proximal constraints. Third, I’ll introduce the notion of ultimate constraints and interpret my work on the computational complexity of evolutionary equilibria as an ultimate constraint. Finally, I’ll point at a particularly important consequence of the computational constraint of evolution: the possibility of open-ended evolution.

In a way, this post can be read as an overview of the change in focus between Kaznatcheev (2013) and Kaznatcheev (2018).

Proximal vs ultimate causes, just-so stories, and constraints

Biology is a deceptively uniform label for a broad range of questions and approaches to answering them. For Mayr (1961), there was a natural joint at which biology could be cut in two: questions that asked ‘how?’ versus questions that asked ‘why?’. For him, this epistemological division corresponded roughly to the disciplinary division between functional biology and evolutionary biology. The ideal functional biologist, on the one hand, sees some property or trait of an individual organism and asks how the laws of chemistry and physics implement that property or trait. To borrow a metaphor from the central dogma of biology, she is concerned with how the genetic code of the organism is decoded into a given function. She is eager to isolate the phenomenon and study it through well controlled experiments. She is closer to an engineer than a historian. The ideal evolutionary biologist, on the other hand, sees some property or trait of organisms and asks why it exists. Or more specifically: ‘how come’? In the metaphor of codes, he is interested in how the genetic code was written. He is eager to tell the story of origins. He is closer to a historian than an engineer. Of course, any particular biologist is some combination of these two extremes.

Note that both the functional and evolutionary biologist are asking questions that have causal answers. But the answers are of a different form. The causes of the functional biologist are proximal; and the causes of the evolutionary biologist are ultimate. Given an interesting trait, it is important to understand both its proximal and ultimate cause.

The Mayr (1961) distinction between proximal and ultimate causes has been very influential in biology, but also extends to other domains like psychology. I want to turn briefly to evolutionary psychology — the field concerned with ultimate explanations of human behavior — for a motivation of why constraints on evolution are important. Evolutionary psychology is often — and I think rightly — accused of just-so stories. Given almost any human behavior, a creative psychologist can come up with a story for how that behavior was adaptive for our ancestors. Such a story is hard to challenge, with little direct empirical evidence that can be brought to bear on it. As such, they can enter our understanding of ourselves as ‘science-approved’ etiological myths. At times, these myths can be dangerous and detrimental to society: just think about the lobsters.

More importantly for this post, a just-so story seems scientifically unsatisfying. Why? It is tempting to jump at the unfalsifiability of a just-so story, but this needs to be done carefully. On the surface, an answer to ‘how come?’ is not making a prediction: it is explaining a trait that we observe, and that trait certainly has some evolutionary history. You can’t falsify a true statement. The issue is rather in the kind of history. In my description of just-so stories in psychology, I snuck in a loaded word: adaptive. What makes etiological myths powerful is that they feel purposeful. And they feel purposeful because adaptation feels purposeful: to be adapted is to be adapted to something. If our evolutionary explanation were simply random drift, then it would not be a dangerous explanation. In fact, to most it wouldn’t even feel like much of an explanation: this happened by chance.

But chance is a valid explanation. In fact, Ariew (2003) argues that ignoring chance and other non-adaptive aspects of evolution as causal factors is a big oversight of Mayr’s. He suggests that Mayr’s ultimate causes are really causes due to natural selection. But natural selection is just one of many evolutionary forces, so we need to generalize to evolutionary causes: answers to ‘how come?’ that consider all the forces of evolution, not just natural selection.

If there were no constraints on evolution, however, these other forces would be irrelevant. If populations could always quickly reach fitness optima, then we would only care about natural selection. So understanding constraints on evolution is an important tool for avoiding historical accounts of traits that are always just-so adaptationist stories.

Chickens and proximal constraints on evolution

For those who view evolution as a sum of forces, with natural selection being only one of them, it is possible for the other forces to overpower natural selection and keep the population away from a local fitness peak. Such cases of maladaptation are usually attributed to mechanisms like mutational meltdown, mutation bias, recombination, genetic constraints due to lack of variation, or explicit physical or developmental constraints of a particular physiology. For a nice overview, see either Crespi (2000) or Barton & Partridge (2000).

To be specific: consider chickens — a more practical reason for a non-vegetarian to worry about evolutionary constraints. If I am breeding hens for agriculture then I want two properties. First, I want the hens to often lay delicious eggs. Given that I mostly use eggs for cake making, let me call this the ‘have-my-cake’ property. Second, I want the hens themselves to get fat and tasty so that at the end of the season I can eat their flesh. Call this the ‘eat-it’ property. Naturally, I should select the fattest, most egg-laying chickens to reproduce. The goal is to use evolution to maximize agricultural yield. Of course, people have tried this — and failed. At some point, you hit an evolutionary constraint. This constraint forces you to make a trade-off between the hen laying more eggs or producing more flesh. No matter how hard you select, there just won’t be an available mutant (genetic constraint) that is amazing at both tasks. At the reductive level, this is due to how animals grow (developmental constraints) and how metabolic energy has to be distributed (physical constraint). I simply can’t have-my-cake and eat-it, too.

On the surface, these constraints don’t seem to be directly linked to fitness and selection. Instead, they are linked to evolutionary considerations that operate at the level of a particular population (factors like population size and lack of variation) or individuals (factors like developmental and physical constraints). I will refer to such situations, where non-selection forces (and/or aspects internal to the population) keep the population from reaching a local fitness peak, as proximal constraints on evolution. A lot of these constraints are from factors that functional biologists would focus on, but clearly all constraints are of interest to evolutionary biologists. So in this way, I am stretching Mayr’s terminology a bit. In contrast, a constraint is ultimate if it is due exclusively to features of the fitness landscape. Here, I am embracing Mayr’s restriction (noted by Ariew) and associating the word ultimate only with factors arising solely from natural selection. Of course, Mayr was concerned with naming causes, not constraints, so I am stepping out a bit for both proximal and ultimate constraints.

Ultimate constraints and open-ended evolution

Most constraints that have been studied so far are proximal, because it makes little sense to try to ‘stop’ or ‘hinder’ natural selection by using natural selection. However, not all constraints are proximal. One candidate for an ultimate constraint on evolution is already widely recognized: historicity or path-dependence. Sometimes it is also known as lack of fit intermediaries (although that is a bigger assumption). The intuition is that a local peak might not be the tallest in the mountain range of fitness, so reaching it can prevent us from walking uphill to the tallest peak. Natural selection can guide the population to a local peak that isn’t very tall, and in doing so stop itself from finding another, taller local peak.
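This intuition can be made concrete with a small simulation. Below is a minimal Python sketch of fittest-mutant hill climbing on a random ‘house of cards’ landscape (the landscape, its size, and the random seed are invented for illustration, not drawn from any model in the literature): the walk always halts at some local peak, but that peak is typically far below the tallest one.

```python
import random

random.seed(1)
N = 10  # genotypes are bitstrings of length N

fitness = {}

def f(g):
    # Assign each genotype an independent random fitness on first lookup
    # (an illustrative 'house of cards' landscape).
    if g not in fitness:
        fitness[g] = random.random()
    return fitness[g]

def neighbours(g):
    # All one-bit mutants of genotype g.
    return [g[:i] + ('1' if g[i] == '0' else '0') + g[i + 1:] for i in range(N)]

def climb(g):
    # Fittest-mutant dynamics: move to the best neighbour until none is fitter.
    while True:
        best = max(neighbours(g), key=f)
        if f(best) <= f(g):
            return g
        g = best

peak = climb('0' * N)

# Enumerate the whole (small) landscape to find the tallest peak.
everyone = [format(i, '0{}b'.format(N)) for i in range(2 ** N)]
tallest = max(everyone, key=f)

assert all(f(m) <= f(peak) for m in neighbours(peak))  # stuck at a local peak
assert f(peak) <= f(tallest)  # usually strictly below the tallest peak
```

History traps the walk: once no one-bit mutant is fitter, selection alone cannot move the population, no matter how much taller the other peaks are.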

This ultimate constraint has directed much of the work on fitness landscapes toward how to avoid sub-optimal peaks or how a population might move from one peak to another. Usually, these two questions are answered with appeals to the strength of other evolutionary forces. A prototypical example: a small population size allows random drift to introduce deleterious mutations that push the population off a peak and into a fitness valley; then — if the population is lucky — natural selection carries it out of the valley to a new and higher peak.

The above example reveals an important feature of constraints and the reason I used scare quotes around ‘stop’ and ‘hinder’. From common use, we expect constraints to be impediments: we expect constraints to stop us or to make something worse. But sometimes, constraining ourselves can do the opposite. In this case, the constraint of random drift allowed a population to side-step a constraint of a low local peak.
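This side-stepping is easy to sketch in a toy simulation (a crude caricature with invented parameters, not a population-genetic model): let a hill climber occasionally take a random, possibly deleterious, step as a stand-in for drift in a small population.

```python
import random

random.seed(2)
N = 8  # genotypes are bitstrings of length N

fitness = {}

def f(g):
    if g not in fitness:
        fitness[g] = random.random()  # a random 'house of cards' landscape
    return fitness[g]

def neighbours(g):
    return [g[:i] + ('1' if g[i] == '0' else '0') + g[i + 1:] for i in range(N)]

def walk(g, drift, steps=300):
    # Greedy selection, except with probability `drift` we take a uniformly
    # random neighbour instead -- a crude stand-in for genetic drift.
    for _ in range(steps):
        if random.random() < drift:
            g = random.choice(neighbours(g))  # possibly a step downhill
        else:
            best = max(neighbours(g), key=f)
            if f(best) > f(g):
                g = best  # an uphill step
    return g

pure_selection = walk('0' * N, drift=0.0)
with_drift = walk('0' * N, drift=0.1)

# Pure selection halts at the first local peak it finds.
assert all(f(m) <= f(pure_selection) for m in neighbours(pure_selection))
```

The drifting walker can fall into valleys and, with luck, climb out to a taller peak than the one that stopped pure selection; nothing guarantees this on any particular run, which is exactly the ‘if the population is lucky’ in the prototypical example.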

But both questions — avoiding sub-optimal local peaks and moving from one peak to another — implicitly assume that local peaks are the norm for natural selection and easy to reach. Further, under the strict definition of an evolutionary constraint as something that stops us from reaching a local fitness peak, the existence of higher peaks is irrelevant. So in some way, historicity is not technically a constraint, since a local peak is still reached. The big issue is that we seldom consider that even reaching a local peak might be impossible in any reasonable amount of time. This comes from relying too much on our low-dimensional intuitions.

To avoid this, we can return to the math that was hinted at in the opening of this post: combinatorial optimization and computational complexity.

What I show in Kaznatcheev (2018) is that computational complexity can be an ultimate constraint on evolution. Using the tools of theoretical computer science, I introduce a distinction between the easy landscapes of our intuitions — where local fitness peaks can be found in a moderate number of steps — and hard landscapes, where finding local optima requires an infeasible amount of time. As I’ve discussed before in my post on abstraction, I prove this result for progressively more general and abstract evolutionary dynamics. For this generality, we pay with increasing complication in the corresponding fitness landscapes — in particular, in the kind of epistasis. If we restrict our evolutionary dynamics to fitter-mutant or fittest-mutant strong-selection weak-mutation, then sign epistasis alone is sufficient to ensure the existence of hard landscapes (Theorems 15, 20, 24). If we allow any adaptive evolutionary dynamics — i.e. when natural selection overpowers other forces, but in a non-obvious way — then reciprocal sign epistasis in the NK model with K > 1 is sufficient for hard landscapes (Corollary 28). If we want to show that arbitrary evolutionary dynamics cannot find local fitness optima — i.e. even if other forces conspire to help or hinder the force of natural selection in arbitrary ways — then we need K > 1 and the standard conjecture that FP != PLS (Theorem 27). If this is interesting to you, then I recommend the paper, where the discussion is much more detailed.
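To make the objects in these results concrete, here is a minimal Python sketch of a random NK model together with fitter-mutant strong-selection weak-mutation dynamics. The sizes and the random instance are invented for illustration; note that random instances like this are typically easy, while the hard landscapes in the paper are carefully constructed, not sampled at random.

```python
import random

random.seed(0)
N, K = 12, 2  # each locus's contribution depends on itself and K other loci

# Each locus i is linked to K randomly chosen other loci.
links = {i: random.sample([j for j in range(N) if j != i], K) for i in range(N)}
table = {}  # random fitness-contribution tables, filled lazily

def contribution(i, g):
    key = (i, g[i], tuple(g[j] for j in links[i]))
    if key not in table:
        table[key] = random.random()
    return table[key]

def F(g):
    # NK fitness: the average of the per-locus contributions.
    return sum(contribution(i, g) for i in range(N)) / N

def mutants(g):
    return [g[:i] + ('1' if g[i] == '0' else '0') + g[i + 1:] for i in range(N)]

def fitter_mutant_walk(g):
    # Strong-selection weak-mutation: jump to a uniformly random *fitter*
    # one-bit mutant, counting steps until no fitter mutant exists.
    steps = 0
    while True:
        better = [m for m in mutants(g) if F(m) > F(g)]
        if not better:
            return g, steps
        g = random.choice(better)
        steps += 1

peak, steps = fitter_mutant_walk('0' * N)
assert all(F(m) <= F(peak) for m in mutants(peak))  # a local fitness optimum
```

On a random instance the walk halts quickly; the content of the hardness results is that for suitably constructed instances with K > 1, no such walk (and, under FP != PLS, no dynamics at all) halts in feasible time.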

To finish, let me return to our earlier observation: constraints can be a positive. If we look at many of the proximal constraints on evolution, they usually replace the stasis of a fitness peak with the stasis of a balance of forces — maybe with some drift. In this way, they don’t really give us the thing that is most exciting about evolution: its continuing, on-going nature. The computational constraint, however, gives us this. Since the population is not able to find a local fitness peak, it always has beneficial mutations available and so can continue to climb in fitness. In fact, on hard landscapes, the trace of selection strengths drops off like a power-law but never reaches zero — at least not on a cosmologically feasible time-scale. This allows us to return to the biologist’s microscope. For the power-law drop-off is consistent with the rule of declining adaptability observed in various microbial long-term evolution experiments (for examples, see Wiser et al., 2013; Lenski et al., 2015; Couce & Tenaillon, 2015). In fact, Wiser et al. (2013) associate this with unbounded growth in fitness, or what I would call open-ended evolution. This can have fun consequences for various theoretical puzzles… but that’s a blog post for another time.
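The shape of this drop-off can be sketched directly. Wiser et al. (2013) fit relative fitness in the long-term lines with a power law of the form w(t) = (bt + 1)^a; with illustrative, made-up parameter values, fitness gains over equal-length epochs keep shrinking but never vanish:

```python
# Power-law fitness trajectory, w(t) = (b*t + 1)**a, the functional form fit by
# Wiser et al. (2013); the parameter values below are made up for illustration.
a, b = 0.1, 0.05

def w(t):
    return (b * t + 1) ** a

epoch = 1000  # generations per epoch
gains = [w((k + 1) * epoch) - w(k * epoch) for k in range(6)]

assert all(g > 0 for g in gains)                     # adaptation never stops...
assert all(x > y for x, y in zip(gains, gains[1:]))  # ...but it keeps slowing
```

Since 0 < a < 1 makes w concave, each epoch yields a strictly smaller but strictly positive gain: declining adaptability with unbounded growth in fitness.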


Ariew, A. (2003). Ernst Mayr’s ‘ultimate/proximate’ distinction reconsidered and reconstructed. Biology and Philosophy, 18(4): 553-565.

Barton, N., & Partridge, L. (2000). Limits to natural selection. BioEssays, 22(12): 1075-1084.

Couce, A., & Tenaillon, O. A. (2015). The rule of declining adaptability in microbial evolution experiments. Frontiers in Genetics, 6: 99.

Crespi, B. J. (2000). The evolution of maladaptation. Heredity, 84(6): 623.

Kaznatcheev, A. (2013). Complexity of evolutionary equilibria in static fitness landscapes. arXiv: 1308.5094.

Kaznatcheev, A. (2018). Computational complexity as an ultimate constraint on evolution. bioRxiv: 187682.

Lenski, R. E., Wiser, M. J., Ribeck, N., Blount, Z. D., Nahum, J. R., Morris, J. J., … & Burmeister, A. R. (2015). Sustained fitness gains and variability in fitness trajectories in the long-term evolution experiment with Escherichia coli. Proc. R. Soc. B, 282(1821): 20152292.

Mayr, E. (1961). Cause and effect in biology. Science, 134(3489): 1501-1506.

Wiser, M. J., Ribeck, N., & Lenski, R. E. (2013). Long-term dynamics of adaptation in asexual populations. Science, 1243357.

About Artem Kaznatcheev
From the Department of Computer Science at Oxford University and Department of Translational Hematology & Oncology Research at Cleveland Clinic, I marvel at the world through algorithmic lenses. My mind is drawn to evolutionary dynamics, theoretical computer science, mathematical oncology, computational learning theory, and philosophy of science. Previously I was at the Department of Integrated Mathematical Oncology at Moffitt Cancer Center, and the School of Computer Science and Department of Psychology at McGill University. In a past life, I worried about quantum queries at the Institute for Quantum Computing and Department of Combinatorics & Optimization at University of Waterloo and as a visitor to the Centre for Quantum Technologies at National University of Singapore.

Responses to Proximal vs ultimate constraints on evolution
Zhunping Julian Xue says:

    Hey Artem,

    I get that your post makes the distinction between proximal and ultimate constraint by saying that the former is due to not being able to make it to the local fitness maximum, while the latter is due to historicity — one is caught on a local fitness maximum without access to the global one (which may be theoretically impossible, as you show).

    But aren’t these two constraints one and the same? They’re both due to the lack of generation of novelty. In the proximal constraint, the organism cannot generate the mutants that would theoretically take it to the local fitness maximum (due to mutation bias, developmental constraints, etc.), but the same is true of the ultimate constraints. If at all times all possible variants — even those very far away — are generated, then there will be no historicity. There will be a mutant at or near the global fitness peak, and then we’re done. So why is the ultimate constraint different from the proximal constraint?

    It seems that there are two ways in which mutations are limited: one in which the shape of the mutational space around the organism is distorted (that is, not a nice sphere in trait space), and another in which the mutation space is simply limited in size (that is, not infinitely sized), and you make a distinction between the two — am I correct here?

    On another point, we conceptually divide between constraints and forces based on some intuition that natural selection is a force moving populations in trait space, but there are parts of that space which are inaccessible, leading to constraints. But this metaphor, I think, should not be taken too far. Something that is a constraint in one view can be a driving force in another, and vice versa. For example, natural selection is a constraint if we consider mutation bias a driving force. Take your yummy chicken example: your force is selection (artificial now) and your constraint is the generation of novelty. But it can be flipped around: let’s say that mutationally, we have lots of mutants that have neither good flesh nor yummy eggs. Thus, in the absence of selection, there is a force that pushes the chicken population to taste awful (and their eggs are no better). Then, our selection becomes a constraint: chickens will tend to have the worst flesh and worst eggs that are selectionally possible.

