Semi-smooth fitness landscapes and the simplex algorithm

As you might have guessed from my strange and complicated name, I’m Russian. One of the weird features of this is that even though I have never had to experience war, I still feel a strong cultural war-weariness. This stems from an ancestral memory of the Second World War, a conflict that had an extremely disruptive effect on Russian society. None of my great-grandfathers survived the war; one of them was a train engineer who died trying to drive a train of provisions over the Road of Life to resupply Leningrad during its 29-month siege. Since the Germans blocked all the land routes, part of the road ran over the ice of Lake Ladoga: trucks had to be optimally spaced, far enough apart not to crack the ice that separated them from a watery grave, while maximizing the amount of supplies transported into the city. Leonid Kantorovich, the Russian mathematician and economist who developed linear programming as the war was starting in western Europe, ensured safety by calculating the optimal distance between cars depending on the ice thickness and air temperature. In the first winter of the road, Kantorovich would personally walk between trucks on the ice to ensure his guidelines were followed and to reassure the men of the reliability of mathematical programming. Like his British counterpart, Kantorovich was applying the algorithmic lens to help the Allied war effort and the safety of his people. Although I can never reciprocate the heroism of these great men, stories like this convince me that the algorithmic lens can provide a powerful perspective in economics, engineering, or science. This is one of the many inspirations behind my most recent project (Kaznatcheev, 2013) applying the tools of theoretical computer science and mathematical optimization — such as linear programming — to better understand the rate of evolutionary dynamics.

Epistasis in fitness graphs of two loci. Arrows point from lower fitness to higher fitness, and AB always has higher fitness than ab. From left to right no epistasis, sign epistasis, reverse sign epistasis.

Computer science relies heavily on discrete math, and so it is best suited for studying structural rather than statistical features of evolution. As I discussed previously, the central structure of interest for biologists studying fitness landscapes is epistasis, of which there are three kinds: magnitude, sign, and reciprocal sign. The latter two can have a qualitative effect on dynamics, while the first can only have a less pronounced quantitative effect.

Getting around reciprocal sign epistasis

Shortest paths from abc to ABc are blocked by the reciprocal sign epistasis of the red edges. However, an alternative adaptive path exists along the green edges that first introduces the C allele to reach ABC, but then removes it to return to ABc.

For example, a pair of loci has sign epistasis when the direction of a mutation's effect depends on the background: if the second locus is b then the mutation from a to A is not adaptive, but if the second locus is B then the mutation from a to A is adaptive. A system has reciprocal sign epistasis if there is sign epistasis on both loci (Poelwijk et al., 2007). The presence of reciprocal sign epistasis is a necessary but not sufficient (see figure at right) condition for multiple peaks (Poelwijk et al., 2007, 2011). In fact, there is no local property in terms of just reciprocal sign epistasis that is sufficient for the existence of multiple peaks (Crona et al., 2013), and I conjecture that there is absolutely no local (or polynomial-time testable) property that characterizes multiple peaks.
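These definitions are easy to check mechanically. As a minimal sketch (with made-up fitness values purely for illustration), the three kinds of epistasis on a two-locus system can be classified as follows:

```python
# Genotypes for two loci: locus 1 is a/A, locus 2 is b/B.
# f maps genotype strings to fitness values (toy numbers, not data).

def sign_epistasis_locus1(f):
    # Does the sign of the a -> A effect depend on the b/B background?
    return (f["Ab"] - f["ab"]) * (f["AB"] - f["aB"]) < 0

def sign_epistasis_locus2(f):
    # Does the sign of the b -> B effect depend on the a/A background?
    return (f["aB"] - f["ab"]) * (f["AB"] - f["Ab"]) < 0

def classify(f):
    s1, s2 = sign_epistasis_locus1(f), sign_epistasis_locus2(f)
    if s1 and s2:
        return "reciprocal sign epistasis"
    if s1 or s2:
        return "sign epistasis"
    return "no sign epistasis"

print(classify({"ab": 1, "Ab": 2, "aB": 2, "AB": 3}))  # no sign epistasis
print(classify({"ab": 2, "Ab": 1, "aB": 3, "AB": 4}))  # sign epistasis
print(classify({"ab": 2, "Ab": 1, "aB": 1, "AB": 3}))  # reciprocal sign epistasis
```

The last example matches the right panel of the figure: both single mutants are less fit than ab, yet AB is the fittest genotype.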

If there is no sign epistasis in the fitness landscape, then every shortest path (looking at 1-mutant neighbors but ignoring fitness) is also an adaptive path (i.e. monotonically increasing in fitness). On such a landscape, it is trivial to see that the global fitness peak is found in at most n adaptive steps. In terms of epistasis, to make the landscape non-trivial we need to introduce at least some sign epistasis. I define a semi-smooth landscape by any one of the following three conditions (which I show to be equivalent):

  1. No reciprocal sign epistasis,
  2. Every genotype has at least one shortest path (ignoring fitness) that is also an adaptive path (i.e. monotonically increasing in fitness) to the unique global fitness peak, and
  3. Every subcube of the fitness landscape has a unique fitness peak.
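The third condition can be checked directly by brute force on small landscapes. Here is a sketch (exponential time, so illustration only; both landscapes are toy examples with invented fitness values, and I assume distinct fitness values so that peaks are unambiguous):

```python
from itertools import product

def flip(v, i):
    w = list(v)
    w[i] ^= 1
    return tuple(w)

def subcube_peaks(f, n, fixed):
    # Local fitness peaks of the subcube where loci in `fixed` are pinned
    # to given bits and the remaining loci are free to vary.
    free = [i for i in range(n) if i not in fixed]
    peaks = []
    for bits in product([0, 1], repeat=len(free)):
        v = [0] * n
        for i, b in fixed.items():
            v[i] = b
        for i, b in zip(free, bits):
            v[i] = b
        v = tuple(v)
        # v is a peak of the subcube if every free-coordinate flip is less fit
        if all(f[flip(v, i)] < f[v] for i in free):
            peaks.append(v)
    return peaks

def is_semi_smooth(f, n):
    # Condition 3: every subcube has a unique local fitness peak.
    for pattern in product([None, 0, 1], repeat=n):
        fixed = {i: b for i, b in enumerate(pattern) if b is not None}
        if len(subcube_peaks(f, n, fixed)) != 1:
            return False
    return True

# An additive landscape (no epistasis at all) is semi-smooth:
smooth = {v: 1 * v[0] + 2 * v[1] + 4 * v[2] for v in product([0, 1], repeat=3)}
print(is_semi_smooth(smooth, 3))  # True

# Reciprocal sign epistasis creating a second peak violates condition 3:
rugged = {(0, 0): 2, (0, 1): 1, (1, 0): 1, (1, 1): 3}
print(is_semi_smooth(rugged, 2))  # False
```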

The last condition is the same as the definition of acyclic unique sink orientations (AUSOs) of graphs from the analysis of simplex algorithms — a class of algorithms developed by George Dantzig after the war to solve Kantorovich’s linear programs. This allows me to use the decades of rigorous results on linear programming and the simplex algorithm to analyze semi-smooth fitness landscapes. In particular, it lets me restate Matousek & Szabo’s (2006) theorem in biological terms:

Theorem 12 (Kaznatcheev, 2013):
There are positive constants c_1, c_2 such that for all sufficiently large n there exists a semi-smooth landscape on \{0,1\}^n such that strong selection weak mutation (SSWM) dynamics starting from a random wild type, with probability at least 1 - e^{-c_1n^{1/3}} follows an adaptive path of at least e^{c_2n^{1/3}} adaptive steps to the fitness peak.

Here SSWM dynamics is interpreted as selecting a mutant uniformly at random from the set of fitter 1-mutant neighbours. In other words, even though a short path exists, evolution spends exponentially long wandering around trying to find it. At first glance this might seem like an artifact of the random decisions made by SSWM; a biologist expressed this intuition very cleanly in a recent email:

If the population is large enough, it will take better steps with higher probability (see Orr’s and Gillespie’s papers). If the population is very large, it will behave almost as a greedy walker (although not quite). My intuition is that if the population takes better steps with higher probability, it will reach the peak much sooner than your theorem 12 suggests.

The power of computer science is to show that this intuition is not true. In fact, if we take the limit of extremely large populations and say that the population always selects the fittest adjacent neighbour, then it is even easier to build semi-smooth landscapes where evolution will require an exponential number of steps. Dantzig’s original version of the simplex algorithm actually followed this infinite-population strategy by always picking the best pivot (fittest mutant), and the first hardness result was shown for exactly this pivot rule!

In 1972, Klee and Minty (see also Jeroslow, 1973) constructed what is now known as the Klee-Minty cube, an example of a hypercube on which Dantzig’s original simplex algorithm takes an exponential number of steps. In fact, if you start at the lowest-fitness vertex and always move to the fittest neighbour, then you will visit all 2^n vertices of the hypercube before finding the peak. In biological terms, this means that always selecting the fittest mutant is no better at finding a fitness peak in a semi-smooth landscape. Starting at a random vertex on the semi-smooth Klee-Minty cube, it is expected to take (2^n + 1)/2 adaptive steps. The proof of this is much simpler than Matousek & Szabo’s (2006) proof for the random-fitter-mutant rule, but Matousek & Szabo’s construction is still inspired by the Klee-Minty cube, as are most of the known counter-examples for other simplex pivot rules (Amenta & Ziegler, 1996).
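To make the two dynamics concrete, here is a minimal sketch of both walkers: the random-fitter-mutant SSWM rule and the greedy fittest-mutant rule. The Klee-Minty fitness function itself is not reproduced here, so the demonstration uses a smooth additive toy landscape on which both rules trivially reach the peak in n steps:

```python
import random
from itertools import product

def flip(v, i):
    w = list(v)
    w[i] ^= 1
    return tuple(w)

def adaptive_walk(f, start, greedy=False, rng=None):
    # greedy=False: SSWM, uniform choice among fitter 1-mutant neighbours.
    # greedy=True: infinite-population limit, always the fittest neighbour.
    rng = rng or random.Random(0)
    v, steps = start, 0
    while True:
        fitter = [w for w in (flip(v, i) for i in range(len(v))) if f[w] > f[v]]
        if not fitter:
            return v, steps  # local (here also global) peak reached
        v = max(fitter, key=f.get) if greedy else rng.choice(fitter)
        steps += 1

# Smooth additive landscape on {0,1}^5 with positive weights:
n = 5
f = {v: sum(2**i * b for i, b in enumerate(v)) for v in product([0, 1], repeat=n)}
print(adaptive_walk(f, (0,) * n))               # ((1, 1, 1, 1, 1), 5)
print(adaptive_walk(f, (0,) * n, greedy=True))  # ((1, 1, 1, 1, 1), 5)
```

On the semi-smooth Klee-Minty cube, by contrast, the greedy walker started at the lowest-fitness vertex takes 2^n - 1 steps.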

Of course, we could pick some other rule for picking mutants that hasn’t been studied before by computer scientists and assert that “this rule doesn’t take exponential time”. In particular, consider the following rule: if the fitness of the current vertex is f(u), then select a fitter mutant v with probability proportional to f(v) - f(u). This particular rule was preferred by Gillespie (1983; Orr, 2005), and has not been explicitly studied by computer scientists yet. However, for every simplex rule that has been analyzed so far (and there have been many; see Amenta & Ziegler (1996) and Goldfarb (1994)), counter-examples with exponentially long paths have been found. Why should we expect otherwise for Gillespie’s rule? Especially since it sits so naturally between selecting an improving step uniformly at random and selecting the best one. In fact, the conjecture I would enjoy going after is:

Consider a selection rule that, given a wild type u, selects an adjacent fitter mutant v with probability proportional to (f(v) - f(u))^r for some r \geq 0. For every such rule, there exists a semi-smooth landscape on which the selection rule results in an expected number of adaptive steps that is super-polynomial in n.

This conjecture is independent of biology, is true for r = 0 and r \rightarrow \infty, captures Gillespie’s rule for r = 1, and seems within reach of current proof techniques in the discrete math literature. Let me know if you know how to prove it! In general, I think that for any population that considers only point-mutants, there are semi-smooth fitness landscapes on which it will not find the fitness peak in any reasonable amount of time.
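The parameterized rule in the conjecture is straightforward to implement. A sketch (the landscape here is again a toy additive one with invented weights, not a conjectured counter-example):

```python
import random
from itertools import product

def flip(v, i):
    w = list(v)
    w[i] ^= 1
    return tuple(w)

def select_mutant(f, v, r, rng):
    # Pick a fitter neighbour w with probability proportional to
    # (f(w) - f(v))**r: r = 0 recovers the uniform SSWM rule, r = 1 is
    # Gillespie's rule, and r -> infinity approaches the greedy rule.
    gains = {w: (f[w] - f[v]) ** r
             for w in (flip(v, i) for i in range(len(v))) if f[w] > f[v]}
    if not gains:
        return None  # v is a local peak
    x = rng.random() * sum(gains.values())
    for w, g in gains.items():
        x -= g
        if x <= 0:
            return w
    return w  # guard against floating-point round-off

def walk_length(f, start, r, seed=0):
    rng = random.Random(seed)
    v, steps = start, 0
    while (w := select_mutant(f, v, r, rng)) is not None:
        v, steps = w, steps + 1
    return steps

n = 4
f = {v: sum((i + 1) * b for i, b in enumerate(v)) for v in product([0, 1], repeat=n)}
for r in (0, 1, 10):
    print(r, walk_length(f, (0,) * n, r))  # always 4 steps on this smooth landscape
```

The conjecture claims that for every fixed r there is a semi-smooth landscape on which the expected number of steps of this walk is super-polynomial; the sketch just pins down the rule so the conjecture’s objects are concrete.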

This is the sixth of a series of posts on computational considerations of empirical and theoretical fitness landscapes. In the previous post I gave a historic overview of fitness landscapes as mental and mathematical metaphors, highlighted our empirical knowledge of them, explained the Orr-Gillespie theory for minimizing statistical assumptions, introduced the NK and block models, and presented an executive summary of the computational complexity of evolutionary equilibria.

References

Amenta, N, & Ziegler, G.M. (1996). Deformed products and maximal shadows of polytopes. Advances in discrete and computational geometry. 10-19.

Crona, K., Greene, D., & Barlow, M. (2013). The peaks and geometry of fitness landscapes. Journal of Theoretical Biology.

Goldfarb, D. (1994). On the complexity of the simplex algorithm. In Advances in optimization and numerical analysis. Dordrecht, Kluwer. 25-38.

Gillespie, J.H. (1983). A simple stochastic gene substitution model. Theor. Pop. Biol. 23: 202-215.

Jeroslow, R.G. (1973). The simplex algorithm with the pivot rule of maximizing criterion improvement. Discrete Mathematics 4(4): 367-377.

Kaznatcheev, Artem (2013). Complexity of evolutionary equilibria in static fitness landscapes. arXiv preprint arXiv: 1308.5094v1.

Klee, V., & Minty, G.J. (1972). How good is the simplex algorithm?. In Shisha, Oved. Inequalities III (Proceedings of the Third Symposium on Inequalities). New York-London: Academic Press. 159–175.

Matousek, J., & Szabo, T. (2006). RANDOM EDGE can be exponential on abstract cubes. Advances in Mathematics, 204, 262-277 DOI: 10.1016/j.aim.2005.05.021

Orr H.A. (2005). The genetic theory of adaptation: a brief history. Nature Reviews Genetics, 6 (2), 119-27

Poelwijk, F.J., Kiviet, D.J., Weinreich, D.M., & Tans, S.J. (2007). Empirical fitness landscapes reveal accessible evolutionary paths. Nature 445: 383-386.

Poelwijk, F.J., Sorin, T.-N., Kiviet, D.J., Tans, S.J. (2011). Reciprocal sign epistasis is a necessary condition for multi-peaked fitness landscapes. Journal of Theoretical Biology 272: 141-144.

About Artem Kaznatcheev
From the Department of Computer Science at Oxford University and Department of Translational Hematology & Oncology Research at Cleveland Clinic, I marvel at the world through algorithmic lenses. My mind is drawn to evolutionary dynamics, theoretical computer science, mathematical oncology, computational learning theory, and philosophy of science. Previously I was at the Department of Integrated Mathematical Oncology at Moffitt Cancer Center, and the School of Computer Science and Department of Psychology at McGill University. In a past life, I worried about quantum queries at the Institute for Quantum Computing and Department of Combinatorics & Optimization at University of Waterloo and as a visitor to the Centre for Quantum Technologies at National University of Singapore.