Fitness landscapes as mental and mathematical models of evolution

A figure from Wright’s 1932 paper introducing fitness landscapes. I am surprised at how modern it looks.

As Jacob Scott pointed out, everybody — theorist or experimentalist — “has a logical construct (a model) in his or her head” when studying anything. This model might be mathematically explicit or only implicit in the mind, but it is there; and if the world is mechanistic (or if we only want to consider mechanistic theories of the world), then so is the model. One of the goals of philosophy (as well as of the theoretical parts of science) is to study these implicit (or explicit) models and understand whether they have any fundamental limitations or introduce biases that might be independent of the empirical world we hope they represent. Since theoretical computer science is a natural extension of the analytic approach to philosophy, and since it is perfectly adapted for studying abstract mechanistic models, it is my hope to use computer science to enlighten our understanding of our mental models. In the case of evolution, the prevalent mental (and later mathematical) model that I want to study was introduced in 1932 by Sewall Wright — the fitness landscape.

Critique of Chaitin’s algorithmic mutation

Last week, I reviewed Gregory Chaitin’s Proving Darwin. I concentrated on a broad overview of the book and metabiology, and did not touch on the mathematical details; this post starts to fill them in. The goal is to summarize Chaitin’s ideas more rigorously and to address Chaitin’s comment:

I disagree with your characterization of algorithmic mutations as directed.
They are in fact random.
Rgds,
GJC

Specifically, I need to (a) summarize the mathematical details of algorithmic mutations, (b) clarify the distinction between random mutation and (what I called) randomized directed mutation, and (c) defend my assertion that algorithmic mutations are not purely random but directed. I need to fully explain my earlier claim:

An algorithmic mutation actually combines the act of mutating and selecting into one step: it is an n-bit program that takes the current organism A and outputs a new organism B of higher fitness (note that it needs an oracle call for the halting problem to do so). The probability that A is replaced by B is then 2^{-n}. There is no way to decouple the selection step from the mutation step in an algorithmic mutation, although this is not clear without the technical details, which I will postpone until a future post. Chaitin’s model does not have random mutations; it has randomized directed mutations.

The high-level description of algorithmic mutations given in Chapter 5 is not necessarily inconsistent with random mutations, but it is not detailed enough to make a concrete judgement. We have to look at Appendix 2 (The Heart of the Proof) to really understand how Chaitin’s mutations act. As before, an organism A is a halting self-delimiting program, and A’s fitness \phi(A) is the integer A outputs upon halting. I will use t(A) to mean the running time of A and will set t(P) = \infty for a non-halting program P.

A mutation M_K works as follows. M_K(A) calculates a lower bound \Omega_{\phi(A)} = \sum_{P \in S(\phi(A))} \frac{1}{2^{|P|}} of Chaitin’s constant \Omega, where S(N) is the set of programs smaller than N bits in length that halt within N steps. M_K then calculates a new number \beta = \Omega_{\phi(A)} + 2^{-K} and mutates A into A' = \pi\beta. Here \pi is a self-delimiting prefix that finds the first N such that \Omega_N \geq \beta, outputs N, and halts. If \beta < \Omega then A' halts and is thus an organism, with fitness \phi(A') equal to that first N (which is strictly greater than \phi(A), since \Omega_{\phi(A)} < \beta). If \beta \geq \Omega then A' is a non-halting program and thus not an organism (or at best an organism with an undefined fitness \phi(A')). Chaitin’s mutations either return a strictly more fit organism, or do not return one at all (equivalently: an organism with undefined fitness).
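To make these mechanics concrete, here is a minimal Python sketch of M_K; this is my own illustration (not code from the book), and omega_lower_bound is a hypothetical stand-in for the halting-oracle subroutine that computes \Omega_N. All that M_K needs from the current organism A is its fitness \phi(A).

from fractions import Fraction

def omega_lower_bound(N):
    # Hypothetical stand-in for the oracle subroutine that computes Omega_N:
    # the sum of 2^{-|P|} over programs P in S(N). It is uncomputable without
    # access to the halting problem, so it is only a stub here.
    raise NotImplementedError("requires a halting oracle")

def M(K, fitness_of_A):
    # The mutation M_K applied to an organism A with fitness phi(A) = fitness_of_A:
    # propose the mutant A' = pi-beta with beta = Omega_{phi(A)} + 2^{-K}.
    beta = omega_lower_bound(fitness_of_A) + Fraction(1, 2 ** K)
    # If beta < Omega, then pi-beta halts and outputs the first N with
    # Omega_N >= beta, which is strictly larger than phi(A); if beta >= Omega,
    # then pi-beta never halts. Mutation and selection are fused: M_K can only
    # ever hand back a strictly fitter organism.
    return beta  # the rational suffix of the proposed mutant pi-beta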

To simulate evolution (as opposed to a deterministic picking of mutations), Chaitin selects the mutations randomly, with mutation M_K chosen with probability proportional to 2^{-|M_K|}. This is probably why he describes them as ‘random’; however, picking a directed mutation at random is just randomized directed mutation. The concept of a mutant having lower fitness is undefined in metabiology. A defense would be to say that algorithmic mutations combine the steps of mutation and selection into one — this is a technique often used in macroevolutionary models. However, I cannot grant this defense, because in macroevolutionary models, even when a fitter agent is always selected, the mutations are still capable of defining an unfit individual. Further, when macroevolutionary models set up such a hard gradient (always selecting the fitter mutant), they are not used to study fitness, because it is trivial to show unbounded fitness growth in such models.

Is Chaitin’s model the simplest it can be? Chaitin spends the first four chapters defending algorithms/software as metaphors for genes. Does metabiology actually use the power of algorithms? Chaitin does not attempt to address abiogenesis in his model, and the first organism is an arbitrary program. Every subsequent program arises from the original by a sequence of algorithmic mutations, but every mutation returns a program of the form \pi \beta where \beta is some rational number greater than 0 and less than \Omega. In other words, the genes are only capable of programming:

N = 1
while \Omega_N < \beta do N = N + 1 end
print N

I hid the subroutine that computes \Omega_N, but we will see that it is a superficial feature. \beta and the N that the algorithm returns are completely equivalent: a larger \beta returns a larger N. Since the prefix \pi is always repeated, I will just represent organisms by the rational number after \pi. Instead of \pi \beta, I will write \beta. Now, I can simply re-define the fitness function:

\phi(\beta) = \begin{cases}   \beta & \mathrm{if} \; \beta < \Omega \\   \bot & \mathrm{otherwise}    \end{cases}

and the mutation operator:

M_K(\beta) = \begin{cases}   \beta + \frac{1}{2^K} & \mathrm{if} \; \beta < \Omega - \frac{1}{2^K} \\  \beta & \mathrm{otherwise}  \end{cases}

and metabiology remains completely unchanged, with the exception of the original organism, which we can just set to 0 for our purposes. Of course, instead of the fitness growing higher and higher, it grows to be a better and better lower bound for \Omega. All the algorithms are stripped away, except in that \Omega comes from algorithmic information theory. However, we don’t need \Omega to achieve all of Chaitin’s results: we can use any irrational number between 0 and 1, like \frac{1}{\sqrt{2}} (in fact, the number doesn’t even need to be irrational; it just has to have a non-terminating expansion in base 2). In other words, the algorithmic part of metabiology is superficial.
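To check this claim concretely, here is a small runnable Python sketch of the stripped-down model; the code and its names are my own illustration, with 1/\sqrt{2} standing in for \Omega, an organism represented by the number \beta, and the hard-max dynamics producing an ever-improving lower bound.

import math
import random

TARGET = 1 / math.sqrt(2)  # stands in for Omega; any non-terminating binary expansion works
rng = random.Random(0)

def sample_K():
    # Pick K = 1, 2, 3, ... with probability 2^{-K} (a geometric distribution).
    K = 1
    while rng.random() < 0.5:
        K += 1
    return K

def mutate(beta, K):
    # The directed mutation M_K on the simplified organism beta.
    step = 2.0 ** -K
    return beta + step if beta + step < TARGET else beta

beta = 0.0  # the original organism
for _ in range(100000):
    beta = mutate(beta, sample_K())
print(TARGET - beta)  # the gap to the target shrinks as the number of steps grows

Nothing in this sketch knows about programs or halting; swapping TARGET for \Omega (via a halting oracle) recovers Chaitin’s setting.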

Further, by looking at metabiology in the simple terms I described above, we can also overcome the directed nature of ‘algorithmic’ mutations. We just modify the mutation operator to be indexed by K and a sign (+ or -), with both the index and the sign selected from some distribution. A natural choice is to pick K with probability proportional to 2^{-K} and the sign uniformly at random. Now, we can define a random mutation operator:

M_{K,+}(\beta) = \begin{cases}   \beta + \frac{1}{2^K} & \mathrm{if} \; \beta < \Omega - \frac{1}{2^K} \\  \beta & \mathrm{otherwise}  \end{cases}

and

M_{K,-}(\beta) = \begin{cases}   \beta - \frac{1}{2^K} & \mathrm{if} \; \beta \geq \frac{1}{2^K} \\  \beta & \mathrm{otherwise}  \end{cases}

Then, given a focal organism \beta and a mutant \beta', we can use our favorite selection rule to choose who survives. A tempting one is to use Chaitin’s hard-max and let whoever is larger survive. Alternatively, we can use a softer rule, like letting \beta' take over with probability proportional to \exp(\beta' - \beta) (a popular rule in evolutionary game theory models). Or we can place ourselves squarely within neutral evolution and just say that \beta' always takes over regardless of fitness. This would create a random walk model, which can still show an increase in complexity, as proposed by Gould (1997) and summarized on this blog by Julian.
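Continuing the sketch above (again my own hypothetical code, with the same stand-in target), the undirected mutations and a few candidate selection rules might look like this:

import math
import random

TARGET = 1 / math.sqrt(2)  # the same stand-in for Omega as before
rng = random.Random(1)

def sample_K():
    # K = 1, 2, 3, ... with probability 2^{-K}.
    K = 1
    while rng.random() < 0.5:
        K += 1
    return K

def random_mutant(beta):
    # M_{K,s}: pick K with probability 2^{-K} and a sign s uniformly at random.
    step = 2.0 ** -sample_K()
    if rng.random() < 0.5:
        return beta + step if beta + step < TARGET else beta  # M_{K,+}
    return beta - step if beta >= step else beta               # M_{K,-}

def hard_max(beta, mutant):
    # Chaitin-style hard selection: whoever is larger survives.
    return max(beta, mutant)

def soft_selection(beta, mutant):
    # One reading of 'proportional to exp(beta_mutant - beta)': a Fermi-style rule
    # where the mutant fixes with probability 1 / (1 + exp(beta - mutant)).
    return mutant if rng.random() < 1.0 / (1.0 + math.exp(beta - mutant)) else beta

def neutral(beta, mutant):
    # Neutral evolution: the mutant always takes over, giving a random walk.
    return mutant

beta = 0.0
for _ in range(100000):
    beta = hard_max(beta, random_mutant(beta))  # swap in soft_selection or neutral to compare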

References

Chaitin, G. (2009). Evolution of Mutating Software. EATCS Bulletin, 97: 157-164.

Is Chaitin proving Darwin with metabiology?

Cover of Proving Darwin

Algorithmic information theory (AIT) allows us to study the inherent structure of objects, and qualify some as ‘random’ without reference to a generating distribution. The theory originated when Ray Solomonoff (1960), Andrey Kolmogorov (1965), and Gregory Chaitin (1966) looked at probability, statistics, and information through the algorithmic lens. Now the theory has become a central part of theoretical computer science, and a tool with which we can approach other disciplines. Chaitin uses it to formalize biology.

In 2009, he originated the new field of metabiology, a computation theoretic approach to evolution (Chaitin, 2009). Two months ago, Chaitin published his introduction and defense of the budding field: Proving Darwin: Making Biology Mathematical. His goal is to distill the essence of evolution, formalize it, and provide a mathematical proof that it ‘works’. I am very sympathetic to this goal.

Chaitin’s conviction that evolution can be formalized stems from his deeply Platonic view of the world. Since evolution is so beautiful and ubiquitous, there must be a pure perfect form of it. We have to look past the unnecessary details and extract the essence. For Chaitin this means ignoring everything except the genetic code. The physical form of the organisms is merely a vessel and tool for translating genes into fitness. Although this approach might seem frightening at first, it is not foreign to biologists; Chaitin is simply taking the gene-centered view of evolution.

As for the genetic code, Chaitin takes the second word very seriously; genes are simply software to be transformed into fitness by the hardware that is the physics of the organisms’ environment. The only things that inhabit this formal world are self-delimiting programs — a technical term from AIT meaning that no extension of a valid program is valid. This allows us to define a probability measure over programs written as finite binary strings, which will be necessary in the technical section.
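Concretely, the prefix-free condition is what makes this measure well defined: by the Kraft inequality, the weights 2^{-|p|} of the valid programs satisfy

\sum_{p \; \mathrm{valid}} 2^{-|p|} \leq 1

so each program p can be given probability 2^{-|p|}. This is also where the probability 2^{-n} of an n-bit mutation, mentioned below, comes from.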

Physics simply runs the programs. If the program halts then the natural number it outputs is the program’s fitness. In other words, we have a perfectly static environment. If you were interested in ecology or evolutionary game theory, then Chaitin just threw you out with the bath water. If you were interested in modeling, and wanted to have something computable define your fitness, then tough luck. Finally, in a fundamental biological theory, I would expect fitness to be something we measure when looking at the organisms, not a fundamental quantity inherent in the model. In biology, a creature simply reproduces or doesn’t, survives or doesn’t; fitness is something the observer defines when reasoning about the organisms. Why does Chaitin not derive fitness from more fundamental properties like reproduction and survival?

In Chaitin’s approach there is no reproduction; there is only one organism mutating through time. If you are interested in population biology or speciation, then you cannot study them in this model. The mutations are not point-mutations, but what Chaitin calls algorithmic mutations. An algorithmic mutation actually combines the act of mutating and selecting into one step: it is an n-bit program that takes the current organism A and outputs a new organism B of higher fitness (note that it needs an oracle call for the halting problem to do so). The probability that A is replaced by B is then 2^{-n}. There is no way to decouple the selection step from the mutation step in an algorithmic mutation, although this is not clear without the technical details, which I will postpone until a future post. Chaitin’s model does not have random mutations; it has randomized directed mutations. Fitness as a basic assumption, a static environment, and directed mutations make this a teleological model — a biologist’s nightmare.

What does Chaitin achieve? His primary result is to show biological creativity, which in this model means a constant (and fast) increase in fitness. His secondary result is to distinguish between three types of design: blind search, evolution, and intelligent design. He shows that to arrive at an organism that has the maximum fitness of any n-bit organism (this is the number BB(n) — the nth busy beaver number), blind search requires on the order of 2^n steps, evolution requires between n^2 and n^3 steps, and intelligent design (which selects the best algorithmic mutation at each step) requires n steps. These are interesting questions, but what do they have to do with Darwin?

Does Chaitin prove Darwin?

We are finally at the central question of this post. To answer this, we need to understand what Darwin achieved. The best approach is to look at Mayr’s (1982) five facts and three inferences that define Darwin’s natural selection:

  • Fact 1: Population increases exponentially if all agents get to reproduce.
    Metabiology: There is a single agent, and it does not reproduce.
  • Fact 2: Population is stable except for occasional fluctuations.
    Metabiology: There is always exactly one agent, thus stable.
  • Fact 3: Resources are limited and relatively constant.
    Metabiology: Resources are not defined.
  • Inference 1: There is a fierce competition for survival, with only a small fraction of the progeny of each generation making it to the next.
    Metabiology: Every successful mutation makes it to the next generation.
  • Fact 4: No two agents are exactly the same.
    Metabiology: There is only one agent.
  • Fact 5: Much of this variation is heritable.
    Metabiology: Nothing is heritable; a new mutant has nothing to do with the previous agent except having a higher fitness.
  • Inference 2: Survival depends in part on the heredity of the agent.
    Metabiology: A mutant is created/survives only if it is more fit than the focal agent.
  • Inference 3: Over generations this produces continual gradual change.
    Metabiology: The agent constantly improves in fitness.

The only thing to add to the above list is the method for generating variation: random mutation. As we saw before, metabiology uses directed mutation. From the above, it mostly seems like Chaitin and Darwin were concerned with different things. Chaitin doesn’t prove Darwin.

However, I don’t think Chaitin’s exercise was fruitless. It is important to try to formalize the basic essence of evolution and to prove theorems about it. Still, Chaitin needs to remember what made his development of algorithmic information theory so successful: AIT was able to address existing questions of interest in novel ways. So the lesson of this post is to concentrate on the questions biologists want to answer (or have already answered) when building a formal model. Make sure that your formal approach can at least express some of the questions a biologist would want to ask.

References

Chaitin, G. (1966). On the Length of Programs for Computing Finite Binary Sequences. J. Association for Computing Machinery 13(4): 547–569.

Chaitin, G. (2009). Evolution of Mutating Software. EATCS Bulletin, 97: 157-164.

Kolmogorov, A. (1965). Three approaches to the definition of the quantity of information. Problems of Information Transmission 1: 3–11

Mayr, E. (1982). The Growth of Biological Thought. Harvard University Press. ISBN 0-674-36446-5

Solomonoff, R. (1960). A Preliminary Report on a General Theory of Inductive Inference. Technical Report ZTB-138, Zator Company, Cambridge, Mass.