# Critique of Chaitin’s algorithmic mutation

Last week, I reviewed Gregory Chaitin's *Proving Darwin*. I concentrated on a broad overview of the book and metabiology, and did not touch on the mathematical details; this post starts to fill in those details. The goal is to summarize Chaitin's ideas more rigorously and to address Chaitin's comment:

> I disagree with your characterization of algorithmic mutations as directed.
> They are in fact random.
> Rgds,
> GJC

Specifically, I need to (a) summarize the mathematical details of algorithmic mutations, (b) clarify the distinction between random mutation and (what I called) randomized directed mutation, and (c) defend my assertion that algorithmic mutations are not purely random but directed. I need to fully explain my earlier claim:

> Algorithmic mutations actually combine the acts of mutating and selecting into one step: a mutation is an $n$-bit program that takes the current organism A and outputs a new organism B of higher fitness (note that it needs an oracle call for the halting problem to do so). The probability that A is replaced by B is then $2^{-n}$. There is no way to decouple the selection step from the mutation step in an algorithmic mutation, although this is not clear without the technical details, which I will postpone until a future post. Chaitin's model does not have random mutations; it has randomized directed mutations.

The high-level description of algorithmic mutations given in Chapter 5 is not necessarily inconsistent with random mutations, but it is not detailed enough to make a concrete judgement. We have to look at Appendix 2 (The Heart of the Proof) to really understand how Chaitin's mutations act. As before, an organism $A$ is a halting self-delimiting program, and $A$'s fitness $\phi(A)$ is the integer $A$ outputs upon halting. I will use $t(A)$ to mean the running time of $A$ and will set $t(P) = \infty$ for a non-halting program $P$.

A mutation $M_K$ works as follows. $M_K(A)$ calculates a lower bound $\Omega_{\phi(A)} = \sum_{B \in S(\phi(A))} \frac{1}{2^{|B|}}$ of Chaitin's constant $\Omega$, where $S(N)$ is the set of programs shorter than $N$ bits in length that halt within $N$ steps. $M_K$ then calculates a new number $\beta = \Omega_{\phi(A)} + 2^{-K}$ and mutates $A$ into $A' = \pi\beta$. Here $\pi$ is a self-delimiting prefix that finds the first $N$ such that $\Omega_N \geq \beta$, outputs $N$, and halts. If $\beta < \Omega$ then $A'$ halts and is thus an organism with fitness $\phi(A') > \phi(A)$ (since $\Omega_{\phi(A)} < \beta \leq \Omega_{\phi(A')}$, and $\Omega_N$ is non-decreasing in $N$). If $\beta \geq \Omega$ then $A'$ is a non-halting program and thus not an organism (or, at best, an organism with undefined fitness $\phi(A')$). Chaitin's mutations either return a strictly fitter organism, or do not return one at all (equivalently: an organism with undefined fitness).

To simulate evolution (as opposed to a deterministic picking of mutations), Chaitin selects the mutations randomly, with mutation $M_K$ selected with probability proportional to $2^{-|M_K|}$. This is probably why he describes them as 'random'; however, picking a directed mutation at random is just randomized directed mutation. The concept of a mutant having lower fitness is undefined in metabiology. A defense would be to say that algorithmic mutations actually combine the steps of mutation and selection into one; this is a technique often used in macroevolutionary models. However, I cannot grant this defense, because in macroevolutionary models, even when the fitter agent is always selected, the mutations are still capable of defining an unfit individual. Further, when macroevolutionary models do set up such a hard gradient (always selecting the fitter mutant), they are not used to study fitness, because it is trivial to show unbounded fitness growth in such models.

Is Chaitin's model the simplest it can be? Chaitin spends the first four chapters defending algorithms/software as metaphors for genes; does metabiology actually use the power of algorithms? Chaitin does not attempt to address abiogenesis in his model, and the first organism is an arbitrary program. Every subsequent program arises from the original by a sequence of algorithmic mutations, but every mutation returns a program of the form $\pi \beta$ where $\beta$ is some rational number greater than $0$ and less than $\Omega$. In other words, the genes are only capable of programming:

```
N = 1
while Omega_N < beta do
    N = N + 1
end
print N
```
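As a concrete (toy) illustration, here is a Python sketch of this program. Since $\Omega$ is uncomputable, I substitute a hypothetical stand-in: $\Omega_N$ is taken to be the binary expansion of $1/\sqrt{2}$ truncated to $N$ bits, which, like the real $\Omega_N$, is non-decreasing in $N$ and converges to its limit.

```python
from fractions import Fraction
from math import sqrt

OMEGA = 1 / sqrt(2)  # hypothetical stand-in for Chaitin's (uncomputable) Omega

def omega_lower_bound(n):
    """Stand-in for Omega_N: OMEGA's binary expansion truncated to n bits.
    Like the true Omega_N, it is non-decreasing in n and converges to OMEGA."""
    return Fraction(int(OMEGA * 2 ** n), 2 ** n)

def run_organism(beta):
    """The organism pi beta: output the first N with Omega_N >= beta."""
    n = 1
    while omega_lower_bound(n) < beta:
        n += 1
    return n
```

For instance, `run_organism(Fraction(1, 2))` returns a smaller $N$ than `run_organism(Fraction(11, 16))`: a larger $\beta$ yields a larger $N$.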

I hid the subroutine that computes $\Omega_N$, but we will see that it is a superficial feature. $\beta$ and the $N$ that the algorithm returns are completely equivalent: a larger $\beta$ returns a larger $N$. Since the prefix $\pi$ is always repeated, I will just represent organisms by the rational number after $\pi$: instead of $\pi \beta$, I will write $\beta$. Now, I can simply redefine the fitness function:

$\phi(\beta) = \begin{cases} \beta & \mathrm{if} \; \beta < \Omega \\ \bot & \mathrm{otherwise} \end{cases}$

and the mutation operator:

$M_K(\beta) = \begin{cases} \beta + \frac{1}{2^K} & \mathrm{if} \; \beta < \Omega - \frac{1}{2^K} \\ \beta & \mathrm{otherwise} \end{cases}$

and metabiology remains completely unchanged, with the exception of the original organism, which we can just set to $0$ for our purposes. Of course, instead of the fitness growing higher and higher, it grows to be a better and better lower bound for $\Omega$. All the algorithms are stripped away, except that $\Omega$ comes from algorithmic information theory. However, we don't need $\Omega$ to achieve all of Chaitin's results: we can use any irrational number between $0$ and $1$, like $\frac{1}{\sqrt{2}}$ (in fact, the number doesn't even need to be irrational; it just has to have a non-terminating expansion in base 2). In other words, the algorithmic part of metabiology is superficial.
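The simplified model is easy to simulate. Below is a minimal Python sketch under the substitution suggested above: a stand-in for $\Omega$ based on $1/\sqrt{2}$ (approximated by a dyadic rational, which is harmless for a finite simulation), the original organism $\beta = 0$, and $K$ drawn with probability proportional to $2^{-K}$. The function names are mine, not Chaitin's.

```python
import random
from fractions import Fraction
from math import sqrt

# Stand-in for Omega: a dyadic approximation of 1/sqrt(2), used only so that
# a finite simulation has a concrete target to creep up on.
OMEGA = Fraction(sqrt(2) / 2)

def sample_k(rng):
    """Draw K = 1, 2, 3, ... with probability 2^{-K} (a geometric distribution)."""
    k = 1
    while rng.random() < 0.5:
        k += 1
    return k

def mutate(beta, k):
    """Directed mutation M_K: step up by 2^{-K} when the result stays below Omega."""
    step = Fraction(1, 2 ** k)
    return beta + step if beta < OMEGA - step else beta

def evolve(steps, seed=0):
    """Apply randomly chosen directed mutations, starting from the organism 0."""
    rng = random.Random(seed)
    beta = Fraction(0)
    for _ in range(steps):
        beta = mutate(beta, sample_k(rng))
    return beta
```

Running `evolve` for more steps only ever raises $\beta$: fitness increases monotonically toward $\Omega$, exactly the better-and-better lower bound behaviour described above.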

Further, by looking at metabiology in the simple terms described above, we can also overcome the directed nature of 'algorithmic' mutations. We just modify the mutation operator to be indexed by $K$ and a sign ($+$ or $-$), with both $K$ and the sign selected from some distribution. A natural choice is to pick $K$ with probability proportional to $2^{-K}$ and the sign uniformly. Now, we can define a random mutation operator:

$M_{K,+}(\beta) = \begin{cases} \beta + \frac{1}{2^K} & \mathrm{if} \; \beta < \Omega - \frac{1}{2^K} \\ \beta & \mathrm{otherwise} \end{cases}$

and

$M_{K,-}(\beta) = \begin{cases} \beta - \frac{1}{2^K} & \mathrm{if} \; \beta \geq \frac{1}{2^K} \\ \beta & \mathrm{otherwise} \end{cases}$

Then, given a focal organism $\beta$ and a mutant $\beta'$, we can use our favorite selection rule to choose who survives. A tempting one is to use Chaitin's hard-max and let whoever is larger survive. Alternatively, we can use a softer rule, like letting $\beta'$ take over with probability proportional to $\exp(\beta' - \beta)$ (a popular rule in evolutionary game theory models). Alternatively, we can place ourselves squarely inside neutral evolution and just say $\beta'$ always takes over, regardless of fitness. This creates a random walk model, which can still show an increase in complexity, as proposed by Gould (1997) and summarized on this blog by Julian.
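These variants are a small change to the simplified model. Below is a hedged Python sketch of the undirected mutations $M_{K,\pm}$ paired with the three selection rules just mentioned; the function names and the normalization of the soft rule (a Fermi-style choice between the two exponential weights) are my own assumptions, not taken from Chaitin.

```python
import random
from fractions import Fraction
from math import exp, sqrt

OMEGA = Fraction(sqrt(2) / 2)  # dyadic stand-in for Omega, as before

def mutate(beta, k, sign):
    """Undirected mutation M_{K,+/-}: step by +/- 2^{-K} when the result
    stays in [0, Omega); otherwise leave beta unchanged."""
    step = Fraction(1, 2 ** k)
    if sign > 0:
        return beta + step if beta < OMEGA - step else beta
    return beta - step if beta >= step else beta

def hard_max(beta, mutant, rng):
    """Chaitin-style hard-max: whoever is larger survives."""
    return max(beta, mutant)

def soft_selection(beta, mutant, rng):
    """Mutant fixes with probability proportional to exp(mutant - beta),
    here normalized against the resident's weight exp(beta - mutant)."""
    w = exp(mutant - beta)
    p = w / (w + exp(beta - mutant))
    return mutant if rng.random() < p else beta

def neutral(beta, mutant, rng):
    """Neutral evolution: the mutant always takes over (a random walk)."""
    return mutant

def evolve(steps, select, seed=0):
    rng = random.Random(seed)
    beta = Fraction(0)
    for _ in range(steps):
        k = 1
        while rng.random() < 0.5:  # K with probability 2^{-K}
            k += 1
        sign = 1 if rng.random() < 0.5 else -1
        beta = select(beta, mutate(beta, k, sign), rng)
    return beta
```

Under `hard_max` the population still creeps up toward $\Omega$; under `neutral` it wanders, but the mutation operator keeps it inside $[0, \Omega)$.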

### References

Chaitin, G. (2009). Evolution of mutating software. *EATCS Bulletin*, 97, 157-164.