October 6, 2011
by Julian Xue
Nine for mortal men doomed to die.
In the last post I wrote about the evolution of complexity and Gould’s and McShea’s approaches to explaining the patterns of increasing complexity in evolution. That hardly exhausts the vast multitude of theories out there, but I’d like to put down some of my own thoughts on the matter, as immature as they may seem.
My intuition is that if we chose a random life-form out of all possible life-forms, truly a random one, without respect to history, the time it takes to evolve that life-form, and so on, then this randomly chosen life-form would be inordinately complex, with a vast array of structure and hierarchy. I believe this because there are simply many more ways to be alive if one is incredibly complex; there are more ways to arrange oneself. This intuition gives me a way to define an entropic background such that evolution is always tempted, alongside or against fitness considerations, to relax to high entropy and become this highly complex form of life.
I think this idea is original, at least I haven’t heard of it yet elsewhere — but in my impoverished reading I might be very wrong, as wrong as when I realized that natural selection can’t optimize mutation rates or evolvability (something well known to at least three groups of researchers before me, as I realized much later). If anyone knows someone who had this idea before, let me know!
I will try to describe how I think this process might come about.
Consider the space of all possible systems, S. Any system in S is made up of components, chosen out of the large space of possible components, C. A system made out of n components I shall call an n-system. Of the members of S, let there be a special property called “viability”. We will worry later about what exactly viability means; for now, let’s simply make it an extremely rare property, satisfied by a tiny fraction, v, of S.
At the beginning of the process, let there be only 1-systems, or systems of one component. If C is large enough, then somewhere in this space is at least one viable component; call this special component c1. Somehow, through sheer luck, the process stumbles on c1. The process then dictates some operations that can happen to c1. For now, let us consider three processes: addition of a new component, mutation of the existing component, and removal of an existing component. The goal is to understand how these three operations affect the evolution of the system while preserving viability.
Let us say that viability is a highly correlated attribute, so that systems close to a viable system are much more likely to be viable than a randomly chosen system. We could introduce three probabilities here: the probability of viability upon the addition of a new component, upon the removal of an existing component, and upon the mutation of an existing component. For now, however, since the process is at a 1-system, removal of components cannot preserve viability, as Gould astutely observed. Thus, we can consider only additions and mutations. For simplicity I will consider only one probability, p, the probability of viability upon an edit.
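To keep these ingredients concrete, here is a minimal sketch of the setup in Python. It is only my illustration, with made-up names and parameter values: the component space C is represented by the integers 0 to N-1, a system is a tuple of components, and viability is a memoized coin flip with probability p per system, which stands in for the single probability of viability upon an edit and ignores the correlation structure just described.

import random

N = 10**6        # size of the component space C (made-up value)
p = 1e-3         # probability that an edited system is viable (made-up value)

_viability = {}  # memoized viability draw, one per distinct system

def is_viable(system):
    # A system is a tuple of components; each distinct system is viable
    # with probability p, drawn once and then remembered.
    key = tuple(sorted(system))
    if key not in _viability:
        _viability[key] = random.random() < p
    return _viability[key]

def add(system):
    # Addition: append a new component drawn at random from C.
    return system + (random.randrange(N),)

def mutate(system):
    # Mutation: replace one existing component with a random one from C.
    i = random.randrange(len(system))
    return system[:i] + (random.randrange(N),) + system[i + 1:]

def remove(system):
    # Removal: drop one existing component (only meaningful for n > 1).
    i = random.randrange(len(system))
    return system[:i] + system[i + 1:]

A run of the process would then repeatedly propose one of these operations and keep the result only when it is viable.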
It turns out that two parameters, N (the size or cardinality of C) and p, are critical to the evolution of the system. There are two types of processes that I’m interested in, although there are more than what I list below:
1) “Easy” processes: N is small and p is large. There are only a few edits / additions we can make to the system, and most of them are viable.
2) “Hard” processes: N is very large and p is small, but not too small. There are many edits possible and only a very small fraction of these edits are viable. However, p is not so small that none of these edits are viable. In fact, p is large enough that not only are some edits viable, but these edits can also be discovered in reasonable time and population size, once we add these ingredients to this model (not yet).
The key point is that easy processes are reversible and hard processes are not. Most existing evolutionary theory has so far dealt with easy processes, which lead to a stable optimum driven only by environmental dictates of what is fittest, because the viable system space is strongly connected. Hard processes, on the other hand, have a viable system space that is connected, but only very sparsely so. This model really is an extension of Gavrilets’ models, which is why I spent so much time reviewing them!
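To get a feel for the two regimes, here is a toy comparison with numbers I am inventing purely for illustration: from any system, the expected number of viable edits is roughly N * p, while the chance that a randomly proposed edit is one particular edit (say, the exact reverse of the last step) is on the order of 1 / N.

# Made-up parameter values for the two regimes.
regimes = {"easy": (20, 0.5), "hard": (10**6, 1e-4)}

for name, (N, p) in regimes.items():
    expected_viable_edits = N * p  # forward moves we can expect to exist
    prob_specific_edit = 1 / N     # chance of hitting one particular edit, e.g. the reverse step
    print(f"{name}: ~{expected_viable_edits:g} viable edits, "
          f"P(one specific edit) = {prob_specific_edit:g}")

In the easy regime a specific step back is about as findable as any step forward; in the hard regime viable edits still exist, but any one particular edit is, for practical purposes, lost.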
Now let’s see how a hard process proceeds. It’s actually very simple: the 1-system {c1} either mutates around to other viable 1-systems, or adds a component to become a viable 2-system. By the definition of a hard process, these two events are possible, but might take a bit of time. Let’s say we are at a 2-system, {c1, c2}. Mutations of the 2-system might also hit a viable system. Sooner or later, we will hit a viable {c1', c2} as a mutation of {c1, c2}. At this point, it’s really hard for {c1', c2} to become a 1-system. It needs to have a mutation back to {c1, c2} and then a loss to {c1}. This difficulty is magnified if we hit {c1', c2'} as {c1', c2} continues to mutate: {c1', c2'} might be a mutation neighbor of {c1', c2} but not of {c1, c2}. Due to the large size of the set C, reverse mutation to {c1, c2} becomes virtually impossible. On the other hand, let’s say we reached some viable 2-system {d1, d2}. Removing a component results in either {d1} or {d2}. The probability that at least one of them is viable is 1 - (1 - p)^2, roughly 2p, which for p very small is still small. Thus, while growth in size is possible, because a system can grow into many, many different things, reduction in size is much more difficult, because one can only reduce into a limited number of things. Since most things are not viable, reduction is much more likely to result in an unviable system. This isn’t to say reduction never happens or is impossible, but overall there is a very strong trend upwards.
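As a quick numerical check of this asymmetry, with invented parameter values:

# Why backtracking is hard for a 2-system (made-up parameter values).
N, p = 10**6, 1e-4

# After {c1, c2} has mutated to {c1', c2}, returning to {c1, c2} requires
# one specific mutation out of roughly 2 * (N - 1) possible ones.
prob_exact_reverse_mutation = 1 / (2 * (N - 1))   # ~5e-7

# Dropping a component from a 2-system leaves one of only 2 candidate
# 1-systems; the chance that at least one of them is viable:
prob_viable_reduction = 1 - (1 - p) ** 2          # ~2e-4, i.e. about 2p

print(prob_exact_reverse_mutation, prob_viable_reduction)

Both routes downward are rare events, whereas a viable addition or mutation only has to be found somewhere among the roughly N fresh candidates.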
All this is very hand-waving, and in fact a naive formalization of it doesn’t work, as I will show in the next post. But the main idea should be sound: reduction of components is very easy in the time right after the addition of a component (we can just lose the newly added component), but if no reduction happens for a while (say by chance), then mutations lock the number of components in. Since the mutation happened in a particular background of components, the viability property after the mutation holds only with respect to that background. Changing that background through mutation or addition is occasionally okay, because there is a very large space of things that one can grow or mutate into, but all the possible systems that one can reduce down to may be unviable. For an n-system, there are n possible reductions, but N possible additions and roughly n(N - 1) possible mutations. For as long as n remains small relative to N, this line of reasoning holds. In fact, it holds until n becomes large, at which point the probability that a system can lose a component and remain viable (roughly np) becomes significant.
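The same counting for a general n-system, again with invented numbers, as a rough sketch of where the asymmetry lives:

# Moves available to an n-system over a component space of size N (made-up values).
N, n, p = 10**6, 5, 1e-4

reductions = n
additions = N
mutations = n * (N - 1)

fraction_downward = reductions / (reductions + additions + mutations)
prob_viable_reduction = 1 - (1 - p) ** n   # roughly n * p while n * p is small

print(fraction_downward)        # ~8e-7: almost no moves point downward
print(prob_viable_reduction)    # ~5e-4: and even those are rarely viable

Only when n grows to the order of 1 / p does a viable reduction stop being a rare event.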
Phew. In the next post I shall try to tighten this argument.