Cooperation, enzymes, and the origin of life

Enzymes play an essential role in life. Without them, the translation of genetic material into proteins — the building blocks of all phenotypic traits — would be impossible. That fact, however, poses a problem for anyone trying to understand how life appeared in the hot, chaotic, bustling molecular “soup” from which it sparked into existence some 4 billion years ago.

Throw a handful of self-replicating organic molecules into a glass of warm water, then shake it well. In this thoroughly mixed medium, molecules that help other molecules replicate faster — i.e. enzymes or analogues thereof — do so at their own expense and, by virtue of natural selection, must sooner or later go extinct. But now suppose that little pockets or “vesicles” form inside the glass by some abiotic process, encapsulating the molecules into isolated groups. Suppose further that, once these vesicles reach a certain size, they can split and give birth to “daughter” vesicles — again, by some purely physical, abiotic process. What you now have is a recipe for group selection that can favor the persistence of catalytic molecules. While less fit individually, catalysts benefit the group to which they belong.

This gives rise to a conflict between (1) within-group selection against “altruistic” traits and (2) between-group selection for those same traits. In other words, enzymes and abiotic vesicles make for an evolutionary game theory favourite — a social dilemma.
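
To make the dilemma concrete, here is a minimal simulation sketch of the vesicle story above. The parameter names and values (BENEFIT, COST, SPLIT_SIZE, MAX_VESICLES) are illustrative assumptions of mine, not numbers from any particular model:

```python
import random

# Illustrative parameters (assumptions, not from any specific model).
BENEFIT = 0.5     # everyone's replication boost per unit catalyst fraction
COST = 0.2        # replication penalty paid by catalysts themselves
SPLIT_SIZE = 40   # a vesicle divides once it holds this many molecules
MAX_VESICLES = 100

def step(vesicles):
    """One round of within-vesicle replication followed by vesicle splitting."""
    for v in vesicles:
        frac = sum(v) / len(v)            # fraction of catalysts in this vesicle
        offspring = []
        for is_catalyst in v:
            # Catalysts boost everyone's rate but pay an individual cost.
            rate = 1 + BENEFIT * frac - (COST if is_catalyst else 0)
            if random.random() < rate / 2:
                offspring.append(is_catalyst)
        v.extend(offspring)
    children = []
    for v in vesicles:
        if len(v) >= SPLIT_SIZE:          # abiotic fission into two halves
            random.shuffle(v)
            half = len(v) // 2
            children.append(v[:half])
            v[:] = v[half:]
    vesicles.extend(children)
    random.shuffle(vesicles)              # cull at random to bound the total
    del vesicles[MAX_VESICLES:]

# True = catalyst, False = free-rider; start with small well-mixed vesicles.
vesicles = [[random.random() < 0.5 for _ in range(10)] for _ in range(20)]
for _ in range(200):
    step(vesicles)
molecules = [m for v in vesicles for m in v]
print("final catalyst fraction:", sum(molecules) / len(molecules))
```

Within any single vesicle the catalysts replicate slower, yet vesicles rich in catalysts hit the splitting threshold sooner and leave more descendants; whether the catalyst fraction survives depends on how these two pressures balance.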

Approximating spatial structure with the Ohtsuki-Nowak transform

Can we describe reality? As a general philosophical question, I could spend all day discussing it and never arrive at a reasonable answer. However, if we restrict ourselves to the sort of models used in theoretical biology, especially to the heuristic models that dominate the field, then I think it is relatively reasonable to conclude that no, we cannot describe reality. We have to admit our current limits and think of our errors through the dual notions of assumptions and approximations. I usually prefer the former and try to describe models in terms of the assumptions that, if met, would make them perfect (or at least good) descriptions. This view has seemed clearer and more elegant than vague talk of approximations. It is the language I used to describe the Ohtsuki-Nowak (2006) transform over a year ago. In the months since, however, I’ve started to realize that the assumptions-view is actually incompatible with much of my philosophy of modeling. To contrast with my previous exposition (and to help me write up some reviewer responses), I want to go through a justification of the ON-transform as a first-order approximation of spatial structure.
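
As a reminder of what the transform actually does: for a two-strategy game played on a k-regular random graph with death-birth updating, Ohtsuki & Nowak (2006) showed that the structured dynamics behave like inviscid replicator dynamics on a perturbed payoff matrix. A minimal sketch, with the function name and example game being my own choices:

```python
import numpy as np

def on_transform(A, k):
    """Ohtsuki-Nowak transform of a 2x2 payoff matrix A for death-birth
    updating on a k-regular random graph; requires k > 2."""
    a, b = A[0]
    c, d = A[1]
    delta = (a + b - c - d) / (k - 2)   # local-competition correction
    return np.array([[a, b + delta],
                     [c - delta, d]])

# Example: a Prisoner's Dilemma, transformed for a 3-regular graph.
pd = np.array([[3.0, 0.0],
               [5.0, 1.0]])
print(on_transform(pd, k=3))
```

Note the k > 2 requirement: the correction term diverges on a cycle, which is one hint that the transform is an approximation rather than an exact description.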

Change, progress, and philosophy in science

“Philosophy of science is about as useful to scientists as ornithology is to birds” is a quote usually attributed to Feynman that embodies a sentiment that seems all too common among scientists. If I wish to be as free as a bird to go about my daily rituals of crafting science in the cage that I built for myself during my scientific apprenticeship, then I agree that philosophy is of little use to me. Much like a politician can hold office without a knowledge of history, a scientist can practice his craft without philosophy. However, like an ignorance of history, an ignorance of philosophy tends to make one myopic. For theorists, especially, such a restricted view of the intellectual tradition can be very stifling and make scientific work seem like a trade instead of an art. So, to keep my work a joy instead of a chore, I tend to structure myself by reading philosophy and trying to understand where my scientific work fits in the history of thought. For this, Bertrand Russell is my author of choice.

I don’t read Russell because I agree with his philosophy, although much of what he says is agreeable. In fact, it is difficult to say what agreement with his philosophy would even mean, since his thoughts on many topics changed throughout his long 98-year life. I read his work because it has the spirit of honest inquiry, not of a search for proof of some preconceived conclusion (although, like all humans, he was not always free of the flaw of dogmatism). I read his work because it is written with a beautiful and precise wit. Most importantly, I read his work because — unlike many philosophers — he wrote clearly enough that it is meaningful to disagree with him.

Evolution is a special kind of (machine) learning

Theoretical computer science has a long history of peering through the algorithmic lens at the brain, mind, and learning. In fact, I would argue that the field was born from the epistemological question of what our minds can learn of mathematical truth through formal proofs. The perspective became more scientific with McCulloch & Pitts’ (1943) introduction of finite state machines as models of neural networks, and with Turing’s B-type neural networks, paving the way for our modern treatment of artificial intelligence and machine learning. The connections to biology, unfortunately, are less pronounced. Turing ventured into the field with his important work on morphogenesis, and I believe that he could have contributed to the study of evolution but did not get the chance. This work was followed up with the use of computers in biology, and with heuristic ideas from evolution entering computer science in the form of genetic algorithms. However, these areas remained non-mathematical, with very few provable statements or non-heuristic reasoning. The task of making strong connections between theoretical computer science and evolutionary biology has been left to our generation.

Although the militia of cstheorists reflecting on biology is small, Leslie Valiant is their standard-bearer for the steady march of theoretical computer science into both learning and evolution. Due in part to his efforts, artificial intelligence and machine learning are such well developed fields that their theory branch has its own name and conferences: computational learning theory (CoLT). Much of CoLT rests on Valiant’s (1984) introduction of probably-approximately correct (PAC) learning which — in spite of its name — is one of the most formal and careful ways to understand learnability. The importance of this model cannot be overstated, and it resulted in Valiant receiving (among many other distinctions) the 2010 Turing award (i.e. the Nobel prize of computer science). Most importantly, his attention was not confined to pure cstheory; he took his algorithmic insights into biology, specifically computational neuroscience (see Valiant (1994; 2006) for examples), to understand human thought and learning.
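
For a flavour of the definition, consider the textbook toy case of PAC-learning a threshold on the unit interval. The example below, including the parameter choices, is my own illustration rather than anything from Valiant’s papers:

```python
import math
import random

# Toy PAC-learning instance (my illustration): the target concept is a
# threshold t on [0, 1], labelling x positive iff x >= t. With
# m >= (1/eps) * ln(1/delta) uniform examples, returning the smallest
# observed positive as our threshold is, with probability at least
# 1 - delta, within eps of the target: probably approximately correct.
def pac_learn_threshold(true_t, eps, delta):
    m = math.ceil((1 / eps) * math.log(1 / delta))   # sample complexity
    sample = [random.random() for _ in range(m)]
    positives = [x for x in sample if x >= true_t]   # labels come from the target
    return min(positives, default=1.0)               # tightest consistent hypothesis

guess = pac_learn_threshold(true_t=0.37, eps=0.01, delta=0.05)
print(f"learned threshold {guess:.4f} (target 0.37)")
```

The analysis is one line: the learned threshold overshoots the target by more than eps only if no sample lands in that eps-wide interval, which happens with probability (1 - eps)^m <= exp(-eps * m) <= delta.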

Like any good thinker reflecting on biology, Valiant understands the importance of Dobzhansky’s observation that “nothing in biology makes sense except in the light of evolution”. Even for the algorithmic lens it helps to have this illumination. Any understanding of learning mechanisms like the brain is incomplete without an examination of the evolutionary dynamics that shaped these organs. In the mid-2000s, Valiant embarked on the quest of formalizing some of the insights cstheory can offer evolution, culminating in his PAC-based model of evolvability (Valiant, 2009). Although this paper is one of the most frequently cited on TheEGG, I’ve waited until today to give it a dedicated post.

Misleading models: “How learning can guide evolution”

I often see examples of mathematicians, physicists, or computer scientists transitioning into other scientific disciplines and going on to great success. However, the converse is rare, and the only two examples I know are Edward Witten’s transition from an undergrad in history and linguistics to a ground-breaking career in theoretical physics, and Geoffrey Hinton’s transition from an undergrad in experimental psychology to a trend-setting career in artificial intelligence. Although in my mind Hinton is associated with neural networks and deep learning, that isn’t his only contribution to fields close to my heart. As is becoming pleasantly common on TheEGG, this is a connection I would have missed if it wasn’t for Graham Jones’ insightful comment and subsequent email discussion in early October.

The reason I raise the topic four months later is that the connection continues our exploration of learning and evolution. In particular, Hinton & Nowlan (1987) were the first to show the Baldwin effect in action. They showed how learning can speed up evolution in a model that combined a genetic algorithm with learning by trial and error. Although the model was influential, I fear that it is misleading and that the strength of its results is often misinterpreted. As such, I wanted to explore these shortcomings and spell out what would be a convincing demonstration of a qualitative increase in adaptability due to learning.
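
To see where the over-reading happens, it helps to have the model in front of us. Below is a compact sketch of the Hinton & Nowlan setup as I read it: the genome length, population size, and number of learning trials follow the paper, while the selection and crossover details are simplified assumptions:

```python
import random

# Compact sketch of Hinton & Nowlan (1987); selection/crossover simplified.
L, POP, TRIALS, GENS = 20, 1000, 1000, 50

def new_genome():
    # Alleles: '1' correct, '0' wrong, '?' learnable; the paper's initial
    # mix is 1/4, 1/4, 1/2 respectively.
    return [random.choice(['1', '0', '?', '?']) for _ in range(L)]

def fitness(g):
    """1 + 19 * (remaining trials) / TRIALS once the '?' loci are guessed right."""
    if '0' in g:                          # a wrong hard-wired allele: unlearnable
        return 1.0
    qs = g.count('?')
    for trial in range(TRIALS):           # each trial re-guesses all '?' loci
        if random.random() < 0.5 ** qs:   # all qs guesses correct at once
            return 1.0 + 19.0 * (TRIALS - trial) / TRIALS
    return 1.0

pop = [new_genome() for _ in range(POP)]
for gen in range(GENS):
    weights = [fitness(g) for g in pop]
    parents = random.choices(pop, weights=weights, k=2 * POP)
    # One-point crossover between parent pairs.
    pop = []
    for mom, dad in zip(parents[::2], parents[1::2]):
        cut = random.randrange(1, L)
        pop.append(mom[:cut] + dad[cut:])
alleles = [a for g in pop for a in g]
print("final '1' fraction:", alleles.count('1') / len(alleles))
```

Running this should reproduce the paper’s signature picture: hard-wired wrong alleles vanish quickly, while the plastic ‘?’ alleles are replaced by correct ones only slowly.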

Phenotypic plasticity, learning, and evolution

Learning and evolution are eerily similar, yet different.

This tension fuels my interest in understanding how they interact. In the context of social learning, we can think of learning and evolution as different dynamics. For individual learning, however, it is harder to find a difference. On the one hand, this has led learning experts like Valiant (2009) to suggest that evolution is a subset of machine learning. On the other hand, due to its behaviorist roots, a lot of evolutionary thought simply ignored learning or did not treat it explicitly. To find interesting interactions between the two concepts we have to turn to ideas from before the modern synthesis — the Simpson-Baldwin effect (Baldwin, 1896, 1902; Simpson, 1953).