Algorithmic Darwinism

The workshop on computational theories of evolution started off on Monday, March 17th with Leslie Valiant — one of the organizers — introducing his model of evolvability (Valiant, 2009). The name was originally meant to capture what type of complexity can be achieved through evolution. Unfortunately — especially at this workshop — evolvability already has a different, more popular meaning in biology: mechanisms that make an organism or species ‘better’ at evolving, such as higher mutation rates, de novo genes, or recombination through sex. As such, we need a better name, and I am happy to take on the renaming task.


Computational theories of evolution

If you look at your typical computer science department’s faculty list, you will notice that the theorists are a minority. Sometimes they are further subdivided by being carved off into mathematics departments. As such, any institute that unites and strengthens theorists is a good development. That was my first reason for excitement two years ago when I learned that a $60 million grant would establish the Simons Institute for the Theory of Computing at UC Berkeley. The institute’s mission is close to my heart: bringing theoretical computer science to bear on the natural sciences, an institute for the algorithmic lens. My second reason for excitement was that one of the inaugural programs is evolutionary biology and the theory of computing. Throughout this term, a series of workshops is being held to gather and share the relevant experience.

Right now, I have my conference straw hat on, as I wait for a flight transfer in Dallas on my way to one of the events in this program, the workshop on computational theories of evolution. For the next week I will be in Berkeley absorbing all there is to know on the topic. Given how much I enjoyed Princeton’s workshop on natural algorithms in the sciences, I can barely contain my excitement.

Evolution is a special kind of (machine) learning

Theoretical computer science has a long history of peering through the algorithmic lens at the brain, mind, and learning. In fact, I would argue that the field was born from the epistemological question of what our minds can learn of mathematical truth through formal proofs. The perspective became more scientific with McCulloch & Pitts’ (1943) introduction of finite state machines as models of neural networks and Turing’s B-type neural networks paving the way for our modern treatment of artificial intelligence and machine learning. The connections to biology, unfortunately, are less pronounced. Turing ventured into the field with his important work on morphogenesis, and I believe that he could have contributed to the study of evolution but did not get the chance. This work was followed up with the use of computers in biology, and with heuristic ideas from evolution entering computer science in the form of genetic algorithms. However, these areas remained non-mathematical, with very few provable statements or non-heuristic reasoning. The task of making strong connections between theoretical computer science and evolutionary biology has been left to our generation.

Although the militia of cstheorists reflecting on biology is small, Leslie Valiant is their standard-bearer for the steady march of theoretical computer science into both learning and evolution. Due in part to his efforts, artificial intelligence and machine learning are such well developed fields that their theory branch has its own name and conferences: computational learning theory (CoLT). Much of CoLT rests on Valiant’s (1984) introduction of probably approximately correct (PAC) learning which — in spite of its name — is one of the most formal and careful ways to understand learnability. The importance of this model cannot be overstated, and it resulted in Valiant receiving (among many other distinctions) the 2010 Turing award (i.e. the Nobel prize of computer science). Most importantly, his attention was not confined to pure cstheory: he took his algorithmic insights into biology, specifically computational neuroscience (see Valiant (1994; 2006) for examples), to understand human thought and learning.
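
For readers who want the formal statement, here is a rough rendering of the PAC definition (the exact quantification varies slightly between presentations). A concept class $C$ over $\{0,1\}^n$ is PAC-learnable if there is an algorithm $A$ and a polynomial $p$ such that for every target $c \in C$, every distribution $D$ over examples, and every $\epsilon, \delta \in (0,1)$: given $m \geq p(n, \mathrm{size}(c), 1/\epsilon, 1/\delta)$ examples drawn i.i.d. from $D$ and labeled by $c$, the algorithm runs in time polynomial in $m$ and outputs a hypothesis $h$ with

$\Pr_{S \sim D^m}\big[ \Pr_{x \sim D}[h(x) \neq c(x)] \leq \epsilon \big] \geq 1 - \delta.$

In other words, with high probability over the sample, the learner is approximately correct on future examples from the same distribution, hence the name.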

Like any good thinker reflecting on biology, Valiant understands the importance of Dobzhansky’s observation that “nothing in biology makes sense except in the light of evolution”. Even for the algorithmic lens it helps to have this illumination. Any understanding of learning organs like the brain is incomplete without an examination of the evolutionary dynamics that shaped them. In the mid-2000s, Valiant embarked on the quest of formalizing some of the insights cstheory can offer evolution, culminating in his PAC-based model of evolvability (Valiant, 2009). Although this paper is one of the most frequently cited on TheEGG, I’ve waited until today to give it a dedicated post.
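
For a rough sense of how the model works (the precise definitions are in Valiant (2009)): representations $r$ and the ideal function $f$ are functions from $X_n$ to $\{-1, +1\}$, and the quality of $r$ under a distribution $D$ over inputs is its correlation with the ideal function,

$\mathrm{Perf}_f(r, D) = \mathbb{E}_{x \sim D}[f(x)\,r(x)].$

Evolution is then treated as a restricted learner: a polynomial-time mutator proposes variants of $r$, selection acts only on empirical estimates of $\mathrm{Perf}$ computed from polynomially many samples, and a class of ideal functions is evolvable if this process reaches some $r$ with $\mathrm{Perf}_f(r, D) \geq 1 - \epsilon$ from any starting representation within polynomially many generations.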

Misleading models: “How learning can guide evolution”

I often see examples of mathematicians, physicists, or computer scientists transitioning into other scientific disciplines and going on to great success. However, the converse is rare, and the only two examples I know are Edward Witten’s transition from an undergrad in history and linguistics to a ground-breaking career in theoretical physics, and Geoffrey Hinton’s transition from an undergrad in experimental psychology to a trend-setting career in artificial intelligence. Although in my mind Hinton is associated with neural networks and deep learning, that isn’t his only contribution in fields close to my heart. As is becoming pleasantly common on TheEGG, this is a connection I would have missed if it wasn’t for Graham Jones’ insightful comment and subsequent email discussion in early October.

The reason I raise the topic four months later is that the connection continues our exploration of learning and evolution. In particular, Hinton & Nowlan (1987) were the first to show the Baldwin effect in action. They showed how learning can speed up evolution in a model that combined a genetic algorithm with learning by trial and error. Although the model was influential, I fear that it is misleading and the strength of its results is often misinterpreted. As such, I wanted to explore these shortcomings and spell out what would be a convincing demonstration of a qualitative increase in adaptability due to learning.
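
To make the setup concrete, here is a minimal Python sketch in the spirit of Hinton & Nowlan's model; the parameter values, the geometric shortcut for the guessing loop, and the selection details are my assumptions for illustration, not a faithful replication of their simulation.

import random
import numpy as np

# A Hinton & Nowlan (1987)-style model: a genetic algorithm in which plastic
# loci ('?') can be set by random guessing during an individual's lifetime.

GENOME_LEN = 20     # loci; the target phenotype is the all-'1' configuration
POP_SIZE = 1000
TRIALS = 1000       # learning trials (random guesses) per lifetime
GENERATIONS = 50

def random_genotype():
    # alleles: '1' correct and fixed, '0' incorrect and fixed, '?' plastic
    return random.choices('10?', weights=[1, 1, 2], k=GENOME_LEN)

def fitness(genotype):
    if '0' in genotype:
        return 1.0                      # a fixed wrong allele cannot be guessed away
    p = 0.5 ** genotype.count('?')      # chance one guess sets every plastic locus to '1'
    trial = np.random.geometric(p)      # first lucky trial (geometric shortcut for the guessing loop)
    if trial > TRIALS:
        return 1.0
    return 1.0 + 19.0 * (TRIALS - trial) / TRIALS   # earlier learners are fitter

def next_generation(population):
    fitnesses = [fitness(g) for g in population]
    children = []
    for _ in range(POP_SIZE):
        mum, dad = random.choices(population, weights=fitnesses, k=2)  # fitness-proportional selection
        cut = random.randrange(1, GENOME_LEN)                          # single-point crossover
        children.append(mum[:cut] + dad[cut:])
    return children

population = [random_genotype() for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    population = next_generation(population)
    fixed = sum(g.count('1') for g in population) / (POP_SIZE * GENOME_LEN)
    print(f"generation {gen:2d}: fraction of genetically fixed correct alleles = {fixed:.2f}")

Running it, you should see the fraction of genetically fixed correct alleles climb as lucky guessing is gradually replaced by hard-wired '1's; that is the Baldwin effect the model is meant to exhibit, and the question I want to explore is whether it demonstrates anything stronger.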

Phenotypic plasticity, learning, and evolution

Learning and evolution are eerily similar, yet different.

This tension fuels my interest in understanding how they interact. In the context of social learning, we can think of learning and evolution as different dynamics. For individual learning, however, it is harder to find a difference. On the one hand, this has led learning experts like Valiant (2009) to suggest that evolution is a subset of machine learning. On the other hand, due to its behaviorist roots, a lot of evolutionary thought simply ignored learning or did not treat it explicitly. To find interesting interactions between the two concepts we have to turn to ideas from before the modern synthesis: the Simpson-Baldwin effect (Baldwin 1896, 1902; Simpson, 1953).

Baldwin effect and overcoming the rationality fetish

G.G. Simpson and J.M. Baldwin

As I’ve mentioned previously, one of the amazing features of the internet is that you can take almost any idea and find a community obsessed with it. Thus, it isn’t surprising that there is a prominent subculture that fetishizes rationality and Bayesian learning. They tend to accumulate around forums with promising titles like OvercomingBias and Less Wrong. Since these communities like to stay abreast of science, they often offer evolutionary justifications for why humans might be Bayesian learners and claim a “perfect Bayesian reasoner as a fixed point of Darwinian evolution”. This lets them side-step observed non-Bayesian behavior in humans by saying that we are evolving towards, but haven’t yet reached, this (potentially unreachable, but approximable) fixed point. Unfortunately, even the fixed-point argument overlooks critiques like the Simpson-Baldwin effect.

Introduced in 1896 by psychologist J.M. Baldwin, and later named and reconciled with the modern synthesis by the leading paleontologist G.G. Simpson (1953), the Simpson-Baldwin effect posits that “[c]haracters individually acquired by members of a group of organisms may eventually, under the influence of selection, be reenforced or replaced by similar hereditary characters” (Simpson, 1953). More explicitly, it consists of a three-step process (some of which can occur in parallel or partially so):

  1. Organisms adapt to the environment individually.
  2. Genetic factors produce hereditary characteristics similar to the ones made available by individual adaptation.
  3. These hereditary traits are favoured by natural selection and spread in the population.

The overall result is that originally individual, non-hereditary adaptations become hereditary. For Baldwin (1896, 1902) and other early proponents (Morgan 1896; Osborn 1896, 1897) this was a way to reconcile Darwinian and strong Lamarckian evolution. With the latter model of evolution exorcised from the modern synthesis, Simpson’s restatement became a paradox: why do we observe the costly mechanism and associated errors of individual learning, if learning does not enhance individual fitness at equilibrium and will be replaced by simpler non-adaptive strategies? This encompasses more specific cases like Rogers’ paradox (Boyd & Richerson, 1985; Rogers, 1988) of social learning.

Programming language for biochemistry

Computer scientists who think of nature as literally computing often take the stance that biological organisms are nothing more than protein interaction networks. For example, this is the stance that Leslie Valiant (2009) takes when defining ecorithms: biology is just a specialization of computer science focused on evolvable circuits. User @exploderator summarized the realist computational view of biology on Reddit while answering what theoretical computer science can offer biology:

[B]iology is primarily chemo-computation, chemical information systems and computational hardware.
Theoretical comp sci is the only field that is actually specifically dedicated to studying the mathematics / logic of computation. Therefore, although biology is an incredibly hard programming problem (only a fool thinks nature simple), it is indeed more about programming and less about the hardware it’s running on.

Although it is an easy stance for a theoretician to take, it is a little bit more involved for a molecular biologist, chemist, or engineer. Yet for the last 30 years, even experimentalists have been captivated by this computational realism and the promise of engineering molecular devices (Drexler, 1981). Half a year ago, I even reviewed Bonnet et al. (2013) taking steps towards building transcriptors. They are focusing on the hardware side of biological computation, building a DNA analogue of the von Neumann architecture. However, what we really need is a level of abstraction: a chemical programming language that can be compiled into biocompatible reactions.
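
To illustrate what I mean by that level of abstraction, here is a toy Python sketch in which reactions are written as strings and 'compiled' into mass-action differential equations; the syntax, the compile-to-ODE step, and the approximate-majority example program are purely illustrative assumptions, not the interface of any existing tool.

def parse(reaction):
    # parse 'A + B -> C' into (['A', 'B'], ['C'])
    lhs, rhs = reaction.split('->')
    side = lambda s: [sp.strip() for sp in s.split('+') if sp.strip()]
    return side(lhs), side(rhs)

def compile_to_odes(program):
    # compile {reaction_string: rate_constant} into a derivative function
    reactions = [(parse(r), k) for r, k in program.items()]
    def dxdt(state):
        deriv = {sp: 0.0 for sp in state}
        for (reactants, products), k in reactions:
            flux = k
            for sp in reactants:
                flux *= state[sp]          # mass-action kinetics
            for sp in reactants:
                deriv[sp] -= flux
            for sp in products:
                deriv[sp] += flux
        return deriv
    return dxdt

# a toy 'program': an approximate-majority (consensus) motif
program = {
    'X + Y -> X + B': 1.0,
    'Y + X -> Y + B': 1.0,
    'X + B -> X + X': 1.0,
    'Y + B -> Y + Y': 1.0,
}

dxdt = compile_to_odes(program)
state = {'X': 0.6, 'Y': 0.4, 'B': 0.0}
dt = 0.01
for _ in range(2000):                      # crude forward-Euler integration
    d = dxdt(state)
    state = {sp: state[sp] + dt * d[sp] for sp in state}
print({sp: round(c, 3) for sp, c in state.items()})   # the initial majority, X, should win

A real chemical programming language would, of course, have to compile into reactions that can actually be implemented in DNA or protein chemistry, but the shape of the abstraction, programs as reaction networks with well-defined semantics, is the point.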

Limits on efficient minimization and the helpfulness of teachers

Two weeks ago, I lectured on how we can minimize and learn deterministic finite state automata. Although it might not be obvious, these lectures are actually pretty closely related, since minimization and learning often go hand-in-hand. During both lectures I hinted that the results won’t hold for non-deterministic finite state automata (NFAs), and challenged the students to identify where the proofs break down. Of course, there is no way to patch the proofs I presented to carry over to the NFA case; in fact, we expect that no efficient algorithms exist for minimizing or learning NFAs.
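
For concreteness, here is a short Python sketch of the partition-refinement (Moore-style) minimization I have in mind; the dictionary encoding of the DFA and the example machine are my own illustrative choices.

def minimize(states, alphabet, delta, accepting):
    # return the equivalence classes of states, assuming all states are reachable;
    # delta is a total transition function given as a dict (state, symbol) -> state
    partition = [block for block in (set(accepting), set(states) - set(accepting)) if block]
    while True:
        # two states stay together only if, for every symbol,
        # their successors land in the same block
        index = {s: i for i, block in enumerate(partition) for s in block}
        refined = []
        for block in partition:
            groups = {}
            for s in block:
                signature = tuple(index[delta[(s, a)]] for a in alphabet)
                groups.setdefault(signature, set()).add(s)
            refined.extend(groups.values())
        if len(refined) == len(partition):
            return refined          # no block was split, so the partition is stable
        partition = refined

# example: a 4-state DFA over {'a', 'b'} in which states 1 and 2 turn out equivalent
states = [0, 1, 2, 3]
alphabet = ['a', 'b']
delta = {(0, 'a'): 1, (0, 'b'): 2,
         (1, 'a'): 3, (1, 'b'): 3,
         (2, 'a'): 3, (2, 'b'): 3,
         (3, 'a'): 3, (3, 'b'): 3}
print(minimize(states, alphabet, delta, accepting={3}))   # e.g. [{3}, {0}, {1, 2}]

The refinement step leans on each state having exactly one successor per symbol, which is precisely what fails once we move to NFAs.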

Bounded rationality: systematic mistakes and conflicting agents of mind

Before her mother convinced her to be a doctor, my mother was a ballerina. As a result, whenever I tried to blame some external factor for my failures, I was met with my mother’s favorite aphorism: a bad dancer’s shoes are always too tight.

“Ahh, another idiosyncratic story about the human side of research,” you note, “why so many?”

Partially, these stories are meant to broaden TheEGG blog’s appeal, and to lull you into a false sense of security before overrunning you with mathematics. Partially, they are an homage to the blogs that inspired me to write, such as Lipton and Regan’s “Gödel’s Lost Letter and P=NP”. Mostly, however, they are there to show that science — like everything else — is a human endeavour with human roots and subject to all the excitement, disappointments, insights, and biases that this entails. Although science is a human narrative, unlike the similar story of pseudoscience, she tries to recognize or overcome her biases when they hinder her development.


The self-serving bias has been particularly thorny in the decision sciences. Humans, especially individuals with low self-esteem, tend to attribute their success to personal skill, while blaming their failures on external factors. As you can guess from my mother’s words, I struggle with this all the time. When I try to explain the importance of worst-case analysis, algorithmic thinking, or rigorous modeling to biologists and fail, my first instinct is to blame it on the structural differences between the biological and mathematical communities, or biologists’ discomfort with mathematics. In reality, the blame lies with my inability to articulate the merits of my stance, or to provide strong evidence that I can offer any practical biological results. Even more depressing, I might be suffering from a case of interdisciplinitis and promoting a meritless idea while completely failing to connect to the central questions in biology. However, I must maintain my self-esteem, and even from my language here, you can tell that I am unwilling to fully entertain the latter possibility. Interestingly, this sort of bias can propagate from individual researchers into their theories.

One of the difficulties for biologists, economists, and other decision scientists has been coming to grips with observed irrationality in humans and other animals. Why wouldn’t there be a constant pressure toward more rational animals that maximize their fitness? Who is to blame for this irrational behavior? In line with the self-serving bias, it must be that crack in the sidewalk! Or maybe some other feature of the environment.

How teachers help us learn deterministic finite automata

Many graduate students, and even professors, have a strong aversion to teaching. This tends to produce awful, one-sided classes that students attend just to transcribe the instructor’s lecture notes. The trend is so bad that in some cases instructors take pride in their bad teaching, and at some institutions — or so I hear around the academic water-cooler — you might even get in trouble for being too good a teacher. Why are you spending so much effort preparing your courses, instead of working on research? And it does take a lot of effort to be an effective teacher: it takes skill to turn a lecture theatre into an interactive environment where information flows both ways. A good teacher has to be able to assess how the students are progressing, and be able to clarify misconceptions even when the students can’t identify those conceptions as misplaced. Last week I had an opportunity to exercise my teaching by lecturing in Prakash Panangaden’s COMP330 course.