Cataloging a year of blogging: cancer and biology

Welcome to 11111011111.

Another year has come to an end, and it is time to embrace tradition and reflect on the past twelve months. In fact, I will try to do one better and start a new tradition: cataloging a year of blogging.

Last year, I split up the 83 content-heavy posts of 2013 into nine categories in three themes: established applications of evolutionary game theory (ethnocentrism and the public good; and mathematical oncology), expanding from behavior to society and mind (representations and rationality for replicators; feedback between finance & economics and ecology & evolution; and learning, intelligence, and the social brain), and envisioning the algorithmic world (proof, automata, and physics; natural algorithms and biology; fitness landscapes and evolutionary equilibria; and metamodeling and the (algorithmic) philosophy of science). In 2014 there was a sharp decrease in the number of posts, with only 44 articles of new content (plus the 3 posts cataloging 2013, so 47 total) — this was due to a nearly four-month blogging silence in the middle of the year — but a quarter increase in readership, with 151,493 views compared to 2013’s 119,935. This time, I will need only two posts to survey the past year: this post for the practical and the next for the philosophical.

For me, the year was split between three cities: the usual suspects of Montreal and New York, and then in October I moved down to Tampa, Florida to work with David Basanta and Jacob Scott in the Integrated Mathematical Oncology department of the H. Lee Moffitt Cancer Center and Research Institute. A winter without snow is strange, but wearing shorts in December makes up for it; plus the sunsets over the Gulf of Mexico are absolutely beautiful. Unsurprisingly, this move has meant that the practical aspects of my focus have shifted almost completely to biology: cancer, in particular.

This post is about the biology and oncology articles that made up about half of last year’s content. Given the autobiographical turn of this post, it will be (loosely) structured around three workshops that I attended in 2014, and the online conversations and collaborations that TheEGG hosted.
Read more of this post

Evolution is a special kind of (machine) learning

Theoretical computer science has a long history of peering through the algorithmic lens at the brain, mind, and learning. In fact, I would argue that the field was born from the epistemological question of what our minds can learn of mathematical truth through formal proofs. The perspective became more scientific with McCulloch & Pitts’ (1943) introduction of finite state machines as models of neural networks, and with Turing’s B-type neural networks, paving the way for our modern treatment of artificial intelligence and machine learning. The connections to biology, unfortunately, are less pronounced. Turing ventured into the field with his important work on morphogenesis, and I believe that he could have contributed to the study of evolution but did not get the chance. This work was followed up with the use of computers in biology, and with heuristic ideas from evolution entering computer science in the form of genetic algorithms. However, these areas remained non-mathematical, with very few provable statements or non-heuristic reasoning. The task of making strong connections between theoretical computer science and evolutionary biology has been left to our generation.

Although the militia of cstheorists reflecting on biology is small, Leslie Valiant is their standard-bearer for the steady march of theoretical computer science into both learning and evolution. Due in part to his efforts, artificial intelligence and machine learning are such well developed fields that their theory branch has its own name and conferences: computational learning theory (CoLT). Much of CoLT rests on Valiant’s (1984) introduction of probably-approximately correct (PAC) learning which — in spite of its name — is one of the most formal and careful ways to understand learnability. The importance of this model cannot be overstated, and it resulted in Valiant receiving (among many other distinctions) the 2010 Turing award (i.e. the Nobel prize of computer science). Most importantly, his attention was not confined to pure cstheory: he took his algorithmic insights into biology, specifically computational neuroscience (see Valiant (1994; 2006) for examples), to understand human thought and learning.

Like any good thinker reflecting on biology, Valiant understands the importance of Dobzhansky’s observation that “nothing in biology makes sense except in the light of evolution”. Even for the algorithmic lens it helps to have this illumination. Any understanding of learning mechanisms like the brain is incomplete without an examination of the evolutionary dynamics that shaped these organs. In the mid-2000s, Valiant embarked on the quest of formalizing some of the insights cstheory can offer evolution, culminating in his PAC-based model of evolvability (Valiant, 2009). Although this paper is one of the most frequently cited on TheEGG, I’ve waited until today to give it a dedicated post.
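Since the evolvability paper comes up so often here, a concrete toy may help. Below is a minimal sketch (my own construction, not code from Valiant (2009)) of the framework’s canonical positive example: evolving monotone conjunctions toward an ideal function under the uniform distribution. The particular target, neighbourhood, and sample sizes are illustrative, and the tolerances, neutral mutations, and polynomial bounds of the real framework are omitted.

```python
import random

# Toy of Valiant-style evolvability: the "ideal function" is a monotone
# conjunction, and the evolving representation is judged only by aggregate
# empirical performance -- never by individual labelled examples.

n = 10                        # number of Boolean variables
target = {1, 3, 7}            # indices in the ideal conjunction (illustrative)
random.seed(1)

def accepts(conj, x):
    """+1 if the conjunction holds on input x, -1 otherwise."""
    return 1 if all(x[i] for i in conj) else -1

def performance(conj, samples=2000):
    """Empirical correlation with the ideal function over uniform inputs."""
    total = 0
    for _ in range(samples):
        x = [random.random() < 0.5 for _ in range(n)]
        total += accepts(conj, x) * accepts(target, x)
    return total / samples

hypothesis = set(range(n))    # start from the conjunction of all variables
for step in range(200):
    # mutation neighbourhood: add or drop a single variable
    neighbours = [hypothesis - {i} for i in hypothesis]
    neighbours += [hypothesis | {i} for i in range(n) if i not in hypothesis]
    mutant = random.choice(neighbours)
    if performance(mutant) > performance(hypothesis):  # selection on fitness
        hypothesis = mutant

print(sorted(hypothesis))     # typically recovers [1, 3, 7]
```

The restriction doing the philosophical work is that a mutant is kept or discarded based only on that one noisy fitness number, which is why Valiant can situate evolvability as a constrained special case of (statistical-query) learning.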
Read more of this post

Misleading models: “How learning can guide evolution”

I often see examples of mathematicians, physicists, or computer scientists transitioning into other scientific disciplines and going on to great success. However, the converse is rare, and the only two examples I know are Edward Witten’s transition from an undergrad in history and linguistics to a ground-breaking career in theoretical physics, and Geoffrey Hinton’s transition from an undergrad in experimental psychology to a trend-setting career in artificial intelligence. Although in my mind Hinton is associated with neural networks and deep learning, that isn’t his only contribution in fields close to my heart. As is becoming pleasantly common on TheEGG, this is a connection I would have missed if it weren’t for Graham Jones’ insightful comment and subsequent email discussion in early October.

The reason I raise the topic four months later is that the connection continues our exploration of learning and evolution. In particular, Hinton & Nowlan (1987) were the first to show the Baldwin effect in action. They showed how learning can speed up evolution in a model that combined a genetic algorithm with learning by trial and error. Although the model was influential, I fear that it is misleading and that the strength of its results is often misinterpreted. As such, I wanted to explore these shortcomings and spell out what would be a convincing demonstration of a qualitative increase in adaptability due to learning; a sketch of their model follows.
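For readers who want to poke at the model before reading the critique, here is a minimal re-implementation sketch. The parameters follow the paper (20 loci, 1000 individuals, up to 1000 learning trials, alleles 1/0/? at initial frequencies 0.25/0.25/0.5, fitness 1 + 19n/1000 for n trials remaining); sampling the first successful guess from a geometric distribution is my shortcut, and, as in the original, a run can stall if no lucky learner appears early.

```python
import math
import random

# Sketch of Hinton & Nowlan (1987): a genetic algorithm whose '?' loci are set
# by lifetime learning. The single good phenotype is all-1s; a '0' allele makes
# it unreachable, while '?' alleles must be guessed within 1000 random trials.

L, N, TRIALS, GENERATIONS = 20, 1000, 1000, 50
random.seed(2)

def fitness(genome):
    if '0' in genome:
        return 1.0                            # a wrong fixed allele: hopeless
    p = 0.5 ** genome.count('?')              # chance one guess sets every '?'
    if p == 1.0:
        trial = 0                             # nothing left to learn
    else:
        # index of the first successful learning trial (geometric inversion)
        trial = int(math.log(1.0 - random.random()) / math.log(1.0 - p))
    if trial >= TRIALS:
        return 1.0                            # never found it this lifetime
    return 1.0 + 19.0 * (TRIALS - trial - 1) / TRIALS

def child(mom, dad):
    cut = random.randrange(1, L)              # single-point crossover
    return mom[:cut] + dad[cut:]

population = [[random.choice('10??') for _ in range(L)] for _ in range(N)]
for gen in range(GENERATIONS):
    weights = [fitness(g) for g in population]
    parents = random.choices(population, weights=weights, k=2 * N)
    population = [child(parents[2 * i], parents[2 * i + 1]) for i in range(N)]
    pool = [allele for genome in population for allele in genome]
    if gen % 10 == 0:                         # frequencies of 1, 0, and ?
        print(gen, *(round(pool.count(a) / (N * L), 2) for a in '10?'))
```

The signature result to look for is that the 0 allele is driven out and the 1 allele spreads, while a residual fraction of ? alleles lingers: once most loci are fixed correctly, the pressure to genetically assimilate the last few is vanishingly small. Whether this counts as learning qualitatively guiding evolution is exactly what the rest of the post questions.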
Read more of this post

Phenotypic plasticity, learning, and evolution

Learning and evolution are eerily similar, yet different.

This tension fuels my interest in understanding how they interact. In the context of social learning, we can think of learning and evolution as different dynamics. For individual learning, however, it is harder to find a difference. On the one hand, this has led learning experts like Valiant (2009) to suggest that evolution is a subset of machine learning. On the other hand, due to its behaviorist roots, a lot of evolutionary thought simply ignored learning or did not treat it explicitly. To find interesting interactions between the two concepts we have to turn to ideas from before the modern synthesis — the Simpson-Baldwin effect (Baldwin 1896, 1902; Simpson, 1953):
Read more of this post

Baldwin effect and overcoming the rationality fetish

G.G. Simpson and J.M. Baldwin

As I’ve mentioned previously, one of the amazing features of the internet is that you can take almost any idea and find a community obsessed with it. Thus, it isn’t surprising that there is a prominent subculture that fetishizes rationality and Bayesian learning. They tend to accumulate around forums with promising titles like OvercomingBias and Less Wrong. Since these communities like to stay abreast of science, they often offer evolutionary justifications for why humans might be Bayesian learners and claim a “perfect Bayesian reasoner as a fixed point of Darwinian evolution”. This lets them side-step observed non-Bayesian behavior in humans by saying that we are evolving towards, but haven’t yet reached, this (potentially unreachable, but approximable) fixed point. Unfortunately, even the fixed-point argument does not survive critiques like the Simpson-Baldwin effect.

Introduced in 1896 by psychologist J.M. Baldwin, then named and reconciled with the modern synthesis by leading paleontologist G.G. Simpson (1953), the Simpson-Baldwin effect posits that “[c]haracters individually acquired by members of a group of organisms may eventually, under the influence of selection, be reenforced or replaced by similar hereditary characters” (Simpson, 1953). More explicitly, it consists of a three-step process (some of which can occur in parallel or partially so):

  1. Organisms adapt to the environment individually.
  2. Genetic factors produce hereditary characteristics similar to the ones made available by individual adaptation.
  3. These hereditary traits are favoured by natural selection and spread in the population.

The overall result is that an originally individual, non-hereditary adaptation becomes hereditary. For Baldwin (1896, 1902) and other early proponents (Morgan 1896; Osborn 1896, 1897) this was a way to reconcile Darwinian and strong Lamarckian evolution. With the latter model of evolution exorcised from the modern synthesis, Simpson’s restatement became a paradox: why do we observe the costly mechanism and associated errors of individual learning, if learning does not enhance individual fitness at equilibrium and will be replaced by simpler non-adaptive strategies? This encompasses more specific cases like Rogers’ paradox (Boyd & Richerson, 1985; Rogers, 1988) of social learning.
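To see why the paradox bites, here is a deliberately crude toy (my own construction, not taken from any of the papers cited above): in a static environment, once step 2 supplies a hereditary version of the learned trait, the selection of step 3 eliminates not just maladapted types but also the learners themselves, because they pay a cost that the innate type avoids.

```python
# Replicator-style toy of the Simpson-Baldwin paradox in a static environment.
# Types: costly learners (who always reach the adapted phenotype), innates
# carrying the hereditary trait, and maladapted innates. All parameter values
# are illustrative.

b, c, mu = 1.0, 0.2, 1e-3     # benefit of adaptation, learning cost, and the
                              # rate at which learners mutate to innate-adapted

freq = [0.99, 0.00, 0.01]     # learners, innate-adapted, innate-maladapted
fit = [b - c, b, 0.0]

for gen in range(1500):
    mean = sum(f * w for f, w in zip(freq, fit))
    freq = [f * w / mean for f, w in zip(freq, fit)]       # step 3: selection
    assimilated = mu * freq[0]                             # step 2: mutation
    freq = [freq[0] - assimilated, freq[1] + assimilated, freq[2]]
    if gen % 300 == 0:
        print(gen, [round(f, 3) for f in freq])
# learners go extinct: at equilibrium, learning buys no benefit, only its cost
```

Learning survives only in richer variants where, for instance, the environment changes faster than genetic assimilation can track it; that is the setting in which more specific results like Rogers’ paradox live.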
Read more of this post