Token vs type fitness and abstraction in evolutionary biology

There are only twenty-six letters in the English alphabet, and yet there are more than twenty-six letters in this sentence. How do we make sense of this?

Ever since I first started collaborating with David Basanta and Jacob Scott back in 2012/13, a certain tension about evolutionary games has been gnawing at me. A feeling that a couple of different concepts are being swept under the rug of a single name.[1] This feeling became stronger during my time at Moffitt, especially as I pushed for operationalizing evolutionary games. The measured games that I was imagining were simply not the same sort of thing as the games implemented in agent-based models. Finally, this past November, as we were actually measuring the games that cancer plays, a way to make the tension clear crystallized for me: the difference between reductive and effective games could be linked to two different conceptions of fitness.

This opened a new door for me: philosophers of biology have already done extensive conceptual analysis of different versions of fitness. Unfortunately, due to various time pressures, I could only peek through the keyhole before rushing out my first draft on the two conceptions of evolutionary games. In particular, I didn’t connect directly to the philosophy literature and just named the underlying views of fitness after the names I’ve been giving to the games: reductive fitness and effective fitness.

Now, after a third of a year busy teaching and revising other work, I finally had a chance to open that door and read some of the philosophy literature. This has provided me with a better vocabulary and a clearer categorization of fitness concepts. Instead of my reductive vs. effective fitness, the distinction I was looking for is the one between token fitness and type fitness. And in this post, I want to discuss that distinction. I will synthesize some of the existing work in a way that is relevant to separating reductive vs. effective games. In the process, I will highlight some points missing from the current debates. I suspect these points have been overlooked because most philosophers of biology focus on macroscopic organisms rather than the microscopic systems that motivated me.[2]

Say what you will of birds and ornithology, but I am finding reading philosophy of biology to be extremely useful for doing ‘actual’ biology. I hope that you will, too.

Read more of this post

Symmetry breaking and non-cell-autonomous growth rates in cancer

“You can’t step in the same river twice” might seem like an old aphorism of little value, but I think it is central to making sense of the sciences. This is especially clear if we rephrase it as: “you can’t do the same experiment twice”. After all, a replication experiment takes place at a different time, sometimes in a different place, maybe done by a different experimenter. Why should any of the countless rules that governed the initial experiment still hold for the replicate? But our methodology demands that we be able to repeat experiments. We achieve this by making a series of symmetry assumptions. For example: the universality or homogeneity of physical laws. We can see this in early variants of the principle of sufficient reason in Anaximander and Aristotle. It developed closer to its modern statement with Copernicus, Galileo, and Newton, who pushed the laws of physics outside the sublunary sphere and suggested that the planets follow the same laws as the apple. In fact, Alfred North Whitehead considered a belief in the trustworthy uniformity of physical laws to be the defining feature of western philosophy (and science) since Thales.

In this post, I want to go through some of the symmetries we assume and how to break them. And I want to discuss this at levels from grand cosmology to the petri dish. In the process, I’ll touch on the fundamental constants of physics, how men stress out mice, and how standard experimental practices in cancer biology assume a cell-autonomous process symmetry.

Read more of this post

Fusion and sex in protocells & the start of evolution

In 1864, five years after reading Darwin’s On the Origin of Species, Pyotr Kropotkin — the anarchist prince of mutual aid — was leading a geographic survey expedition aboard a dog-sleigh — a distinctly Siberian variant of the HMS Beagle. In the harsh Manchurian climate, Kropotkin did not see competition ‘red in tooth and claw’, but a flourishing of cooperation as animals banded together to survive their environment. From this, he built a theory of mutual aid as a driving factor of evolution. Among his countless observations, he noted that no matter how selfish an animal was, it still had to come together with others of its species, at least to reproduce. In this, he saw both sex and cooperation as primary evolutionary forces.

Now, Martin A. Nowak has taken up the challenge of putting cooperation as a central driver of evolution. With his colleagues, he has tracked the problem from myriad angles, and it is not surprising that recently he has turned to sex. In a paper released at the start of this month, Sam Sinai, Jason Olejarz, Iulia A. Neagu, & Nowak (2016) argue that sex is primary. We need sex just to kick start the evolution of a primordial cell.

In this post, I want to sketch Sinai et al.’s (2016) main argument, discuss prior work on the primacy of sex, a similar model by Wilf & Ewens, the puzzle over the emergence of higher levels of organization, and the difference between the protocell fusion studied by Sinai et al. (2016) and sex as it is normally understood. My goal is to introduce you, dear reader, to the fascinating new field that Sinai et al. (2016) are opening; to provide them with some feedback on their preprint; and to sketch some preliminary ideas for future extensions of their work.

Read more of this post

Eukaryotes without Mitochondria and Aristotle’s Ladder of Life

In 348/7 BC, fearing anti-Macedonian sentiment or disappointed with the control of Plato’s Academy passing to Speusippus, Aristotle left Athens for Asia Minor across the Aegean Sea. Based on his five years[1] studying the natural history of Lesbos, he wrote the pioneering work of zoology: The History of Animals. In it, he set out to catalog the what of biology before searching for the answers of why. He initiated a tradition of naturalists that continues to this day.

Aristotle classified his observations of the natural world into a hierarchical ladder of life: humans on top, above the other blooded animals, bloodless animals, and plants. Although we’ve excised Aristotle’s insistence on static species, this ladder remains for many. They consider species to be more complex than their ancestors, and see among species a hierarchy of complexity with humans — as always — on top. A common example of this is the rationality fetish that views Bayesian learning as a fixed point of evolution, or ranks species based on intelligence or levels-of-consciousness. This is then coupled with an insistence on progress, and gives them the what to be explained: the arc of evolution is long, but it bends towards complexity.

In the early months of TheEGG, Julian Xue turned to explaining the why behind the evolution of complexity with ideas like irreversible evolution as the steps up the ladder of life.[2] One of Julian’s strongest examples of such an irreversible step up has been the transition from prokaryotes to eukaryotes through the acquisition of membrane-bound organelles like mitochondria. But as an honest and dedicated scholar, Julian is always on the lookout for falsifications of his theories. This morning — with an optimistic “there goes my theory” — he shared the new Karnkowska et al. (2016) paper showing a surprising what to add to our natural history: a eukaryote without mitochondria. An apparent example of a eukaryote stepping down a rung in complexity by losing its membrane-bound ATP powerhouse.
Read more of this post

Choosing units of size for populations of cells

Recently, I have been interacting more and more closely with experiment. This has put me in the fortunate position of balancing the design and analysis of both theoretical and experimental models. It is tempting to think of theorists as people who come up with ideas to explain an existing body of facts, and of mathematical modelers as people who try to explain (or represent) an existing experiment. But in healthy collaboration, theory and experiment should walk hand in hand. If experiments pose our problems and our mathematical models are our tools, then my insistence on pairing tools and problems (instead of ‘picking the best tool for the problem’) means that we should be willing to deform both for better communication in the pair.

Evolutionary game theory — and many other mechanistic models in mathematical oncology and elsewhere — typically tracks population dynamics, and thus sets population size (or proportions within a population) as central variables. Most models think of the units of population as individual organisms; in this post, I’ll stick to the petri dish and focus on cells as the individual organisms. We then try to figure out properties of these individual cells and their interactions based on prior experiments or our biological intuitions. Experimentalists also often reason in terms of individual cells, making them seem like a natural communication tool. Unfortunately, experiments and measurements themselves are usually not about cells. They are either of properties that are only meaningful at the population level — like fitness — or indirect proxies for counts of individual cells — like PSA or intensity of fluorescence. This often makes counts of individual cells into an inferred theoretical quantity and not a direct observable. And if we are going to introduce an extra theoretical term then parsimony begs for a justification.
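To make concrete what ‘inferred’ means here, consider a minimal sketch (in Python, with entirely made-up calibration numbers) of how a cell count is typically backed out of a fluorescence reading rather than observed directly:

```python
import numpy as np

# Hypothetical calibration data: wells seeded with known cell numbers and
# their measured fluorescence intensities (arbitrary units). These values
# are invented for illustration only.
known_cells = np.array([1e4, 5e4, 1e5, 5e5, 1e6])
known_intensity = np.array([120.0, 580.0, 1150.0, 5900.0, 11800.0])

# Fit a simple linear calibration: intensity ~ slope * cells + offset.
slope, offset = np.polyfit(known_cells, known_intensity, deg=1)

def infer_cell_count(intensity):
    """Invert the calibration to turn a fluorescence reading into a 'count'.

    The result is an inference, not an observation: it inherits every
    assumption baked into the calibration (linearity, constant fluorescence
    per cell, no confluence or quenching effects, ...).
    """
    return (intensity - offset) / slope

print(infer_cell_count(3000.0))  # a 'cell count' that was never counted
```

Any count produced this way carries the calibration’s assumptions with it, which is exactly why treating it as a direct observable needs justification.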

But what is so special about the number of cells? In this post, I want to question the reasons to focus on individual cells (at the expense of other choices) as the basic atoms of our ontology.

Read more of this post

Lotka-Volterra, replicator dynamics, and stag hunting bacteria

Happy year of the monkey!

Last time in the Petri dish, I considered the replicator dynamics between type-A and type-B cells abstractly. In the comments, Arne Traulsen pointed me to Li et al. (2015):

We have attempted something similar in spirit with bacteria. Looking at frequencies alone, it looked like coordination. But taking into account growth led to different conclusions […] In that case, things were more subtle than anticipated…

So following their spirit, I will get more concrete in this post and replace type-A by Curvibacter sp. AEP13 and type-B by Duganella sp. C1.2 — two bacteria that help freshwater Hydra avoid fungal infection. And I will also show how to extend our replicator dynamics with growth and changing cell density.

Although I try to follow Arne’s work very closely, I had not read Li et al. (2015) before, so I scheduled it for a reading group this past Friday. I really enjoyed the experiments that they conducted, but I don’t agree with their interpretation that taking growth into account leads to a different conclusion. In this post, I will sketch how they measured their experimental system and then provide a replicator equation representation of the Lotka-Volterra model they use to interpret their results. From this, we’ll be able to conclude that C and D are playing the Stag Hunt — or coordination, or assurance, pick your favorite terminology — game.
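As a preview of that representation, here is a minimal sketch (with illustrative parameters, not the values measured by Li et al. (2015)) of a two-type competitive Lotka-Volterra model and the proportion dynamics it induces:

```python
import numpy as np
from scipy.integrate import odeint

# Illustrative competitive Lotka-Volterra parameters for two types C and D.
# These are placeholders, not the values measured by Li et al. (2015).
r_C, r_D = 1.0, 1.0          # intrinsic growth rates
K_C, K_D = 1.0, 1.0          # carrying capacities
a_CD, a_DC = 1.6, 1.4        # competition coefficients (> 1: bistable regime)

def lotka_volterra(N, t):
    N_C, N_D = N
    f_C = r_C * (1 - (N_C + a_CD * N_D) / K_C)   # per-capita growth of C
    f_D = r_D * (1 - (N_D + a_DC * N_C) / K_D)   # per-capita growth of D
    return [f_C * N_C, f_D * N_D]

# In replicator form, the proportion p = N_C / (N_C + N_D) obeys
#   dp/dt = p (1 - p) (f_C - f_D),
# with the changing total density entering as an environmental variable.

t = np.linspace(0, 100, 5000)
for p0 in [0.1, 0.3, 0.7, 0.9]:
    N = odeint(lotka_volterra, [0.1 * p0, 0.1 * (1 - p0)], t)
    p = N[:, 0] / N.sum(axis=1)
    print(f"initial proportion of C: {p0:.1f} -> final proportion: {p[-1]:.2f}")
# The final proportion depends on the initial one: the bistability that
# marks a coordination (stag hunt) game.
```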

Read more of this post

Measuring games in the Petri dish

For the next couple of months, Jeffrey Peacock is visiting Moffitt. He’s a 4th year medical student at the University of Central Florida with a background in microbiology and genetic engineering of bacteria and yeast. Together with Andriy Marusyk and Jacob Scott, he will move to human cells and run some in vitro experiments with non-small cell lung cancer — you can read more about this on Connecting the Dots. Robert Vander Velde is also in the process of designing some experiments of his own. Both Jeff and Robert are interested in evolutionary game theory, so this is a great opportunity for me to put my ideas on the operationalization of replicator dynamics into practice.

In this post, I want to outline the basic process for measuring a game from in vitro experiments. Games in the Petri dish. It won’t be as action-packed as Agar.io — that’s an actual MMO cells-in-Petri-dish game; play here — but hopefully it will be more grounded in reality. I will introduce the gain function, show how to measure it, and stress the importance of quantifying the error on this measurement. Since this is part of the theoretical preliminaries for my collaborations, we don’t have our own data to share yet, so I will provide an illustrative cartoon with data from Archetti et al. (2015). Finally, I will show what sort of data would rule out the theoretician’s favourite matrix games and discuss the ego-centric representation of two-strategy matrix games. The hope is that we can use this work to go from heuristic guesses at what sort of games microbes or cancer cells might play to actually measuring those games.
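As a rough preview of the gain-function logic, here is a minimal sketch for a two-strategy matrix game; the payoff entries below are illustrative stand-ins, not measured values:

```python
import numpy as np

# Illustrative 2x2 payoff matrix for strategies A and B (not measured values):
# rows are the focal strategy, columns the interaction partner.
#        vs A   vs B
G = np.array([[1.0, 3.0],    # payoff to A
              [2.0, 2.5]])   # payoff to B

def gain(p):
    """Gain of A over B at proportion p of A.

    For a matrix game, fitnesses are averages over interaction partners:
      w_A(p) = G[0,0]*p + G[0,1]*(1-p),   w_B(p) = G[1,0]*p + G[1,1]*(1-p),
    so the gain function w_A - w_B is linear in p.
    """
    w_A = G[0, 0] * p + G[0, 1] * (1 - p)
    w_B = G[1, 0] * p + G[1, 1] * (1 - p)
    return w_A - w_B

for p in np.linspace(0, 1, 5):
    print(f"p = {p:.2f}: gain(p) = {gain(p):+.2f}")
```

Because a matrix game forces the gain function to be linear in the proportion of one strategy, a measured gain function with clear curvature (as in public-goods-style interactions) is the kind of data that rules matrix games out.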
Read more of this post

Cooperation, enzymes, and the origin of life

Enzymes play an essential role in life. Without them, the translation of genetic material into proteins — the building blocks of all phenotypic traits — would be impossible. That fact, however, poses a problem for anyone trying to understand how life appeared in the hot, chaotic, bustling molecular “soup” from which it sparked into existence some 4 billion years ago.

Throw a handful of self-replicating organic molecules into a glass of warm water, then shake it well. In this thoroughly mixed medium, molecules that help other molecules replicate faster — i.e. enzymes or analogues thereof — do so at their own expense and, by virtue of natural selection, must sooner or later go extinct. But now suppose that little pockets or “vesicles” form inside the glass by some abiotic process, encapsulating the molecules into isolated groups. Suppose further that, once these vesicles reach a certain size, they can split and give birth to “children” vesicles — again, by some purely physical, abiotic process. What you now have is a recipe for group selection potentially favorable to the persistence of catalytic molecules. While less fit individually, catalysts favor the group to which they belong.

This gives rise to a conflict between (1) within-group selection against “altruistic” traits and (2) between-group selection for such traits. In other words, enzymes and abiotic vesicles give us an evolutionary game theory favourite — a social dilemma.
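To see the two opposing levels of selection in action, here is a toy simulation sketch with made-up parameters (not taken from any particular model in the literature): catalysts lose ground within every vesicle, while catalyst-rich vesicles out-reproduce the others:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy trait-group model with made-up parameters: each vesicle holds
# `vesicle_size` replicators; catalysts pay a replication cost c but
# contribute a benefit b shared by everyone in their vesicle.
n_vesicles, vesicle_size = 200, 20
b, c = 0.3, 0.1
generations = 100

# fraction of catalysts in each vesicle, seeded around 50%
x = rng.binomial(vesicle_size, 0.5, n_vesicles) / vesicle_size

for _ in range(generations):
    # Within-vesicle selection: free-riders out-replicate catalysts by c,
    # so the catalyst fraction x falls inside every single vesicle.
    w_cat = 1 + b * x - c
    w_free = 1 + b * x
    w_bar = x * w_cat + (1 - x) * w_free          # mean fitness of each vesicle
    x = x * w_cat / w_bar

    # Between-vesicle selection: vesicles with more catalysts have higher
    # mean fitness, so they split more often and leave more offspring vesicles.
    parents = rng.choice(n_vesicles, size=n_vesicles, p=w_bar / w_bar.sum())
    # offspring vesicles resample their contents from the parent (drift)
    x = rng.binomial(vesicle_size, x[parents], n_vesicles) / vesicle_size

print("mean catalyst fraction after selection:", x.mean())
```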
Read more of this post

Mathematical models of running cockroaches and scale-invariance in cells

I often think of myself as an applied mathematician — I even spent a year of grad school in a math department (although it was “Combinatorics and Optimization” not “Applied Math”) — but when the giant systems of ODEs or PDEs come a-knocking, I run and hide. I confine myself to abstract or heuristic models, and for the questions I tend to ask these are the models people often find interesting. These models are built to be as simple as possible, and often are used to prove a general statement (if it is an abstraction) that will hold for any more detailed model, or to serve as an intuition pump (if it is a heuristic). If there are more than a handful of coupled equations or if a simple symmetry (or Mathematica) doesn’t solve them, then I call it quits or simplify.

However, there is a third type of model — an insilication. These mathematical or computational models are so realistic that their parameters can be set directly by experimental observations (not merely optimized based on model output) and the outputs they generate can be directly tested against experiment or used to generate quantitative predictions. These are the domain of mathematical engineers and applied mathematicians, and some — usually experimentalists, but sometimes even computer scientists — consider these to be the only real scientific models. As a prototypical example of an insilication, think of the folks at NASA numerically solving the gravitational model of our solar system to figure out how to aim the next mission to Mars. These models often have dozens or hundreds (or sometimes more!) coupled equations, where every part is known to perform to an extreme level of accuracy.
Read more of this post

Algorithmic view of historicity and separation of scales in biology

A Science publication is one of the best ways to launch your career, especially if it is based on your undergraduate work, part of which you carried out with makeshift equipment in your dorm! That is the story of Thomas M.S. Chang, who in 1956 started experiments (partially carried out in his residence room in McGill’s Douglas Hall) that led to the creation of the first artificial cell (Chang, 1964). This was — in the words of the 1989 New Scientist — an “elegantly simple and intellectually ambitious” idea that “has grown into a dynamic field of biomedical research and development.” A field that promises to connect biology and computer science by physically realizing John von Neumann’s dream of a self-replicating machine.

Read more of this post