Game landscapes: from fitness scalars to fitness functions

My biology writing focuses heavily on fitness landscapes and evolutionary games. On the surface, these might seem fundamentally different from each other, with their only common feature being that they are both about evolution. But there are many ways that we can interconnect these two approaches.

The most popular connection is to view these models as two different extremes in terms of time-scale.

When we are looking at evolution on short time-scales, we are primarily interested in which of a limited number of extant variants will take over the population, or how they'll co-exist. We can take the effort to model the interactions of the different types with each other, and we summarize these interactions as games.

But when we zoom out to longer and longer timescales, the importance of these short-term dynamics diminishes. We start to worry instead about how new types arise and take over the population. At this timescale, the details of the type interactions are not as important, and we can focus on just the first-order summary: fitness. What starts to matter is how the fitness of nearby mutants compares, so that we can reason about long-term evolutionary trajectories. We summarize this as fitness landscapes.

From this perspective, fitness landscapes are the more foundational concept. Games are the details that only matter in the short term.

But this isn’t the only perspective we can take. In my recent contribution with Peter Jeavons to Russell Rockne’s 2019 Mathematical Oncology Roadmap, I sketched a different perspective. In this post, I want to develop that alternative and discuss how ‘game landscapes’ generalize the traditional view of fitness landscapes. In this way, the post can be viewed as my third entry on progressively more general views of fitness landscapes. The previous two were on generalizing the NK-model, and on replacing scalar fitness by a probability distribution.

In this post, I will take this exploration of fitness landscapes a little further and finally connect to games. Nothing profound will be said, but maybe it will give another look at a well-known object.



Colour, psychophysics, and the scientific vs. manifest image of reality

Recently on TheEGG, I’ve been writing a lot about the differences between effective (or phenomenological) and reductive theories. Usually, I’ve confined this writing to evolutionary biology; especially the tension between effective and reductive theories in the biology of microscopic systems. For why this matters to evolutionary game theory, see Kaznatcheev (2017, 2018).

But I don’t think that microscopic systems are the funnest place to see this interplay. The funnest place to see this is in psychology.

In the context of psychology, you can add an extra philosophical twist. Instead of differentiating between reductive and effective theories, we can draw a more drastic distinction: between the scientific and manifest image of reality.

In this post, I want to briefly talk about how our modern theories of colour vision developed. This is a nice example of good effective theory leading before any reductive basis. And with that background in mind, I want to ask the question: are colours real? Maybe this will let me connect to some of my old work on interface theories of perception (see Kaznatcheev, Montrey, and Shultz, 2014).


Constant-sum games as a way from non-cell autonomous processes to constant tumour growth rate

A lot of thinking in cancer biology seems to be focused on cell-autonomous processes. This is the (overly) reductive view that key properties of cells, such as fitness, are intrinsic to the cells themselves and not a function of their interaction with other cells in the tumour. As far as starting points go, this is reasonable. But in many cases, we can start to go beyond this cell-autonomous starting point and consider non-cell-autonomous processes. This is when the key properties of a cell are not a function of just that cell but also its interaction partners. As an evolutionary game theorist, I am clearly partial to this view.

Recently, I was reading yet another preprint that has observed non-cell autonomous fitness in tumours. In this case, Johnson et al. (2019) spotted the Allee effect in the growth kinetics of cancer cells even at extremely low densities (seeding in vitro at <200 cells in a 1 mm^3 well). This is an interesting paper, and although not explicitly game-theoretic in its approach, I think it is worth reading for evolutionary game theorists.

Johnson et al.'s (2019) approach is not explicitly game-theoretic because they consider their in vitro populations as a monomorphic clonal line, and thus don't model interactions between types. Instead, they attribute non-cell autonomous processes to density dependence of the single type on itself. In this setting, they reasonably define the cell-autonomous null-model as constant exponential growth, i.e. \dot{N}_T = w_T N_T for some constant fitness w_T and total tumour size N_T.

It might also be tempting to use the same model to capture cell-autonomous growth in game-theoretic models. But this would be mistaken. For this is only effectively cell-autonomous at the level of the whole tumour, but could hide non-cell-autonomous fitness at the level of the different types that make up the tumour. This apparent cell-autonomous total growth will happen whenever the type interactions are described by constant-sum games.
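To make this correspondence concrete, here is a small numerical sketch with a toy payoff matrix of my own choosing (not from any specific paper). In a matrix game where type i has fitness w_i = (Ax)_i at population composition x, a constant-sum game with A_{ij} + A_{ji} = c pins the population-average fitness x^T A x, and hence the growth rate of the whole tumour, at c/2 for every composition, even though each type's fitness still varies with x.

```python
import numpy as np

# Hypothetical 2x2 constant-sum game: A[i][j] + A[j][i] == c for all i, j.
c = 4.0
A = np.array([[2.0, 3.0],
              [1.0, 2.0]])  # 2+2 = 4 and 3+1 = 4: every matched pair sums to c

def mean_fitness(x, A):
    """Population-average fitness x^T A x, given matrix-game fitness w_i = (A x)_i."""
    return x @ A @ x

# The average fitness, and so the whole-tumour growth rate dN_T/dt = (x^T A x) N_T,
# equals c/2 at every composition x, even though each type's fitness (A x)_i
# changes with x. The tumour as a whole looks cell-autonomous (pure exponential
# growth) while the types underneath interact non-cell-autonomously.
for p in [0.1, 0.5, 0.9]:
    x = np.array([p, 1 - p])
    assert abs(mean_fitness(x, A) - c / 2) < 1e-12
```

The same algebra works for any number of types: x^T A x = (1/2) sum_{ij} x_i x_j (A_{ij} + A_{ji}) = c/2 whenever the pairwise payoffs sum to c.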

Given the importance of constant-sum games (more famously known as zero-sum games) to the classical game theory literature, I thought that I would write a quick introductory post about this correspondence between non-cell autonomous constant-sum games and effectively cell-autonomous growth at the level of the whole tumour.


Quick introduction: the algorithmic lens

Computers are a ubiquitous tool in modern research. We use them for everything from running simulation experiments and controlling physical experiments to analyzing and visualizing data. For almost any field ‘X’ there is probably a subfield of ‘computational X’ that uses and refines these computational tools to further research in X. This is very important work and I think it should be an integral part of all modern research.

But this is not the algorithmic lens.

In this post, I will try to give a very brief description (or maybe just a set of pointers) for the algorithmic lens, and for what we should imagine when we see an ‘algorithmic X’ subfield of some field X.


From perpetual motion machines to the Entscheidungsproblem

There seems to be a tendency to use the newest technology of the day as a metaphor for making sense of our hardest scientific questions. These metaphors are often vague and imprecise. They tend to oversimplify the scientific question and also misrepresent the technology. This isn’t useful.

But the pull of this metaphor also tends to transform the technical disciplines that analyze our newest tech into fundamental disciplines that analyze our universe. This was the case for many aspects of physics, and I think it is currently happening with aspects of theoretical computer science. This is very useful.

So, let’s go back in time to the birth of modern machines. To the water wheel and the steam engine.

I will briefly sketch how the science of steam engines developed and how it dealt with perpetual motion machines. From here, we can jump to the analytic engine and the modern computer. I’ll suggest that the development of computer science has followed a similar path — with the Entscheidungsproblem and its variants serving as our perpetual motion machine.

The science of steam engines successfully universalized itself into thermodynamics and statistical mechanics. These are seen as universal disciplines that are used to inform our understanding across the sciences. Similarly, I think that we need to universalize theoretical computer science and make its techniques more common throughout the sciences.


Fitness distributions versus fitness as a summary statistic: algorithmic Darwinism and supply-driven evolution

For simplicity, especially in the fitness landscape literature, fitness is often treated as a scalar — usually a real number. If our fitness landscape is on genotypes then each genotype has an associated scalar value of fitness. If our fitness landscape is on phenotypes then each phenotype has an associated scalar value of fitness.

But this is a little strange. After all, two organisms with the same genotype or phenotype don’t necessarily have the same number of offspring or other life outcomes. As such, we’re usually meant to interpret the value of fitness as the mean of some random variable like number of children. But is the mean the right summary statistic to use? And if it is then which mean: arithmetic or geometric or some other?
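The choice of mean is not an idle question. Here is a toy example of my own (not from the post): because long-run growth is multiplicative across generations, the typical lineage's fate tracks the geometric mean of fitness, which can disagree sharply with the arithmetic mean.

```python
import math

# Toy lineage that doubles (w = 2) or halves (w = 0.5) with equal probability.
# Arithmetic mean fitness is 1.25 > 1, suggesting growth; but the geometric
# mean is exactly 1, so the typical lineage does not grow over long times.
outcomes = [2.0, 0.5]
arith = sum(outcomes) / len(outcomes)
geom = math.exp(sum(math.log(w) for w in outcomes) / len(outcomes))

assert arith == 1.25
assert abs(geom - 1.0) < 1e-12
```

This gap between the two means is exactly why summarizing a fitness distribution by a single scalar can mislead: which scalar you pick already encodes an assumption about the dynamics.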

One way around this is to simply not use a summary statistic, and instead treat fitness as a random variable with a corresponding distribution. For many developmental biologists, this would still be a simplification since it ignores many other aspects of life-histories — especially related to reproductive timing. But it is certainly an interesting starting point. And one that I don’t see pursued enough in the fitness landscape literature.

The downside is that it makes an already pretty vague and unwieldy model — i.e. the fitness landscape — even less precise and even more unwieldy. As such, we should pursue this generalization only if it brings us something concrete and useful. In this post I want to discuss two aspects of this: better integration of evolution with computational learning theory, and thinking about supply-driven evolution (i.e. arrival of the fittest). In the process, I’ll be drawing heavily on the thoughts of Leslie Valiant and Julian Z. Xue.


Quick introduction: Generalizing the NK-model of fitness landscapes

As regular readers of TheEGG know, I’ve been interested in fitness landscapes for many years. At their most basic, a fitness landscape is an almost unworkably vague idea: it is just a mapping from some description of organisms (usually a string corresponding to a genotype or phenotype) to fitness, alongside some notion of locality — i.e. some descriptions being closer to each other than to others. Usually, fitness landscapes are studied over combinatorially large genotypic spaces on many loci, with locality coming from something like point mutations at each locus. These spaces are exponentially large in the number of loci. As such, no matter how rapidly next-generation sequencing and fitness assays expand, we will not be able to treat a fitness landscape as simply an array of numbers and measure each fitness, at least not for any moderate or larger number of genes.

The space is just too big.

As such, we can’t consider an arbitrary mapping from genotypes to fitness. Instead, we need to consider compact representations.

Ever since Julian Z. Xue first introduced me to it, my favorite compact representation has probably been the NK-model of fitness landscapes. In this post, I will rehearse the definition of what I’d call the classic NK-model. But I’ll then consider how the model might have been defined if it had originally been proposed by a mathematician or computer scientist. I’ll call this the generalized NK-model and argue that it isn’t only mathematically more natural but also biologically more sensible.
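For readers who haven't met the NK-model before, here is a minimal sketch of the classic version (my own implementation; the names and the cyclic choice of neighbours are mine, though cyclic adjacency is one common convention for picking the K interacting loci). The fitness of a binary genotype of length N is the average of N random component functions, where the i-th component depends on locus i and K other loci.

```python
import random

def make_nk_landscape(N, K, rng):
    """Return a fitness function for a classic NK-landscape on {0,1}^N.

    Each locus i gets a random lookup table over the 2^(K+1) states of its
    neighbourhood (locus i plus the next K loci, cyclically); genotype fitness
    is the average of the N table values.
    """
    tables = [{s: rng.random() for s in range(2 ** (K + 1))} for _ in range(N)]

    def fitness(genotype):
        total = 0.0
        for i in range(N):
            neighbourhood = [genotype[(i + j) % N] for j in range(K + 1)]
            state = int(''.join(map(str, neighbourhood)), 2)
            total += tables[i][state]
        return total / N

    return fitness

rng = random.Random(0)
f = make_nk_landscape(N=5, K=2, rng=rng)
assert 0.0 <= f((0, 1, 0, 1, 1)) < 1.0  # averages of draws from [0, 1)
```

Note how the tables give a compact representation: storing them takes N * 2^(K+1) numbers rather than the 2^N needed for an arbitrary genotype-to-fitness mapping.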

Quick introduction: Evolutionary game assay in Python

It’s been a while since I’ve shared or discussed code on TheEGG. So to avoid always being too vague and theoretical, I want to use this post to explain how one would write some Python code to measure evolutionary games. This will be an annotated sketch of the game assay from our recent work on measuring evolutionary games in non-small cell lung cancer (Kaznatcheev et al., 2019).

The motivation for this post came about a month ago when Nathan Farrokhian was asking for some advice on how to repeat our game assay with a new experimental system. He has since done so (I think) by measuring the game between Gefitinib-sensitive and Gefitinib-resistant cell types. And I thought it would make a nice post in the quick introductions series.

Of course, the details of the system don’t matter. As long as you have an array of growth rates (call them yR and yG with corresponding errors yR_e and yG_e) and initial proportions of cell types (call them xR and xG) then you could repeat the assay. To see how to get to this array from more primitive measurements, see my old post on population dynamics from time-lapse microscopy. It also has Python code for your enjoyment.

In this post, I’ll go through the two final steps of the game assay. First, I’ll show how to fit and visualize fitness functions (Figure 3 in Kaznatcheev et al., 2019). Second, I’ll transform those fitness functions into game points and plot (Figure 4b in Kaznatcheev et al., 2019). I’ll save discussions of the non-linear game assay (see Appendix F in Kaznatcheev et al., 2019) for a future post.
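Those two steps can be sketched under toy data as follows. The variable names yR, yG, and xR follow the post; the numbers are invented for illustration, and I use a plain least-squares fit rather than the error-weighted fit (with yR_e, yG_e) from Kaznatcheev et al. (2019).

```python
import numpy as np

# Toy measurements: initial proportion of resistant cells and the observed
# growth rates of each type in that mixture (invented numbers, not real data).
xR = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
yR = np.array([0.8, 0.7, 0.6, 0.5, 0.4])
yG = np.array([0.3, 0.4, 0.5, 0.6, 0.7])

# Step 1: fit each type's fitness as a linear function of initial proportion.
mR, bR = np.polyfit(xR, yR, 1)
mG, bG = np.polyfit(xR, yG, 1)

# Step 2: read the game point off the fitted lines' endpoints. Entry [i][j]
# is the fitted fitness of type i in a monoculture of type j (columns: R, G).
game = np.array([[mR + bR, bR],
                 [mG + bG, bG]])
```

With this toy data the fitted lines are exactly yR(x) = -0.5x + 0.85 and yG(x) = 0.5x + 0.25, so the game point is [[0.35, 0.85], [0.75, 0.25]]. On real data you would propagate the measurement errors through the fit, as in the paper's appendix.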

Abstracting evolutionary games in cancer

As you can tell from browsing the mathematical oncology posts on TheEGG, somatic evolution is now recognized as a central force in the initiation, progression, treatment, and management of cancer. This has opened a new front in the proverbial war on cancer: focusing on the ecology and evolutionary biology of cancer. On this new front, we are starting to deploy new kinds of mathematical machinery like fitness landscapes and evolutionary games.

Recently, together with Peter Jeavons, I wrote a couple of thousand words on this new machinery for Russell Rockne’s upcoming mathematical oncology roadmap. Our central argument being — to continue the war metaphor — that with new machinery, we need new tactics.

Biologists often aim for reductive explanations, and mathematical modelers have tended to encourage this tactic by searching for mechanistic models. This is important work. But we also need to consider other tactics. Most notably, we need to look at the role that abstraction — both theoretical and empirical abstraction — can play in modeling and thinking about cancer.

The easiest way to share my vision for how we should approach this new tactic would be to throw a preprint up on BioRxiv or to wait for Rockne’s road map to eventually see print. Unfortunately, BioRxiv has a policy against views-like articles — as I was surprised to discover. And I am too impatient to wait for the eventual giant roadmap article.

Hence, I want to share some central parts in this blog post. This is basically an edited and slightly more focused version of our roadmap. Since, so far, game theory models have had more direct impact in oncology than fitness landscapes, I’ve focused this post exclusively on games.

Supply and demand as driving forces behind biological evolution

Recently I was revisiting Xue et al. (2016) and Julian Xue’s thought on supply-driven evolution more generally. I’ve been fascinated by this work since Julian first told me about it. But only now did I realize the economic analogy that Julian is making. So I want to go through this Mutants as Economic Goods metaphor in a bit of detail. A sort of long-delayed follow up to my post on evolution as a risk-averse investor (and another among many links between evolution and economics).

Let us start by viewing the evolving population as a market — focusing on the genetic variation in the population, in particular. From this view, each variant or mutant trait is a good. Natural selection is the demand. It prefers certain goods over others and ‘pays more’ for them in the currency of fitness. Mutation and the genotype-phenotype map that translates individual genetic changes into selected traits is the supply. Both demand and supply matter to the evolutionary economy. But as a field, we’ve put too much emphasis on the demand — survival of the fittest — and not enough emphasis on the supply — arrival of the fittest. This accusation of too much emphasis on demand has usually been raised against the adaptationist program.

The easiest justification for the demand focus of the adaptationist program has been one of model simplicity — similar to the complete market models in economics. If we assume isotropic mutations — i.e. that a trait is equally likely to mutate in any direction on the fitness landscape — then surely mutation isn’t an important force in evolution. As long as the right genetic variance is available, nature will be able to select it, and we can ignore further properties of the mutation operator. We can make a demand-based theory of evolution.

But if only life was so simple.