Quick introduction: Evolutionary game assay in Python

It’s been a while since I’ve shared or discussed code on TheEGG. So to avoid always being too vague and theoretical, I want to use this post to explain how one would write some Python code to measure evolutionary games. This will be an annotated sketch of the game assay from our recent work on measuring evolutionary games in non-small cell lung cancer (Kaznatcheev et al., 2019).

The motivation for this post came about a month ago when Nathan Farrokhian was asking for some advice on how to repeat our game assay with a new experimental system. He has since done so (I think) by measuring the game between Gefitinib-sensitive and Gefitinib-resistant cell types. And I thought it would make a nice post in the quick introductions series.

Of course, the details of the system don’t matter. As long as you have an array of growth rates (call them yR and yG, with corresponding errors yR_e and yG_e) and an array of initial proportions of cell types (call them xR and xG), you can repeat the assay. To see how to get to these arrays from more primitive measurements, see my old post on population dynamics from time-lapse microscopy. It also has Python code for your enjoyment.
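For concreteness, here is a minimal sketch of what that input could look like in Python. The numbers below are made up purely for illustration; only the variable names follow the conventions above, with one entry per replicate well.

```python
import numpy as np

# Hypothetical numbers, purely for illustration: one entry per replicate well.
xG = np.array([0.1, 0.3, 0.5, 0.7, 0.9])            # initial proportion of green (e.g. sensitive) cells
xR = 1 - xG                                          # initial proportion of red (e.g. resistant) cells

yG = np.array([0.035, 0.032, 0.030, 0.028, 0.025])   # measured growth rate of green cells in each well
yR = np.array([0.020, 0.024, 0.027, 0.031, 0.034])   # measured growth rate of red cells in each well

yG_e = np.full_like(yG, 0.002)                       # standard error on yG
yR_e = np.full_like(yR, 0.002)                       # standard error on yR
```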

In this post, I’ll go through the two final steps of the game assay. First, I’ll show how to fit and visualize fitness functions (Figure 3 in Kaznatcheev et al., 2019). Second, I’ll transform those fitness functions into game points and plot (Figure 4b in Kaznatcheev et al., 2019). I’ll save discussions of the non-linear game assay (see Appendix F in Kaznatcheev et al., 2019) for a future post.
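As a preview of those two steps, here is a minimal sketch (using the toy arrays from above) of how they might look in code. I am assuming fitness functions that are linear in the initial proportion of green cells, error-weighted least-squares fits, and game entries read off the fit lines at the monotypic edges (xG = 0 and xG = 1). The helper name fit_fitness_function and the row/column bookkeeping of the game matrix are my own choices, not the paper’s code, so treat this as a sketch rather than the published pipeline.

```python
import numpy as np
import matplotlib.pyplot as plt

def fit_fitness_function(x, y, y_e):
    """Error-weighted linear least-squares fit of growth rate y against
    initial proportion x (np.polyfit recommends weights of 1/sigma).
    Returns (slope, intercept)."""
    slope, intercept = np.polyfit(x, y, deg=1, w=1.0 / y_e)
    return slope, intercept

# Step 1: fit and visualize one fitness function per cell type.
mG, bG = fit_fitness_function(xG, yG, yG_e)
mR, bR = fit_fitness_function(xG, yR, yR_e)

xs = np.linspace(0, 1, 100)
plt.errorbar(xG, yG, yerr=yG_e, fmt='go', label='green')
plt.errorbar(xG, yR, yerr=yR_e, fmt='rs', label='red')
plt.plot(xs, mG * xs + bG, 'g-')
plt.plot(xs, mR * xs + bR, 'r-')
plt.xlabel('initial proportion of green cells')
plt.ylabel('growth rate')
plt.legend()
plt.show()

# Step 2: read the game off the fit lines at the monotypic edges.
# Convention used here: rows are the focal type (G, R); columns are the
# edge being extrapolated to (all-green at xG = 1, all-red at xG = 0).
game = np.array([
    [mG + bG, bG],   # green among green, green among red
    [mR + bR, bR],   # red among green,   red among red
])
print(game)
```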


Abstracting evolutionary games in cancer

As you can tell from browsing the mathematical oncology posts on TheEGG, somatic evolution is now recognized as a central force in the initiation, progression, treatment, and management of cancer. This has opened a new front in the proverbial war on cancer: focusing on the ecology and evolutionary biology of cancer. On this new front, we are starting to deploy new kinds of mathematical machinery like fitness landscapes and evolutionary games.

Recently, together with Peter Jeavons, I wrote a couple of thousand words on this new machinery for Russell Rockne’s upcoming mathematical oncology roadmap. Our central argument — to continue the war metaphor — is that with new machinery, we need new tactics.

Biologists often aim for reductive explanations, and mathematical modelers have tended to encourage this tactic by searching for mechanistic models. This is important work. But we also need to consider other tactics. Most notably, we need to look at the role that abstraction — both theoretical and empirical — can play in modeling and thinking about cancer.

The easiest way to share my vision for how we should approach this new tactic would be to throw a preprint up on BioRxiv or to wait for Rockne’s road map to eventually see print. Unfortunately, BioRxiv has a policy against views-like articles — as I was surprised to discover. And I am too impatient to wait for the eventual giant roadmap article.

Hence, I want to share some central parts in this blog post. This is basically an edited and slightly more focused version of our roadmap. Since, so far, game theory models have had more direct impact in oncology than fitness landscapes, I’ve focused this post exclusively on games.

Supply and demand as driving forces behind biological evolution

Recently I was revisiting Xue et al. (2016) and Julian Xue’s thought on supply-driven evolution more generally. I’ve been fascinated by this work since Julian first told me about it. But only now did I realize the economic analogy that Julian is making. So I want to go through this Mutants as Economic Goods metaphor in a bit of detail. A sort of long-delayed follow-up to my post on evolution as a risk-averse investor (and another among many links between evolution and economics).

Let us start by viewing the evolving population as a market — focusing on the genetic variation in the population, in particular. From this view, each variant or mutant trait is a good. Natural selection is the demand. It prefers certain goods over others and ‘pays more’ for them in the currency of fitness. Mutation and the genotype-phenotype map that translates individual genetic changes into selected traits are the supply. Both demand and supply matter to the evolutionary economy. But as a field, we’ve put too much emphasis on the demand — survival of the fittest — and not enough emphasis on the supply — arrival of the fittest. This accusation of too much emphasis on demand has usually been raised against the adaptationist program.

The easiest justification for the demand focus of the adaptationist program has been one of model simplicity — similar to the complete market models in economics. If we assume isotropic mutations — i.e. the same unbiased chance of a trait mutating in any direction on the fitness landscape — then surely mutation isn’t an important force in evolution. As long as the right genetic variance is available, nature will be able to select it and we can ignore further properties of the mutation operator. We can make a demand-based theory of evolution.
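As an aside on what the isotropy assumption amounts to operationally, here is a tiny sketch (my own toy code, not anything from Xue et al.) on a binary-string genotype: under isotropic mutation every one-step neighbour is equally likely to be supplied, while a biased mutation operator makes some variants far more available to selection than others.

```python
import random

def isotropic_mutation(genotype):
    """Flip one locus chosen uniformly at random:
    every one-mutant neighbour is equally likely to be supplied."""
    i = random.randrange(len(genotype))
    return genotype[:i] + (1 - genotype[i],) + genotype[i + 1:]

def biased_mutation(genotype, rates):
    """Flip locus i with probability proportional to rates[i]:
    the supply of variants now depends on the mutation operator."""
    i = random.choices(range(len(genotype)), weights=rates, k=1)[0]
    return genotype[:i] + (1 - genotype[i],) + genotype[i + 1:]

g = (0, 1, 0, 0)
print(isotropic_mutation(g))
print(biased_mutation(g, rates=[10, 1, 1, 1]))   # locus 0 mutates far more often
```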

But if only life was so simple.

Quick introduction: Problems and algorithms

For this week, I want to try a new type of post. A quick introduction to a standard topic that might not be familiar to all readers and that could be useful later on. The goal is to write a shorter post than usual and provide a launching point for future, more detailed discussion of a topic. Let’s see if I can stick to 500 words — although this post is 933, so maybe in the future.

For our first topic, let’s turn to theoretical computer science.

There are many ways to subdivide theoretical computer science, but one of my favorite divisions is into the two battling factions of computational complexity and algorithm design. To sketch a caricature: the former focus on computational problems and lower bounds, and the latter focus on algorithms and upper bounds. The latter have counterparts throughout science, but I think the former are much less frequently encountered outside theoretical computer science. I want to sketch the division between these two fields. In the future I’ll explain how it can be useful for reasoning about evolutionary biology.

So let’s start with some definitions, or at least intuitions.
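As a toy intuition to start with (the example is mine, not from the rest of the post): a problem is a specification of what counts as a correct answer, while an algorithm is one particular procedure that meets that specification. The running time of an algorithm gives an upper bound on the difficulty of the problem; a lower bound is a claim about every possible algorithm.

```python
def is_sorted(xs):
    """Part of the specification of the SORTING problem: the output must
    be in non-decreasing order (being a rearrangement of the input is
    the other part, omitted here)."""
    return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

def insertion_sort(xs):
    """One algorithm for SORTING. Its O(n^2) running time is an upper
    bound on the problem; the Omega(n log n) comparison lower bound is a
    statement about every comparison-based algorithm, not just this one."""
    out = list(xs)
    for i in range(1, len(out)):
        j = i
        while j > 0 and out[j - 1] > out[j]:
            out[j - 1], out[j] = out[j], out[j - 1]
            j -= 1
    return out

assert is_sorted(insertion_sort([3, 1, 2]))
```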

Cataloging a year of social blogging

With almost all of January behind us, I want to share the final summary of 2018. The first summary was on cancer and fitness landscapes; the second was on metamodeling. This third summary continues the philosophical trend of the second, but focuses on analyzing the roles of science, philosophy, and related concepts in society.

There were only 10 posts on the societal aspects of science and philosophy in 2018, with one of them not on this blog. But I think it is the most important topic to examine. And I wish that I had more patience and expertise to do these examinations.


Cataloging a year of metamodeling blogging

Last Saturday, with just minutes to spare in the first calendar week of 2019, I shared a linkdex of the ten (primarily) non-philosophical posts of 2018. It was focused on mathematical oncology and fitness landscapes. Now, as the second week runs into its final hour, it is time to start on the more philosophical content.

Here are 18 posts from 2018 on metamodeling.

With a nice number like 18, I feel obliged to divide them into three categories of six articles each. These three categories are: (1) abstraction and reductive vs. effective theories; (2) metamodeling and the philosophy of mathematical biology; and (3) the historical context for metamodeling.

You might expect the third category to be an afterthought. But it actually includes some of the most read posts of 2018. So do skim the whole list, dear reader.

Next week, I’ll discuss my remaining ten posts of 2018. The posts focused on the interface of science and society.

Cataloging a year of blogging: cancer and fitness landscapes

Happy 2019!

As we leave 2018, the Theory, Evolution, and Games Group Blog enters its 9th calendar year. This past year started out slowly with only 4 posts in the first 5 months. However, after May 31st, I managed to maintain a regular posting schedule. This is the 32nd calendar week in a row with at least one new blog post released.

I am very happy about this regularity. Let’s see if I can maintain it throughout 2019.

A total of 38 posts appeared on TheEGG last year. This is the 3rd most prolific year, after the 47 posts of 2014 and the 88 of 2013. One of those 38 was a review of the 12 posts of 2017 (the least prolific year for TheEGG).

But the other 37 posts are too much to cover in one review. Thus, in this catalogue, I’ll focus on cancer and fitness landscapes. Next week, I’ll deal with the more philosophical content from the last year.

Reductionism: to computer science from philosophy

A biologist and a mathematician walk together into their joint office to find the rubbish bin on top of the desk and on fire. The biologist rushes out, grabs a fire extinguisher, puts out the blaze, returns the bin to the floor and they both start their workday.

The next day, the same pair return to their office to find the rubbish bin in its correct place on the floor but again on fire. This time the mathematician springs to action. She takes the burning bin, puts it on the desk, and starts her workday.

The biologist is confused.

Mathematician: “don’t worry, I’ve reduced the problem to a previously solved case.”

What’s the moral of the story? Clearly, it’s that reductionism is “[o]ne of the most used and abused terms in the philosophical lexicon.” At least it is abused enough for this sentiment to make the opening line of Ruse’s (2005) entry in the Oxford Companion to Philosophy.

All of this was not apparent to me.

I underestimated the extent of disagreement about the meaning of reductionism among people who are saying serious things. A disagreement that goes deeper than the opening joke or the distinction between ontological, epistemological, methodological, and theoretical reductionism. Given how much I’ve written about the relationship between reductive and effective theories, it seems important for me to sort out how people read ‘reductive’.

Let me paint the difference that I want to discuss in the broadest stroke with reference to the mind-body problem. Both of the examples I use are purely illustrative and I do not aim to endorse either. There is one sense in which reductionism uses reduce in the same way as ‘reduce, reuse, and recycle’: i.e. reduce = use less, eliminate. It is in this way that behaviourism is a reductive account of the mind, since it (aspires to) eliminate the need to refer to hidden mental, rather than just behavioural, states. There is a second sense in which reductionism uses reducere, or literally from Latin: to bring back. It is in this way that the mind can be reduced to the brain; i.e. discussions of the mind can be brought back to discussions of the brain, and the mind can be taken as fully dependent on the brain. I’ll expand more on this sense throughout the post.

In practice, the two senses above are often conflated and intertwined. For example, instead of saying that the mind is fully dependent on the brain, people will often say that the mind is nothing but the brain, or nothing over and above the brain. When doing this, they’re doing at least two different things. First, they’re claiming to have eliminated something. And second, they’re conflating reduce and reducere. This observation of conflation is similar to my claim that Galileo conflated idealization and abstraction in his book-keeping analogy.

And just like with my distinction between idealization and abstraction, to avoid confusion, the two senses of reductionism should be kept conceptually separate. As before, I’ll make this clear by looking at how theoretical computer science handles reductions. A study in algorithmic philosophy!
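To foreshadow the theoretical computer science sense with a toy example of my own (not one from the post): a reduction brings one problem back to another, so that a solver for the second problem yields a solver for the first. Nothing is eliminated; the first problem is simply made fully dependent on the second.

```python
def solve_sorting(xs):
    """The 'previously solved case' (problem B)."""
    return sorted(xs)

def has_duplicates(xs):
    """Problem A, reduced to problem B: transform the instance (sort it),
    call the solver for B, and read the answer off adjacent pairs."""
    ys = solve_sorting(xs)
    return any(ys[i] == ys[i + 1] for i in range(len(ys) - 1))

print(has_duplicates([4, 1, 4]))   # True
print(has_duplicates([3, 1, 2]))   # False
```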

In my typical arrogance, I will rename the reduce-concept as eliminativism. And based on its agreement with theoretical computer science, I will keep the reducere-concept as reductionism.

Open-ended evolution on hard fitness landscapes from VCSPs

There is often interest among the public and in the media about evolution and its effects on contemporary humans. In this context, some argue that humans have stopped evolving, including persons who have a good degree of influence over public opinion. Famous BBC Natural History Unit broadcaster David Attenborough, for example, argued a few years ago in an interview that humans are the only species who “put halt to natural selection of its own free will”. The first time I read this, I thought that it seemed plausible. The advances in medicine that we made in the last two centuries mean that almost all babies can reach adulthood and have children of their own, which appears to cancel natural selection. However, after more careful thought, I realized that these sorts of arguments for the ‘end of evolution’ could not be true.

Upon more reflection, there just seem to be better arguments for open-ended evolution.

One way of seeing that we’re still evolving is by observing that we actually created a new environment, with very different struggles from the ones we encountered in the past. This is what Adam Benton (2013) suggests in his discussion of Attenborough. Living in cities with millions of people is very different from having to survive in a prehistoric jungle, so evolutionary pressures have shifted in this new environment. Success and fitness are measured differently. The continuing pace of changes and evolution in various fields such as technology, medicine, and the sciences is a clear example that humans continue to evolve. Even from a physical point of view, research shows that we are now becoming taller, after the effects of the last ice age faded out (Yang et al., 2010), while our brains seem to be getting smaller, for various reasons, the most amusing being that we don’t need that much “central heating”. Take that, Aristotle! Furthermore, the shape of our teeth and jaws changed as we changed our diet, with different populations having a different structure based on the local diet (von Cramon-Taubadel, 2011).

But we don’t even need to resort to dynamically changing selection pressures. We can argue that evolution is ongoing even in a static environment. More importantly, we can make this argument in the laboratory, although we do have to switch from humans to a more prolific species. A good example of this is Richard Lenski’s long-term E. coli evolution experiment (Lenski et al., 1991), which shows that evolution is still ongoing after 50,000 generations of E. coli (Wiser et al., 2013). The fitness of the E. coli keeps increasing! This certainly seems like open-ended evolution.

But how do we make theoretical sense of these experimental observations? Artem Kaznatcheev (2018) has one suggestion: ‘hard’ landscapes due to the constraints of computational complexity. He suggests that evolution can be seen as a computational problem, in which the organisms try to maximize their fitness over successive generations. This problem would still be constrained by the theory of computational complexity, which tells us that some problems are too hard to be solved in a reasonable amount of time. Unfortunately, Artem’s work is far too theoretical. This is where my third-year project at the University of Oxford comes in. I will be working together with Artem on actually simulating open-ended evolution on specific examples of hard fitness landscapes that arise from valued constraint satisfaction problems (VCSPs).

Why VCSPs? They are an elegant generalization of the weighted 2-SAT problem that Artem used in his work on hard landscapes. I’ll use this blog post to introduce CSPs and VCSPs, explain how they generalize weighted 2-SAT (and thus the NK fitness landscape model), and provide a way to translate between the language of computer science and that of biology.
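As a preview, with a toy instance of my own rather than one from the project: a weighted 2-SAT instance can already be read as a fitness landscape, where a genotype assigns 0 or 1 to each variable and its fitness is the total weight of satisfied clauses. A VCSP generalizes this by letting each constraint assign an arbitrary value to each local assignment, rather than a fixed weight when satisfied.

```python
from itertools import product

# Toy weighted 2-SAT instance over three loci. Each clause is
# ((variable, wanted_value), (variable, wanted_value), weight).
clauses = [
    ((0, 1), (1, 0), 2.0),   # x0 OR (NOT x1), weight 2
    ((1, 1), (2, 1), 1.0),   # x1 OR x2, weight 1
    ((0, 0), (2, 0), 1.5),   # (NOT x0) OR (NOT x2), weight 1.5
]

def fitness(genotype):
    """Fitness = total weight of satisfied clauses."""
    return sum(w for (i, a), (j, b), w in clauses
               if genotype[i] == a or genotype[j] == b)

# Exhaustively print the landscape (only feasible for tiny instances).
for g in product([0, 1], repeat=3):
    print(g, fitness(g))
```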


Local peaks and clinical resistance at negative cost

Last week, I expanded on Rob Noble’s warning about the different meanings of de novo resistance with a general discussion on the meaning of resistance in a biological vs clinical setting. In that post, I suggested that clinicians are much more comfortable than biologists with resistance without cost, or more radically: with negative cost. But I made no argument — especially no reductive argument that could potentially sway a biologist — about why we should entertain the clinician’s perspective. I want to provide a sketch for such an argument in this post.

In particular, I want to present a theoretical and extremely simple fitness landscape on which a hypothetical tumour might be evolving. The key feature of this landscape is a low local peak blocking the path to a higher local peak — a (partial) ultimate constraint on evolution. I will then consider two imaginary treatments on this landscape: one that I find to be more similar to a global chemotherapy, and one that is meant to capture the essence of a targeted therapy. In the process, I will get to introduce the idea of therapy transformations to a landscape — something to address the tendency to treat fitness landscapes under treatment as completely unrelated to untreated fitness landscapes.
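To make the idea of a therapy transformation concrete, here is a minimal sketch with made-up numbers (my own toy landscape, not the ones from the post): the untreated landscape has a low local peak blocking the path to a higher peak, and each imaginary therapy is modelled as a function applied to the untreated fitness values rather than as a freshly invented landscape.

```python
import numpy as np

# Untreated fitness along a 1D path of genotypes: the low local peak at
# index 1 blocks the uphill path to the higher peak at index 4.
untreated = np.array([1.0, 1.4, 1.1, 1.3, 2.0])

def chemo_like(fitness, dose=0.5):
    """Global-therapy sketch: suppress every genotype by the same factor,
    so the ordering of genotypes (and hence the peaks) is unchanged."""
    return (1 - dose) * fitness

def targeted_like(fitness, target=1, dose=0.8):
    """Targeted-therapy sketch: suppress only the targeted genotype,
    which can remove the blocking local peak and reshape the landscape."""
    treated = fitness.copy()
    treated[target] *= (1 - dose)
    return treated

print(chemo_like(untreated))
print(targeted_like(untreated))
```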

Of course, these hypothetical landscapes are chosen as toy models where we can have resistance emerge with a ‘negative’ cost. It is an empirical question whether this heuristic captures some important feature of real cancer landscapes.

But we won’t know until we start looking.
