Open-ended evolution on hard fitness landscapes from VCSPs

There is often interest among the public and in the media about evolution and its effects on contemporary humans. In this context, some argue that humans have stopped evolving, including people with a good degree of influence over public opinion. The famous BBC Natural History Unit broadcaster David Attenborough, for example, argued a few years ago in an interview that humans are the only species to have “put a halt to natural selection of its own free will”. The first time I read this, I thought that it seemed plausible. The advances in medicine made over the last two centuries mean that almost all babies can reach adulthood and have children of their own, which appears to cancel natural selection. However, after more careful thought, I realized that this sort of argument for the ‘end of evolution’ could not be true.

Upon more reflection, there just seem to be better arguments for open-ended evolution.

One way of seeing that we’re still evolving is by observing that we have actually created a new environment, with very different struggles from the ones we encountered in the past. This is what Adam Benton (2013) suggests in his discussion of Attenborough. Living in cities with millions of people is very different from having to survive in a prehistoric jungle, so evolutionary pressures have shifted in this new environment. Success and fitness are measured differently. The continuing pace of change in fields such as technology, medicine, and the sciences is a clear example that humans continue to evolve. Even from a physical point of view, research shows that we are now becoming taller, after the effects of the last ice age faded out (Yang et al., 2010), while our brains seem to be getting smaller, for various reasons, with the most amusing being that we don’t need as much “central heating”. Take that, Aristotle! Furthermore, the shape of our teeth and jaws has changed as we changed our diet, with different populations having a different structure based on the local diet (von Cramon-Taubadel, 2011).

But we don’t even need to resort to dynamically changing selection pressures. We can argue that evolution is ongoing even in a static environment. More importantly, we can make this argument in the laboratory, although we do have to switch from humans to a more prolific species. A good example is Richard Lenski’s long-term E. coli evolution experiment (Lenski et al., 1991), which shows that evolution is still ongoing after 50,000 generations (Wiser et al., 2013). The fitness of the E. coli keeps increasing! This certainly seems like open-ended evolution.
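To get a feel for what ‘keeps increasing’ can look like, here is a minimal sketch in the spirit of the power-law model that Wiser et al. (2013) consider: fitness gains decelerate over time, yet fitness never approaches a ceiling. The parameter values below are made up for illustration and are not fitted to the LTEE data.

```python
# A decelerating but unbounded mean-fitness trajectory, in the spirit of
# the power-law model considered by Wiser et al. (2013). The parameters
# a and b below are hypothetical, chosen only to illustrate the shape.
a, b = 0.1, 0.005

def mean_fitness(generations):
    """Relative fitness after a given number of generations."""
    return (1 + b * generations) ** a

for t in [2_000, 10_000, 50_000, 250_000, 1_000_000]:
    print(t, round(mean_fitness(t), 3))
```

Under a power law, every doubling of the number of generations multiplies fitness by roughly the same factor greater than one, which is one precise sense in which such dynamics look open-ended rather than saturating.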

But how do we make theoretical sense of these experimental observations? Artem Kaznatcheev (2018) has one suggestion: ‘hard’ landscapes due to the constraints of computational complexity. He suggests that evolution can be seen as a computational problem, in which organisms try to maximize their fitness over successive generations. This problem would still be constrained by the theory of computational complexity, which tells us that some problems are too hard to solve in a reasonable amount of time. Unfortunately, Artem’s work is far too theoretical. This is where my third-year project at the University of Oxford comes in. I will be working together with Artem on actually simulating open-ended evolution on specific examples of hard fitness landscapes that arise from valued constraint satisfaction problems (VCSPs).

Why VCSPs? They are an elegant generalization of the weighted 2-SAT problem that Artem used in his work on hard landscapes. I’ll use this blog post to introduce CSPs and VCSPs, explain how they generalize weighted 2-SAT (and thus the NK fitness landscape model), and provide a way to translate between the language of computer science and that of biology.
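As a small preview, and to make these objects concrete, here is a rough sketch of how a weighted 2-SAT instance defines a fitness landscape: each gene is a Boolean variable, each weighted clause is a constraint over two genes, and the fitness of a genotype is the total weight of the clauses it satisfies. The clauses and weights below are made up for illustration; they are not an instance from Artem’s paper.

```python
from itertools import product

# A hypothetical weighted 2-SAT instance: each clause is
# ((gene index, wanted value), (gene index, wanted value), weight).
clauses = [
    ((0, True),  (1, False), 2.0),   # x0 or not-x1, worth 2.0
    ((1, True),  (2, True),  1.0),   # x1 or x2, worth 1.0
    ((0, False), (2, False), 1.5),   # not-x0 or not-x2, worth 1.5
]

def fitness(genotype):
    """Fitness of a genotype = total weight of the clauses it satisfies."""
    return sum(w for (i, vi), (j, vj), w in clauses
               if genotype[i] == vi or genotype[j] == vj)

# Enumerate the whole landscape (only feasible for tiny examples).
for genotype in product([False, True], repeat=3):
    print(genotype, fitness(genotype))
```

A VCSP keeps this structure but lets each constraint assign an arbitrary value to the joint assignment of its variables, rather than just ‘weight if satisfied, zero otherwise’, which is what makes it a clean common generalization of weighted 2-SAT and of NK-style fitness landscapes.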

Read more of this post


Local peaks and clinical resistance at negative cost

Last week, I expanded on Rob Noble’s warning about the different meanings of de novo resistance with a general discussion on the meaning of resistance in a biological vs clinical setting. In that post, I suggested that clinicians are much more comfortable than biologists with resistance without cost, or more radically: with negative cost. But I made no argument — especially no reductive argument that could potentially sway a biologist — about why we should entertain the clinician’s perspective. I want to provide a sketch for such an argument in this post.

In particular, I want to present a theoretical and extremely simple fitness landscape on which a hypothetical tumour might be evolving. The key feature of this landscape is a low local peak blocking the path to a higher local peak — a (partial) ultimate constraint on evolution. I will then consider two imaginary treatments on this landscape, one that I find to be more similar to a global chemotherapy and one that is meant to capture the essence of a targeted therapy. In the process, I will get to introduce the idea of therapy transformations to a landscape — something to address the tendency of people to treat treatment fitness landscapes as completely unrelated to untreated fitness landscapes.
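To preview the flavour of the construction (with hypothetical numbers, not the actual landscape from the post), here is a minimal two-locus sketch: the wild type sits on a low local peak that blocks the single-mutation path to a fitter double mutant, and a therapy is written as a transformation of the untreated landscape rather than as an unrelated landscape.

```python
# Hypothetical two-locus landscape: genotype -> fitness.
# '00' is the wild type, '11' is the fully resistant double mutant.
untreated = {
    '00': 1.00,   # wild type: a low local peak
    '01': 0.90,   # single mutants are less fit than the wild type...
    '10': 0.85,
    '11': 1.20,   # ...but the double mutant is the higher peak
}

def treat(landscape, susceptibility, dose):
    """A therapy as a transformation of the untreated landscape:
    each genotype loses fitness in proportion to its susceptibility."""
    return {g: f - dose * susceptibility[g] for g, f in landscape.items()}

# A hypothetical targeted therapy: hits the wild type hard, the resistant
# double mutant barely at all.
susceptibility = {'00': 1.0, '01': 0.6, '10': 0.6, '11': 0.1}

print(treat(untreated, susceptibility, dose=0.5))
# Under treatment, the single mutants now out-compete the wild type,
# so the path to the higher peak at '11' opens up.
```

In a sketch like this, the resistance at ‘11’ carries a negative cost in the untreated landscape: it is fitter than the wild type even before therapy is applied; it is just hard to reach.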

Of course, these hypothetical landscapes are chosen as toy models where we can have resistance emerge with a ‘negative’ cost. It is an empirical question whether this heuristic captures any important features of real cancer landscapes.

But we won’t know until we start looking.

Read more of this post

Effective games from spatial structure

For the last week, I’ve been at the Institut Mittag-Leffler of the Royal Swedish Academy of Sciences for their program on mathematical biology. The institute is a series of apartments and a grand mathematical library located in the suburbs of Stockholm. And the program has a mostly unstructured atmosphere — with only about 4 hours of seminars over the whole week — aimed at bringing like-minded researchers together. It has been a great opportunity to reconnect with old colleagues and meet some new ones.

During my time here, I’ve been thinking a lot about effective games and the effects of spatial structure. Discussions with Philip Gerlee were particularly helpful in reinvigorating my interest in this. As part of my reflection, I revisited the Ohtsuki-Nowak (2006) transform and wanted to use this post to share a cute observation about how space can create an effective game where there is no reductive game.

Suppose you were using our recent game assay to measure an effective game, and you got the above left graph for the fitness functions of your two types. On the x-axis, you have the seeding proportion of type C, and on the y-axis you have fitness. In cyan, you have the measured fitness function for type C and, in magenta, the fitness function for type D. The particular fitness scale of the y-axis is not super important, nor is the x-intercept — I’ve chosen them purely for convenience. The only important aspect is that the cyan and magenta lines are parallel, with a positive slope, and the magenta is above the cyan.
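To pin the set-up down numerically (the slope and intercepts below are invented for illustration, not measured values), the description above amounts to something like the following pair of linear fitness functions, whose difference (the gain function) is the same at every composition of the population:

```python
import numpy as np

# Hypothetical parallel fitness functions from a game assay:
# fitness as a linear function of the seeding proportion p of type C.
slope = 0.4          # shared positive slope (invented value)
intercept_C = 1.0    # cyan line, type C (invented value)
intercept_D = 1.2    # magenta line, type D (invented value)

def w_C(p):
    return intercept_C + slope * p

def w_D(p):
    return intercept_D + slope * p

p = np.linspace(0, 1, 5)
print(w_C(p))
print(w_D(p))
print(w_D(p) - w_C(p))  # a constant gain function: D is fitter than C by
                        # the same amount at every seeding proportion
```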

This is not a crazy result to get; compare it to the fitness functions for the Alectinib + CAF condition measured in Kaznatcheev et al. (2018), which are shown at right. There, cyan is parental and magenta is resistant. The two lines of best fit aren’t parallel, but they aren’t that far off.

How would you interpret this sort of graph? Is there a game-like interaction happening there?

Of course, this is a trick question that I give away by the title and set-up. The answer will depend on whether you’re asking about effective or reductive games, and on what you know about the population structure. And this is the cute observation that I want to highlight.

Read more of this post

The Noble Eightfold Path to Mathematical Biology

Twitter is not a place for nuance. It is a place for short, pithy statements. But if you follow the right people, those short statements can be very insightful. In these rare cases, a tweet can be like a kōan: a starting place for thought and meditation. Today I want to reflect on such a thoughtful tweet from Rob Noble outlining his template for doing good work in mathematical biology. This reflection is inspired by the discussions we had on my recent post on mathtimidation by analytic solution vs curse of computing by simulation.

So, with slight modification and expansion from Rob’s original — and in keeping with the opening theme — let me present The Noble Eightfold Path to Mathematical Biology:

  1. Right Intention: Identify a problem or mysterious effect in biology;
  2. Right View: Study the existing mathematical and mental models for this or similar problems;
  3. Right Effort: Create model based on the biology;
  4. Right Conduct: Check that the output of the model matches data;
  5. Right Speech: Humbly write up;
  6. Right Mindfulness: Analyse why model works;
  7. Right Livelihood: Based on 6, create simplest, most general useful model;
  8. Right Samadhi: Rewrite focussing on 6 & 7.

The hardest, most valuable work begins at step 6.

The only problem is that people often stop at step 5, and sometimes skip step 2 and even step 3.

This suggests that the model is more prescriptive than descriptive: an aspiration for good scholarship in mathematical biology.

In the rest of the post, I want to reflect on whether it is the right aspiration. And also add some detail to the steps.

Read more of this post

Minimal models for explaining unbounded increase in fitness

On a prior version of my paper on computational complexity as an ultimate constraint, Hemachander Subramanian made a good comment and asked a good question:

Nice analysis Artem! If we think of the fitness as a function of genes, interactions between two genes, and interactions between three genes and so on, your analysis using epistasis takes into account only the interactions (second order and more). The presence or absence of the genes themselves (first order) can change the landscape itself, though. Evolution might be able to play the game of standing still as the landscape around it changes until a species is “stabilized” by finding itself in a peak. The question is would traversing these time-dependent landscapes for optima is still uncomputable?
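To make the orders that Hema refers to explicit, one generic way to expand fitness over a genotype of $n$ binary loci (this is a standard expansion, not notation from the paper) is:

$$ f(x) \;=\; f_0 \;+\; \sum_{i} f_i\, x_i \;+\; \sum_{i < j} f_{ij}\, x_i x_j \;+\; \sum_{i < j < k} f_{ijk}\, x_i x_j x_k \;+\; \dots $$

Here the first-order coefficients $f_i$ capture the contribution of individual genes being present or absent, while the second-order-and-higher coefficients capture the epistatic interactions that my hardness analysis relies on. As I read it, Hema’s question is about what happens when the first-order part of the landscape changes over time.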

And although I responded to his comment in the bioRxiv Disqus thread, it seems that comments are version-locked, so you cannot see Hema’s comment on the newest version anymore. As such, I wanted to share my response on the blog and expand on it a bit.

Mostly this will be an incomplete argument for why biologists should care about worst-case analysis. I’ll have to expand on it more in the future.

Read more of this post

Mathtimidation by analytic solution vs curse of computing by simulation

Recently, I was chatting with Patrick Ellsworth about the merits of simulation vs analytic solutions in evolutionary game theory. As you might expect from my old posts on the curse of computing, and my enjoyment of classifying games into dynamic regimes, I started with my typical argument against simulations. However, as I searched for a positive argument for analytic solutions of games, I realized that I didn’t have a good one. Instead, I arrived at another negative argument — this time against analytic solutions of heuristic models.

Hopefully this curmudgeoning comes as no surprise by now.

But it did leave me in a rather confused state.

Given that TheEGG is meant as a place to share such confusions, I want to use this post to set the stage for the simulation vs analytic debate in EGT and then rehearse my arguments. I hope that, dear reader, you will then help resolve the confusion.

First, for context, I’ll share my own journey from simulations to analytic approaches. You can see a visual sketch of it above. Second, I’ll present an argument against simulations — at least as I framed that argument around the time I arrived at Moffitt. Third, I’ll present the new argument against analytic approaches. At the end — as is often the case — there will be no resolution.

Read more of this post

Separating theory from nonsense via communication norms, not Truth

Earlier this week on Twitter, Brian Skinner wrote an interesting thread on how to distinguish good theory from crackpottery. He started with a trait that both theorists and crackpots share: we have an “irrational self-confidence” — a belief that just by thinking we “can arrive at previously-unrealized truths about the world”. From this starting point, the two diverge in their use of evidence. A crackpot relies primarily on positive evidence: he thinks hard about a problem, arrives at a theory that feels right, and then publicizes the result.

A theorist, on the other prong, incorporates negative evidence: she ponders hard about a problem, arrives at a theory that feels right, and then proceeds to try to disprove that theory. She reads the existing literature, looks at the competing theories, and takes time to understand them and compare them against her own. If any disagree with hers, then she figures out why those theories are wrong. She pushes her theory to the extremes, looks at its limiting cases, and checks them for agreement with existing knowledge. Only after her theory comes out unscathed from all these challenges does she publicize it.

For Skinner, this second prong is the definition of scholarship. In practice, coming up with a correct theory is mostly a painful process of discarding many of your own wrong attempts. A good theorist is thorough, methodical, and skeptical of their own ideas.

The terminology of crackpottery vs scholarship is probably overly harsh, as Skinner acknowledges. And in practice, somebody might be a good theorist in one domain but a crackpot elsewhere. As Malkym Lesdrae points out, there are many accomplished academics who are also crackpot theorists: “Most often it’s about things outside their field of specialty”. Thus, this ideal self-skepticism might be domain specific.

It is also a destructive ideal.

In other words, I disagreed with Skinner on the best way to separate good theory from nonsense. Mostly on the framing. Skinner crystallized our disagreement in a tweet: whereas he views self-skepticism as an obligation to the Truth, I view a similar sort of self-reflective behavior as a social obligation. I am committed to this latter view because I want to make sense of things like heuristic models, where truth is secondary to other modelling concerns. Where truth is not the most useful yardstick for checking the usefulness of a model. Where you hear Box’s slogan: “all models are wrong, but some are useful.”

Given the brief summary of Skinner’s view above — and please, Brian, correct me in the comments if I misrepresented your position — I want to use the rest of this post to sketch what I mean by self-reflective behavior as a social obligation.

Read more of this post

John Maynard Smith on reductive vs effective thinking about evolution

“The logic of animal conflict” — a 1973 paper by Maynard Smith and Price — is usually taken as the starting point for evolutionary game theory. And insofar as I am an evolutionary game theorist, it influences my thinking. Most recently, this thinking has led me to the conclusion that there are two different conceptions of evolutionary games possible: reductive vs. effective. However, I don’t think that this would have come as much of a surprise to Maynard Smith and Price. In fact, the two men embodied the two different ways of thinking that underlie my two interpretations.

I was recently reminded of this when Aakash Pandey shared a Web of Stories interview with John Maynard Smith. This is a 4-minute snippet of a long interview with Maynard Smith. In the snippet, he starts with a discussion of the Price equation (or Price’s theorem, if you want to have that debate) but quickly digresses to a discussion of the two kinds of mathematical theories that can be made in science. He identifies himself with the reductive view and Price with the effective. I recommend watching the whole video, although I’ll quote the relevant passages below.

In this post, I’ll present Maynard Smith’s distinction between the two types of thinking in evolutionary models. But I will do this in my own terminology to stress the connections to my recent work on evolutionary games. However, I don’t think this distinction is limited to evolutionary game theory. As Maynard Smith suggests in the video, it extends to all of evolutionary biology and maybe to scientific modelling more generally.

Read more of this post

Personal case study on the usefulness of philosophy to biology

At the start of this month, one of my favourite blogs — Dynamic Ecology — pointed me to a great interview with Michela Massimi. She recently won the Royal Society’s Wilkins-Bernal-Medawar Medal for the philosophy of science, and to celebrate, Philip Ball interviewed her for Quanta. I recommend reading the whole interview, but for this post, I will focus on just one aspect.

Ball asked Massimi how she defends philosophy of science against dismissive comments by scientists like Feynman or Hawking. In response, she made the very important point that for the philosophy of science to be useful, it doesn’t need to be useful to science:

Dismissive claims by famous physicists that philosophy is either a useless intellectual exercise, or not on a par with physics because of being incapable of progress, seem to start from the false assumption that philosophy has to be of use for scientists or is of no use at all.

But all that matters is that it be of some use. We would not assess the intellectual value of Roman history in terms of how useful it might be to the Romans themselves. The same for archaeology and anthropology. Why should philosophy of science be any different?

Instead, philosophy is useful for humankind more generally. This is certainly true.

But even for a scientist who is only worrying about getting that next grant, or publishing that next flashy paper. For a scientist who is completely detached from the interests of humanity. Even for this scientist, I don’t think we have to concede the point on the usefulness of philosophy of science. Because philosophy, and philosophy of science in particular, doesn’t need to be useful to science. But it often is.

Here I want to give a personal example that I first shared in the comments on Dynamic Ecology.
Read more of this post

Abstract is not the opposite of empirical: case of the game assay

Last week, Jacob Scott was at a meeting to celebrate the establishment of the Center for Evolutionary Therapy at Moffitt, and he presented our work on measuring the effective games that non-small cell lung cancer plays (see this preprint for the latest draft). From the audience, David Basanta summarized it in a tweet as “trying to make our game theory models less abstract”. But I actually saw our work as doing the opposite (and so quickly disagreed).

However, I could understand the way David was using ‘abstract’. I think I’ve often used it in this colloquial sense as well. And in that sense it is often the opposite of empirical, which is seen as colloquially ‘concrete’. Given my arrogance, I — of course — assume that my current conception of ‘abstract’ is the correct one, and the colloquial sense is wrong. To test myself: in this post, I will attempt to define both what ‘abstract’ means and how it is used colloquially. As a case study, I will use the game assay that David and I disagreed about.

This is a particularly useful exercise for me because it lets me make better sense of how two very different-seeming aspects of my work — the theoretical versus the empirical — are both abstractions. It also lets me think about when simple models are abstract and when they’re ‘just’ toys.

Read more of this post