EGT Reading Group 56 – 60

Since my last update in February, the evolutionary game theory reading group has passed another milestone with 5 more meetings over the last 4 months. We looked at a broad range of topics, from life histories in cancer to the effects of heterogeneity and biodiversity, and from the definitions of fitness to the analysis of digital pathology. Part of this variety came from papers suggested by group members: the paper for EGT 57 was suggested by Jill Gallaher, EGT 58 by Robert Vander Velde, and the second paper for EGT 60 came from a tip by Jacob Scott. We haven't yet returned to our goal of regular weekly meetings, but these five meetings took less than half the time that the previous five did.


Acidity and vascularization as linear goods in cancer

Last month, Robert Vander Velde discussed a striking similarity between the linear version of our model of two anti-correlated goods and the Hauert et al. (2002) optional public goods game. Robert didn't get a chance to go into the detailed math behind the scenes, so I wanted to do that today. The derivations here will be in the context of mathematical oncology, but will follow the earlier ecological work closely. There is only a small (and generally inconsequential) difference between the mathematics of the double anti-correlated goods game and the optional public goods game. Keep your eye out for it, dear reader, and mention it in the comments if you catch it.[1]

In this post, I will remind you of the double goods game for acidity and vascularization, show you how to simplify the resulting fitness functions in the linear case — without using the approximations of the general case — and then classify the possible dynamics. From the classification of dynamics, I will speculate on how to treat the game to take us from one regime to another. In particular, we will see the importance of treating anemia, that buffer therapy can be effective, and not so much for bevacizumab.
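As a taste of the machinery involved, here is a minimal sketch of replicator dynamics on the three-strategy simplex. The payoff matrix below is an illustrative placeholder, not the linear goods payoffs derived in the post; classifying the dynamics then amounts to checking which fixed points of these equations are stable in a given parameter regime.

```python
import numpy as np

# Illustrative payoff matrix for three strategies (think: glycolytic,
# vascular-overproducing, defector). These coefficients are placeholders,
# NOT the payoffs derived from the double goods model.
A = np.array([[0.0, 2.0, 1.0],
              [1.5, 0.0, 2.0],
              [2.0, 1.0, 0.0]])

def replicator_step(x, A, dt=0.01):
    """One Euler step of replicator dynamics: x_i' = x_i (f_i - <f>)."""
    f = A @ x          # fitness of each strategy against the population
    phi = x @ f        # population mean fitness
    return x + dt * x * (f - phi)

def simulate(x0, A, steps=5000):
    """Iterate the dynamics from an initial point on the simplex."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = replicator_step(x, A)
    return x
```

Note that the dynamics preserve the simplex: the increments sum to zero, so proportions remain proportions.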


Eukaryotes without Mitochondria and Aristotle’s Ladder of Life

In 348/7 BC, fearing anti-Macedonian sentiment or disappointed that control of Plato's Academy had passed to Speusippus, Aristotle left Athens for Asia Minor across the Aegean Sea. Based on his five years[1] studying the natural history of Lesbos, he wrote the pioneering work of zoology: The History of Animals. In it, he set out to catalog the what of biology before searching for answers to the why. He initiated a tradition of naturalists that continues to this day.

Aristotle classified his observations of the natural world into a hierarchical ladder of life: humans on top, above the other blooded animals, bloodless animals, and plants. Although we've excised Aristotle's insistence on static species, this ladder remains for many. They consider species to be more complex than their ancestors, and see between species a hierarchy of complexity with humans — as always — on top. A common example of this is the rationality fetish that views Bayesian learning as a fixed point of evolution, or that ranks species by intelligence or levels of consciousness. This is then coupled with an insistence on progress, and gives them the what to be explained: the arc of evolution is long, but it bends towards complexity.

In the early months of TheEGG, Julian Xue turned to explaining the why behind the evolution of complexity with ideas like irreversible evolution as the steps up the ladder of life.[2] One of Julian's strongest examples of such an irreversible step up has been the transition from prokaryotes to eukaryotes through the acquisition of membrane-bound organelles like mitochondria. But as an honest and dedicated scholar, Julian is always on the lookout for falsifications of his theories. This morning — with an optimistic "there goes my theory" — he shared the new Karnkowska et al. (2016) paper showing a surprising what to add to our natural history: a eukaryote without mitochondria. An apparent example of a eukaryote stepping down a rung in complexity by losing its membrane-bound ATP powerhouse.

Population dynamics from time-lapse microscopy

Half a month ago, I introduced you to automated time-lapse microscopy, but I showed the analysis of only a single static image. I didn’t take advantage of the rich time-series that the microscope provides for us. A richness that becomes clearest with video:

Above, you can see two types of non-small cell lung cancer cells growing in the presence of 512 nM of Alectinib. The cells fluorescing green are parental cells that are susceptible to the drug, and the ones in red have evolved resistance. Over the 3 days of the video, you can see the cells growing and expanding. It is the size of these populations that we want to quantify.

In this post, I will remedy last week's omission and share some empirical population dynamics. As before, I will include some of the Python code I built for these purposes. This time the code is specific to how our microscope exports its data, and so probably not as generalizable as one might want. But hopefully it will still give you some ideas on how to code the analysis for your own experiments, dear reader. As always, the code is on my GitHub.
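As a generic sketch of the idea (the export-specific parsing is omitted, and the frames below are synthetic stand-ins for real microscopy data): treat each frame as a 2D array of fluorescence intensities, threshold it, and record the bright-pixel area as a proxy for population size over time.

```python
import numpy as np

def population_areas(frames, threshold):
    """Bright-pixel count per frame: a crude proxy for population size."""
    return np.array([int((frame > threshold).sum()) for frame in frames])

def synthetic_stack(n_frames=6, size=64):
    """Synthetic stand-in for a time-lapse stack: a disc of fluorescent
    signal that grows from frame to frame, mimicking an expanding colony."""
    yy, xx = np.mgrid[:size, :size]
    center = size // 2
    frames = []
    for t in range(n_frames):
        radius = 5 + 3 * t
        frames.append(100.0 * ((yy - center) ** 2 + (xx - center) ** 2 <= radius ** 2))
    return frames

areas = population_areas(synthetic_stack(), threshold=50.0)
```

On real data, the list of synthetic frames would be replaced by the images the microscope exports, and `areas` would become the empirical growth curve to fit.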

Although the opening video shows two types of cancer cells competing, for the rest of the post I will consider last week's system: coculturing Alectinib-sensitive (parental) non-small cell lung cancer cells and fibroblasts in varying concentrations of Alectinib. Finally, this will be another tools post, so the only conclusions of interest are sanity checks. Next week I will move on to more interesting observations using this sort of pipeline.

Counting cancer cells with computer vision for time-lapse microscopy

Some people characterize TheEGG as a computer science blog. And although (theoretical) computer science almost always informs my thought, I feel like it has been a while since I have directly dealt with the programming aspects of computer science here. Today, I want to remedy that. In the process, I will share some Python code and discuss some new empirical data collected by Jeff Peacock and Andriy Marusyk.[1]

Together with David Basanta and Jacob Scott, the five of us are looking at the in vitro dynamics of resistance to Alectinib in non-small cell lung cancer. Alectinib is a new ALK-inhibitor developed by Chugai Pharmaceutical Co. that was approved for clinical use in Japan in 2014, and in the USA at the end of 2015. Currently, it is intended for tough lung cancer cases that have failed to respond to crizotinib. Although we are primarily interested in how Alectinib resistance develops and unfolds, we recognize the importance of the tumour's microenvironment, so one of our first goals — and the focus here — is to see how Alectinib-sensitive cancer cells interact with healthy fibroblasts. Since I've been wanting to learn basic computer vision skills and refresh my long-lapsed Python knowledge, I decided to hack together some cell counting algorithms to analyze our microscopy data.[2]
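To give a flavour of the simplest possible cell counter (a generic sketch, not the actual pipeline from the post): threshold the fluorescence image, then count connected components of bright pixels, treating each component as one cell or colony.

```python
import numpy as np
from collections import deque

def count_cells(image, threshold):
    """Count connected bright regions (4-connectivity) above threshold."""
    mask = image > threshold
    seen = np.zeros_like(mask, dtype=bool)
    rows, cols = mask.shape
    count = 0
    for i in range(rows):
        for j in range(cols):
            if mask[i, j] and not seen[i, j]:
                count += 1                      # found a new component
                queue = deque([(i, j)])
                seen[i, j] = True
                while queue:                    # flood-fill the component
                    r, c = queue.popleft()
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < rows and 0 <= cc < cols \
                                and mask[rr, cc] and not seen[rr, cc]:
                            seen[rr, cc] = True
                            queue.append((rr, cc))
    return count

# Synthetic example: two bright square "cells" on a dark background.
img = np.zeros((20, 20))
img[2:5, 2:5] = 255
img[10:13, 12:15] = 255
n_cells = count_cells(img, threshold=128)
```

Real counters add smoothing, adaptive thresholds, and watershed steps to split touching cells, but this is the core idea.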

In this post, I want to discuss some of our preliminary work, although due to length constraints there won't be any results of interest to clinical oncologists in this entry. Instead, I will introduce automated microscopy to computer science readers, so that they know of another domain where their programming skills can come in useful; and discuss some basic computer vision, so that non-computational biologists know how (some of) their cell counters (might) work on the inside. Thus, the post will be methods-heavy: part tutorial, part background, with a tiny sprinkle of experimental images.[3] I am also eager for feedback and tips from readers who are more familiar than I am with these methods. So, dear reader, leave your insights in the comments.


Cancer metabolism and voluntary public goods games

When I first came to Tampa to do my Masters[1], my focus turned to explanations of the Warburg effect — especially a recent paper by Archetti (2014) — and the acid-mediated tumor invasion hypothesis (Gatenby, 1995; Basanta et al., 2008). In the course of our discussions about Archetti (2013, 2014), Artem proposed the idea of combining two public goods, such as acid and growth factors. In an earlier post, Artem described the model that came out of these discussions. This model uses two "anti-correlated" public goods in tumors: oxygen (from vasculature) and acid (from glycolytic metabolism).

The dynamics of our model have some interesting properties, such as an internal equilibrium and (as we showed later) cycles. When I saw these cycles, I started to think about "games" with similar dynamics to see if they held any insights. One such model was Hauert et al.'s (2002) voluntary public goods game.[2] As I looked closer at our model and theirs, I realized that the properties and logic of the two models are much more similar than we initially thought. In this post, I will briefly explain Hauert et al.'s (2002) model and then discuss its potential application to cancer, and to our model.
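For readers who want the flavour of the voluntary public goods game in code, here is a minimal Monte Carlo sketch: participants pool contributions that get multiplied by r and split among them, loners opt out for a fixed payoff sigma, and a lone would-be participant defaults to the loner payoff. The group size, r, c, and sigma below are illustrative choices, not values from Hauert et al. (2002) or from our model.

```python
import random

def sampled_payoff(focal, x, N=5, r=3.0, c=1.0, sigma=1.0, rng=random):
    """One sampled payoff for a focal player of type 'C' (cooperator),
    'D' (defector), or 'L' (loner) in a random group of size N drawn
    from population frequencies x = [x_C, x_D, x_L]."""
    if focal == 'L':
        return sigma                       # loners always take sigma
    others = rng.choices(['C', 'D', 'L'], weights=x, k=N - 1)
    participants = [focal] + [o for o in others if o != 'L']
    S = len(participants)
    if S < 2:                              # no game with a single participant
        return sigma
    pot = r * c * participants.count('C')  # contributions, multiplied by r
    share = pot / S                        # split equally among participants
    return share - c if focal == 'C' else share

def mean_payoff(focal, x, trials=20000, rng=None):
    """Monte Carlo estimate of the expected payoff of a focal type."""
    rng = rng or random.Random(0)
    return sum(sampled_payoff(focal, x, rng=rng) for _ in range(trials)) / trials
```

A quick sanity check: in an all-cooperator population each participant nets (r - 1)c, and in an all-defector population everyone gets nothing, which is why the loner option keeps the dynamics cycling.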

Choosing units of size for populations of cells

Recently, I have been interacting more and more closely with experiment. This has put me in the fortunate position of balancing the design and analysis of both theoretical and experimental models. It is tempting to think of theorists as people who come up with ideas to explain an existing body of facts, and of mathematical modelers as people who try to explain (or represent) an existing experiment. But in healthy collaboration, theory and experiment should walk hand in hand. If experiments pose our problems and mathematical models are our tools, then my insistence on pairing tools and problems (instead of 'picking the best tool for the problem') means that we should be willing to deform both for better communication within the pair.

Evolutionary game theory — and many other mechanistic models in mathematical oncology and elsewhere — typically tracks population dynamics, and thus sets population size (or proportions within a population) as central variables. Most models think of the units of population as individual organisms; in this post, I’ll stick to the petri dish and focus on cells as the individual organisms. We then try to figure out properties of these individual cells and their interactions based on prior experiments or our biological intuitions. Experimentalists also often reason in terms of individual cells, making them seem like a natural communication tool. Unfortunately, experiments and measurements themselves are usually not about cells. They are either of properties that are only meaningful at the population level — like fitness — or indirect proxies for counts of individual cells — like PSA or intensity of fluorescence. This often makes counts of individual cells into an inferred theoretical quantity and not a direct observable. And if we are going to introduce an extra theoretical term then parsimony begs for a justification.
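One way to see the extra inferential step: cell counts are usually read off a proxy through a calibration curve. Here is a toy sketch (the calibration numbers are invented for illustration): fit a linear map from known seeded cell counts to measured total fluorescence, then invert it to turn later measurements into inferred counts.

```python
import numpy as np

# Hypothetical calibration wells: known seeded cell counts and the total
# fluorescence each produced (invented numbers with a little noise).
seeded = np.array([1000, 2000, 4000, 8000, 16000], dtype=float)
intensity = np.array([5.1e4, 9.8e4, 2.05e5, 3.9e5, 8.1e5])

# Least-squares line: intensity ~= slope * count + offset.
slope, offset = np.polyfit(seeded, intensity, 1)

def inferred_count(measured_intensity):
    """Invert the calibration curve: an inferred, not observed, cell count."""
    return (measured_intensity - offset) / slope
```

Every number that `inferred_count` returns carries the assumptions of the calibration (linearity, stable fluorescence per cell), which is exactly the sense in which "number of cells" is a theoretical quantity rather than a direct observable.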

But what is so special about the number of cells? In this post, I want to question the reasons to focus on individual cells (at the expense of other choices) as the basic atoms of our ontology.


Mutation-bias driving the evolution of mutation rates

In classic game theory, we are often faced with multiple potential equilibria and no unequivocal way to choose between them. If you've ever heard Artem justify dynamic approaches, such as evolutionary game theory, then you've seen this equilibrium selection problem take center stage. Natural selection has an analogous 'problem' of many local fitness peaks. Is the selection between them simply an accidental historical process? Or is there a method to the madness that is independent of the environment that defines the fitness landscape and that can produce long-term evolutionary trends?

Two weeks ago, in my first post of this series, I talked about an idea Wallace Arthur (2004) calls “developmental bias”, where the variation of traits in a population can determine which fitness peak the population evolves to. The idea is that if variation is generated more frequently in a particular direction, then fitness peaks in that direction are more easily discovered. Arthur hypothesized that this mechanism can be responsible for long-term evolutionary trends.

A very similar idea was discovered and called "mutation bias" by Yampolsky & Stoltzfus (2001). The difference between mutation bias and developmental bias is that Yampolsky & Stoltzfus (2001) described the idea in the language of discrete genetics rather than trait-based phenotypic evolution. They also did not invoke developmental biology. The basic mechanism, however, was the same: if a population is confronted with multiple nearby fitness peaks, mutation bias will make particular peaks much more likely to be reached.

In this post, I will discuss the Yampolsky & Stoltzfus (2001) “mutation bias”, consider applications of it to the evolution of mutation rates by Gerrish et al. (2007), and discuss how mutation is like and unlike other biological traits.
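The core of the mechanism can be sketched in an origin-fixation caricature (all rates and selection coefficients below are arbitrary illustrations): mutants toward peak i arise with probability proportional to the mutation rate u_i and, once arisen, fix with probability roughly 2 s_i under weak selection, so which peak is reached first is biased by the product u_i s_i.

```python
import random

def first_peak(u1, s1, u2, s2, rng):
    """Which fitness peak is reached first: type-i mutants arise with
    per-step probability u_i and fix with probability ~2*s_i (the
    weak-selection approximation to the fixation probability)."""
    while True:
        if rng.random() < u1 and rng.random() < 2 * s1:
            return 1
        if rng.random() < u2 and rng.random() < 2 * s2:
            return 2

rng = random.Random(0)
runs = 5000
# Twofold mutation bias toward peak 1, equal selection coefficients,
# so the prediction is u1*s1 / (u1*s1 + u2*s2) = 2/3 of runs reach peak 1.
wins = sum(first_peak(2e-2, 0.05, 1e-2, 0.05, rng) == 1 for _ in range(runs))
fraction = wins / runs
```

Even with identical fitness peaks, the biased supply of variation decides the outcome, which is the sense in which mutation bias is a directional force rather than noise.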


Don’t treat the player, treat the game: buffer therapy and bevacizumab

No matter how much I like modeling for the sake of modeling, or science for the sake of science, working in a hospital adds some constraints. At some point people look over at you measuring games in the Petri dish and ask “why are you doing this?” They expect an answer that involves something that benefits patients. That might mean prevention, early detection, or minimizing side-effects. But in most cases it means treatment: how does your work help us treat cancer? Here, I think, evolutionary game theory — and the Darwinian view of cancer more generally — offers a useful insight in the titular slogan: don’t treat the player, treat the game.

One of the most salient negative features of cancer is the tumour — the abnormal mass of cancer cells. It seems natural to concentrate on getting rid of these cells, or at least reducing their numbers. This is why tumour volume has become a popular surrogate endpoint for clinical trials. This is treating the player. Instead, evolutionary medicine would ask us to find the conditions that caused the system to evolve towards the state of having a large tumour and how we can change those conditions. Evolutionary therapy aims to change the environmental pressures on the tumour, such that the cancerous phenotypes are no longer favoured and are driven to extinction (or kept in check) by Darwinian forces. The goal is to change the game so that cancer proves to be a non-viable strategy.[1]

In this post I want to look at the pairwise game version of my joint work with Robert Vander Velde, David Basanta, and Jacob Scott on the Warburg effect (Warburg, 1956; Gatenby & Gillies, 2004) and acid-mediated tumour invasion (Gatenby, 1995; Gatenby & Gawlinski, 2003). Since in this work we are concerned with the effects of acidity and vascularization on cancer dynamics, I will concentrate on interventions that affect acidity (buffer therapy; for early empirical work, see Robey et al., 2009) or vascularization (angiogenesis inhibitor therapy like bevacizumab).

My goal isn't to say something new about these therapies, but to use them as illustrations of the importance of moving between qualitatively different dynamic regimes. In particular, I will be dealing with the oncological equivalent of a spherical cow in a frictionless vacuum. I have tried to add some caveats in the footnotes, but these could be multiplied indefinitely without reaching an acceptably complete picture.
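The "treat the game" logic can be made concrete for a pairwise game. Below is a sketch with hypothetical payoff matrices (not the payoffs derived in the post): classify the qualitative replicator-dynamics regime of a 2x2 game, then see how lowering a single payoff entry, a cartoon of a therapy changing the game, flips the regime from coexistence to extinction of strategy 1.

```python
def classify(A):
    """Qualitative replicator regime of a 2x2 game A = [[a, b], [c, d]].

    With alpha = a - c (invasion advantage of strategy 1 in a strategy-1
    population) and beta = d - b (same for strategy 2):
      alpha > 0, beta < 0  -> strategy 1 dominates
      alpha < 0, beta > 0  -> strategy 2 dominates
      alpha < 0, beta < 0  -> stable coexistence at x* = beta / (alpha + beta)
      alpha > 0, beta > 0  -> bistability (outcome depends on initial state)
    """
    (a, b), (c, d) = A
    alpha, beta = a - c, d - b
    if alpha > 0 and beta < 0:
        return 'strategy 1 dominates'
    if alpha < 0 and beta > 0:
        return 'strategy 2 dominates'
    if alpha < 0 and beta < 0:
        return 'coexistence'
    return 'bistability'

# Hypothetical baseline game where strategies 1 and 2 coexist.
baseline = [[0.0, 3.0], [1.0, 2.0]]
# Cartoon "therapy": reduce the payoff strategy 1 earns against strategy 2.
treated = [[0.0, 0.5], [1.0, 2.0]]
```

The point of the exercise is that the therapy does not remove any strategy-1 players directly; it only changes the payoffs, and the dynamics then drive strategy 1 out.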


Variation for supply driven evolution

I've taken a very long hiatus (nearly 5 years!) from this blog. I suppose getting married and getting an MD are good excuses, but Artem has very kindly let me return. And I greatly appreciate this chance, because I'd like to summarize an idea I have been working on for a while. So far, only two publications have come out of it (Xue et al., 2015a,b), but it's an idea that has me excited. So excited that I defended a thesis on it this Tuesday. For now, I call it supply-driven evolution, where I try to show how the generation of variation can determine long-term evolution.

Evolutionary theoreticians have long known that how variation is generated can play a decisive role in evolutionary outcomes. The reason is that natural selection can only choose among what has been generated, so focusing on natural selection alone will not produce a full understanding of evolution. But how does variation affect evolution, and can variation be the decisive factor in how evolution proceeds? I believe that the answer is "frequently, yes", because this mechanism does not actually compete with natural selection. I'll do a brief overview of the literature in the first few posts. By the end, I hope to show how this mechanism can explain some forms of irreversible evolution, the stuff I had blogged about five years ago.

