February 25, 2015
by Artem Kaznatcheev

As you know, dear regular reader, I have a rather uneasy relationship with reductionism, especially when doing mathematical modeling in biology. In mathematical oncology, for example, there is a hope that our models can bring a more rigorous mechanistic understanding of cancer, but at the same time there is the joke that for almost any proposed microscopic mechanism there is one experimental paper in the oncology literature supporting it and another contradicting it. With such a tenuous and shaky web of beliefs justifying (or merely hinting toward) our nearly arbitrary microdynamical assumptions, it seems unreasonable to ground our models in reductionist stories. At such a time of ontological crisis, my instinct is to turn — much like many physicists did during a similar crisis at the start of the 20th century in their discipline — to operationalism. Let us build a convincing mathematical theory of cancer in the Petri dish with as little reliance as possible on things we can't reliably measure, and then see where to go from there. To give another analogy to physics in the late 1800s: let us work toward a thermodynamics of cancer and worry about its many possible statistical mechanics later.

This is especially important in applications of evolutionary game theory, where assumptions abound. These assumptions aren't just about modeling details like the treatments of space and stochasticity, or approximations to them, but about whether there is even a game taking place, and what would constitute a game-like interaction. To work toward an operationalist theory of games, however, we need experiments that beg for EGT explanations. There is a recent history of this sort of experiment in viruses and microbes (Lenski & Velicer, 2001; Crespi, 2001; Velicer, 2003; West et al., 2007; Ribeck & Lenski, 2014), slime molds (Strassmann & Queller, 2011), and yeast (Gore et al., 2009; Sanchez & Gore, 2013), but the extension of such experiments to oncology by Archetti et al. (2015) is current events[1]. In the weeks since that paper appeared, I've had a very useful reading group and fruitful discussions with Robert Vander Velde and Julian Xue about the experimental aspects of this work. This Monday, I spent most of the afternoon discussing similar experiments with Robert Noble, who is visiting Moffitt from Montpellier this week.

In this post, I want to unlock some of this discussion from the confines of private emails and coffee chats. In particular, I will share my theorist's cartoon understanding of the experiments in Archetti et al. (2015), how they can help us build an operationalist approach to EGT, but also why they are not (yet) sufficient to demonstrate the authors' central claim that neuroendocrine pancreatic cancer dynamics involve a public good.


## Cancer, bad luck, and a pair of paradoxes

April 4, 2015 by Rob Noble

Among the highlights of my recent visit to IMO were several stimulating discussions with Artem Kaznatcheev. I’m still thinking over my response to his recent post about reductionist versus operationalist approaches in math biology, which is very relevant to some of my current research. Meanwhile, at Artem’s suggestion, this post will discuss a reanalysis of the “cancer and bad luck” paper that spurred so many headlines at the start of this year. Whereas many others have written critiques of that paper’s statistical methods and interpretations, my colleagues and I instead tried fitting alternative models to the underlying data. We thus found ourselves revisiting a couple of celebrated scientific paradoxes.

To start this post, I will introduce you to Simpson's paradox and Peto's paradox. With this pair of paradoxes in mind, we'll turn a critical eye to Tomasetti & Vogelstein (2015), and I will explain our reanalysis of their data set.
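For readers who haven't met Simpson's paradox before, here is a minimal numerical sketch. It uses the oft-cited kidney-stone treatment data (Charig et al., 1986) rather than anything from Tomasetti & Vogelstein's data set; the point is only the arithmetic: one treatment can win within every subgroup yet lose once the subgroups are aggregated.

```python
# Simpson's paradox: classic kidney-stone data (Charig et al., 1986).
# Each entry is (successes, total) for a treatment within a stratum.
data = {
    "A": {"small": (81, 87), "large": (192, 263)},
    "B": {"small": (234, 270), "large": (55, 80)},
}

def rate(successes, total):
    return successes / total

# Treatment A has the higher success rate within each stone-size stratum.
for stratum in ("small", "large"):
    a = rate(*data["A"][stratum])
    b = rate(*data["B"][stratum])
    assert a > b
    print(f"{stratum}: A = {a:.2f} > B = {b:.2f}")

# But aggregating over strata reverses the ordering, because A was
# disproportionately assigned to the harder (large-stone) cases.
overall = {
    t: rate(sum(s for s, _ in data[t].values()),
            sum(n for _, n in data[t].values()))
    for t in data
}
assert overall["B"] > overall["A"]
print(f"overall: A = {overall['A']:.2f} < B = {overall['B']:.2f}")
```

The reversal is driven entirely by the unequal allocation of cases across strata, which is exactly the kind of confounding one has to watch for when fitting models to pooled cancer-incidence data.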

