Measuring games in the Petri dish

For the next couple of months, Jeffrey Peacock is visiting Moffitt. He’s a 4th year medical student at the University of Central Florida with a background in microbiology and genetic engineering of bacteria and yeast. Together with Andriy Marusyk and Jacob Scott, he will move to human cells and run some in vitro experiments with non-small cell lung cancer — you can read more about this on Connecting the Dots. Robert Vander Velde is also in the process of designing some experiments of his own. Both Jeff and Robert are interested in evolutionary game theory, so this is a great opportunity for me to put my ideas on operationalization of replicator dynamics into practice.

In this post, I want to outline the basic process for measuring a game from in vitro experiments. Games in the Petri dish. It won’t be as action-packed as Agar.io — that’s an actual MMO cells-in-Petri-dish game; play here — but hopefully it will be more grounded in reality. I will introduce the gain function, show how to measure it, and stress the importance of quantifying the error on this measurement. Since this is part of the theoretical preliminaries for my collaborations, we don’t have our own data to share yet, so I will provide an illustrative cartoon with data from Archetti et al. (2015). Finally, I will show what sort of data would rule out the theoretician’s favourite matrix games and discuss the ego-centric representation of two-strategy matrix games. The hope is that we can use this work to go from heuristic guesses at what sort of games microbes or cancer cells might play to actually measuring those games.
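To fix notation (a minimal sketch, assuming a standard two-strategy matrix game with payoffs a, b, c, d, rather than anything specific to the experiments above): if p is the proportion of cells playing the first strategy, replicator dynamics can be written as

\[ \dot{p} = p(1 - p)\,\Gamma(p), \qquad \Gamma(p) = p\,(a - c) + (1 - p)\,(b - d), \]

where the gain function \(\Gamma(p)\) is simply the fitness of the first strategy minus the fitness of the second at frequency p. For a matrix game the gain function is linear in p, so measuring fitness differences at several seeding proportions and checking for departures from a straight line is the flavour of test I have in mind.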
Read more of this post

Operationalizing the local environment for replicator dynamics

Recently, Jake Taylor-King arrived in Tampa and last week we were brainstorming some projects to work on together. In the process, I dug up an old idea I’ve been playing with as my understanding of the Ohtsuki-Nowak transform matured. The basic goal is to work towards an operational account of spatial structure without having to commit ourselves to a specific model of space. I will take replicator dynamics and work backwards from them, making sure that each term we use can be directly measured in a single system or abducted from the other measurements. The hope is that if we start making such measurements then we might see some empirical regularities which will allow us to link experimental and theoretical models more closely without having to make too many arbitrary assumptions. In this post, I will sketch the basic framework and then give an example of how some of the spatial features can be measured from a sample histology.
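As a toy illustration of what ‘directly measured’ could mean for the spatial terms (the function, its input format, and the fixed interaction radius below are hypothetical stand-ins, not a settled protocol): given cell centres and types segmented from a histology slide, we can estimate the local environment experienced by a typical cell of each type.

```python
import numpy as np

def local_composition(coords, types, radius):
    """Toy sketch: for each cell, the fraction of its neighbours of each
    type within `radius`. `coords` is an (n, 2) array of cell centres
    (e.g. in pixels) and `types` an array of n integer labels, both
    hypothetical outputs of a segmented histology image."""
    coords = np.asarray(coords, dtype=float)
    types = np.asarray(types, dtype=int)
    n_types = types.max() + 1
    comps = np.zeros((len(coords), n_types))
    for i, centre in enumerate(coords):
        dist = np.linalg.norm(coords - centre, axis=1)
        neighbours = types[(dist > 0) & (dist <= radius)]
        if len(neighbours):
            for t in range(n_types):
                comps[i, t] = np.mean(neighbours == t)
    return comps
```

Averaging the rows of `comps` over all cells of a given type gives an empirical estimate of the local environment ‘seen’ by that type, which is the sort of term the replicator equation would be rewritten in.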
Read more of this post

Seeing edge effects in tumour histology

Some of the hardest parts of working towards the ideal of a theorist, at least for me, are: (1) making sure that I engage with problems that can be made interesting to the new domain I enter and not just me; (2) engaging with these problems in a way, and with tools, that can be made compelling and useful to the domain’s existing community; and (3) not being dismissive of and genuinely immersing myself in the background knowledge and achievements of the domain, at least around the problems I am engaging with. Ignoring these three points, especially the first, is one of the easiest ways to succumb to interdisciplinitis, a disease that catches me at times. For example, in one of the few references to TheEGG in the traditional academic literature, Karel Mulder writes on the danger of ignoring the second and third points:

Sometimes scientists are offering a helping hand to another discipline, which is all but a sign of compassion and charity… It is an expression of disdain for the poor colleagues that can use some superior brains.

The footnote that highlights an example of such “disciplinary arrogance/pride” is a choice quote from the introduction of my post on what theoretical computer science can offer biology. Mulder exposes my natural tendency toward condescension. Thus, to be a competent theorist, I need to actively work on inoculating myself against interdisciplinitis.

One of the best ways I know to learn humility is to work with great people from different backgrounds. In the domain of oncology, I found two such collaborators in Jacob Scott and David Basanta. Recently we updated our paper on edge effects in game theoretic dynamics of spatially structured tumours (Kaznatcheev et al., 2015); as always that link leads to the arXiv preprint, but this time — in a first for me — we have also posted the paper to the bioRxiv[1]. I’ve already blogged about the Basanta et al. (2008) work that inspired this and our new technical contribution[2], including the alternative interpretation of the transform of Ohtsuki & Nowak (2006) that we used along the way. So today I want to discuss some of the clinical and biological content of our paper, much of which was greatly expanded in this version. In the process, I want to reflect on the theorist’s challenge of learning the language and customs of a newly entered domain.

Read more of this post

An approach towards ethics: neuroscience and development

For me personally it has always been a struggle, reading through all the philosophical and religious literature I have a long-standing interest in, to verbalize my intuitive concept of morals in any satisfactory way. Luckily for me, once I started reading up on modern psychology and neuroscience, I found out that there are empirical models, based on clustering of the abundant concepts, that correlate well with both our cultured intuitions and our knowledge of brain functioning. Models that are to the study of ethics what the Big Five traits are to personality theories, or what the Cattell-Horn-Carroll theory is to cognitive abilities. In this post I’m going to provide an account of research at what I consider the most elucidating level of explanation of human morals: that of neuroscience and psychology. The following is not meant as a comprehensive review, but a sample of what I consider the most useful explanatory tools. The last section touches briefly upon the genetic and endocrinological components of human morals, but it is nothing more than a mention. Also, I’ve decided to omit the citations that appear inside quotes, because I don’t want to include research I am personally unfamiliar with in the list of references.

A good place to start is Jonathan Haidt’s TED talk:

Read more of this post

An approach towards ethics: primate sociality

Moral decision making is one of the major torrents in human behavior. It often overrides other ways of making judgments; it generates conflicting sets of cultural values and is reinforced by them. Such conflicts might even occur in the head of some unfortunate individual, which makes the process really creative. On the other hand, ethical behavior is a necessary social glue and the way people prioritize prosocial practices.

In the comments to his G+ post about Michael Sandel’s Justice course, Artem Kaznatcheev invited me to have a take on moral judgment and social emotions based on what I have gathered through my readings over the last couple of years. I’m by no means an expert in any of the fields that I touch upon in the following considerations, but I’ve been purposefully struggling with the topic, driven by my interest in the behavioral sciences, and trying to come up with a lucid framework for thinking about the subject. Not everything I write here is backed up very well by research, mainly because I step a little beyond the research and try to see what might come next, but I’ll do my best to keep my general understanding distinct from the concepts prevailing in the studies I have encountered. This is not an essay on ethics per se, but rather a picture of where I am now in understanding how moral sentiments work. One remark: for the purposes of this text I understand behavior broadly, e.g. thinking is a behavior.

Read more of this post

Experimental and comparative oncology: zebrafish, dogs, elephants

One of the exciting things about mathematical oncology is that thinking about cancer often forces me to leave my comfortable arm-chair and look at some actual data. No matter how much I advocate for the merits of heuristic modeling, when it comes to cancer, data-agnostic models take a back seat to data-rich modeling. This close relationship between theory and experiment is of great importance to the health of a discipline, and the MBI Workshop on the Ecology and Evolution of Cancer highlights the health of mathematical oncology: mathematicians are sitting side-by-side with clinicians, biologists with computer scientists, and physicists next to ecologists. This means that the most novel talks for me have been the ones highlighting the great variety of experiments that are being done and how they inform theory. In this post I want to highlight some of these talks, with a particular emphasis on using the study of cancer in non-humans to inform human medicine.
Read more of this post

Dogs are hosts to the oldest and most widely disseminated cancer

A little while ago, I got a new friend and roommate: Sugar. She is very docile, loves walks and belly-rubs, but isn’t a huge fan of other dogs. Her previous owner was an elderly woman who couldn’t take Sugar outside during most of the year — if you haven’t heard, Montreal is pretty difficult to walk around during winter. This resulted in less exposure to other dogs, leading to an anti-social attitude, and less exercise, which (combined with Sugar’s adorable demands for food) made her overweight. She now gets plenty of exercise and is slowly returning to a healthy weight and attitude.

But, you can never be too careful, so Sugar will go in for a check-up on Monday. Just like humans, dogs have many treatable conditions, and for some — like cancer — it is better to catch them early. But when it comes to cancer, there is one thing that sets dogs apart from nearly all other species: they are susceptible to one of only two known naturally occurring clonally transmissible cancers — canine transmissible venereal tumor (CTVT).

That’s right, a contagious cancer. More precisely, a single clonal line that has been living as a parasitic life form for over 11,000 years (Murchison, Wedge et al., 2014)!
Read more of this post

Software through the lens of evolutionary biology

My preferred job title is ‘theorist’, but that is often too ambiguous in casual and non-academic conversation, so I often settle for ‘computer scientist’. Unfortunately, it seems that the overwhelming majority of people equate computer scientists to programmers or some general ‘tech person’, forgetting M.R. Fellows’ rallying cry: “Computer science is not about machines, in the same way that astronomy is not about telescopes.” Although — like most theorists — I know how to program, the programming I do is nothing like what (I hear) is done in industry. In particular, all of my code is relatively small and, with concentration or maybe a single sheet of paper, I can usually keep the whole thing in my head. In fact, the only time I’ve worked in a large code base was developing extensions for MediaWiki during my first summer of college, to be used by some groups at the Canadian Light Source. Combined with the preceding semester of drawing UML diagrams and writing up req&spec documents, I was convinced that I would never be a software engineer. However, I did learn some valuable lessons: real world projects are big and unwieldy, logistics have to be taken seriously, comments and documentation are your friends, and for a sufficiently large software project there is no single person that knows the whole code.

With that much unknown, it is not surprising that bugs abound. Even back in 2002, software bugs cost the US $59.5 billion annually, or 0.6% of GDP, and I imagine the cost has only gone up. If you count ultrafast extreme events or flash crashes of automated high-frequency traders as bugs, then some argue that you have part of our recent financial crises to blame on software errors (Johnson et al., 2013). To get a feel for the numbers, a big project like Mozilla Firefox can easily get 2000 new bugs in a year (see figure at left). Yet most of these bugs are not particularly difficult, and don’t require major overhauls to fix. If even the most serious failures can be fixed by a 12-year-old, why not let evolution have a go?
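To make ‘letting evolution have a go’ a bit more concrete, here is a minimal sketch of the GenProg-style idea. Everything in it is hypothetical: a candidate patch is just a bit-vector over a pool of possible edits, and `fitness(patch)`, which in a real system would apply the chosen edits and run the test suite, is assumed to return the number of passing tests.

```python
import random

def evolve_patch(n_edits, fitness, generations=200, pop_size=20, mut_rate=0.05):
    """Toy sketch of evolutionary program repair: a patch is a bit-vector
    choosing which of n_edits candidate edits to apply, and fitness(patch)
    is assumed to return the number of tests the patched program passes."""
    def mutate(patch):
        # flip each bit independently with probability mut_rate
        return [b ^ (random.random() < mut_rate) for b in patch]

    population = [[random.random() < 0.5 for _ in range(n_edits)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]          # truncation selection
        children = [mutate(random.choice(parents))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)
```

Real systems like GenProg work over abstract syntax trees and bias edits toward code executed by failing tests, but the loop above captures the mutate-select-repeat skeleton.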
Read more of this post

Programming language for biochemistry

Computer scientists who think of nature as literally computing often take the stance that biological organisms are nothing more than protein interaction networks. For example, this is the stance that Leslie Valiant (2009) takes when defining ecorithms: biology is just a specialization of computer science focused on evolvable circuits. User @exploderator summarized the realist computational view of biology on Reddit while answering what theoretical computer science can offer biology:

[B]iology is primarily chemo-computation, chemical information systems and computational hardware.
Theoretical comp sci is the only field that is actually specifically dedicated to studying the mathematics / logic of computation. Therefore, although biology is an incredibly hard programming problem (only a fool thinks nature simple), it is indeed more about programming and less about the hardware it’s running on.

Although it is an easy stance for a theoretician to take, it is a little bit more involved for a molecular biologist, chemist, or engineer. Yet for the last 30 years, even experimentalists have been captivated by this computational realism and promise of engineering molecular devices (Drexler, 1981). Half a year ago, I even reviewed Bonnet et al. (2013) taking steps towards building transcriptors. They are focusing on the hardware side of biological computation and building a DNA-analogue of the von Neumann architecture. However, what we really need is a level of abstraction: a chemical programming language that can be compiled into biocompatible reactions.
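To illustrate the sort of abstraction I mean, here is a toy sketch, not any existing chemical programming language: under mass-action kinetics, the single reaction A + B -> C ‘computes’ the minimum of its two input concentrations, with the answer read off as the limiting concentration of C. The function name, rate constant, and Euler integration below are just illustrative choices.

```python
def simulate_min_crn(a0, b0, k=1.0, dt=1e-3, steps=20000):
    """Toy 'chemical program': the reaction A + B -> C under mass-action
    kinetics drives [C] towards min(a0, b0). Deterministic Euler integration
    of the rate equations; a0 and b0 are the initial concentrations."""
    a, b, c = float(a0), float(b0), 0.0
    for _ in range(steps):
        rate = k * a * b      # mass-action rate of A + B -> C
        a -= rate * dt
        b -= rate * dt
        c += rate * dt
    return c

print(simulate_min_crn(2.0, 3.0))   # approaches 2.0 = min(2, 3)
```

A chemical compiler would go the other way: start from a program written in terms of such primitives and emit a set of biocompatible reactions (say, DNA strand displacement cascades) that implement it.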
Read more of this post

Epistasis and empirical fitness landscapes

Biologists tend to focus on nuances — to the point that Rutherford dismissed the field as stamp collecting — and very local properties of systems, leading at times to rather reductionist views. These approaches are useful for connecting to experiment, but can be shown to underspecify conceptual models that need a more holistic approach. In the case of fitness landscapes, the metric that biologists study is epistasis — the amount of between-locus interaction — and it is usually considered for the interaction of just two loci at a time; although Beerenwinkel et al. (2007a,b) have recently introduced a geometric theory of gene interaction for considering epistasis across any number of loci. In contrast, more holistic measures can be as simple as the number of peaks in the landscape, or as complicated as the global combinatorial features of interest to theoretical computer scientists. In this post I discuss connections between the two and provide a brief overview of the empirical work on fitness landscapes.
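For concreteness, here is the standard two-locus bookkeeping under one common convention (additive, or log-transformed, fitness): writing \(f_{ab}, f_{Ab}, f_{aB}, f_{AB}\) for the fitnesses of the four genotypes, pairwise epistasis is

\[ \epsilon = f_{AB} + f_{ab} - f_{Ab} - f_{aB}. \]

Magnitude epistasis means \(\epsilon \neq 0\) while each mutation keeps the same sign of effect in both backgrounds; sign epistasis means a mutation (say \(a \to A\)) is beneficial in one background but deleterious in the other (for example \(f_{Ab} < f_{ab}\) but \(f_{AB} > f_{aB}\)); and reverse (reciprocal) sign epistasis means this holds at both loci, which is what allows more than one local peak even with only two loci.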

Epistasis in fitness graphs of two loci. Arrows point from lower fitness to higher fitness, and AB always has higher fitness than ab. From left to right: no epistasis, sign epistasis, reverse sign epistasis.

Read more of this post