Short history of iterated prisoner’s dilemma tournaments

If I had to pick the year that computational modeling invaded evolutionary game theory, it would be 1980. In March of that year, exactly thirty-five years ago, Robert Axelrod, a professor of political science at the University of Michigan, published the results of his first iterated prisoner's dilemma tournament in the Journal of Conflict Resolution. Game theory experts, especially those specializing in the Prisoner's Dilemma, from the disciplines of psychology, political science, economics, sociology, and mathematics submitted 14 FORTRAN programs to compete in a round-robin tournament coded by Axelrod and his research assistant Jeff Pynnonen. If you want to relive these early days of evolutionary game theory but have forgotten FORTRAN and only speak Python, then I recommend submitting a strategy to an analogous tournament run by Vincent Knight on GitHub. But before I tell you more about submitting, dear reader, I want to celebrate the anniversary of Axelrod's paper by sharing more about the original tournament.
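Knight's Axelrod-Python library defines its own interface for strategies, but the computation at the heart of such a tournament is simple enough to sketch in a few lines. The following toy round-robin is only an illustration of the idea, not the library's actual API: the strategies, payoff values, and 10-round match length are my arbitrary choices.

```python
# Standard prisoner's dilemma payoffs to (row, column) player:
# mutual cooperation R=3, mutual defection P=1,
# temptation T=5 for defecting on a cooperator, sucker's payoff S=0.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_history, opp_history):
    # cooperate first, then mirror the opponent's last move
    return opp_history[-1] if opp_history else 'C'

def always_defect(my_history, opp_history):
    return 'D'

def always_cooperate(my_history, opp_history):
    return 'C'

def match(s1, s2, rounds=10):
    """Play two strategies against each other; return their total scores."""
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        score1 += p1; score2 += p2
    return score1, score2

def tournament(strategies, rounds=10):
    """Round-robin: every distinct pair plays one match."""
    totals = {name: 0 for name in strategies}
    names = list(strategies)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            sa, sb = match(strategies[a], strategies[b], rounds)
            totals[a] += sa
            totals[b] += sb
    return totals

totals = tournament({'tit_for_tat': tit_for_tat,
                     'always_defect': always_defect,
                     'always_cooperate': always_cooperate})
# in this tiny field: tit_for_tat 39, always_defect 64, always_cooperate 30
```

Note that in this tiny field always-defect comes out on top; part of what made Axelrod's result striking is that in his much richer field of 14 submissions, it was the nice and retaliatory tit-for-tat that won.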

Maybe it will give you some ideas for strategies.
Read more of this post

Five motivations for theoretical computer science

There are some situations, perhaps lucky ones, where it is felt that an activity needs no external motivation or justification.  For the rest, it can be helpful to think of what the task at hand can be useful for. This of course doesn’t answer the larger question of what is worth doing, since it just distributes the burden somewhere else, but establishing these connections seems like a natural part of an answer to the larger question.

Along those lines, the following are five intellectual areas whose study can benefit from theoretical computer science concepts and their development; a curiosity about these areas can therefore provide some motivation for learning about those cstheory concepts or developing them. They are arranged from the most likely to seem obvious to the least: technology, mathematics, science, society, and philosophy. This post could also serve as an homage to delayed gratification (perhaps with some procrastination mixed in), having finally been written up more than three years after I first discussed it with Artem.

Read more of this post

Operationalizing replicator dynamics and partitioning fitness functions

As you know, dear regular reader, I have a rather uneasy relationship with reductionism, especially when doing mathematical modeling in biology. In mathematical oncology, for example, it seems that there is a hope that through our models we can bring a more rigorous mechanistic understanding of cancer, but at the same time there is the joke that given almost any microscopic mechanism there is an experimental paper in the oncology literature supporting it and another to contradict it. With such a tenuous and shaky web of beliefs justifying (or just hinting towards) our nearly arbitrary microdynamical assumptions, it seems unreasonable to ground our models in reductionist stories. At such a time of ontological crisis, I have an instinct to turn — much like many physicists did during a similar crisis at the start of the 20th century in their discipline — to operationalism. Let us build a convincing mathematical theory of cancer in the Petri dish with as little reliance as possible on things we can't reliably measure, and then see where to go from there. To give another analogy to physics in the late 1800s, let us work towards a thermodynamics of cancer and worry about its many possible statistical mechanics later.

This is especially important in applications of evolutionary game theory, where assumptions abound. These assumptions aren't just about modeling details, like the treatments of space and stochasticity or approximations to them, but about whether there is even a game taking place or what would constitute a game-like interaction. However, to work toward an operationalist theory of games, we need experiments that beg for EGT explanations. There is a recent history of this sort of experiment in viruses and microbes (Lenski & Velicer, 2001; Crespi, 2001; Velicer, 2003; West et al., 2007; Ribeck & Lenski, 2014), slime molds (Strassmann & Queller, 2011), and yeast (Gore et al., 2009; Sanchez & Gore, 2013), but the start of these experiments in oncology by Archetti et al. (2015) is a very recent development[1]. In the weeks since that paper, I've had a very useful reading group and fruitful discussions with Robert Vander Velde and Julian Xue about the experimental aspects of this work. This Monday, I spent most of the afternoon discussing similar experiments with Robert Noble, who is visiting Moffitt from Montpellier this week.

In this post, I want to unlock some of this discussion from the confines of private emails and coffee chats. In particular, I will share my theorist's cartoon understanding of the experiments in Archetti et al. (2015), explain how they can help us build an operationalist approach to EGT, and discuss why they are not (yet) sufficient to demonstrate the authors' central claim that neuroendocrine pancreatic cancer dynamics involve a public good.
Read more of this post

Pairwise games as a special case of public goods

Usually, when we are looking at public goods games, we consider an agent interacting with a group of n other agents. In our minds, we often imagine n to be large, or sometimes even take the limit as n goes to infinity. However, this isn’t the only limit that we should consider when we are grooming our intuition. It is also useful to scale to pairwise games by setting n = 1. In the case of a non-linear public good game with constant cost, this results in a game given by two parameters \frac{\Delta f_0}{c}, \frac{\Delta f_1}{c} — the difference in the benefit of the public good from having 1 instead of 0 and 2 instead of 1 contributors in the group, respectively, measured in multiples of the cost c. In that case, if we want to recreate any two-strategy pairwise cooperate-defect game with the canonical payoff matrix \begin{pmatrix}1 & U \\ V & 0 \end{pmatrix} then just set \frac{\Delta f_0}{c} = 1 + U and \frac{\Delta f_1}{c} = 2 - V. Alternatively, if you want a free public good (c = 0) then use \Delta f_0 = U and \Delta f_1 = 1 - V. I’ll leave verifying the arithmetic as an exercise for you, dear reader.
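If you would rather have the arithmetic checked for you, here is a quick sketch in Python. It assumes the standard payoff structure for a non-linear public good with constant cost (both players receive the good f(k) for k contributors in the pair, and each contributor pays c), and compares the games through their gain functions, since two matrix games produce the same replicator dynamics exactly when their gains agree up to a positive scale:

```python
# n = 1 public good, wlog f(0) = 0, so f(1) = df0 and f(2) = df0 + df1.

def pg_gains(df0, df1, c):
    """Gain of cooperating over defecting, against a C- and a D-opponent.

    C vs C: f(2) - c,  D vs C: f(1)   =>  gain vs C = df1 - c
    C vs D: f(1) - c,  D vs D: f(0)   =>  gain vs D = df0 - c
    """
    return (df1 - c, df0 - c)

def canonical_gains(U, V):
    # canonical matrix [[1, U], [V, 0]] (rows: C, D):
    # gain vs C = 1 - V, gain vs D = U - 0
    return (1 - V, U)

c = 2.0
for U in (-1.5, 0.0, 0.5, 2.0):
    for V in (-1.0, 0.0, 0.5, 3.0):
        df0, df1 = c * (1 + U), c * (2 - V)
        gains = pg_gains(df0, df1, c)
        target = canonical_gains(U, V)
        # equal up to the positive scale c, so the dynamics coincide
        assert all(abs(g - c * t) < 1e-12 for g, t in zip(gains, target))
        # free public good (c = 0): set df0 = U and df1 = 1 - V
        assert pg_gains(U, 1 - V, 0.0) == (1 - V, U)
```

The script runs silently: every parameter pair passes both checks.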

In this post, I want to use this sort of n = 1 limit to build a little bit more intuition for the double public good games that I built recently with Robert Vander Velde, David Basanta, and Jacob Scott to think about acid-mediated tumor invasion. In the process, we will get to play with some simplexes to classify the nine qualitatively distinct dynamics of this limit and write another page in my open science notebook.
Read more of this post

Evolutionary non-commutativity suggests novel treatment strategies

In the Autumn of 2011 I received an email from Jacob Scott, now a good friend and better mentor, who was looking for an undergraduate to code an evolutionary simulation. Jake had just arrived in Oxford to start his DPhil in applied mathematics and by chance had dined at St Anne’s College with Peter Jeavons, then a tutor of mine, the evening before. Jake had outlined his ideas, Peter had supplied a number of email addresses, Jake sent an email, and I uncharacteristically replied saying I’d give it a shot. These unlikely events would lead me to where I am today — a DPhil candidate in the Oxford University Department of Computer Science. My project with Jake was a success and I was invited to speak at the 2012 meeting of the Society for Mathematical Biology in Knoxville, TN. There I met one of Jake’s supervisors, Alexander Anderson, who invited me to visit the Department of Integrated Mathematical Oncology at the Moffitt Cancer Center and Research Institute for a workshop in December of that year. There Dr. Anderson and I discussed one of the key issues with the work I will present in this post, issues that now form the basis of my DPhil, with Dr. Anderson as one of my two supervisors. Fittingly, the other is Peter Jeavons.

Jake was considering the problem of treating and avoiding drug resistance and in his short email provided his hypothesis as a single question: “Can we administer a sequence of drugs to steer the evolution of a disease population to a configuration from which resistance cannot emerge?”

In Nichol et al. (2015), we provide evidence for an affirmative answer to this question. I would like to use this post to introduce you to our result, and discuss some of the criticisms.

Read more of this post

Evolutionary game theory without interactions

When I am working on evolutionary game theory, I usually treat the models I build as heuristics to guide intuitions and push the imagination. But working on something as practical as cancer, and being in a department with many physics-trained colleagues, puts pressure on me to think of moving more towards insilications or abductions. Now, Philip Gerlee and Philipp Altrock are even pushing me in that direction with their post on TheEGG. So this entry might seem a bit uncharacteristic: I will describe an experiment — at least as a theorist like me imagines one.

Consider the following idealized protocol that is loosely inspired by Archetti et al. (2015) and the E. coli long-term evolution experiment (Lenski et al., 1991; Wiser et al., 2013; Ribeck & Lenski, 2014). We will (E1) take a new petri dish or plate; (E2) fill it with a fixed mix of nutritional medium like fetal bovine serum; (E3) put a known number N of two different cell types A and B on the medium (on the first plate we will also know the proportion of A and B in the mixture); (E4) let them grow for a fixed amount of time T which will be on the order of a cell cycle (or two); (E5) scrape the cells off the medium; and (E6) return to step (E1) while selecting N cells at random from the ones we got in step (E5) to seed step (E3). Usually, you would use this procedure to see how A-cells and B-cells compete with each other, as Archetti et al. (2015) did. However, what would it look like if the cells don’t compete with each other? What if they produce no signalling molecules — in fact, if they excrete nothing into the environment, to avoid cross-feeding interactions — and don’t touch each other? What if they just sit there independently eating their very plentiful nutrient broth?[1]
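To make the protocol concrete, here is a minimal sketch of steps (E1)-(E6) in Python, under the simplest possible assumption that the two types grow exponentially and independently, with no interaction at all; the growth factors, N, and number of rounds are my arbitrary illustrative choices.

```python
import random

def passage(n_a, n_b, g_a, g_b, N, rng):
    """One round of the protocol: (E4) grow for time T, folded into the
    per-round growth factors g_a and g_b; (E5) harvest; (E6) seed the
    next plate with N cells sampled at random from the harvest."""
    harvest_a = n_a * g_a  # independent growth: no interaction terms
    harvest_b = n_b * g_b
    p_a = harvest_a / (harvest_a + harvest_b)
    new_a = sum(rng.random() < p_a for _ in range(N))  # binomial sample
    return new_a, N - new_a

def expected_proportion(p, g_a, g_b):
    """Expected update of the A-proportion: formally replicator dynamics
    with constant 'fitnesses' g_a and g_b, even though nothing interacts."""
    return p * g_a / (p * g_a + (1 - p) * g_b)

rng = random.Random(0)
n_a, n_b = 50, 50
for _ in range(20):
    n_a, n_b = passage(n_a, n_b, g_a=2.0, g_b=1.5, N=100, rng=rng)
```

With constant growth factors this is just selection for the faster grower, so the A-type takes over the plate. The puzzle of this post is whether anything genuinely game-like, that is frequency-dependent, can arise once we look more carefully at cells that never interact.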

Would you expect to see evolutionary game dynamics between A and B? Obviously, since I am asking, I expect some people to answer ‘no’ and then be surprised when I derive some math to show that the answer can be ‘yes’. So, dear reader, humour me by being surprised.
Read more of this post

False memories and journalism

We like to think of ourselves as a collection of our memories, and of each memory as a snapshot of an event in our lives. Sure, we all know that our minds aren’t as sturdy as a computer’s hard drive, so these snapshots decay over time, especially the boring ones — that’s why most of us can’t remember what we had for breakfast 12 years ago. We are even familiar with old snapshots rearranging their order and losing context, but we don’t expect to generate vivid and certain memories of events that didn’t occur. How could we have a snapshot of something that didn’t happen?

This view of memory is what makes Brian Williams’ recent fib, about being aboard a helicopter that was hit by two rockets and small-arms fire in Iraq 12 years ago, so hard to believe. There was indeed a helicopter that was forced to land on that day, but the downed aircraft’s crew reports that Williams was actually on a helicopter about an hour behind the three that came under fire. Williams has apologized for his story, saying he conflated his helicopter with the downed one. To this, Erik Wemple voices the popular skepticism that “‘conflating’ the experience of taking incoming fire with the experience of not taking incoming fire seems verily impossible.”

But research into false memories suggests that constructed memories like Williams’ do occur. In this post, I want to discuss this sort of false memory, share a particularly interesting example, and then consider what this might mean for journalism.

Read more of this post

Rogers’ paradox: Why cheap social learning doesn’t raise mean fitness

It’s Friday night, you’re lonely, you’re desperate and you’ve decided to do the obvious—browse Amazon for a good book to read—when, suddenly, you’re told that you’ve won one for free. Companionship at last! But, as you look at the terms and conditions, you realize that you’re only given a few options to choose from. You have no idea what to pick, but luckily you have some help: Amazon lets you read through the first chapter of each book before choosing and, now that you think about it, your friend has read most of the books on the list as well. So, how do you choose your free book?

If you answered “read the first chapter of each one,” then you’re a fan of asocial/individual learning. If you decided to ask your friend for a recommendation, then you’re in favor of social learning. Individual learning would probably have taken far more time here than social learning, and that is thought to be the typical case: social learning’s prevalence is often explained in terms of its ability to reduce costs — such as metabolic, opportunity or predation costs — below those incurred by individual learning (Aoki et al., 2005; Kendal et al., 2005; Laland, 2004). However, a model by Rogers (1988) famously showed that this is not the whole story behind social learning’s evolution.
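Rogers' argument is easy to sketch numerically. Suppose (these are my simplifying assumptions, in the spirit of Rogers' model rather than its exact form) that the environment changes with probability u each generation; individual learners always acquire the currently correct behaviour, earning b - c; and social learners pay no learning cost but copy a random member of the previous generation, earning b only if the copied behaviour is still correct:

```python
def social_accuracy(p, u):
    """Steady-state chance that a social learner's copied behaviour is
    still correct, given a fraction p of social learners and an
    environment that changes with probability u per generation.
    Solves the self-consistency q = (1 - u) * ((1 - p) + p * q)."""
    return (1 - u) * (1 - p) / (1 - (1 - u) * p)

def mean_fitness(p, b, c, u):
    # individual learners earn b - c; social learners earn b * q
    return (1 - p) * (b - c) + p * b * social_accuracy(p, u)

def equilibrium_fraction(b, c, u):
    """Fraction p* of social learners at which both types earn the same
    payoff, i.e. b * social_accuracy(p*, u) = b - c."""
    return (b * (1 - u) - (b - c)) / (c * (1 - u))

b, c, u = 1.0, 0.2, 0.1
p_star = equilibrium_fraction(b, c, u)
# Below p* social learners out-earn individual learners; above p* they
# do worse; so the population settles at p*.  The paradox: the mean
# fitness at p* is exactly b - c, no better than a population with no
# social learners at all.
```

The design choice here is to work with the deterministic self-consistency equation rather than simulate: it makes the punchline, that mean fitness at the mixed equilibrium equals the all-individual-learner baseline, an exact identity rather than a noisy estimate.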
Read more of this post

Seeing edge effects in tumour histology

Some of the hardest parts of working towards the ideal of a theorist, at least for me, are: (1) making sure that I engage with problems that can be made interesting to the new domain I enter and not just to me; (2) engaging with these problems in ways, and with tools, that can be made compelling and useful to the domain’s existing community; and (3) not being dismissive of and genuinely immersing myself in the background knowledge and achievements of the domain, at least around the problems I am engaging with. Ignoring these three points, especially the first, is one of the easiest ways to succumb to interdisciplinitis, a disease that catches me at times. For example, in one of the few references to TheEGG in the traditional academic literature, Karel Mulder writes on the danger of ignoring the second and third points:

Sometimes scientists are offering a helping hand to another discipline, which is all but a sign of compassion and charity… It is an expression of disdain for the poor colleagues that can use some superior brains.

The footnote that highlights an example of such “disciplinary arrogance/pride” is a choice quote from the introduction of my post on what theoretical computer science can offer biology. Mulder exposes my natural tendency toward condescension. Thus, to be a competent theorist, I need to actively work on inoculating myself against interdisciplinitis.

One of the best ways I know to learn humility is to work with great people from different backgrounds. In the domain of oncology, I found two such collaborators in Jacob Scott and David Basanta. Recently we updated our paper on edge effects in game theoretic dynamics of spatially structured tumours (Kaznatcheev et al., 2015); as always that link leads to the arXiv preprint, but this time — in a first for me — we have also posted the paper to the bioRxiv[1]. I’ve already blogged about the Basanta et al. (2008) work that inspired this and our new technical contribution[2], including the alternative interpretation of the transform of Ohtsuki & Nowak (2006) that we used along the way. So today I want to discuss some of the clinical and biological content of our paper, much of which was greatly expanded in this version. In the process, I want to reflect on the theorist’s challenge of learning the language and customs of a newly entered domain.

Read more of this post

An approach towards ethics: neuroscience and development

For me personally, it has always been a struggle, while reading through the philosophical and religious literature I have a long-standing interest in, to verbalize my intuitive concept of morals in any satisfactory way. Luckily for me, once I started reading up on modern psychology and neuroscience, I found that there are empirical models, based on clustering of the abundant concepts, that correlate well with both our cultured intuitions and our knowledge of brain functioning. These models are to the study of ethics what the Big Five traits are to personality theories, or what the Cattell-Horn-Carroll theory is to cognitive abilities. In this post I’m going to provide an account of research at what I find the most elucidating level of explanation of human morals: that of neuroscience and psychology. The following is not meant as a comprehensive review, but as a sample of what I consider the most useful explanatory tools. The last section touches briefly upon the genetic and endocrinological components of human morals, but it is nothing more than a mention. Also, I’ve decided to omit citations within quotes, because I don’t want the reference list to include research I am personally unfamiliar with.

A good place to start is Jonathan Haidt’s TED talk:

Read more of this post
