Misbeliefs, evolution and games: a positive case

A recurrent theme here in TheEGG is the limits and reliability of knowledge. These get explored from many directions: on epistemological grounds, from the philosophy of science angle, but also formally, through game theory and simulations. In this post, I will explore the topic of misbeliefs as adaptations. By misbeliefs I mean ideas about reality that a given subject accepts as true, despite being wrong, inaccurate or otherwise mistaken. The notion that evolution might not systematically and exclusively support true beliefs isn't new to TheEGG, and it has been tackled by many other people, by means of different methodologies, including my own personal philosophising. The overarching question is whether misbeliefs can be systematically adaptive, a prospect that tickles my devious instincts: if it were the case, it would fly in the face of naïve rationalists, who frequently assume that evolution consistently favours the emergence of truthful ways to perceive the world.

Given our common interests, Artem and I have had plenty of long discussions in the past couple of years, mostly sparked by his work on Useful Delusions (see Kaznatcheev et al., 2014). For some more details on our exchanges, as well as a little background on myself, please see the notes[1]. A while ago, I found an article by McKay and Dennett (M&D) entitled "The evolution of misbelief" (2009)[2]. Artem offered me the chance to write a guest post on it, and I was very happy to accept.

What follows will mix philosophical, clinical and mathematical approaches, in the hope of producing a multidisciplinary synthesis.
Read more of this post

EGT Reading Group 46 – 50 and a photo

Part of the original intent for this blog was to accompany the evolutionary game theory reading group that I started running at McGill in 2010. The blog has taken off, but the reading group has waned. However, since I still have some hope to revive a regular reading group, I have continued to call occasional journal discussion meetings that I organize as the EGT reading group. These meetings are very sparse and highly irregular, not the weekly groups that they were in 2010. For example, since my last update on May 28th, 2013, around 22 months have passed with the group meeting only 5 times. Still, these 5 meetings bring us to a milestone and hence an update on the papers we’ve read:
Read more of this post

A detailed update on readership for the first 200 posts

It is time — this is the 201st article on TheEGG — to get an update on readership since our 151st post and lament on why academics should blog. I apologize for this navel-gazing post, and it is probably of no interest to you unless you are really excited about blog statistics. I am writing this post largely for future reference and to celebrate this arbitrary milestone.

The statistics in this article are largely superficial proxies — what does a view even mean? — and are only notable because of how easy they are to track. These proxies should never be used to seriously judge academics, but I do think they can serve as a useful self-tracking tool. Making your blog's statistics publicly available can also help other bloggers get an idea of what sort of readership and posting habits are typical. In keeping with this rough and lighthearted comparison, according to Jeromy Anglim's order-of-magnitude rules of thumb, in the year since the last update the blog has been popular in terms of RSS subscribers and relatively popular in terms of annual page views.

As before, I’ll start with the public self-metrics of the viewership graph for the last 6 and a half months:

Columns are views per week at TheEGG blog since the end of August, 2014. The vertical lines separate months, and the black line is average views per day for each month. The scale for weekly views is on the left; it is different from the scale for daily averages, which are labeled at the height of each black line.

If you’d like to know more, dear reader, then keep reading. Otherwise, I will see you on the next post!
Read more of this post

Pairing tools and problems: a lesson from the methods of mathematics and the Entscheidungsproblem

Three weeks ago it was my lot to present at the weekly integrated mathematical oncology department meeting. Given the informal setting, I decided to grab one gimmick and run with it. I titled my talk: '2'. It was an overview of two recent projects that I've been working on: double public goods for acid-mediated tumour invasion, and edge effects in game theoretic dynamics of solid tumours. For the former, I considered two approximations: the limit where the number n of interaction partners is large, and the case n = 1 — so there are two interacting parties. But the numerology didn't stop there; my real goal was to highlight a duality between tools or techniques and the problems we apply them to or domains we use them in. As is popular at the IMO, the talk was live-tweeted with many unflattering photos and this great paraphrase (or was it a quote?) by David Basanta from my presentation's opening:

Since I was rather sleep deprived from preparing my slides, I am not sure what I said exactly but I meant to say something like the following:

I don’t subscribe to the perspective that we should pick the best tool for the job. Instead, I try to pick the best tuple of job and tool given my personal tastes, competences, and intuitions. In doing so, I aim to push the tool slightly beyond its prior borders — usually with an incremental technical improvement — while also exploring a variant perspective — but hopefully still grounded in the local language — on some domain of interest. The job and tool march hand in hand.

In this post, I want to unpack this principle and follow it a little deeper into the philosophy of science. In the process, I will touch on the differences between endogenous and exogenous questions. I will draw some examples from my own work, but will rely primarily on methodological inspiration from pure math and the early days of theoretical computer science.

Read more of this post

Short history of iterated prisoner’s dilemma tournaments

Nineteen Eighty — if I had to pick the year that computational modeling invaded evolutionary game theory then that would be it. In March 1980 — exactly thirty-five years ago — Robert Axelrod, a professor of political science at the University of Michigan, published the results of his first tournament for the iterated prisoner's dilemma in the Journal of Conflict Resolution. Game theory experts, especially those specializing in the prisoner's dilemma, from the disciplines of psychology, political science, economics, sociology, and mathematics submitted 14 FORTRAN programs to compete in a round-robin tournament coded by Axelrod and his research assistant Jeff Pynnonen. If you want to relive these early days of evolutionary game theory but have forgotten FORTRAN and only speak Python then I recommend submitting a strategy to an analogous tournament by Vincent Knight on GitHub. But before I tell you more about submitting, dear reader, I want to celebrate the anniversary of Axelrod's paper by sharing more about the original tournament.
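If you want to see the mechanics of such a tournament without digging up FORTRAN, here is a toy round-robin in plain Python. It is a sketch of the format only (200-move matches, the standard payoffs R = 3, T = 5, S = 0, P = 1, every strategy playing every other strategy and itself), not a reconstruction of the original entries, and its tiny field of three strategies is nowhere near the rich field of fourteen in which Anatol Rapoport's Tit for Tat famously won.

```python
# A toy round-robin of iterated prisoner's dilemma strategies in the
# spirit of Axelrod's 1980 tournament. Payoffs and match length follow
# the original format; the strategies are placeholders of my own.

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(own_history, other_history):
    # Cooperate first, then copy the opponent's last move.
    return other_history[-1] if other_history else 'C'

def always_cooperate(own_history, other_history):
    return 'C'

def always_defect(own_history, other_history):
    return 'D'

def match(strategy_1, strategy_2, moves=200):
    # Play one 200-move iterated game and return both total scores.
    h1, h2 = [], []
    score_1 = score_2 = 0
    for _ in range(moves):
        m1 = strategy_1(h1, h2)
        m2 = strategy_2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1)
        h2.append(m2)
        score_1 += p1
        score_2 += p2
    return score_1, score_2

strategies = {'TitForTat': tit_for_tat,
              'AlwaysCooperate': always_cooperate,
              'AlwaysDefect': always_defect}
totals = {name: 0 for name in strategies}
for name_1, s1 in strategies.items():
    for name_2, s2 in strategies.items():  # round-robin, self-play included
        own_score, _ = match(s1, s2)
        totals[name_1] += own_score

for name, score in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(name, score)
```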

Maybe it will give you some ideas for strategies.
Read more of this post

Five motivations for theoretical computer science

There are some situations, perhaps lucky ones, where it is felt that an activity needs no external motivation or justification. For the rest, it can be helpful to think of what the task at hand can be useful for. This of course doesn't answer the larger question of what is worth doing, since it just distributes the burden somewhere else, but establishing these connections seems like a natural part of an answer.

Along those lines, the following are five intellectual areas where theoretical computer science concepts and their development can be useful – therefore, a curiosity about these areas can provide some motivation for learning about those cstheory concepts or developing them. They are arranged from those likely most obvious to most people to the least so: technology, mathematics, science, society, and philosophy. This post could also serve as an homage to delayed gratification (perhaps with some procrastination mixed in), having finally been written up more than three years after I first discussed it with Artem.

Read more of this post

Operationalizing replicator dynamics and partitioning fitness functions

As you know, dear regular reader, I have a rather uneasy relationship with reductionism, especially when doing mathematical modeling in biology. In mathematical oncology, for example, it seems that there is a hope that through our models we can bring a more rigorous mechanistic understanding of cancer, but at the same time there is the joke that given almost any microscopic mechanism there is an experimental paper in the oncology literature supporting it and another to contradict it. With such a tenuous and shaky web of beliefs justifying (or just hinting towards) our nearly arbitrary microdynamical assumptions, it seems unreasonable to ground our models in reductionist stories. At such a time of ontological crisis, I have an instinct to turn — much like many physicists did during a similar crisis at the start of the 20th century in their discipline — to operationalism. Let us build a convincing mathematical theory of cancer in the petri dish, relying as little as possible on things we can't reliably measure, and then see where to go from there. To give another analogy to physics in the late 1800s, let us work towards a thermodynamics of cancer and worry about its many possible statistical mechanics later.

This is especially important in applications of evolutionary game theory where assumptions abound. These assumptions aren't just about modeling details like the treatments of space and stochasticity or approximations to them, but about whether there is even a game taking place or what would constitute a game-like interaction. However, to work toward an operationalist theory of games, we need experiments that beg for EGT explanations. There is a recent history of this sort of experiment in viruses and microbes (Lenski & Velicer, 2001; Crespi, 2001; Velicer, 2003; West et al., 2007; Ribeck & Lenski, 2014), slime molds (Strassmann & Queller, 2011) and yeast (Gore et al., 2009; Sanchez & Gore, 2013), but the start of these experiments in oncology with Archetti et al. (2015) is current events[1]. In the weeks since that paper, I've had a very useful reading group and fruitful discussions with Robert Vander Velde and Julian Xue about the experimental aspects of this work. This Monday, I spent most of the afternoon discussing similar experiments with Robert Noble, who is visiting Moffitt from Montpellier this week.

In this post, I want to unlock some of this discussion from the confines of private emails and coffee chats. In particular, I will share my theorist's cartoon understanding of the experiments in Archetti et al. (2015), how they can help us build an operationalist approach to EGT, and how they are not (yet) sufficient to demonstrate the authors' central claim that neuroendocrine pancreatic cancer dynamics involve a public good.
Read more of this post

Pairwise games as a special case of public goods

Usually, when we are looking at public goods games, we consider an agent interacting with a group of n other agents. In our minds, we often imagine n to be large, or sometimes even take the limit as n goes to infinity. However, this isn't the only limit that we should consider when we are grooming our intuition. It is also useful to scale down to pairwise games by setting n = 1. In the case of a non-linear public good game with constant cost, this results in a game given by two parameters $\frac{\Delta f_0}{c}$ and $\frac{\Delta f_1}{c}$ — the difference in the benefit of the public good from having 1 instead of 0, and 2 instead of 1, contributors in the group, respectively; measured in multiples of the cost c. In that case, if we want to recreate any two-strategy pairwise cooperate-defect game with the canonical payoff matrix $\begin{pmatrix} 1 & U \\ V & 0 \end{pmatrix}$ then just set $\frac{\Delta f_0}{c} = 1 + U$ and $\frac{\Delta f_1}{c} = 2 - V$. Alternatively, if you want a free public good (c = 0) then use $\Delta f_0 = U$ and $\Delta f_1 = 1 - V$. I'll leave verifying the arithmetic as an exercise for you, dear reader.
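If you would rather let a computer do that exercise, below is a minimal numerical sketch. It assumes one standard formulation (a focal player in a pair receives the per-capita benefit f(k) for k contributors, self included, while contributors pay the constant cost c) and treats two games as equivalent for replicator dynamics when the payoff gains of cooperation over defection agree up to a positive rescaling:

```python
# A quick numerical check, not a derivation: build the n = 1 public
# goods payoff matrix (rows: focal C, D; columns: partner C, D) and
# compare its cooperation gains against the canonical pairwise game.

import numpy as np

def public_goods_pair(f0, f1, f2, cost):
    # Everyone receives f(k) for k contributors in the pair (self
    # included); contributors additionally pay the constant cost.
    return np.array([[f2 - cost, f1 - cost],
                     [f1,        f0       ]])

def canonical(U, V):
    return np.array([[1.0, U],
                     [V,   0.0]])

def gains(A):
    # Payoff advantage of C over D against a C-partner and a D-partner;
    # replicator dynamics depend only on these, up to positive scaling.
    return A[0, :] - A[1, :]

U, V, cost, f0 = 0.5, -0.3, 2.0, 1.0
f1 = f0 + cost * (1 + U)  # \Delta f_0 / c = 1 + U
f2 = f1 + cost * (2 - V)  # \Delta f_1 / c = 2 - V

assert np.allclose(gains(public_goods_pair(f0, f1, f2, cost)) / cost,
                   gains(canonical(U, V)))
print("pairwise reduction checks out")
```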

In this post, I want to use this sort of n = 1 limit to build a little bit more intuition for the double public good games that I built recently with Robert Vander Velde, David Basanta, and Jacob Scott to think about acid-mediated tumor invasion. In the process, we will get to play with some simplexes to classify the nine qualitatively distinct dynamics of this limit and write another page in my open science notebook.
Read more of this post

Evolutionary non-commutativity suggests novel treatment strategies

In the Autumn of 2011 I received an email from Jacob Scott, now a good friend and better mentor, who was looking for an undergraduate to code an evolutionary simulation. Jake had just arrived in Oxford to start his DPhil in applied mathematics and by chance had dined at St Anne's College with Peter Jeavons, then a tutor of mine, the evening before. Jake had outlined his ideas, Peter had supplied a number of email addresses, Jake sent an email and I uncharacteristically replied saying I'd give it a shot. These unlikely events led me to where I am today — a DPhil candidate in the Oxford University Department of Computer Science. My project with Jake was a success and I was invited to speak at the 2012 meeting of the Society for Mathematical Biology in Knoxville, TN. There I met one of Jake's supervisors, Alexander Anderson, who invited me to visit the Department of Integrated Mathematical Oncology at the Moffitt Cancer Center and Research Institute for a workshop in December of that year. There Dr. Anderson and I discussed one of the key issues with the work I will present in this post, issues that now form the basis of my DPhil with Dr. Anderson as one of two supervisors. Fittingly, the other is Peter Jeavons.

Jake was considering the problem of treating and avoiding drug resistance and in his short email provided his hypothesis as a single question: “Can we administer a sequence of drugs to steer the evolution of a disease population to a configuration from which resistance cannot emerge?”

In Nichol et al. (2015), we provide evidence for an affirmative answer to this question. I would like to use this post to introduce you to our result, and discuss some of the criticisms.
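To give a cartoon of the order-dependence at stake (with made-up numbers, not the fitness landscapes from the paper), picture evolution under each drug as a Markov chain on a handful of genotypes, in the spirit of Nichol et al. (2015), and compare the two orders of administration:

```python
# A toy illustration of evolutionary non-commutativity. The transition
# matrices below are invented for illustration; only the fact that the
# two drug orders give different outcomes is the point.

import numpy as np

# Rows are the current genotype, columns the genotype after one round
# of evolution under that drug; each row sums to 1.
drug_A = np.array([[0.6, 0.4, 0.0],
                   [0.0, 0.5, 0.5],
                   [0.0, 0.0, 1.0]])
drug_B = np.array([[1.0, 0.0, 0.0],
                   [0.3, 0.7, 0.0],
                   [0.0, 0.2, 0.8]])

p0 = np.array([1.0, 0.0, 0.0])  # start as the wild type

print(p0 @ drug_A @ drug_B)  # drug A first: [0.72, 0.28, 0.0]
print(p0 @ drug_B @ drug_A)  # drug B first: [0.6, 0.4, 0.0]
# The end distributions differ, so the sequence of drugs matters.
```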

Read more of this post

Evolutionary game theory without interactions

When I am working on evolutionary game theory, I usually treat the models I build as heuristics to guide intuitions and push the imagination. But working on something as practical as cancer, and being in a department with many physics-trained colleagues, puts pressure on me to think of moving more towards insilications or abductions. Now, Philip Gerlee and Philipp Altrock are even pushing me in that direction with their post on TheEGG. So this entry might seem a bit uncharacteristic: I will describe an experiment — at least as a theorist like me imagines them.

Consider the following idealized protocol that is loosely inspired by Archetti et al. (2015) and the E. coli long-term evolution experiment (Lenski et al., 1991; Wiser et al., 2013; Ribeck & Lenski, 2014). We will (E1) take a new petri dish or plate; (E2) fill it with a fixed mix of nutritional medium like fetal bovine serum; (E3) put a known number N of two different cell types A and B on the medium (on the first plate we will also know the proportion of A and B in the mixture); (E4) let them grow for a fixed amount of time T which will be on the order of a cell cycle (or two); (E5) scrape the cells off the medium; and (E6) return to step (E1) while selecting N cells at random from the ones we got in step (E5) to seed step (E3). Usually, you would use this procedure to see how A-cells and B-cells compete with each other, as Archetti et al. (2015) did. However, what would it look like if the cells don't compete with each other? What if they produce no signalling molecules — in fact, if they excrete nothing into the environment, to avoid cross-feeding interactions — and don't touch each other? What if they just sit there independently eating their very plentiful nutrient broth?[1]
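Before you answer, it may help to have the protocol in executable form. Here is a minimal sketch of cycles (E1) to (E6), with placeholder growth curves of my own choosing; note that nothing in the code lets the two types interact:

```python
# A minimal simulation of the serial-passage protocol (E1)-(E6). Each
# type grows independently: no shared signals, no cross-feeding.

import numpy as np

rng = np.random.default_rng(seed=42)

def serial_passage(n_A, n_B, grow_A, grow_B, N, T, cycles):
    """Return the proportion of A-cells after each passage.

    grow_A(n, T) and grow_B(n, T) give the size of a population of
    n cells after growing alone for time T on a fresh plate.
    """
    proportions = []
    for _ in range(cycles):
        # (E4): grow both types for time T on a fresh plate
        total_A = grow_A(n_A, T)
        total_B = grow_B(n_B, T)
        # (E5)-(E6): scrape, then reseed N cells sampled at random
        n_A = rng.binomial(N, total_A / (total_A + total_B))
        n_B = N - n_A
        proportions.append(n_A / N)
    return proportions

# Example with plain exponential growth at different rates; swapping
# in other growth curves is where things could get interesting.
exponential = lambda rate: (lambda n, T: n * np.exp(rate * T))
print(serial_passage(500, 500, exponential(1.0), exponential(0.8),
                     N=1000, T=1.0, cycles=10))
```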

Would you expect to see evolutionary game dynamics between A and B? Obviously, since I am asking, I expect some people to answer ‘no’ and then be surprised when I derive some math to show that the answer can be ‘yes’. So, dear reader, humour me by being surprised.
Read more of this post
