Pairing tools and problems: a lesson from the methods of mathematics and the Entscheidungsproblem

Three weeks ago it was my lot to present at the weekly integrated mathematical oncology department meeting. Given the informal setting, I decided to grab one gimmick and run with it. I titled my talk: ‘2’. It was an overview of two recent projects that I’ve been working on: double public goods for acid-mediated tumour invasion, and edge effects in game theoretic dynamics of solid tumours. For the former, I considered two approximations: the limit as the number n of interaction partners becomes large, and the case n = 1 — so that there are only two interacting parties. But the numerology didn’t stop there: my real goal was to highlight a duality between tools or techniques and the problems we apply them to or domains we use them in. As is popular at the IMO, the talk was live-tweeted with many unflattering photos and this great paraphrase (or was it a quote?) by David Basanta from my presentation’s opening:

Since I was rather sleep-deprived from preparing my slides, I am not sure what I said exactly, but I meant to say something like the following:

I don’t subscribe to the perspective that we should pick the best tool for the job. Instead, I try to pick the best tuple of job and tool given my personal tastes, competences, and intuitions. In doing so, I aim to push the tool slightly beyond its prior borders — usually with an incremental technical improvement — while also exploring a variant perspective — but hopefully still grounded in the local language — on some domain of interest. The job and tool march hand in hand.

In this post, I want to unpack this principle and follow it a little deeper into the philosophy of science. In the process, I will touch on the differences between endogenous and exogenous questions. I will draw some examples from my own work, but will rely primarily on methodological inspiration from pure math and the early days of theoretical computer science.


Baldwin effect and overcoming the rationality fetish

G.G. Simpson and J.M. Baldwin

As I’ve mentioned previously, one of the amazing features of the internet is that you can take almost any idea and find a community obsessed with it. Thus, it isn’t surprising that there is a prominent subculture that fetishizes rationality and Bayesian learning. They tend to accumulate around forums with promising titles like OvercomingBias and Less Wrong. Since these communities like to stay abreast of science, they often offer evolutionary justifications for why humans might be Bayesian learners and claim a “perfect Bayesian reasoner as a fixed point of Darwinian evolution”. This lets them side-step observed non-Bayesian behavior in humans by saying that we are evolving towards, but haven’t yet reached, this (potentially unreachable, but approximable) fixed point. Unfortunately, even the fixed-point argument is blind to critiques like the Simpson-Baldwin effect.

Introduced in 1896 by psychologist J.M. Baldwin, then named and reconciled with the modern synthesis by leading paleontologist G.G. Simpson (1953), the Simpson-Baldwin effect posits that “[c]haracters individually acquired by members of a group of organisms may eventually, under the influence of selection, be reenforced or replaced by similar hereditary characters” (Simpson, 1953). More explicitly, it consists of a three-step process (some steps of which can occur in parallel, or partially so):

  1. Organisms adapt to the environment individually.
  2. Genetic factors produce hereditary characteristics similar to the ones made available by individual adaptation.
  3. These hereditary traits are favoured by natural selection and spread in the population.

The overall result is that an originally individual, non-hereditary adaptation becomes hereditary. For Baldwin (1896, 1902) and other early proponents (Morgan, 1896; Osborn, 1896, 1897), this was a way to reconcile Darwinian and strong Lamarckian evolution. With the latter model of evolution exorcised from the modern synthesis, Simpson’s restatement became a paradox: why do we observe the costly mechanism and associated errors of individual learning, if learning does not enhance individual fitness at equilibrium and will be replaced by simpler non-adaptive strategies? This encompasses more specific cases like Rogers’ paradox (Boyd & Richerson, 1985; Rogers, 1988) of social learning.
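
To make the three-step process above concrete, here is a minimal simulation sketch in the spirit of Hinton & Nowlan’s (1987) classic model of the Baldwin effect. Everything below (the genome encoding, the fitness function, and all parameter values) is an illustrative assumption of mine rather than a detail from Simpson, Baldwin, or the papers cited here; the point is only to show how individually learned adaptations can come to be replaced by hereditary ones under selection.

```python
import math
import random

# Toy, Hinton & Nowlan (1987)-style illustration of the three steps above.
# All numbers are illustrative assumptions, not parameters from any cited paper.

LOCI = 20          # loci that must all be "correct" to get the adaptive phenotype
POP = 1000         # population size
TRIALS = 1000      # learning attempts available to each individual per lifetime
GENERATIONS = 50

def random_genome():
    # Each locus is 1 (correct, hereditary), 0 (wrong, hereditary), or None
    # (plastic: can be filled in by individual trial-and-error learning, i.e. step 1).
    return [random.choice((0, 1, None, None)) for _ in range(LOCI)]

def fitness(genome):
    """Baseline fitness 1; up to 20 if the adaptive phenotype is found early.
    Finding it requires no hard-wired 0s plus lucky guesses at every plastic locus."""
    if 0 in genome:
        return 1.0
    plastic = genome.count(None)
    if plastic == 0:
        return 20.0                     # fully hereditary solution: no learning cost
    p = 0.5 ** plastic                  # chance of guessing all plastic loci in one trial
    # Draw the first successful trial from a geometric distribution; this is
    # equivalent to simulating the random guesses one trial at a time.
    u = 1.0 - random.random()           # uniform in (0, 1]
    first_success = max(1, math.ceil(math.log(u) / math.log(1.0 - p)))
    if first_success > TRIALS:
        return 1.0
    return 1.0 + 19.0 * (TRIALS - first_success) / TRIALS

def next_generation(pop):
    # Steps 2 and 3: recombination produces genomes whose hereditary 1s stand in
    # for what used to be learned, and fitness-proportional selection spreads them.
    weights = [fitness(g) for g in pop]
    children = []
    for _ in range(len(pop)):
        mum, dad = random.choices(pop, weights=weights, k=2)
        cut = random.randrange(LOCI)
        children.append(mum[:cut] + dad[cut:])
    return children

population = [random_genome() for _ in range(POP)]
for gen in range(GENERATIONS + 1):
    hereditary = sum(g.count(1) for g in population) / (POP * LOCI)
    plastic = sum(g.count(None) for g in population) / (POP * LOCI)
    if gen % 10 == 0:
        print(f"gen {gen:3d}: hereditary-correct {hereditary:.2f}, plastic {plastic:.2f}")
    population = next_generation(population)
```

The key design choice is the needle-in-a-haystack fitness function: without learning, only the vanishingly rare all-hereditary genomes are ever rewarded, so selection has nothing to climb; with learning, genomes that are merely close to the solution also gain fitness, and selection can then gradually swap plastic loci for hereditary correct ones.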

Replicator dynamics of cooperation and deception

In my last post, I mentioned how conditional behavior usually implied a transfer of information from one agent to another, and that conditional cooperation was therefore vulnerable to exploitation through misrepresentation (deception). Little did I know that an analytic treatment of that point had been published a couple of months before.

McNally & Jackson (2013), the same authors who used neural networks to study the social brain hypothesis, present a simple game theoretic model to show that the existence of cooperation creates selection for tactical deception. As other commentators have pointed out, this is a rather intuitive conclusion, but of real interest here are how this relationship is formalized and whether this model maps onto reality in any convincing way. Interestingly, the target model is reminiscent of Artem’s perception and deception models, so it’s worth bringing them up for comparison; I’ll refer to them as Model 1 and Model 2.

Cooperation and the evolution of intelligence

One of the puzzles of evolutionary anthropology is to understand how our brains got to grow so big. At first sight, the question seems like a no-brainer (pause for eye-roll): big brains make us smarter and more adaptable, and thus result in an obvious increase in fitness, right? The problem is that brains need calories, and lots of them. Though it accounts for only about 2% of your total body weight, your brain consumes about 20-25% of your energy intake. Furthermore, from behind the blood-brain barrier, the brain doesn’t have access to the same energy resources as the rest of your body, which is part of the reason why you can’t safely starve yourself thin (if it ever crossed your mind).

So maintaining a big brain requires time and resources. For us, the trade-off is obvious, but if you’re interested in human evolutionary history, you must keep in mind that our ancestors did not have access to chain food stores or high-fructose corn syrup, nor were they concerned with getting a college degree. They were dealing with a different set of trade-offs, and this is what evolutionary anthropologists are after. What is it that our ancestors’ brains allowed them to do so well that it warranted such an unequal energy allocation?