Effective games from spatial structure
December 7, 2018
For the last week, I've been at the Institute Mittag-Leffler of the Royal Swedish Academy of Sciences for their program on mathematical biology. The institute is a series of apartments and a grand mathematical library located in the suburbs of Stockholm. The program itself is mostly unstructured, with only about 4 hours of seminars over the whole week, and is aimed at bringing like-minded researchers together. It has been a great opportunity to reconnect with old colleagues and meet some new ones.
During my time here, I’ve been thinking a lot about effective games and the effects of spatial structure. Discussions with Philip Gerlee were particularly helpful to reinvigorate my interest in this. As part of my reflection, I revisited the Ohtsuki-Nowak (2006) transform and wanted to use this post to share a cute observation about how space can create an effective game where there is no reductive game.
Suppose you were using our recent game assay to measure an effective game, and you got the above left graph for the fitness functions of your two types. On the x-axis, you have the seeding proportion of type C and on the y-axis you have fitness. In cyan, you have the measured fitness function for type C and in magenta, you have the fitness function for type D. The particular fitness scale of the y-axis is not especially important, nor is the x-intercept; I've chosen them purely for convenience. The only important features are that the cyan and magenta lines are parallel, that they have a positive slope, and that the magenta line sits above the cyan one.
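To make those features concrete, here is a minimal linear parametrization (the symbols $w_C$, $w_D$, $a$, $b$, $c$ are my own shorthand, not labels from the figure): if $p$ is the seeding proportion of type C, then

$$w_C(p) = a + b\,p, \qquad w_D(p) = a + b\,p + c, \qquad \text{with } b > 0 \text{ and } c > 0,$$

so the two lines share the positive slope $b$, and type D sits a constant gap $c$ above type C at every seeding proportion.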
This is not a crazy result to get; compare it to the fitness functions for the Alectinib + CAF condition measured in Kaznatcheev et al. (2018), shown at right. There, cyan is parental and magenta is resistant. The two lines of best fit aren't parallel, but they aren't far off.
How would you interpret this sort of graph? Is there a game-like interaction happening there?
Of course, this is a trick question that I give away with the title and set-up. The answer will depend on whether you're asking about effective or reductive games, and on what you know about the population structure. And this is the cute observation that I want to highlight.
Rationality, the Bayesian mind and their limits
September 7, 2019 by Artem Kaznatcheev
Bayesianism is one of the more popular frameworks in cognitive science, and alongside other probabilistic models of cognition it is widely encouraged in the field (Chater, Tenenbaum, & Yuille, 2006). To summarize Bayesianism far too succinctly: it views the human mind as full of beliefs that we hold to be true with some subjective probability. We then act on these beliefs to maximize expected return (or maybe just satisfice) and update the beliefs according to Bayes' law. For a better overview, I would recommend the foundational work of Tom Griffiths (in particular, see Griffiths & Yuille, 2008; Perfors et al., 2011).
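To spell out the two steps in that summary, in notation of my own choosing: given a subjective prior $P(h)$ over hypotheses $h$ and new data $d$, updating according to Bayes' law means

$$P(h \mid d) = \frac{P(d \mid h)\,P(h)}{\sum_{h'} P(d \mid h')\,P(h')},$$

and acting to maximize expected return means picking the action

$$a^{*} = \arg\max_a \sum_h P(h \mid d)\,U(a, h)$$

for some utility function $U(a, h)$ over actions and hypotheses. A satisficer would instead settle for any action whose expected utility clears some threshold.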
This use of Bayes' law has led to a widespread association of Bayesianism with rationality, especially across the internet in places like LessWrong, where Kat Soja has written a good overview of Bayesianism. I've already written a number of posts about the dangers of fetishizing rationality and some approaches to addressing them, including bounded rationality, the Baldwin effect, and interface theory. In some of these, I've touched on Bayesianism. I've also written about how to design Bayesian agents for simulations in cognitive science and evolutionary game theory, and even connected it to quasi-magical thinking and Hofstadter's superrationality in Kaznatcheev, Montrey & Shultz (2010; see also Masel, 2007).
But I haven’t written about Bayesianism itself.
In this post, I want to focus on some of the challenges faced by Bayesianism and the associated view of rationality, and maybe point to some approaches to resolving them. This is based in part on three old questions from the Cognitive Sciences StackExchange: What are some of the drawbacks to probabilistic models of cognition?; What tasks does Bayesian decision-making model poorly?; and What are popular rationalist responses to Tversky & Shafir?