Rationality, the Bayesian mind and their limits

Bayesianism is one of the more popular frameworks in cognitive science. Alongside other similar probabilistic models of cognition, it is highly encouraged in the cognitive sciences (Chater, Tenenbaum, & Yuille, 2006). To summarize Bayesianism far too succinctly: it treats the human mind as full of beliefs that we hold to be true with some subjective probability. We then act on these beliefs to maximize expected return (or maybe just satisfice) and update the beliefs according to Bayes’ law. For a better overview, I would recommend the foundational work of Tom Griffiths (in particular, see Griffiths & Yuille, 2008; Perfors et al., 2011).
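The update-and-act loop described above can be made concrete in a few lines of code. Here is a minimal sketch (the hypotheses, likelihoods, and payoffs are made up for illustration, not drawn from any of the cited work): an agent holds subjective probabilities over two hypotheses about a coin, updates them with Bayes’ law after each flip, and then picks the action with the higher expected return.

```python
def bayes_update(prior, likelihood, evidence):
    """Return the posterior P(h | e) for each hypothesis h via Bayes' law."""
    unnormalized = {h: prior[h] * likelihood[h][evidence] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Two hypotheses about a coin: it is fair, or it is biased towards heads.
prior = {'fair': 0.5, 'biased': 0.5}
likelihood = {'fair':   {'H': 0.5, 'T': 0.5},
              'biased': {'H': 0.9, 'T': 0.1}}

# Update beliefs after each observed flip.
posterior = prior
for flip in ['H', 'H', 'T', 'H']:
    posterior = bayes_update(posterior, likelihood, flip)

# Act to maximize expected return: bet on the outcome with higher
# posterior predictive probability.
p_heads = sum(posterior[h] * likelihood[h]['H'] for h in posterior)
best_action = 'bet heads' if p_heads > 0.5 else 'bet tails'
```

After seeing three heads in four flips, the agent shifts belief towards the biased hypothesis and bets on heads — the whole Bayesian story of belief, update, and expected-return action in miniature.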

This use of Bayes’ law has led to a widespread association of Bayesianism with rationality, especially across the internet in places like LessWrong — Kat Soja has written a good overview of Bayesianism there. I’ve already written a number of posts about the dangers of fetishizing rationality and some approaches to addressing them, including bounded rationality, the Baldwin effect, and interface theory. In some of these, I’ve touched on Bayesianism. I’ve also written about how to design Bayesian agents for simulations in cognitive science and evolutionary game theory, and even connected it to quasi-magical thinking and Hofstadter’s superrationality for Kaznatcheev, Montrey & Shultz (2010; see also Masel, 2007).

But I haven’t written about Bayesianism itself.

In this post, I want to focus on some of the challenges faced by Bayesianism and the associated view of rationality. And maybe point to some approaches to resolving them. This is based in part on three old questions from the Cognitive Sciences StackExchange: What are some of the drawbacks to probabilistic models of cognition?; What tasks does Bayesian decision-making model poorly?; and What are popular rationalist responses to Tversky & Shafir?


On Frankfurt’s Truth and Bullshit

In 2015 and 2016, as part of my new year reflections on the year prior, I wrote a post about the ‘year in books’. The first was about philosophy, psychology and political economy, and it was an unreasonably long and sprawling post. The second time, I decided to divide it into several posts, but only wrote the first one on cancer: Neanderthals to the National Cancer Act to now. In this post, I want to return to two of the books that were supposed to be in the second post for that year: Harry G. Frankfurt’s On Bullshit and On Truth.

Reading these two books in 2015 might have been an unfortunate premonition of the post-2016 world. And I wonder if a lot of people have picked up Frankfurt’s essays since. But with a shortage of thoughts for this week, I thought it’s better late than never to share my impressions.

In this post I want to briefly summarize my reading of Frankfurt’s position. And then I’ll focus on a particular shortcoming: I don’t think Frankfurt pays enough attention to how and for what Truth is used in practice. From the perspective of their relationship to investigation and inquiry, Truth and Bullshit start to seem much less distinct than Frankfurt makes them. And both start to look like negative forces — although in the case of Truth: sometimes a necessary negative.

Abstracting evolutionary games in cancer

As you can tell from browsing the mathematical oncology posts on TheEGG, somatic evolution is now recognized as a central force in the initiation, progression, treatment, and management of cancer. This has opened a new front in the proverbial war on cancer: focusing on the ecology and evolutionary biology of cancer. On this new front, we are starting to deploy new kinds of mathematical machinery like fitness landscapes and evolutionary games.

Recently, together with Peter Jeavons, I wrote a couple of thousand words on this new machinery for Russell Rockne’s upcoming mathematical oncology roadmap. Our central argument is — to continue the war metaphor — that with new machinery, we need new tactics.

Biologists often aim for reductive explanations, and mathematical modelers have tended to encourage this tactic by searching for mechanistic models. This is important work. But we also need to consider other tactics. Most notably, we need to look at the role that abstraction — both theoretical and empirical abstraction — can play in modeling and thinking about cancer.

The easiest way to share my vision for how we should approach this new tactic would be to throw a preprint up on BioRxiv or to wait for Rockne’s roadmap to eventually see print. Unfortunately, BioRxiv has a policy against views-like articles — as I was surprised to discover. And I am too impatient to wait for the eventual giant roadmap article.

Hence, I want to share some central parts in this blog post. This is basically an edited and slightly more focused version of our roadmap. Since, so far, game theory models have had more direct impact in oncology than fitness landscapes, I’ve focused this post exclusively on games.

Overcoming folk-physics: the case of projectile motion for Aristotle, John Philoponus, Ibn-Sina & Galileo

A few years ago, I wrote about the importance of pairing tools and problems in science. Not selecting the best tool for the job, but adjusting both your problem and your method to form the best pair. There, I made the distinction between endogenous and exogenous questions. A question is endogenous to a field if it is motivated by the existing tools developed for the field or slight extensions of them. A question is exogenous if motivated by frameworks or concerns external to the field. Usually, such an external motivating framework is accepted uncritically with the most common culprits being the unarticulated ‘intuitive’ and ‘natural’ folk theories forced on us by our everyday experiences.

Sometimes a great amount of scientific or technological progress can be had from overcoming our reliance on a folk-theory. A classic example of this is the development of inertia and momentum in physics. In this post, I want to sketch a genealogy of this transition to make the notion of endogenous vs exogenous questions a bit more precise.

How was the folk-physics of projectile motion abandoned?

In the process, I’ll get to touch briefly on two more recent threads on TheEGG: The elimination of the ontological division between artificial and natural motion (that was essential groundwork for Darwin’s later elimination of the division between artificial and natural processes) and the extraction and formalization of the tacit knowledge underlying a craft.

Unity of knowing and doing in education and society

Traditionally, knowledge is separated from activity and passed down from teacher to student as disembodied information. For John Dewey, this tradition reinforces the false dichotomy between knowing and doing. A dichotomy that is socially destructive and philosophically erroneous.

I largely agree with the above. The best learning experiences I’ve had came through self-guided discovery, driven by wanting to solve a problem. This is, for example, one of the best ways to learn programming, math, language, writing, or nearly anything else. But in what way is this ‘doing’? Usually, ‘doing’ has a corporeal physicality to it. Thinking happens while you sit at your desk: in fact, you might as well be disembodied. Doing happens elsewhere and requires your body.

In this post, I want to briefly discuss the knowing-doing dichotomy. In particular, I’ll stress the importance of the social embodiment rather than the physical embodiment of ‘doing’. I’ll close with some vague speculations on the origins of this dichotomy and a dangling thread about how this might connect to the origins of science.


Hackathons and a brief history of mathematical oncology

It was Friday — two in the morning. And I was busy fine-tuning a model in Mathematica and editing slides for our presentation. My team and I had been running on coffee and snacks all week. Most of us had met each other for the first time on Monday, got an inkling of the problem space we’d be working on, brainstormed, and hacked together a number of equations and a few chunks of code to prototype a solution. In seven hours, we would have to submit our presentation to the judges. Fifty thousand dollars in start-up funding was on the line.

A classic hackathon, except for one key difference: my team wasn’t just the usual mathematicians, programmers, computer & physical scientists. Some of the key members were biologists and clinicians specializing in blood cancers. And we weren’t prototyping a new app. We were trying to predict the risk of relapse for patients with chronic myeloid leukemia, who had stopped receiving imatinib. This was 2013 and I was at the 3rd annual integrated mathematical oncology workshop. It was one of my first exposures to using mathematical and computational tools to study cancer; the field of mathematical oncology.

As you can tell from other posts on TheEGG, I’ve continued thinking about and working on mathematical oncology. The workshops have also continued. The 7th annual IMO workshop — focused on stroma this year — is starting right now. If you’re not in Tampa then you can follow #MoffittIMO on Twitter.

Since I’m not attending in person this year, I thought I’d provide a broad overview based on an article I wrote for Oxford Computer Science’s InSPIRED Research (see pg. 20-1 of this pdf for the original) and a paper by Helen Byrne (2010).


Spatializing the Go-vs-Grow game with the Ohtsuki-Nowak transform

Recently, I’ve been thinking a lot about small projects to get students started with evolutionary game theory. One idea that came to mind is to look at games that have been analyzed in the inviscid regime, then ‘spatialize’ them and reanalyze them. This is usually not difficult to do and provides some motivation for solving for and making sense of the dynamic regimes of a game. And it is not always pointless: for example, our edge effects paper (Kaznatcheev et al., 2015) is mostly just a spatialization of Basanta et al.’s (2008a) Go-vs-Grow game together with some discussion.

Technically, TheEGG together with that paper have everything that one would need to learn this spatializing technique. However, I realized that my earlier posts on spatializing with the Ohtsuki-Nowak transform might be a bit too abstract, and the paper a bit too terse, for a student who has just started with EGT. As such, in this post, I want to go more slowly through a concrete example of spatializing an evolutionary game. Hopefully, it will be useful to students. If you are a beginner to EGT reading this post and something doesn’t make sense, then please ask for clarification in the comments.

I’ll use the Go-vs-Grow game as the example. I will focus on the mathematics, and if you want to read about the biological or oncological significance then I encourage you to read Kaznatcheev et al. (2015) in full.
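To give a feel for the recipe, here is a minimal sketch in Python. The payoff numbers are purely illustrative (they are not the parametrization of Basanta et al., 2008a, or Kaznatcheev et al., 2015). For death-birth updating on a k-regular random graph, the Ohtsuki-Nowak transform of a two-strategy game shifts the off-diagonal payoffs by Δ = (a + b − c − d)/(k − 2); the transformed game is then analyzed with the ordinary replicator dynamics.

```python
def ohtsuki_nowak_transform(game, k):
    """Ohtsuki-Nowak transform of a two-strategy game (a, b, c, d) for
    death-birth updating on a k-regular random graph (k > 2): the
    off-diagonal payoffs shift by delta = (a + b - c - d)/(k - 2)."""
    a, b, c, d = game
    delta = (a + b - c - d) / (k - 2)
    return (a, b + delta, c - delta, d)

def replicator_fixed_point(game, x=0.5, dt=0.01, steps=5000):
    """Euler-integrate the replicator dynamics for the proportion x of
    the first strategy (Go) in a two-strategy game."""
    a, b, c, d = game
    for _ in range(steps):
        f_go = a * x + b * (1 - x)       # fitness of Go
        f_grow = c * x + d * (1 - x)     # fitness of Grow
        x += dt * x * (1 - x) * (f_go - f_grow)
    return x

# Hypothetical (a, b, c, d) payoffs with rows/columns ordered (Go, Grow) --
# illustrative numbers only.
inviscid_game = (1.0, 2.0, 3.0, 2.5)
spatial_game = ohtsuki_nowak_transform(inviscid_game, k=4)
x_star = replicator_fixed_point(spatial_game)  # proportion of Go at equilibrium
```

The point of the exercise for a student is to compare the dynamic regimes of `inviscid_game` and `spatial_game`: since Δ depends on the degree k, varying k can move the game between regimes even though the inviscid payoffs are fixed.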

Fusion and sex in protocells & the start of evolution

In 1864, five years after reading Darwin’s On the Origin of Species, Pyotr Kropotkin — the anarchist prince of mutual aid — was leading a geographic survey expedition aboard a dog-sleigh — a distinctly Siberian variant of the HMS Beagle. In the harsh Manchurian climate, Kropotkin did not see competition ‘red in tooth and claw’, but a flourishing of cooperation as animals banded together to survive their environment. From this, he built a theory of mutual aid as a driving factor of evolution. Among his countless observations, he noted that no matter how selfish an animal was, it still had to come together with others of its species, at least to reproduce. In this, he saw both sex and cooperation as primary evolutionary forces.

Now, Martin A. Nowak has taken up the challenge of putting cooperation as a central driver of evolution. With his colleagues, he has tracked the problem from myriad angles, and it is not surprising that recently he has turned to sex. In a paper released at the start of this month, Sam Sinai, Jason Olejarz, Iulia A. Neagu, & Nowak (2016) argue that sex is primary. We need sex just to kick start the evolution of a primordial cell.

In this post, I want to sketch Sinai et al.’s (2016) main argument, discuss prior work on the primacy of sex, a similar model by Wilf & Ewens, the puzzle over emergence of higher levels of organization, and the difference between the protocell fusion studied by Sinai et al. (2016) and sex as it is normally understood. My goal is to introduce this fascinating new field that Sinai et al. (2016) are opening to you, dear reader; to provide them with some feedback on their preprint; and, to sketch some preliminary ideas for future extensions of their work.


Chemical games and the origin of life from prebiotic RNA

From bacteria to vertebrates, life — as we know it today — relies on complex molecular interactions, the intricacies of which science has not fully untangled. But for all its complexity, life always requires two essential abilities. Organisms need to preserve their genetic information and reproduce.

In our own cells, these tasks are assigned to specialized molecules. DNA, of course, is the memory store. The information it encodes is expressed into proteins via messenger RNAs. Transcription (the synthesis of mRNAs from DNA) and translation (the synthesis of proteins from mRNAs) are catalyzed by polymerases, which are necessary to speed up the chemical reactions.

It is unlikely that life started that way, with such a refined division of labor. A popular theory for the origin of life, known as the RNA world, posits that life emerged from just one type of molecule: RNAs. Because RNA is made up of base-complementary nucleotides, it can be used as a template for its own reproduction, just like DNA. Since the 1980s, we also know that RNA can act as a self-catalyst. These two superpowers – information storage and self-catalysis – make it a good candidate for the title of the first spark of life on earth.
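The template property is easy to see in a toy illustration (this is just a sketch of base complementarity, not a model from the literature): complementing a strand twice recovers the original sequence, which is what lets an RNA strand serve as the template for its own copy. Strand polarity is ignored here for simplicity.

```python
# Watson-Crick base pairing for RNA: A pairs with U, G pairs with C.
COMPLEMENT = {'A': 'U', 'U': 'A', 'G': 'C', 'C': 'G'}

def complement_strand(seq):
    """Return the base-complementary strand of an RNA sequence."""
    return ''.join(COMPLEMENT[base] for base in seq)

template = 'AUGGCUA'
# Round 1: the template directs synthesis of its complement.
negative_strand = complement_strand(template)
# Round 2: the complement directs synthesis of a copy of the original.
copy_of_template = complement_strand(negative_strand)
```

Two rounds of templating reproduce the original sequence — the same two-step copying logic that DNA replication relies on.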

The RNA-world theory has yet to be confirmed by direct empirical evidence, but laboratory experiments have shown that self-preserving and self-reproducing RNA systems can be created in vitro. Little is known, however, about the dynamics that governed pre- and early life. In a recent paper, Yeates et al. (2016) attempt to shed light on this problem by (1) examining how small sets of different RNA sequences can compete for survival and reproduction in the lab and (2) offering a game-theoretical interpretation of the results.


Social algorithms and the Weapons of Math Destruction

Cathy O’Neil holding her new book, Weapons of Math Destruction, at a Barnes & Noble in New York City.

In reference to intelligent robots taking over the world, Andrew Ng once said: “I don’t work on preventing AI from turning evil for the same reason that I don’t work on combating overpopulation on the planet Mars.” Sure, it will be an important issue to think about when the time comes. But for now, there is no productive way to think seriously about it. Today there are more concrete problems to worry about and more basic questions that need to be answered. More importantly, there are already problems to deal with. Problems that don’t involve superintelligent tin-men, killer robots, or sentient machine overlords. Focusing on distant speculation obscures the fact that algorithms — and not necessarily very intelligent ones — already reign over our lives. And for many, this reign is far from benevolent.

I owe much of my knowledge about the (negative) effects of algorithms on society to the writings of Cathy O’Neil. I highly recommend her blog mathbabe.org. A couple of months ago, she shared the proofs of her book Weapons of Math Destruction with me, and given that the book came out last week, I wanted to share some of my impressions. In this post, I want to summarize what makes a social algorithm into a weapon of math destruction, and share the example of predictive policing.
