## Rationality, the Bayesian mind and their limits

Bayesianism is one of the more popular frameworks in cognitive science. Alongside other similar probabilistic models of cognition, it is highly encouraged in the cognitive sciences (Chater, Tenenbaum, & Yuille, 2006). To summarize Bayesianism far too succinctly: it views the human mind as full of beliefs that we hold to be true with some subjective probability. We then act on these beliefs to maximize expected return (or maybe just satisfice) and update them according to Bayes’ law. For a better overview, I would recommend the foundational work of Tom Griffiths (in particular, see Griffiths & Yuille, 2008; Perfors et al., 2011).
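To make that summary concrete, here is a minimal sketch of such a Bayesian agent. All the specifics — the rain/sun hypotheses, the likelihoods, the utilities — are toy numbers I made up for illustration, not anything from the literature:

```python
def bayes_update(prior, likelihood, observation):
    """Update the belief over hypotheses after an observation (Bayes' law)."""
    posterior = {h: prior[h] * likelihood[h][observation] for h in prior}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

def best_action(belief, utility):
    """Pick the action that maximizes expected utility under the belief."""
    def expected(action):
        return sum(belief[h] * utility[action][h] for h in belief)
    return max(utility, key=expected)

# Two hypotheses about the world, held with subjective probability.
prior = {"rain": 0.5, "sun": 0.5}
likelihood = {"rain": {"clouds": 0.8, "clear": 0.2},
              "sun": {"clouds": 0.3, "clear": 0.7}}

belief = bayes_update(prior, likelihood, "clouds")  # P(rain | clouds) = 8/11
print(best_action(belief, utility={
    "umbrella": {"rain": 1.0, "sun": 0.2},
    "no umbrella": {"rain": -1.0, "sun": 1.0},
}))  # prints "umbrella"
```

The two functions are exactly the two halves of the summary above: belief revision by Bayes’ law, and action selection by expected return.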

This use of Bayes’ law has led to a widespread association of Bayesianism with rationality, especially across the internet in places like LessWrong — Kat Soja has written a good overview of Bayesianism there. I’ve already written a number of posts about the dangers of fetishizing rationality and some approaches to addressing them, including bounded rationality, the Baldwin effect, and interface theory. In some of these, I’ve touched on Bayesianism. I’ve also written about how to design Bayesian agents for simulations in cognitive science and evolutionary game theory, and even connected it to quasi-magical thinking and Hofstadter’s superrationality for Kaznatcheev, Montrey & Shultz (2014; see also Masel, 2007).

But I haven’t written about Bayesianism itself.

In this post, I want to focus on some of the challenges faced by Bayesianism and the associated view of rationality, and maybe point to some approaches for resolving them. This is based in part on three old questions from the Cognitive Sciences StackExchange: What are some of the drawbacks to probabilistic models of cognition?; What tasks does Bayesian decision-making model poorly?; and What are popular rationalist responses to Tversky & Shafir?

## Radicalization, expertise, and skepticism among doctors & engineers: the value of philosophy in education

This past Friday was a busy day for a lot of the folks in Integrated Mathematical Oncology here at the Moffitt Cancer Center. Everybody was rushing around to put the final touches on a multi-million dollar research center grant application to submit to the National Cancer Institute. Although it was not a busy time for me, I still stopped by Jacob Scott’s office towards the end of the day to celebrate. Let me set the scene for you: it is a corner office down the hall from me; its many windows are scribbled over with graphs, equations, and biological interaction networks; two giant screens crowd a standing desk, and another screen is hidden in the corner; the only non-glass wall has scribbles in pencil for the carpenters: paint blackboard here. There are too many chairs — Jake is a connector, so his office is always open to guests.

A different celebration in Jake’s office. The view is from his desk towards the wall that needs to be replaced by a blackboard.

In addition to the scientific and administrative stress of grant-writing, Jake was also covering for his friend as the doc-of-the-day for radiation oncology. So as I rambled on: “If we consider nodes of degree three or higher in this model, we would break up contiguous blocks of mutants and the domain of our probability distribution would go from $n^2$ to $2^n$“, scribbling more math on his wall, we kept getting interrupted by phone calls. His resident called to tell him that the neurosurgeons had scheduled a consultation for an acute myeloid leukemia patient who was recovering from surgery earlier that day.

“Only on a Friday afternoon do you get this kind of consult!” Jake fires off, “He’s still in surgery! We can’t do anything for at least a few days – schedule him for Monday.”

The call was on speakerphone, but I could not keep up with the conversation. After years of training and experience, this was an effortless context-shift for Jake. He went from the heavy skepticism of a scientist staring at a blackboard to the certainty of a doctor that needed to get shit done, and back, in moments. I couldn’t imagine having this sort of confidence in my judgements, mostly because I have no training in medicine, but also because I am not expected to be certain. That is why I lean towards using abductive models versus insilications for clinical research; I have more confidence in machine learning than in my own physical and biological intuitions about cancer. Even if that approach might produce less understanding.

In recent weeks, I’ve noticed a theme in some of the (news and blog) articles I’ve been reading. In this post, I wanted to provide an annotated collection of some of these links, along with my reflections on what they say about the tension between expertise and skepticism and how that can radicalize us, both in mundane ways and in drastic ones. And what role philosophy can play in helping us cope. I will end up touching on recent events and politics as a source context, but hopefully we can keep the overall conversation more or less detached from current events.

## Emotional contagion and rational argument in philosophical texts

Last week I returned to blogging with some reflections on reading and the written word more generally. Originally, I was aiming to write a response to Roger Schank’s stance that “reading is no way to learn”, but I wandered off on too many tangents for a single post or for a coherent argument. The tangent that I left for this post is the role of emotion and personality in philosophical texts.

In my last entry, I focused on the medium-independent aspects of Schank’s argument, and identified two dimensions along which a piece of media and our engagement with it can vary: (1) passive consumption versus active participation, and (2) the level of personalization. The first continuum has a clearly better end on the side of more active engagement. If we are comparing mediums, then we should prefer ones that foster more active engagement from the participants. The second dimension is more ambiguous: sometimes a more general piece of media is better than a bespoke piece. What is better becomes particularly ambiguous when being forced to adapt a general approach to your special circumstances encourages more active engagement.

In this post, I will shift focus from comparing mediums to a particular aspect of text and arguments: emotional engagement. Of course, this also shows up in other mediums, but my goal this time is not to argue across mediums.

## Misbeliefs, evolution and games: a positive case

A recurrent theme here in TheEGG is the limits and reliability of knowledge. These get explored from many directions: on epistemological grounds, from the philosophy of science angle, but also formally, through game theory and simulations. In this post, I will explore the topic of misbeliefs as adaptations. Misbeliefs here are intended as ideas about reality that a given subject accepts as true, despite their being wrong, inaccurate, or otherwise mistaken. The notion that evolution might not systematically and exclusively support true beliefs isn’t new to TheEGG, but it has also been tackled by many other people, by means of different methodologies, including my own personal philosophising. The overarching question is whether misbeliefs can be systematically adaptive, a prospect that tickles my devious instincts: if it were the case, it would fly in the face of naïve rationalists, who frequently assume that evolution consistently favours the emergence of truthful ways to perceive the world.

Given our common interests, Artem and I have had plenty of long discussions in the past couple of years, mostly sparked by his work on Useful Delusions (see Kaznatcheev et al., 2014); for some more details on our exchanges, as well as a little background on myself, please see the notes[1]. A while ago, I found an article by McKay and Dennett (M&D) entitled “The evolution of misbelief” (2009)[2]. Artem offered me the chance to write a guest post on it, and I was very happy to accept.

What follows will mix philosophical, clinical and mathematical approaches, with the hope of producing a multidisciplinary synthesis.

## From realism to interfaces and rationality in evolutionary games

As I was preparing some reading assignments, I realized that I don’t have a single resource available that covers the main ideas of the interface theory of perception, objective versus subjective rationality, and their relationship to evolutionary game theory. I wanted to correct this oversight and use it as an opportunity to comment on the philosophy of mind. In this post I will quickly introduce naive realism, critical realism, and the interface theory of perception and sketch how we can use evolutionary game theory to study them. The interface theory of perception will also give me an opportunity to touch on the difference between subjective and objective rationality. Unfortunately, I am trying to keep this entry short, so we will only skim the surface and I invite you to click links aggressively and follow the referenced papers if something catches your attention — this annotated list of links might be of particular interest for further exploration.

## Defining empathy, sympathy, and compassion

When discussing the evolution of cooperation, questions about empathy, sympathy, and compassion are often close to mind. In my computational work, I used to operationalize-away these emotive concepts and replace them with a simple number like the proportion of cooperative interactions. This is all well and good if I want to confine myself to a behaviorist perspective, but my colleagues and I have been trying to move to a richer cognitive science viewpoint on cooperation. This has confronted me with the need to think seriously about empathy, sympathy, and compassion. In particular, Paul Bloom’s article against empathy, and a Reddit discussion on the usefulness of empathy as a word, have reminded me that my understanding of the topic is not very clear or critical. As such, I was hoping to use this opportunity to write down definitions for these three concepts and at the end of the post sketch a brief idea of how to approach some of them with evolutionary modeling. My hope is that you, dear reader, would point out any confusion or disagreement that lingers.

## Useful delusions, interface theory of perception, and religion

As you can guess from the name, evolutionary game theory (EGT) traces its roots to economics and evolutionary biology. Both of the progenitor fields assume it impossible, or unreasonably difficult, to observe the internal representations, beliefs, and preferences of the agents they model, and thus adopt a largely behaviorist view. My colleagues and I, however, are interested in looking at learning from the cognitive science tradition. In particular, we are interested in the interaction of evolution and learning. This interaction in and of itself is not innovative: it has been a concern for biologists since Baldwin (1896, 1902), and Smead & Zollman (2009; Smead 2012) even brought the interaction into an EGT framework and showed that rational learning is not necessarily a ‘fixed-point of Darwinian evolution’. But all the previous work that I’ve encountered at this interface has made a simple implicit assumption, and I wanted to question it.

It is relatively clear that evolution acts objectively and without regard for individual agents’ subjective experience except in so far as that experience determines behavior. On the other hand, learning, from the cognitive sciences perspective at least, acts on the subjective experiences of the agent. There is an inherent tension here between the objective and subjective perspective that becomes most obvious in the social learning setting, but is still present for individual learners. Most previous work has sidestepped this issue by either not delving into the internal mechanism of how agents decide to act — something that is incompatible with the cognitive science perspective — or assuming that subjective representations are true to objective reality — something for which we have no a priori justification.

A couple of years ago, I decided to look at this question directly by developing the objective-subjective rationality model. Marcel and I fleshed out the model by adding a mechanism for simple Bayesian learning; this came with an extra perk of allowing us to adopt Masel’s (2007) approach to looking at quasi-magical thinking as an inferential bias. To round out the team with some cognitive science expertise, we asked Tom to join. A few days ago, after an unhurried pace and over 15 relevant blog posts, we released our first paper on the topic (Kaznatcheev, Montrey & Shultz, 2014) along with its MATLAB code.
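To give a flavour of what a simple Bayesian learning mechanism can look like for a game-playing agent, here is a toy Beta-Bernoulli learner. To be clear, this is my own illustrative sketch, not the actual mechanism from Kaznatcheev, Montrey & Shultz (2014): the agent keeps a Beta(alpha, beta) belief over how often its partner cooperates and updates it by conjugate Bayesian inference after each interaction.

```python
class BetaLearner:
    """Toy Bayesian agent tracking a partner's cooperation rate."""

    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha = alpha  # pseudo-count of observed cooperations
        self.beta = beta    # pseudo-count of observed defections

    def update(self, partner_cooperated):
        # Conjugate update: the Beta posterior just increments a count.
        if partner_cooperated:
            self.alpha += 1
        else:
            self.beta += 1

    def p_cooperate(self):
        # Posterior mean of the partner's cooperation probability.
        return self.alpha / (self.alpha + self.beta)

agent = BetaLearner()
for move in [True, True, False, True]:
    agent.update(move)
print(agent.p_cooperate())  # 4/6 after 3 cooperations and 1 defection
```

The agent could then feed `p_cooperate()` into an expected-utility calculation over the game’s payoff matrix to pick its own move.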

## Interface theory of perception can overcome the rationality fetish

I might be preaching to the choir, but I think the web is transformative for science. In particular, I think blogging is a great form of pre-pre-publication (and what I use this blog for), and Q&A sites like MathOverflow and the cstheory StackExchange are an awesome alternative architecture for scientific dialogue and knowledge sharing. This is why I am heavily involved with these media, and why a couple of weeks ago, I nominated myself to be a cstheory moderator. Earlier today, the election ended and Lev Reyzin and I were announced as the two new moderators alongside Suresh Venkatasubramanian, who is staying on for continuity and to teach us the ropes. I am extremely excited to work alongside Suresh and Lev, and to do my part to continue developing the great community that we nurtured over the last three and a half years.

However, I do expect to face some challenges. The only critique raised against our outgoing moderators was that an argumentative attitude that is acceptable for a normal user can be unfitting for a mod. I definitely have an argumentative attitude, and so I will have to be extra careful to be on my best behavior.

Thankfully, being a moderator on cstheory does not change my status elsewhere on the website, so I can continue to be a normal argumentative member of the Cognitive Sciences StackExchange. That site is already home to one of my most heated debates against the rationality fetish. In particular, I was arguing against the statement that “a perfect Bayesian reasoner [is] a fixed point of Darwinian evolution”. This statement can be decomposed into two key assumptions: (1) a perfect Bayesian reasoner makes the most veridical decisions given its knowledge, and (2) veridicity has greater utility for an agent and will be selected for by natural selection. If we accept both premises, then a perfect Bayesian reasoner is a fitness peak. Of course, as we learned before: even if something is a fitness peak, that doesn’t mean evolution can ever find it.

We can also challenge both of the assumptions (Feldman, 2013); the first on philosophical grounds, and the second on scientific. I want to concentrate on debunking the second assumption because it relates closely to our exploration of objective versus subjective rationality. To make the discussion more precise, I’ll approach the question from the point of view of perception — a perspective I discovered thanks to TheEGG blog; in particular, the comments of recent reader Zach M.

## Evolution as a risk-averse investor

I don’t know about you, but most of my money is in my savings account and not in more volatile assets like property, bonds, or stocks. This is a consequence of either laziness to explore my options, or — the more comforting alternative — extreme risk-aversion. Although it would be nice to have a few thousand dollars more to my name, it would be devastating to have a few thousand dollars less. As such, if I were given a lottery where I had a 50% chance of losing \$990 or a 50% chance of winning \$1000, then I would probably choose not to play, even though there is an expected gain of \$10; I am risk averse, and the extra variance of the bet versus the certainty of maintaining my current holdings is not worth \$10 to me. In most cases, so are most investors, although the degree of the expected-profit-to-variance trade-off differs between agents.

Daniel Bernoulli (8 February 1700 – 17 March 1782) was one of the mathematicians in the famous Bernoulli family of Basel, Switzerland, and a contemporary and friend of Euler and Goldbach. He is probably most famous for Bernoulli’s principle in hydrodynamics, which his hyper-competitive father Johann tried to claim credit for by publishing it in a book that he pre-dated by ten years. One of Daniel’s most productive times was working alongside Euler and Goldbach in the golden days (1724-1732) of the St. Petersburg Academy. It was in Russia that he developed his solution to the St. Petersburg paradox by introducing risk-aversion, and made his contribution to probability, finance, and — as we will see — evolution.
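Bernoulli’s resolution of the paradox can be sketched in a few lines. The St. Petersburg game pays $2^k$ dollars if the first heads appears on flip $k$, so the expected payoff diverges; but the expected log utility converges, which is why a Bernoulli-style agent will only pay a modest fee to play. A truncation at 60 flips is my own choice to keep the sum finite:

```python
import math

# St. Petersburg game: with probability (1/2)^k, the payoff is 2^k dollars.
flips = range(1, 61)  # truncated at 60 flips for illustration

# Each term contributes exactly $1, so this grows without bound in k.
expected_payoff = sum((0.5 ** k) * (2 ** k) for k in flips)   # = 60.0

# Bernoulli's fix: value money by its logarithm. This series converges.
expected_log_utility = sum((0.5 ** k) * math.log(2 ** k) for k in flips)

print(expected_payoff)       # 60.0 — one dollar per term, diverging
print(expected_log_utility)  # ~1.386, i.e. 2*ln(2) — a finite value
```

So the “infinitely valuable” game is, in log-utility terms, worth about as much as a sure payment of $e^{2\ln 2} = 4$ dollars.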

## Enriching evolutionary games with trust and trustworthiness

Fairly early in my course on Computational Psychology, I like to discuss Box’s (1979) famous aphorism about models: “All models are wrong, but some are useful.” Although Box was referring to statistical models, his comment on truth and utility applies equally well to computational models attempting to simulate complex empirical phenomena. I want my students to appreciate this disclaimer from the start because it avoids endless debate about whether a model is true. Once we agree to focus on utility, we can take a more relaxed and objective view of modeling, with appropriate humility in discussing our own models. Historical consideration of models, and theories as well, should provide a strong clue that replacement by better and more useful models (or theories) is inevitable, and indeed is a standard way for science to progress. In the rapid turnover of computational modeling, this means that the best one could hope for is to have the best (most useful) model for a while, before it is pushed aside or incorporated by a more comprehensive and often more abstract model. In his recent post on three types of mathematical models, Artem characterized such models as heuristic. It is worth adding that the most useful models are often those that best cover (simulate) the empirical phenomena of interest, bringing a model closer to what Artem called insilications.