Useful delusions, interface theory of perception, and religion

As you can guess from the name, evolutionary game theory (EGT) traces its roots to economics and evolutionary biology. Both progenitor fields assume it is impossible, or unreasonably difficult, to observe the internal representations, beliefs, and preferences of the agents they model, and thus adopt a largely behaviorist view. My colleagues and I, however, are interested in looking at learning from the cognitive science tradition. In particular, we are interested in the interaction of evolution and learning. This interaction is not, in and of itself, an innovation: it has been a concern for biologists since Baldwin (1896, 1902), and Smead & Zollman (2009; Smead, 2012) even brought the interaction into an EGT framework, showing that rational learning is not necessarily a ‘fixed-point of Darwinian evolution’. But all the previous work that I’ve encountered at this interface makes a simple implicit assumption, and I wanted to question it.

It is relatively clear that evolution acts objectively, without regard for an individual agent’s subjective experience except insofar as that experience determines behavior. Learning, on the other hand — at least from the cognitive science perspective — acts on the subjective experience of the agent. There is an inherent tension between the objective and subjective perspectives; it becomes most obvious in the social learning setting, but it is present even for individual learners. Most previous work has sidestepped the issue either by not delving into the internal mechanisms by which agents decide how to act — something incompatible with the cognitive science perspective — or by assuming that subjective representations are true to objective reality — something for which we have no a priori justification.

A couple of years ago, I decided to examine this question directly by developing the objective-subjective rationality model. Marcel and I fleshed out the model by adding a mechanism for simple Bayesian learning; this came with the extra perk of letting us adopt Masel’s (2007) approach of treating quasi-magical thinking as an inferential bias. To round out the team with some cognitive science expertise, we asked Tom to join. A few days ago, at an unhurried pace and after over 15 relevant blog posts, we released our first paper on the topic (Kaznatcheev, Montrey & Shultz, 2014), along with its MATLAB code.
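To give a flavor of what simple Bayesian learning means in this kind of setting, here is a minimal sketch in Python. To be clear, this is not the model from the paper or its MATLAB code: the Beta-Bernoulli setup, the uniform prior, and every name below are assumptions made purely for illustration.

```python
# A minimal sketch of simple Bayesian learning: an agent tracks its belief
# about the probability that a partner cooperates via Beta-Bernoulli updating.
# This is NOT the model of Kaznatcheev, Montrey & Shultz (2014); the prior
# and all parameters are illustrative assumptions.

def update_belief(alpha, beta, partner_cooperated):
    """Conjugate update of a Beta(alpha, beta) belief after one interaction."""
    if partner_cooperated:
        return alpha + 1, beta
    return alpha, beta + 1

def expected_cooperation(alpha, beta):
    """Posterior mean of the partner's cooperation probability."""
    return alpha / (alpha + beta)

# Start from a uniform prior Beta(1, 1) and observe four interactions.
alpha, beta = 1.0, 1.0
for cooperated in [True, True, False, True]:
    alpha, beta = update_belief(alpha, beta, cooperated)

print(expected_cooperation(alpha, beta))  # 4/6, i.e. about 0.67
```

The subjective-objective tension enters when beliefs like this one, rather than the objective payoffs, drive the agent’s choice of action.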

Models, modesty, and moral methodology

In high school, I had the privilege to be part of a program that focused on the humanities and social sciences, critical thinking, and building research skills. The program’s crowning element was a semester of grade eleven (early 2005) dedicated to independent research on a project of our own design. For my project, I pored over papers and books at the University of Saskatchewan library, trying to come up with a semi-coherent thesis on post-Cold War religious violence. Maybe this is why my first publications in college were on ethnocentrism? It’s a hard question to answer, but I doubt that the connection was that direct. As I was preparing to head to McGill, I had ambitions of studying political science and physics, but I was quickly disenchanted with the idea and ended up focusing on theoretical computer science, physics, and math. When I returned to the social sciences in late 2008, it was with the arrogance typical of a physicist first entering a new field.

In the years since — along with continued modeling — I have tried to become more conscious of the types and limitations of models and of their role in knowledge building and rhetoric. In particular, you might have noticed a recent trend of posts on the social sciences and the various dangers of Scientism. These are part of an ongoing discussion with Adam Elkus and of my reading of the Dart-Throwing Chimp. Recently, Jay Ulfelder shared a fun quip on why skeptics make bad pundits:

First Rule of Punditry: I know everything; nothing is complicated.

First Rule of Skepticism: I know nothing; everything is complicated.

This gets at an important issue common to many public-facing sciences, such as climate science, the social sciences, and medicine. Academics are often encouraged to be skeptical, both of their own work and that of others, and to be precise about the scope of their predictions, although this self-skepticism and precision are sometimes eroded by the pressure to publish ‘high-impact’ results. I would argue that without factions, divisions, and debate, science would find progress — whatever that means — much more difficult. Academic rhetoric, however, is often incompatible with political rhetoric, since — as Jay Ulfelder points out — the latter relies much more on certainty, conviction, and the force with which you deliver your message. What should a policy-oriented academic do?

Cross-validation in finance, psychology, and political science

A large chunk of machine learning (although not all of it) is concerned with predictive modeling, usually in the form of designing an algorithm that takes in some data set and returns an algorithm (or sometimes, a description of an algorithm) for making predictions on future data. In terminology more friendly to the philosophy of science, we may say that we are defining a rule of induction that tells us how to turn past observations into a hypothesis for making future predictions. Of course, Hume tells us that if we are completely skeptical then there is no justification for induction — in machine learning we usually know this as the no free lunch theorem. However, we still use induction all the time, usually with some confidence, because we assume that the world has regularities that we can extract. Unfortunately, this just shifts the problem, since there are countless possible regularities and we have to identify ‘the right one’.

Thankfully, this restatement of the problem is more approachable if we assume that our data set did not conspire against us. That being said, every data set, no matter how ‘typical’, has some idiosyncrasies, and if we tune in to these instead of the ‘true’ regularity then we say we are over-fitting. Being aware of over-fitting and circumventing it is usually one of the first lessons of an introductory machine learning course. The general technique we learn is cross-validation, or out-of-sample validation. One round of cross-validation consists of randomly partitioning the data into a training set and a validation set, then running the induction algorithm on the training set to generate a hypothesis, which we test on the validation set. A ‘good’ machine learning algorithm (or rule of induction) is one where the performance in-sample (on the training set) is about the same as out-of-sample (on the validation set), and both performances are better than chance. The technique is so foundational that the only reliable way to earn zero on a machine learning assignment is to skip cross-validation of your predictive models. The technique is so ubiquitous in machine learning and statistics that the StackExchange site dedicated to statistics is named CrossValidated. The technique is so…

You get the point.
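Still, for concreteness, here is a minimal sketch of one round of cross-validation in Python. The synthetic data, the 80/20 split, and the degree-1 polynomial fit are all illustrative assumptions, not recommendations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a noisy linear trend (purely illustrative).
X = rng.uniform(-1, 1, size=200)
y = 3.0 * X + rng.normal(scale=0.5, size=200)

# One round of cross-validation: randomly partition the data into a
# training set and a validation set (the 80/20 split is arbitrary).
indices = rng.permutation(len(X))
train, valid = indices[:160], indices[160:]

# 'Induction': fit a degree-1 polynomial to the training set only.
coefficients = np.polyfit(X[train], y[train], deg=1)

def mean_squared_error(idx):
    """Average squared prediction error on the given subset."""
    predictions = np.polyval(coefficients, X[idx])
    return np.mean((predictions - y[idx]) ** 2)

# A 'good' rule of induction performs about as well out-of-sample as
# in-sample, and both should beat chance.
print("in-sample MSE:    ", mean_squared_error(train))
print("out-of-sample MSE:", mean_squared_error(valid))
```

A large gap between the two errors is the signature of over-fitting: the hypothesis has latched onto idiosyncrasies of the training set rather than a regularity that generalizes.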

If you are a regular reader, you can probably induce from past posts that my point is not to write an introductory lecture on cross-validation. Instead, I wanted to highlight some cases in science and society where cross-validation isn’t used, where it needn’t be used, and maybe even where it shouldn’t be used.

Big data, prediction, and scientism in the social sciences

Much of my undergrad was spent studying physics, and although I still think that a physics background is great for a theorist in any field, there are some downsides. For example, I used to make jokes like: “soft isn’t the opposite of hard sciences, easy is.” Thankfully, over the years I have started to slowly grow out of these condescending views. Of course, apart from amusing anecdotes, my past bigotry would be of little importance if it weren’t shared by a surprising number of grown physicists. For example, Sabine Hossenfelder — an assistant professor of physics in Frankfurt — writes in a recent post:

It isn’t so surprising that social scientists themselves are unhappy because the boat of inadequate skills is sinking in the data sea and physics envy won’t keep it afloat. More interesting than the paddling social scientists is the public opposition to the idea that the behavior of social systems can be modeled, understood, and predicted.

[Image caption: “If you need some help with the math, let me know, but that should be enough to get you started!” “Huh? No, I don’t need to read your thesis, I can imagine roughly what it says.”]

As a blogger, I understand that we can sometimes be overly bold and confrontational. Since blogging is an informal medium, I have no fundamental problem with such strong statements, or even straw men, if they are part of a productive discussion or critique. If there is no useful discussion, I would normally just make a small comment or ignore the post completely, but this time I decided to focus on Hossenfelder’s post because it highlights a common symptom of interdisciplinitis: an outsider thinking that they are addressing people’s critique — usually by restating an obvious and irrelevant argument — while completely missing the point. Also, her comments serve as a nice bow to tie together some thoughts that I’ve been wanting to write about recently.

Kleene’s variant of the Church-Turing thesis

In 1936, Alonzo Church, Alan Turing, and Emil Post each published independent papers on the Entscheidungsproblem, introducing the lambda calculus, Turing machines, and Post-Turing machines as mathematical models of computation. A myriad of other models followed, many of them taking seemingly unrelated approaches to the computable: algebraic, combinatorial, linguistic, logical, mechanistic, etc. Of course, all of these models were shown to be equivalent in what they could compute, and this great heuristic coherence led mathematicians to formulate the Church-Turing thesis. As with many important philosophical notions, over the last three-quarters of a century the thesis has gradually changed. In a semi-historical style, I will identify three progressively more empirical formulations with Kleene, Post, and Gandy. For this article, I will focus on Kleene’s purely mathematical formulation, and reserve the psychological and physical variants for next time.

Mathematicians and logicians begat the Church-Turing thesis, so at its inception it was a hypothesis about the Platonic world of mathematical ideas, not about the natural world. There are those who follow Russell (and to some extent Hilbert) and identify mathematics with tautologies. This view is not typically held among mathematicians, who, following in the footsteps of Gödel, know how important it is to distinguish between the true and the provable. Here I side with Lakatos in viewing logic and formal systems as tools to verify and convince others of our intuitions about the mathematical world. Due to Gödel’s incompleteness theorems and decades of subsequent results, we know that no single formal system can be a perfect lens on the world of mathematics, but we do have preferred ones, like ZFC.

Misunderstanding falsifiability as a power philosophy of Scientism

I think that trying to find one slogan that captures all of science and nothing else is a fool’s errand. However, it is an appealing errand given our propensity to want to classify and delimit the things we care about. It is also an errand that often takes a central role in the philosophy of science.

Just like with almost any modern thought, if we try hard enough we can trace the philosophy of science back to the Greeks and discuss the contrasting views of Plato and Aristotle. As fun as such historical excursions might be, it seems a little silly given that the term ‘scientist’ was not coined until 1833, and even under different names our current conception of the scientist would not stretch much further back than the natural philosophers of the 17th century. Even the early empiricism of these philosophers, although essential as a backdrop and a foundational shift in view, is more of an overall metaphysical outlook than a dedicated philosophy of science.

Algorithmic Darwinism

The workshop on computational theories of evolution started off on Monday, March 17th, with Leslie Valiant — one of the organizers — introducing his model of evolvability (Valiant, 2009). The name was originally meant to capture what type of complexity can be achieved through evolution. Unfortunately — especially at this workshop — ‘evolvability’ already has a different, more popular meaning in biology: mechanisms that make an organism or species ‘better’ at evolving, in the sense of higher mutation rates, de novo genes, recombination through sex, etc. As such, we need a better name, and I am happy to take on the renaming task.

Why academics should blog and an update on readership

It’s that time again: TheEGG has passed a milestone (150 posts under our belt!) and so I feel obliged to reflect on blogging and to update the curious on the readership statistics.

About a month ago, Nicholas Kristof bemoaned the lack of public intellectuals in the New York Times. Some people responded with defenses of the ‘busy academic’, while others agreed but shifted the conversation from the more traditional media Kristof focused on to blogs. As a fellow blogger, I can’t help but support this shift, but I also can’t help noticing the conflation of two very different notions: the public intellectual and the public educator.

Computational theories of evolution

If you look at a typical computer science department’s faculty list, you will notice that the theorists are a minority, and sometimes their numbers are thinned further by being split off into mathematics departments. As such, any institute that unites and strengthens theorists is a welcome development. That was my first reason for excitement two years ago when I learned that a $60 million grant would establish the Simons Institute for the Theory of Computing at UC Berkeley. The institute’s mission is close to my heart: bringing the study of theoretical computer science to bear on the natural sciences; in short, an institute for the algorithmic lens. My second reason for excitement was that one of the inaugural programs is evolutionary biology and the theory of computing. Throughout this term, a series of workshops is being held to gather and share the relevant experience.

Right now, I have my conference straw hat on, as I wait for a flight transfer in Dallas on my way to one of the events in this program, the workshop on computational theories of evolution. For the next week I will be in Berkeley absorbing all there is to know on the topic. Given how much I enjoyed Princeton’s workshop on natural algorithms in the sciences, I can barely contain my excitement.

From heuristics to abductions in mathematical oncology

As Philip Gerlee pointed out, mathematical oncologists have contributed two main focuses to cancer research: following Nowell (1976), they have stressed the importance of viewing cancer progression as an evolutionary process, and — of less clear-cut origin — of recognizing the heterogeneity of tumours. Hence, it would seem appropriate that mathematical oncologists might enjoy Feyerabend’s philosophy:

[S]cience is a complex and heterogeneous historical process which contains vague and incoherent anticipations of future ideologies side by side with highly sophisticated theoretical systems and ancient and petrified forms of thought. Some of its elements are available in the form of neatly written statements while others are submerged and become known only by contrast, by comparison with new and unusual views.

If you are a total troll or a pronounced pessimist, you might view this as even lending credence to anti-science views of science as a cancer of society. This is not my reading.

For me, the important takeaway from Feyerabend is that there is no single scientific method or overarching theory underlying science. Science is a collection of various tribes and cultures, with their own methods, theories, and ontologies. Many of these theories are incommensurable.
