Bounded rationality: systematic mistakes and conflicting agents of mind

Before her mother convinced her to be a doctor, my mother was a ballerina. As a result, whenever I tried to blame some external factor for my failures, I was met with my mother’s favorite aphorism: a bad dancer’s shoes are always too tight.

“Ahh, another idiosyncratic story about the human side of research,” you note, “why so many?”

Partially, these stories are meant to broaden TheEGG blog’s appeal, and to lull you into a false sense of security before overrunning you with mathematics. Partially, they are an homage to the blogs that inspired me to write, such as Lipton and Regan’s “Gödel’s Lost Letter and P=NP”. Mostly, however, they are there to show that science — like everything else — is a human endeavour with human roots, subject to all the excitement, disappointments, insights, and biases that this entails. Although science is a human narrative, unlike the similar story of pseudoscience, she tries to recognize and overcome her biases when they hinder her development.


The self-serving bias has been particularly thorny in the decision sciences. Humans, especially individuals with high self-esteem, tend to attribute their successes to personal skill, while blaming their failures on external factors. As you can guess from my mother’s words, I struggle with this all the time. When I try to explain the importance of worst-case analysis, algorithmic thinking, or rigorous modeling to biologists and fail, my first instinct is to blame it on the structural differences between the biological and mathematical communities, or on biologists’ discomfort with mathematics. In reality, the blame lies with my inability to articulate the merits of my stance, or to provide strong evidence that I can offer any practical biological results. Even more depressing, I might be suffering from a case of interdisciplinitis and promoting a meritless idea while completely failing to connect to the central questions in biology. However, I must maintain my self-esteem, and even from my language here, you can tell that I am unwilling to fully entertain the latter possibility. Interestingly, this sort of bias can propagate from individual researchers into their theories.

One of the difficulties for biologists, economists, and other decision scientists has been coming to grips with observed irrationality in humans and other animals. Why wouldn’t there be a constant pressure toward more rational animals that maximize their fitness? Who is to blame for this irrational behavior? In line with the self-serving bias, it must be that crack in the sidewalk! Or maybe some other feature of the environment.

How teachers help us learn deterministic finite automata

Many graduate students, and even professors, have a strong aversion to teaching. This tends to produce awful, one-sided classes that students attend just to transcribe the instructor’s lecture notes. The trend is so bad that in some cases instructors take pride in their bad teaching, and at some institutions — or so I hear around the academic water-cooler — you might even get in trouble for being too good a teacher. Why are you spending so much effort preparing your courses, instead of working on research? And it does take a lot of effort to be an effective teacher: it takes skill to turn a lecture theatre into an interactive environment where information flows both ways. A good teacher has to be able to assess how the students are progressing, and to clarify misconceptions held by the students even when the students can’t identify those conceptions as misplaced. Last week I had an opportunity to exercise my teaching by lecturing in Prakash Panangaden’s COMP 330 course.

Minimizing finite state automata

Computer science is a strange mix of engineering, science, and math. This is captured well by the historic roots of deterministic finite state automata (DFAs). The first ideas that can be recognized as a precursor to DFAs came from Gilbreth & Gilbreth (1921), who introduced flow process charts into mechanical and industrial engineering. Independently, McCulloch & Pitts (1943) used nerve-nets as a model of neural activity. This preceded Turing’s 1948 entry into brain science with B-type neural networks, and Rosenblatt’s perceptrons (1957). Unlike Turing and Rosenblatt, the McCulloch & Pitts model did not incorporate learning. However, the nerve nets went on to have a more profound effect on ratiocination because — as Kleene (1951) recognized — they became the modern form of DFAs. Although DFAs are now less in vogue than they were a few decades ago, their centrality to the field keeps them an essential part of a standard computer science curriculum. Yesterday, I had the privilege of familiarizing students with the importance of DFAs by giving a lecture in Prakash Panangaden’s COMP 330 course. Since Prakash had already introduced the students to the theory of DFAs, regular expressions, and languages, I was tasked with explaining the more practical matter of DFA minimization.
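Since the excerpt stops before the technical content, here is a minimal sketch of the partition-refinement idea behind DFA minimization (Moore’s algorithm). The 4-state machine below is a hypothetical example of my own, not the one from the lecture: start from the accepting/non-accepting split and keep splitting blocks until no input symbol can tell two states of the same block apart.

```python
# A minimal sketch of DFA minimization by partition refinement
# (Moore's algorithm). Assumes a complete DFA with all states
# reachable. The example DFA is hypothetical.

def minimize(states, alphabet, delta, accepting):
    """Return the blocks of mutually indistinguishable states."""
    # Coarsest partition: accepting vs. non-accepting states.
    partition = [b for b in (set(accepting), set(states) - set(accepting)) if b]

    changed = True
    while changed:
        changed = False
        refined = []
        for block in partition:
            # States stay together only if every symbol sends them
            # to the same block of the current partition.
            def signature(s):
                return tuple(
                    next(i for i, b in enumerate(partition) if delta[(s, a)] in b)
                    for a in sorted(alphabet)
                )
            groups = {}
            for s in block:
                groups.setdefault(signature(s), set()).add(s)
            if len(groups) > 1:
                changed = True
            refined.extend(groups.values())
        partition = refined
    return partition

# A DFA over {0, 1} accepting strings that end in 1; q2 and q3
# duplicate q0 and q1, so minimization should merge them.
delta = {("q0", "0"): "q2", ("q0", "1"): "q1",
         ("q1", "0"): "q0", ("q1", "1"): "q3",
         ("q2", "0"): "q0", ("q2", "1"): "q3",
         ("q3", "0"): "q2", ("q3", "1"): "q1"}
print(minimize({"q0", "q1", "q2", "q3"}, {"0", "1"}, delta, {"q1", "q3"}))
# -> [{'q1', 'q3'}, {'q0', 'q2'}]  (block order may vary)
```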


What is the algorithmic lens?

If you are a regular reader then you are probably familiar with my constant mention of the algorithmic lens. I insist that we must apply it to everything: biology, cognitive science, ecology, economics, evolution, finance, philosophy, probability, and social science. If you’re still reading this then you have incredible powers to resist clicking links. Also, you are probably mathematically inclined and appreciate the art of good definitions. As such, you must be incredibly irked by my lack of any attempt to explicitly define the algorithmic lens. You are not the only one, dear reader; there have been many times when I have been put on the spot and asked to define this ‘algorithmic lens’. My typical response has been the blank stare: not the “oh my, why don’t you already know this?” stare, but the “oh my, I don’t actually know how to define this” stare. Like the artist, continental philosopher, or literary critic, I know how the ‘algorithmic lens’ makes me feel and what it means to me, but I just can’t provide a binding definition. Sometimes I even think it is best left underspecified, but I won’t let that thought stop me from attempting a definition.

Randomness, necessity, and non-determinism

If we want to talk philosophy then it is necessary to mention Aristotle. Or is it just a likely beginning? For Aristotle, there were three types of events: certain, probable, and unknowable. Unfortunately for science, Aristotle considered the results of games of chance to be unknowable, and probability theory started — 18 centuries later — with the analysis of games of chance. This doomed much of science to an awkward fetishisation of probability, an undue love of certainty, and unreasonable quantification of the unknowable. This state of affairs stems from our fear of admitting when we are ignorant, a strange condition given that many scientists would agree with Feynman’s assessment that one of the main features of science is acknowledging our ignorance.

Unfortunately, we throw away our ability to admit ignorance when we assign probabilities to everything, especially in settings where there is no reason to postulate an underlying stochastic generating process, or any way to make reliable repeated measurements. “Foul!” you cry, “Bayesians talk about beliefs, and we can hold beliefs about single events. You are just taking the dated frequentist stance.” Well, to avoid the nonsense of the frequentist vs. Bayesian debate, let me take literally the old adage “put your money where your mouth is” and use the fundamental theorem of asset pricing to define probability. I’ll show an example of a market we can’t price, and ask how theoretical computer science can resolve our problem with non-determinism.
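To make the asset-pricing route to probability concrete, here is a minimal sketch with a hypothetical one-period, two-state market (every number is invented): the no-arbitrage price of the stock pins down a risk-neutral weight q, and q is then what we call the probability of the up state.

```python
# A toy one-period market, all numbers invented. A stock costs S0 now
# and pays Su in the "up" state or Sd in the "down" state; cash grows
# at the risk-free rate r. By the fundamental theorem of asset
# pricing, the market is arbitrage-free iff there is a risk-neutral
# weight q with S0 = (q*Su + (1-q)*Sd) / (1+r) and 0 < q < 1.

S0, Su, Sd = 100.0, 120.0, 90.0   # hypothetical stock price and payoffs
r = 0.02                          # hypothetical risk-free rate

q = ((1 + r) * S0 - Sd) / (Su - Sd)
print(f"risk-neutral probability of the up state: {q:.3f}")  # 0.400

# A bet paying 1 in the up state is then priced at q/(1+r). With more
# states than independent assets, q is no longer unique: the market
# admits a *range* of consistent prices, and that indeterminacy is
# the non-determinism the full post turns to.
assert 0 < q < 1, "this toy market would admit arbitrage"
```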

Semi-smooth fitness landscapes and the simplex algorithm

As you might have guessed from my strange and complicated name, I’m Russian. One of the weird features of this is that even though I have never had to experience war, I still feel a strong cultural war-weariness. This stems from an ancestral memory of the Second World War, a conflict that had an extremely disruptive effect on Russian society. None of my great-grandfathers survived the war; one of them was a train engineer who died trying to drive a train of provisions over the Road of Life to resupply Leningrad during its 29-month siege. Since the Germans blocked all the land routes, part of the road ran over the ice on Lake Ladoga — trucks had to be optimally spaced so as not to crack the ice that separated them from a watery grave, while maximizing the amount of supplies transported into the city. Leonid Kantorovich — the Russian mathematician and economist who developed linear programming as the war was starting in western Europe — ensured safety by calculating the optimal distance between cars depending on the ice thickness and air temperature. In the first winter of the road, Kantorovich would personally walk between trucks on the ice to ensure his guidelines were followed and to reassure the men of the reliability of mathematical programming. Like his British counterpart, Kantorovich was applying the algorithmic lens to help the Allied war effort and the safety of his people. Although I can never reciprocate the heroism of these great men, stories like this convince me that the algorithmic lens can provide a powerful perspective in economics, engineering, or science. This is one of the many inspirations behind my most recent project (Kaznatcheev, 2013): applying the tools of theoretical computer science and mathematical optimization — such as linear programming — to better understand the rate of evolutionary dynamics.
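For readers who haven’t met linear programming before, here is a toy problem in the spirit of the story, with entirely invented numbers: choose a convoy’s mix of light and heavy trucks to maximize tonnes delivered, subject to an ice-load budget and a limit on convoy slots.

```python
# A toy linear program in the spirit of the story; every number is
# invented for illustration. Pick how many light and heavy trucks to
# send in a convoy to maximize tonnes delivered, subject to an
# ice-load budget and a limit on available convoy slots.
from scipy.optimize import linprog

c = [-2.0, -5.0]            # tonnes per light/heavy truck (negated: linprog minimizes)
A_ub = [[1.0, 3.0],         # ice stress per light/heavy truck
        [1.0, 1.0]]         # each truck takes one convoy slot
b_ub = [30.0,               # ice-load budget for this stretch of road
        20.0]               # convoy slots available
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)      # optimal mix (15 light, 5 heavy): 55 tonnes
```

The simplex algorithm solves such programs by walking from vertex to vertex of the feasible region, and it is this walk that the full post relates to adaptive steps on fitness landscapes.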

What can theoretical computer science offer biology?

If there is anything definitive that can be said about biology, it is that biology is messy and complicated. The stereotypical image of a biology major is of someone in the library memorizing volumes of loosely (at best) related facts, only to have them overturned the next semester. Concepts are related only vaguely, to the point that the field looks like stamp-collecting to outsiders. The only unifying theme is evolution, and even that is presented with a smorgasbord of verbal and mathematical models, many lacking predictive power or sometimes even solid empirical foundations. This might seem like the polar opposite of a theoretical computer scientist with her strict hierarchies and logical deductions. Even the complexity zoo seems perfectly tame compared to any biological taxonomy. However, since I’ve been promoting algorithmic theories of biology and even trying my hand at applying cstheory to models of evolution (Kaznatcheev, 2013), I must think that there is some possibility for a productive partnership. So, what can theoretical computer science offer biology? CStheory can provide the abstractions and technical tools to systematize and organize biology’s heuristic models.

Three types of mathematical models

Whenever asked to label myself, I am overcome by existential dread: what am I? A mathematician? A computer scientist? A modeler? A crazy man with a blog? Each has its own connotations and describes aspects of my approach to thought, but none (except maybe the last) represents my mindset accurately. I have experienced mathematical modeling in three very different settings during my research and education: theoretical computer science, physics, and modeling in the social and biological sciences. In the process, I’ve concluded that there are at least three fundamentally different kinds of modeling, and three different levels of presenting them. This is probably not exhaustive, but I have searched for some time and could not find extensions; maybe you can suggest some. Since this post is motivated by names, let’s name the three types of models abstractions, heuristics, and insilications, and the three presentations analytic, algorithmic, and computational.

Infographic history of evolutionary thought

Most of you are probably familiar with some variant of George Santayana’s aphorism: “Those who cannot remember the past are condemned to repeat it”. The quote is common to the point of cliché for a reason; in fact, if we look at cliodynamics, we can even mathematically demonstrate the cyclic nature of history. This is especially true of the history of thought, and forgetting it is an even easier mistake to make when working in an interdisciplinary setting. To avoid interdisciplinitis as I delve deeper into models of evolution, I am always eager to learn more about the progress of evolutionary thought. As such, I was happy to see this new infographic from Tania Jenkins, Miriam Quick and Stefanie Posavec for the European Society for Evolutionary Biology:

[Infographic: a visual history of evolutionary thought]