Bounded rationality: systematic mistakes and conflicting agents of mind

Before her mother convinced her to be a doctor, my mother was a ballerina. As a result, whenever I tried to blame some external factor for my failures, I was met with my mother’s favorite aphorism: a bad dancer’s shoes are always too tight.

“Ahh, another idiosyncratic story about the human side of research,” you note, “why so many?”

Partially these stories are to broaden TheEGG blog’s appeal, and to lull you into a false sense of security before overrunning you with mathematics. Partially they are a homage to the blogs that inspired me to write, such as Lipton and Regan’s “Gödel’s Lost Letter and P=NP”. Mostly, however, they are to show that science — like everything else — is a human endeavour with human roots and subject to all the excitement, disappointments, insights, and biases that this entails. Although science is a human narrative, unlike the similar story of pseudoscience, she tries to recognize and overcome her biases when they hinder her development.


The self-serving bias has been particularly thorny in the decision sciences. Humans, especially individuals with high self-esteem, tend to attribute their successes to personal skill, while blaming their failures on external factors. As you can guess from my mother’s words, I struggle with this all the time. When I try to explain the importance of worst-case analysis, algorithmic thinking, or rigorous modeling to biologists and fail, my first instinct is to blame it on the structural differences between the biological and mathematical communities, or on biologists’ discomfort with mathematics. In reality, the blame lies with my inability to articulate the merits of my stance, or to provide strong evidence that I can offer any practical biological results. Even more depressing, I might be suffering from a case of interdisciplinitis and promoting a meritless idea while completely failing to connect to the central questions in biology. However, I must maintain my self-esteem, and even from my language here, you can tell that I am unwilling to fully entertain the latter possibility. Interestingly, this sort of bias can propagate from individual researchers into their theories.

One of the difficulties for biologists, economists, and other decision scientists has been coming to grips with observed irrationality in humans and other animals. Why wouldn’t there be a constant pressure toward more rational animals that maximize their fitness? Who is to blame for this irrational behavior? In line with the self-serving bias, it must be that crack in the sidewalk! Or maybe some other feature of the environment.

Two ways to blame the environment

A popular stance in evolutionary psychology is that animals behave irrationally in settings different from their environment of evolutionary adaptation (Laland & Brown, 2002; I’ll call this self-serving approach one). Whenever you can successfully Ctrl+F “paleolithic” in an evolutionary psychology paper then, chances are, it is using this sort of argument. Although the argument is valid in some contexts, as an all-purpose justification it is too easy an answer and can explain away all irrationality in modern humans and in controlled experiments with animals. If you prefer falsification, it also fails to explain naturalistic occurrences of irrationality, such as incomplete choice in nest selection among ants (Franks et al., 2003) and bees (Seeley & Visscher, 2004). In the case of humans, it further treats evolution as an artifact of the past and tends to promote the silly stance that we are not evolving anymore.

A slightly more subtle way to blame irrationality on external circumstances is to focus on complete information about the present or future environment being unavailable (or unreasonably costly to acquire) (Stephens & Charnov, 1982; Hirshleifer & Riley, 1992; I’ll call this self-serving approach two). Since the animal does not have access to perfect information, it is doomed to make mistakes at first or, in accounting for its possible errors, to behave locally irrationally while remaining rational on some longer time scale that takes the consequences of uncertainty into account. For example, consider time inconsistency in preferences, as summarized by Jason Collins of Evolving Economics:

[R]esearch participants are offered the choice between one bottle of wine a month from now and two bottles of wine one month and one day from now (alternatively, substitute cake, money or some other pay-off for wine). Most people will choose the two bottles of wine. However, when offered one bottle of wine straight away, more people will take that bottle and not wait until the next day to take up the alternative of two bottles.

Here, even evolutionary economists seem to fall into the fallacy of staying married to rational discounting schemes. As Collins explains while summarizing Sozou (1998):

[U]ncertainty as to the nature of … hazards can explain time inconsistent preferences. Suppose there is a hazard that may prevent the pay-off from being realised … and you do not know what the specific probability of that hazard … [A]s time passes, one can update their estimate of the probability of the underlying hazard. If after a week the hazard has not occurred, this would suggest that the probability of the hazard is not very high, which would allow the person to reduce the rate at which they discount the pay-off. … This example provides a nice evolutionary explanation of the shape of time preferences. In a world of uncertain hazards, it would be appropriate to apply a heavier discount rate for a short-term pay-off.
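
To make the mechanism Collins describes concrete, here is a minimal Python sketch (my own illustration under Sozou’s assumptions, not code from any of the cited papers): an exponential prior with mean 1/k over an unknown constant hazard rate gives an expected survival probability of k/(k + t) for a pay-off delayed by t, i.e. hyperbolic discounting, which is enough to reproduce the wine-bottle reversal. The prior parameter k = 0.5 and the 30-day horizon are arbitrary choices for illustration.

```python
import math

def hyperbolic_discount(t, k=0.5):
    # Sozou (1998): with an exponential prior (mean 1/k) over an unknown constant
    # hazard rate, the chance the pay-off survives a delay t is
    #   E[exp(-lambda * t)] = integral of k*exp(-k*lambda) * exp(-lambda*t) dlambda
    #                       = k / (k + t),
    # i.e. a hyperbolic rather than exponential discount factor.
    return k / (k + t)

def survival_numeric(t, k=0.5, step=1e-3, lam_max=20.0):
    # Riemann-sum sanity check of the integral above.
    total, lam = 0.0, 0.0
    while lam < lam_max:
        total += k * math.exp(-k * lam) * math.exp(-lam * t) * step
        lam += step
    return total

print(hyperbolic_discount(3.0), survival_numeric(3.0))  # both ~0.143

# Preference reversal: one bottle after a delay of t days vs. two bottles a day later.
for t in (0, 30):
    v_one = 1 * hyperbolic_discount(t)
    v_two = 2 * hyperbolic_discount(t + 1)
    choice = "wait the extra day" if v_two > v_one else "take the single bottle now"
    print(f"offer starts in {t:2d} days: {v_one:.3f} vs {v_two:.3f} -> {choice}")
```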

Of course, to somebody familiar with the Weber-Fechner law, a less self-serving explanation is possible: the irrationality stems from imperfections in the participants themselves. Humans tend to perceive numbers, brightness, weight, and loudness on a logarithmic scale. This means that adding a pound to a two-pound box you are carrying (or an extra day’s wait to a reward available moments from now) seems like a much bigger change than adding a pound to a fifty-pound box (or an extra day’s wait to a reward a month away). Since animals have only limited resources to invest in their various decision-making processes, it makes sense to have as many functions as possible handled by the same sub-systems: in this case, comparing quantities. Why should the developing animal invest extra resources in a new quantity-comparing subsystem instead of using the one it already has in place for non-time-related sensory modalities?

As long as you associate a higher cost with a longer perceived delay, you get the preference reversal. This also corresponds more closely to what you might suspect from introspection, and explains why it is possible to use rational reasoning to circumvent your instinctive decision procedure and convince yourself that a one-day wait now and a one-day wait 30 days from now are the same; just as a person familiar with arithmetic will say that 5, not 3, is halfway between 1 and 9. Best of all, this explanation continues to be satisfactory even when there are no hazards to prevent you from getting your wine. For Sozou (1998), however, if you want to explain why the lab participants continue to show preference reversal, you must fall back on the just-so story that the environment of evolutionary adaptation is different from the laboratory setting.
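
As a rough illustration of this alternative (my own sketch, not a model from any of the cited papers), suppose perceived delay grows logarithmically with objective delay, Weber-Fechner style, and that each unit of perceived delay multiplicatively discounts the reward. The constants below (a reference duration of one day and a cost of 1.5 per unit of perceived delay) are arbitrary assumptions chosen to make the reversal visible; no hazards are involved.

```python
import math

def perceived_delay(days, reference=1.0):
    # Weber-Fechner assumption: subjective duration grows logarithmically
    # with the objective delay.
    return math.log(1.0 + days / reference)

def subjective_value(reward, days, cost=1.5):
    # Linking assumption: each unit of *perceived* delay multiplicatively
    # discounts the reward.
    return reward * math.exp(-cost * perceived_delay(days))

# One bottle after a delay of t days vs. two bottles one day later.
for t in (0, 30):
    v_one = subjective_value(1, t)
    v_two = subjective_value(2, t + 1)
    choice = "wait the extra day" if v_two > v_one else "take the single bottle now"
    print(f"offer starts in {t:2d} days: {v_one:.3f} vs {v_two:.3f} -> {choice}")
```

An extra day added to an immediate reward looms large in perceived time, while the same day added to a month-long wait barely registers, which is all the reversal needs.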

Taking responsibility with bounded rationality

This approach of holding the decision maker responsible for their irrationality, instead of blaming it on the environment, is central to theories of bounded rationality in economics, and of heuristics and biases in psychology. It is also intimately related to computer science, since the limitations come from information processing instead of just the information content of testing (self-serving approach one) or training (self-serving approach two) examples. Livnat & Pippenger (2006, 2008) focus on this by placing Kolmogorov complexity constraints on their decision-making agents. Specifically, they model decision makers as fan-out-free boolean circuits, and their complexity-theoretic constraint is an upper bound on the number of gates in the circuit.

To focus on systematic mistakes, Livnat & Pippenger (2008) consider an ideal boolean function t: {0,1}^n → {0,1} that represents the best (or rational) response (choice 0 or 1) of the animal to every one of the 2^n possible states of the world. As with strings, most boolean functions have high Kolmogorov complexity and require a circuit with close to 2^n/n gates to compute perfectly. However, Livnat & Pippenger (2008) only allow the agents a fraction 0 < a < 1 of the necessary gates. Since the agents don’t have brains large enough to compute the function perfectly, it is not surprising that they make mistakes on some inputs. Simple mistakes alone would be a trivial result, and not definitively distinguishable from self-serving approach two: it could be that the smaller-brained animals are simply not processing some of the n inputs describing the environment they are in, but are making optimal decisions on the subset of bits they do pay attention to. Hence, a systematic mistake is defined as a mistake that is not made by the optimal circuit (one with no restriction on the number of gates) that uses only the bits the small-brained animal uses. In other words, a systematic mistake is an error in processing caused by computational restrictions, instead of an error caused by a lack of information. Livnat & Pippenger (2008) show that the optimal computationally constrained animal makes systematic mistakes on most target functions.
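
To see why most target functions are out of reach for a gate-limited agent, here is a crude Shannon-style counting sketch in Python. It is my own illustration, not Livnat & Pippenger’s fan-out-free model: it simply over-counts circuits built from g two-input gates (over the full 16-operation binary basis) and compares that count against the 2^(2^n) boolean functions on n inputs; whenever even the over-count falls short, some function must need more than g gates.

```python
def circuit_count_upper_bound(n, g, basis_size=16):
    # Crude over-count of circuits with g two-input gates on n inputs: each gate
    # picks one of `basis_size` binary operations and two arguments drawn from
    # the n inputs, the two constants, or any of the g gates.
    return (basis_size * (n + g + 2) ** 2) ** g

def gate_lower_bound(n):
    # Smallest g whose over-count reaches 2^(2^n), the number of boolean
    # functions on n inputs. For any smaller gate budget even the over-count
    # falls short, so some function needs at least this many gates.
    num_functions = 2 ** (2 ** n)
    g = 1
    while circuit_count_upper_bound(n, g) < num_functions:
        g += 1
    return g

for n in range(2, 13):
    print(f"n = {n:2d}: some function needs >= {gate_lower_bound(n):4d} gates"
          f"  (compare 2^n/n = {2 ** n / n:6.1f})")
```

The bound grows on the same order as 2^n/n, which is why an agent limited to a fraction a of that budget is forced into mistakes on most ideal functions.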

In an earlier paper, Livnat & Pippenger (2006) showed a particularly interesting case of this by considering the specific problem of finding shortest paths, and showing that even an optimal mind can be made of different agents with conflicting utility functions. For this, they introduce a game-theoretic definition of conflict, as a sort of dual to a Nash equilibrium, and an information-theoretic means to infer the utility functions of agents (or sub-modules of the mind) a posteriori, instead of relying on the a priori and often fact-free assumptions common in economics. For Livnat & Pippenger (2006):

[C]onflict exists unless each agent does not benefit from a change in any other agent’s action, all else being equal.

To infer the utility functions of the agents in the society (sub-modules in the mind), they ask for the utility function that most parsimoniously (minimizing Kolmogorov complexity) describes each agent’s behavior. The authors observe that, for some carefully constructed graphs, the optimal computationally bounded decider for shortest paths is made up of independent sub-modules that are in conflict with each other.
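
To make the quoted definition concrete, here is a minimal Python sketch of the conflict check (a toy illustration with made-up agents and utilities, not the shortest-path construction from the paper): given each agent’s utility over joint action profiles, conflict exists as soon as some agent would benefit from a unilateral change in another agent’s action, all else being equal.

```python
def has_conflict(action_sets, utilities, profile):
    # Conflict in the sense of Livnat & Pippenger (2006): it exists unless each
    # agent does not benefit from a change in any *other* agent's action,
    # all else being equal.
    n = len(action_sets)
    for i in range(n):                      # agent whose utility we inspect
        baseline = utilities[i](profile)
        for j in range(n):                  # agent whose action we vary
            if i == j:
                continue
            for alt in action_sets[j]:
                deviated = list(profile)
                deviated[j] = alt
                if utilities[i](tuple(deviated)) > baseline:
                    return True             # i would gain if j acted differently
    return False

# Hypothetical example: two agents each pick 0 or 1; agent 0 wants the actions
# to match, agent 1 wants them to differ.
actions = [(0, 1), (0, 1)]
utilities = [lambda p: 1 if p[0] == p[1] else 0,
             lambda p: 1 if p[0] != p[1] else 0]
print(has_conflict(actions, utilities, (0, 1)))  # True: agent 0 gains if agent 1 switches
```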

This approach fits clearly inside Minsky’s (1988) Society of Mind approach to cognition, and side-steps some of its obvious evolutionary criticisms. In particular, by describing the mind via a universal model of computation like circuits, and by considering selection at the level of the whole animal instead of the individual agents of mind, Livnat & Pippenger (2006) are able to create a general theory that holds for any gene-brain-mind mapping. However, by talking about optimal animals, they are making the common assumption that fitness peaks are reachable in a reasonable amount of time (in fact, they do one worse: they assume the global peak is reachable), an assumption that regular readers of this blog know to be unreasonable even in static fitness landscapes.

I see no compelling reason to believe that, just because an unevolvable optimal animal has to make systematic mistakes and have conflicting agents of mind, an evolvable agent will also have these features (as opposed to ignoring most of its information bits and just having one horribly flawed agent of mind). Alternatively, a person who is comfortable with assuming that evolution reaches optima would question why an optimal agent would still have these specific restrictions on its complexity, especially when the environment is allowed to be of higher complexity. This is the part of the work I would most like to see improved: right now the results rely on the environment being fundamentally more complex than any achievable organism. This might seem necessary, but believing that P != NP allows us to have an environment that encodes problems that are easy to check (and thus assign rewards to) but hard to solve (and thus learn behaviors for).

Given their formulation, limitations, and current appeal to biologists, results like Livnat & Pippenger (2006, 2008) seem like prime candidates to study in Valiant’s (2009) model of evolvability. They are already formulated in terms of ideal functions, behavior as circuits, and fitness evaluation as performance on a distribution of environmental configurations (from this perspective, Livnat & Pippenger (2008) study only the uniform distribution). However, Valiant’s approach would let us take the next step in avoiding the self-serving bias: instead of imposing the computational constraints on agents a priori as limitations from an external environment, we could show that these limitations arise from the algorithmic properties of evolution itself.

References

Franks, N.R., Mallon, E.B., Bray, H.E., Hamilton, M.J., Mischler, T.C. (2003). Strategies for choosing between alternatives with different attributes: exemplified by house-hunting ants. Animal Behaviour, 65: 215-223.

Hirshleifer, J., & Riley, J.G. (1992). The analytics of uncertainty and information. Cambridge University Press, Cambridge.

Laland, K.N., & Brown, G.R. (2002). Sense and nonsense: Evolutionary perspectives on human behaviour. Oxford University Press, Oxford.

Livnat, A., & Pippenger, N. (2006). An optimal brain can be composed of conflicting agents. Proceedings of the National Academy of Sciences USA, 103(9): 3198-3202.

Livnat, A., & Pippenger, N. (2008). Systematic mistakes are likely in bounded optimal decision-making systems. Journal of Theoretical Biology, 250(3): 410-423.

Minsky, M. (1988). The Society of Mind. Simon and Schuster, New York.

Seeley, T.D., & Visscher, P.K. (2004). Group decision making in nest-site selection by honey bees. Apidologie, 35: 101-116.

Sozou, P. (1998). On hyperbolic discounting and uncertain hazard rates. Proceedings of the Royal Society B: Biological Sciences, 265(1409): 2015-2020.

Stephens, D.W., & Charnov, E.L. (1982). Optimal foraging: some simple stochastic models. Behavioral Ecology and Sociobiology, 10: 251-263.

Valiant, L.G. (2009). Evolvability. Journal of the ACM, 56(1): 3.

About Artem Kaznatcheev
From the Department of Computer Science at Oxford University and Department of Translational Hematology & Oncology Research at Cleveland Clinic, I marvel at the world through algorithmic lenses. My mind is drawn to evolutionary dynamics, theoretical computer science, mathematical oncology, computational learning theory, and philosophy of science. Previously I was at the Department of Integrated Mathematical Oncology at Moffitt Cancer Center, and the School of Computer Science and Department of Psychology at McGill University. In a past life, I worried about quantum queries at the Institute for Quantum Computing and Department of Combinatorics & Optimization at University of Waterloo and as a visitor to the Centre for Quantum Technologies at National University of Singapore. Meander with me on Google+ and Twitter.

12 Responses to Bounded rationality: systematic mistakes and conflicting agents of mind

  1. Great article. If mistakes are baked into the control system, what makes them tend toward self-serving? It may be that any system with a signal-based utility function is susceptible to wireheading, and that it’s a constant struggle to distinguish an objectively described external reality from a more favorable virtual reconstruction of it. For me, the fascinating question is: does there exist a control system that is immune to this self-corruption? It becomes especially difficult when considering systems that can self-change at the physical level (and hence change all of their logical functions).

    References:

    Orseau L., Ring M. – Self-Modification and Mortality in Artificial Agents, Artificial General Intelligence (AGI) 2011, Springer, 2011.

    Ring M., Orseau L. – Delusion, Survival, and Intelligent Agents, Artificial General Intelligence (AGI) 2011, Springer, 2011.

    • Thanks for the kind words!

      I don’t think that Livnat & Pippenger (2006) are suggesting that the agents of mind are actually optimizing utility functions, but rather that they act as if they are. Their approach to utility functions is not a priori but is an a posteriori inference of “if I am a scientist, then what is the most parsimonious utility function I would come up with if I wanted to personify these agents?” The agents themselves, however, are just sub-circuits.

      I am actually very interested in the question of the divide between external reality and a virtual reconstruction of it in game-theoretic settings, and have been working on a related project with some colleagues/fellow bloggers. This approach offers another resolution to the self-serving bias: maybe what we think is rational from a reductionist perspective is actually not a good idea once you take into account the externalities that we ignored during our reductions. This could be especially important when thinking about systemic risk. However, it is not as cleanly separable from self-serving approach two, and so I didn’t discuss it explicitly in the post. Maybe I should write a follow-up.

      Note that the circuit model Livnat & Pippenger consider is Turing complete (in fact, since they don’t place uniformity constraints, it is more powerful than Turing machines), and the environment is static. As such, it is not clear how self-change would make a difference. In general, I am very skeptical of the AGI literature, but I will take a look at the two papers you cite!
