From realism to interfaces and rationality in evolutionary games

As I was preparing some reading assignments, I realized that I don’t have a single resource available that covers the main ideas of the interface theory of perception, objective versus subjective rationality, and their relationship to evolutionary game theory. I wanted to correct this oversight and use it as an opportunity to comment on the philosophy of mind. In this post I will quickly introduce naive realism, critical realism, and the interface theory of perception, and sketch how we can use evolutionary game theory to study them. The interface theory of perception will also give me an opportunity to touch on the difference between subjective and objective rationality. Unfortunately, I am trying to keep this entry short, so we will only skim the surface; I invite you to click links aggressively and follow the referenced papers if something catches your attention — this annotated list of links might be of particular interest for further exploration.

So let’s start with naive realism: this is the stance that the world is exactly as we perceive it; we can’t be wrong. This doesn’t mean just that we can’t be wrong about understanding our sense-perceptions, but that these sense-perceptions also can’t be wrong about the external world. As you can guess from the ‘naive’ predicate, this isn’t a stance many hold seriously. Instead, the orthodoxy among vision scientists is critical realism (Brunswik, 1956; Marr, 1982; Palmer, 1999) — perception resembles reality, but doesn’t capture all of it. To borrow an image from Kevin Song: if naive realism is a perfect photograph then critical realism is a blurry photograph. Or, to use a metaphor more familiar to modelers instead of models: our perception is a map of the territory that is reality. Our perception, like the map, distorts, omits many details, adds some labels, and draws emphasis; but it largely preserves the main structure of reality.

Such critical realism is popular among more than just vision scientists; I would wager it is the belief of most non-philosophers. But why believe it? The easiest response is that it is self-evident. But it is also self-evident that the Earth is stationary when I’m sober, and that the Moon is bigger at the horizon than when it is higher in the sky; yet I don’t take either of those statements as truths about the world. A better response is that critical realism is useful: it has helped me and my ancestors avoid being eaten by tigers, alligators, and pythons. A variant of this is the stance that most cognitive scientists prefer. They give the evolutionary justification that an agent whose perception did not capture the statistical regularities of the environment would fare worse at survival than one whose perception did. Thus, evolution would push us towards more and more veridical representations of reality. Of course, this isn’t the only argument for critical realism — this position and slight variants have been held by many philosophers since the ancient Greeks without any knowledge of or reliance on evolution — but it is one of the more popular current scientific arguments.

Now, I am happy to suppose that evolution would select for perceptions that maximize fitness — although there are good arguments that can pull the rug out completely from under such adaptationist accounts — but why should we expect that true perceptions are particularly good at maximizing fitness? This question led Hoffman (1998; 2009) to develop the interface theory of perception. The name, and Hoffman’s favorite example, comes from computing machines. Consider your desktop screen: what are the folders? They don’t resemble or approximate the complicated sequence of changes in magnetization in a thin film of ferromagnetic material inside a metal box called your hard-drive, not even at a coarse-grained level. Nor do they show or even hint at the complicated information processing that changes those magnetic fields into the photons that leave your screen. If I had to interact with my computer at a level that accurately — or even partially — represented the underlying physical processes that carry out the information processing, then — even with my years of computer science and physics education — I would not be able to write this post. It is an interface that hides the complexity that is unnecessary for my aims.

In the case of evolution, the ‘aim’ is (roughly) maximizing fitness, and thus perception doesn’t need to be truthful, but has to provide an interface through which the agent can act to maximize its fitness. Mark et al. (2010) built an evolutionary game theory model to show some conditions under which the interface theory is more adaptive than truthful perception. They looked at a game where two players meet at random and are presented with three potential foraging spots. The first agent picks one of the three spots, and then the second agent picks one of the remaining two spots. The signal that the agents receive about the fitness effects of a spot does not vary linearly with the actual effect. Instead, the relationship is Gaussian: a patch sending a mid-level signal produces the highest increase in fitness, while patches with very low-level or high-level signals produce less of a fitness benefit for an agent that chooses them. As such, a critical realist strategy that properly reflects the structure of the signal cannot easily track fitness, while an interface strategy can concentrate on the high-fitness regions of the signal, perceiving them as distinct from the lower-fitness regions even if that breaks the linear order. This — plus a cost for faithfully perceiving the complicated signal instead of simply the fitness effect — leads agents with an interface theory to out-compete the naive and critical realists.
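
To make the logic concrete, here is a minimal sketch in Python of a single choice in this foraging game. It is not Mark et al.’s (2010) model: it drops the second chooser, the perceptual costs, and the evolutionary (replicator) dynamics, and the specific numbers (quantities uniform on [0, 100], a Gaussian fitness function peaked at 50, the category boundaries) are assumptions of mine for illustration. The only point it makes is that an agent whose few perceptual categories are tuned to fitness can out-forage an agent whose categories faithfully preserve the order of the underlying quantity.

```python
import math
import random

def fitness(quantity, peak=50.0, width=15.0):
    """Non-monotonic fitness: mid-level quantities are best (assumed Gaussian)."""
    return math.exp(-((quantity - peak) ** 2) / (2 * width ** 2))

def realist_choice(quantities):
    """Critical realist: perceives coarse-grained but *ordered* quantity categories
    and picks the spot in the highest category."""
    perceived = [round(q / 25) for q in quantities]  # categories 0..4, preserving order
    return max(range(len(quantities)), key=lambda i: perceived[i])

def interface_choice(quantities):
    """Interface agent: only perceives whether a spot falls in the 'good' fitness band."""
    perceived = [1 if 35 <= q <= 65 else 0 for q in quantities]  # order-breaking categories
    return max(range(len(quantities)), key=lambda i: perceived[i])

def mean_fitness(chooser, rounds=100_000, spots=3):
    """Average fitness of the chosen spot over many random presentations."""
    total = 0.0
    for _ in range(rounds):
        quantities = [random.uniform(0, 100) for _ in range(spots)]
        total += fitness(quantities[chooser(quantities)])
    return total / rounds

if __name__ == "__main__":
    print("critical realist mean fitness:", round(mean_fitness(realist_choice), 3))
    print("interface agent  mean fitness:", round(mean_fitness(interface_choice), 3))
```

In this toy version the realist reliably picks the most resource-rich spot, which the Gaussian fitness function punishes, while the interface agent’s fitness-tuned categories land it in the high-fitness band far more often; adding a cost for the more detailed truthful perception, as in Mark et al.’s model, only widens the gap.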

Once you have the objective effects of the game differing from the subjective experience on which agent decisions are based, you have to become much more careful about discussions of rationality. In particular, if an agent’s subjective perceptions of their actions are not in line with the objective effects of their actions, then the agent can act rationally on their subjective experience while appearing to act irrationally from the perspective of someone who only has access to the objective effects. In theory, this is nothing new to economists: markets are supposed to be a method for rational agents acting on different subjective utilities to come to a mutual understanding and exchange. In practice, the frequent reliance on ‘representative agents’ in modeling throws away this agent heterogeneity, especially since an average of many rational agents with differing utility functions cannot itself be modeled as a rational agent. In the context of evolutionary game theory, this distinction fits into a recent trend of more carefully modeling the genotype-phenotype-behavior map (usually in EGT it is simply assumed that the genotype is the phenotype and behavior — the identity map) and, through it, psychological mechanisms (McNamara, 2013).

By worrying about the distinction between subjective and objective rationality, Marcel, Tom, and I have shown an even more drastic example of the interface theory of perception (Kaznatcheev et al., 2014). We had agents interact in pair-wise prisoner’s dilemma games in a structured population, and let their subjective perceptions of the payoffs evolve. In this context, a naive realist would be an agent that evolves subjective perceptions in exact numeric agreement with the objective effects of the game on fitness. A critical realist would be an agent that evolves a subjective perception of the same type of game, but maybe with slightly different numbers. An interface theorist would evolve a subjective representation that is a game with a completely different payoff structure, one that leads to a different kind of behavior. In highly competitive environments our agents arrive at critical realism, and in friendlier (but still competitive) environments they evolve an interface. This interface allows them to cooperate and thus maximize overall well-being.
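
To see how this separation between subjective perception and objective payoff plays out, here is a minimal Python sketch in the spirit of that model. The payoff numbers and the particular subjective game are illustrative assumptions of mine (in the paper the subjective payoff matrices evolve in a structured population rather than being fixed by hand), but the sketch shows how an agent that rationally best-responds to a misperceived game can end up cooperating, and doing better by the objective payoffs, than one that perceives the prisoner’s dilemma veridically.

```python
C, D = 0, 1  # cooperate, defect

# Objective prisoner's dilemma: payoff to the row player for (row move, column move).
# These numbers are standard illustrative values, not the paper's parameters.
OBJECTIVE_PD = {(C, C): 3, (C, D): 0, (D, C): 5, (D, D): 1}

# A hypothetical evolved interface: the agent *perceives* a game in which
# cooperation strictly dominates (a Harmony-like game), even though the
# objective effects on fitness are still given by OBJECTIVE_PD.
SUBJECTIVE_INTERFACE = {(C, C): 5, (C, D): 2, (D, C): 3, (D, D): 1}

def subjectively_rational_move(game):
    """Pick the move that maximizes the *perceived* payoff. In both games above one
    move strictly dominates, so the opponent's move does not matter."""
    if all(game[(C, opp)] >= game[(D, opp)] for opp in (C, D)):
        return C
    return D

def objective_outcome(subjective_row, subjective_col):
    """Each agent acts on its own subjective game; fitness comes from the objective one."""
    row_move = subjectively_rational_move(subjective_row)
    col_move = subjectively_rational_move(subjective_col)
    return OBJECTIVE_PD[(row_move, col_move)], OBJECTIVE_PD[(col_move, row_move)]

if __name__ == "__main__":
    # Critical realists perceive the objective PD and rationally defect: payoffs (1, 1).
    print("two critical realists:", objective_outcome(OBJECTIVE_PD, OBJECTIVE_PD))
    # Interface agents rationally cooperate on their misperceived game and, objectively,
    # both do better: payoffs (3, 3).
    print("two interface agents: ", objective_outcome(SUBJECTIVE_INTERFACE, SUBJECTIVE_INTERFACE))
```

From the outside, the interface agents look objectively irrational (each leaves payoff on the table against a cooperator), yet each is acting perfectly rationally on its subjective experience; this is the sense in which the evolved misrepresentation serves the pair, and the population, rather than the individual’s short-term interest.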

The drastic difference in our work is not just that we don’t tax a cognitive cost for true perceptions — in fact, we could tax a small cognitive cost for building an interface and still arrive at the same qualitative results — but that our interfaces don’t maximize individual fitness. Instead, the interfaces that agents evolve in the friendlier environments allow them to maximize inclusive fitness — thus, I prefer to call these representations social interfaces. They allow the society of agents to interface with the world in such a way that misrepresenting reality maximizes the coherence and the well-being of the society, and not just the short-term interests of the individual.

Consider religion as an example of such a social interface. People often see a tension between praising religion for promoting cooperative and moral behavior and criticizing it for delusional beliefs. Our model resolves this tension by showing how a delusional interface can arise in order to facilitate cooperation among subjectively rational agents. Our simplistic model can apply to both moralizing and non-moralizing gods and be used to describe some of the earliest social interfaces with reality present in ancient cultures. Of course, these are tentative connections to the cognitive science of religion, but I hope they can serve as first steps toward further exploration by co-opting evolutionary game theory as a modeling tool.

References

Brunswik, E. (1956). Perception and the representative design of psychological experiments. University of California Press, Berkeley.

Hoffman, D.D. (1998). Visual intelligence: How we create what we see. W.W. Norton, New York.

Hoffman, D.D. (2009). The interface theory of perception. In: Dickinson, S., Tarr, M., Leonardis, A., & Schiele, B. (Eds.), Object categorization: Computer and human vision perspectives. Cambridge University Press, Cambridge.

Kaznatcheev, A., Montrey, M., & Shultz, T.R. (2014). Evolving useful delusions: Subjectively rational selfishness leads to objectively irrational cooperation. Proceedings of the 36th annual conference of the cognitive science society. arXiv: 1405.0041v1

Mark, J.T., Marion, B.B., & Hoffman, D.D. (2010). Natural selection and veridical perceptions. Journal of Theoretical Biology, 266(4): 504-515.

Marr, D. (1982). Vision: A computational investigation into the human representation and processing of visual information. Henry Holt and Co. Inc., New York, NY.

McNamara, J.M. (2013). Towards a richer evolutionary game theory. Journal of the Royal Society, Interface, 10(88).

Palmer, S. E. (1999). Vision science: Photons to phenomenology. The MIT Press.

About Artem Kaznatcheev
From the Department of Computer Science at Oxford University and Department of Translational Hematology & Oncology Research at Cleveland Clinic, I marvel at the world through algorithmic lenses. My mind is drawn to evolutionary dynamics, theoretical computer science, mathematical oncology, computational learning theory, and philosophy of science. Previously I was at the Department of Integrated Mathematical Oncology at Moffitt Cancer Center, and the School of Computer Science and Department of Psychology at McGill University. In a past life, I worried about quantum queries at the Institute for Quantum Computing and Department of Combinatorics & Optimization at University of Waterloo and as a visitor to the Centre for Quantum Technologies at National University of Singapore. Meander with me on Google+ and Twitter.

13 Responses to From realism to interfaces and rationality in evolutionary games

  1. “Consider, for example, religion as an example of such a social interface. People often see a tension between the praise of religion for promoting cooperative and moral behavior and the criticism for delusional beliefs. Our model resolves this tension by showing how a delusional interface can arise in order to facilitate cooperation among subjectively rational agents.”
    Can you use the model to analyze the idea that men don’t want there to be a God to whom they are logically accountable?

    • Our model is specifically meant to be abstract, in order to give intuition, and thus it can be applied to thinking about both moralizing and non-moralizing God(s) or folk beliefs. There are certainly models that do look at the details of the moralizing God that you describe. If you are genuinely interested in this, and not just in polemic, then I would recommend these two papers as starting points:

      [1] Roes, F.L., & Raymond, M. (2003). Belief in moralizing gods. Evolution and Human Behavior, 24(2): 126-135.

      [2] Johnson, D. & Bering, J. (2006). Hand of god, mind of man: Punishment and cognition in the evolution of cooperation. Evolutionary Psychology, 4.

      If you want to read in a little bit more detail on how our approach is different from the one taken in those two papers, then see the last three paragraphs of Useful delusions, interface theory of perception, and religion.

      • Thanks.
        Is it only a sophisticated arrangement of subjective reasoning that only claims to be objective?
        Science has followed what can be explained to its limit in DNA and the first 3 seconds of the origin of the universe. And now Hawking, and many others, have compromised the integrity of scientific reasoning to deny the obvious miraculous conditions that exist.

      • The denial of accountability to a Creator is a strong and convenient force. Even stronger than the premise you are claiming.

  2. This touches upon something I have been thinking about recently after reading that New Yorker article about social sciences and Republicans. I believe that the notion of bias that is so widely used is in need of proper critique. Your interface metaphor, among other things, allows for such a critique. More often than not a cognitive bias should be seen as a useful filter, and not as a misrepresenting mapping from reality onto our experience (that’s not to say that it is never a problem – there’s always a threshold). I don’t think though that philosophically speaking your view challenges critical realism. To my eyes, it is still a form of critical realism, although allowing for more map variation.

    I have not yet read Whitehead’s “Process and Reality”, and my own notion of dynamic (or process, or flow) ontologies stems from studies of Buddhist thought. I think that all definitions are not absolute but operational, and the criterion of efficiency or productivity of thought in relation to a certain aim, be it evolutionarily predetermined or consciously set, is more overarching than that of truthfulness. There are processes that persistently present themselves in our perception as independent from our views of them (even when we poke them with a needle of scientific scrutiny). Such processes constitute reality that is nevertheless continuously redefined, and even our most profound notions are subject to change. There is some truth to the premise of naive realism – that we can’t be wrong. I would rephrase it though and say that “it is impossible to perceive something that is not real”. Perceptions, or even products of creative imagination, are part of reality with a certain status. The issue here that should concern scientists is how to effectively delineate the possible statuses and describe the dynamic relations between them properly.

    Also, I think a very natural and intuitive interface metaphor other than the computer interface is language, especially because we are used to evaluating statements of language as true or false.

    Lastly, religion and other similar social phenomena have one more level, besides the personal and the social, which can be studied with evolutionary game theory – that of inter-group relations. Inter-group cooperation can be fostered or inhibited by cooperation patterns inside groups and by group composition across various factors.

    • I don’t think though that philosophically speaking your view challenges critical realism. To my eyes, it is still a form of critical realism, although allowing for more map variation.

      What would be a challenge to critical realism in your eyes? The process philosophy you touch on below? I am sure milder challenges are possible. I will elaborate in a philosophical follow-up post; I am not sure why the comments here have turned so philosophical, actually.

      I have not yet read Whiteheads “Process and Reality”, and my own notion of dynamic (or process, or flow) ontologies stems from studies of Buddhist thought.

      I’ve recently read Whitehead’s Modes of Thought and I don’t think I really “got it”. I would need to consult my notes again, but I definitely wasn’t able to extract a ‘spark notes’ summary, or a good feeling of how it fits in with other philosophies that I am more familiar with. The replacement of state by process is of course very friendly to a cstheorist (looking at their duality would be even more so), but that didn’t feel like the main point. I will have to immerse myself more in this when I can.

      There is some truth to the premise of naive realism – that we can’t be wrong. I would rephrase it though and say that “it is impossible to perceive something that is not real”.

      Epicurus would agree, and since his is my favorite Hellenistic school, the original draft of this post commented on this connection. Unfortunately, I decided to cut all direct references to past philosophy from this post (since it is intended as reading for cognitive science students that might not want to go into the details of history of philosophy). Again, I am surprised that it came up so much in the comments. Stay tuned for the next post!

      Also I think a very natural and intuitive interface metaphor other than the computer interface is language.

      So, for Lakoff & Johnson, I would agree that language is an interface; for the logical atomists, however, the view seems to be one of critical realism. Again, something I will touch on more. Hopefully with more coherent thoughts.

      Lastly, effects on religion and other similar social phenomena have one more level besides personal and social which can be studied with evolutionary game theory – that of inter-group relations.

      Definitely, and since we study ethnocentrism pretty extensively, this is a direction we’ve thought about a bit. I comment on it in the last three paragraphs of a past post focused on our paper; if you have further comments then we should discuss them there! Tom has also blogged about it briefly here.

      • What would be a challenge to critical realism in your eyes?

        I don’t think that critical realism can be challenged, only better articulated, and both the interface theory of perception and process philosophy contribute to this. My thinking is that the duality captured in critical realism reflects the representational nature of information and therefore can be neither bypassed nor extended.

        Stay tuned for the next post!

        Sure thing!

        in the last three paragraphs of a past post focused on our paper, if you have further comments then we should discuss them there!

        I’ll check it out!

  3. mike james says:

    There is a flaw in this reasoning.
    You assume a concrete reality.
    In the case of the computer you assume that the electrons, magnetic fields, etc. ARE the reality and the interface is a construct that is not reality.
    You have no way of knowing that the physics is “real” and the “interface” is not.

    The point is that there are no “real” perceptions; everything is a construct that is more or less effective.

    In the computer example the high-level “interface” is a much more effective reality than the deep physics, just as the quoted deep physics is more effective than an even deeper one that references quantum field theory.

    A perception of reality cannot be judged by how “real” it is, only by how effective it is.
    And, of course, you are free to define “effective” as “fitness”.
    mikej
    PS I’m a physicist :-)
    PPS I really enjoyed reading this post.

    • There is no ontological reasoning here, although there will be some in a follow-up post. If you do not like the use of the term ‘objective reality’ — which I also cringe at most of the time — then you can replace it by ‘reductive experimentally measured reality’ versus the subjective experience. If you want to get philosophical — which I always do — then you can call one the scientific image and the other the manifest image.

      Judging ontology, or truth more generally, based on its utility is something I take a keen interest in at times. The most developed accounts of this would be the philosophical pragmatism of Dewey or Peirce. I am sure that my misrepresentations of Peirce in this comment will be corrected by Jon Awbrey if he spots this discussion.

      These are points that I will touch on in follow-up posts, but I really didn’t mean to get too philosophical here. I really wanted to stick to how our typical sense-organ perceptions match up to the world as measured by other senses in a more general view (such as our intellect or technological senses).

      I am glad that you enjoyed this post, and I hope that you continue commenting on TheEGG!

      • mike james says:

        Sorry to be so slow in replying.
        My difficulty with the argument is lower down the food chain than philosophy.
        I cannot see how you can arrive at an operational definition of “interface” without having a reference “true reality” to judge it against. At the operational level you have no single model of reality that can serve to identify when another model is an interface to that reality.
        So one person’s interface is another’s reality and vice versa.
        The understanding of a computer in terms of icons and mice is just as good a “scientific image” as any and not distinct in type from accounts of gates, or electrons, or quarks or…
        I cannot see how you can run an experiment that compares an interface with another distinctly different type of model. Rationality is an expression/measure of effectiveness.

  4. By the way, as a follow-up to our previous discussion with Artem, what Mike says is also one (but not the only one) of the problems of the Kantian system. There is no way of postulating a Ding an sich or anything, in Artem’s words, fundamentally “ineffable”, including the transcendental.

  5. Pingback: Cataloging a year of blogging: the philosophical turn | Theory, Evolution, and Games Group

  6. Pingback: Colour, psychophysics, and the scientific vs. manifest image of reality | Theory, Evolution, and Games Group
