Interface theory of perception can overcome the rationality fetish
January 28, 2014
I might be preaching to the choir, but I think the web is transformative for science. In particular, I think blogging is a great form of pre-pre-publication (and what I use this blog for), and Q&A sites like MathOverflow and the cstheory StackExchange are an awesome alternative architecture for scientific dialogue and knowledge sharing. This is why I am heavily involved with these media, and why a couple of weeks ago I nominated myself to be a cstheory moderator. Earlier today the election ended, and Lev Reyzin and I were announced as the two new moderators alongside Suresh Venkatasubramanian, who is staying on for continuity and to teach us the ropes. I am extremely excited to work alongside Suresh and Lev, and to do my part to continue developing the great community that we have nurtured over the last three and a half years.
However, I do expect to face some challenges. The only critique raised against our outgoing moderators was that an argumentative attitude acceptable in a normal user can be unfitting for a mod. I definitely have an argumentative attitude, so I will have to be extra careful to be on my best behavior.
Thankfully, being a moderator on cstheory does not change my status elsewhere on the network, so I can continue to be a normal argumentative member of the Cognitive Sciences StackExchange. That site is already home to one of my most heated debates against the rationality fetish. In particular, I was arguing against the statement that “a perfect Bayesian reasoner [is] a fixed point of Darwinian evolution”. This statement can be decomposed into two key assumptions: (1) a perfect Bayesian reasoner makes the most veridical decisions given its knowledge, and (2) veridicality has greater utility for an agent and will be selected for by natural selection. If we accept both premises, then a perfect Bayesian reasoner is a fitness peak. Of course, as we learned before: even if something is a fitness peak, that doesn’t mean evolution can ever find it.
We can also challenge both of the assumptions (Feldman, 2013); the first on philosophical grounds, and the second on scientific ones. I want to concentrate on debunking the second assumption because it relates closely to our exploration of objective versus subjective rationality. To make the discussion more precise, I’ll approach the question from the point of view of perception, a perspective I discovered thanks to TheEGG blog; in particular, the comments of recent reader Zach M.
Assuming that the mind approximates the structure of the world, and describing this process through Bayesian models, is now a mainstay of theories of cognition and perception (Chater et al., 2006; Chater & Oaksford, 2008). The precise evolutionary justification for these sorts of models (that they are successful because they have a direct connection to the statistics of the natural world) dates back more than half a century (Brunswik, 1956), with some roots tracing back to Aristotle. The underlying philosophical stance is critical realism: perception resembles reality, although it doesn’t capture all of it. This is the orthodox view among vision scientists (Brunswik, 1956; Marr, 1982; Yuille & Bulthoff, 1996; Palmer, 1999). Even Donald D. Hoffman, our hero for this post, started in this orthodoxy over thirty years ago, endorsing it implicitly as a recent PhD (Hoffman, 1983; emphasis mine):
First, why does the visual system need to organize and interpret the images formed on the retina? Second, how does it remain true to the real world in the process? Third, what rules of inference does it follow?
But toward the turn of the millennium, his argumentative attitude led him to start questioning some of these implicit assumptions and to develop the interface theory of perception (Hoffman, 1998; 2009). The name, and Hoffman’s favorite example, comes from computing machines: consider your desktop screen; what are the folders? They don’t resemble or approximate the complicated sequence of changes in magnetization in a thin film of ferromagnetic material inside a metal box called your hard drive, not even at a coarse-grained level. Nor do they show or even hint at the complicated information processing that turns those magnetic fields into the photons leaving your screen. The desktop is an interface that hides the complexity that is unnecessary for your aims. In the case of evolution, the ‘aim’ is (roughly) maximizing fitness, and thus perception doesn’t need to be truthful; it only has to provide an interface through which the agent can act to maximize its fitness. Hoffman explains this well in a segment of the following talk (unfortunately, I think that everything after the 13m28s mark is content-less speculation and I don’t recommend watching past that point):
The evolutionary game theory that Hoffman mentions was done by Mark, Marion, & Hoffman (2010) to show some conditions under which the interface theory is more adaptive than truthful perception. They did this by looking at a game where two players meet at random and are presented with three potential foraging spots. The first agent picks one of the three spots, and then the second agent picks one of the remaining two. The two basic realist strategies the authors consider are:
- A naive realist strategy called truth that tells the agent exactly how many resources (from 0 to 100) are at each spot, after which the agent rationally selects the available spot with the most resources; or
- a critical realist strategy called simple that has a fixed boundary: if a spot’s resources are above the boundary value, the agent sees the spot as ‘high’, otherwise as ‘low’. The agent then acts rationally on this coarse-grained information: if the spots are all ‘low’ or all ‘high’, the agent chooses at random; if exactly one spot is ‘high’, the agent chooses it; and if two spots are ‘high’ and one is ‘low’, it chooses one of the two ‘high’ spots at random.
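These two strategies are simple enough to pin down in code. Here is a minimal Python sketch of one truth-versus-simple encounter, under my own simplifying assumptions (resources drawn uniformly from 0 to 100, and a flat perception cost for truth instead of the paper’s per-bit accounting):

```python
import random

def truth_choice(resources, available):
    # Naive realist: perceives exact resource values, picks the best open spot.
    return max(available, key=lambda i: resources[i])

def simple_choice(resources, available, boundary):
    # Critical realist: each spot reads only as 'high' (above boundary) or 'low'.
    highs = [i for i in available if resources[i] > boundary]
    # Some spots 'high': pick among them at random (if all are 'high', this is
    # the same as choosing at random); all 'low': choose at random.
    return random.choice(highs if highs else available)

def encounter(boundary, cost):
    # Simple perceives less, so it acts first; truth acts second and pays an
    # extra perception cost. Payoff is the resources at the chosen spot.
    resources = [random.uniform(0, 100) for _ in range(3)]
    available = list(range(3))
    first = simple_choice(resources, available, boundary)
    available.remove(first)
    second = truth_choice(resources, available)
    return resources[first], resources[second] - cost
```

Averaged over many encounters with the boundary at 50 and zero cost, simple collects roughly 69 resources per round (it almost always finds a ‘high’ spot), which hints at why truth needs a badly placed boundary to survive.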
Unfortunately, to get any interesting dynamics out of this model, the authors have to introduce cognitive costs. In particular, since the truth strategy requires more detailed perception, it is always second to act (unless two truths are competing against each other, in which case the order is random) and also sustains a small fitness penalty for the larger brain this perception requires. The authors solve the replicator dynamics for this model and notice that truth avoids extinction only if the simple strategy’s boundary is badly placed (either below ~33 or above ~77) and the cost per extra bit of information is low; in some of these cases truth drives simple to extinction, and in others they coexist. However, note that even when the cost per bit of information is zero, truth still pays the penalty of acting second, so the results are not surprising.
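The replicator dynamics themselves are easy to sketch. Below is a minimal discrete-time (Euler) version in Python; the 2×2 payoff matrix is made up for illustration, with numbers chosen so that simple strictly dominates truth, as it does when truth’s costs are high and the boundary is well placed:

```python
import numpy as np

def replicator_step(x, A, dt=0.01):
    # One Euler step of the replicator dynamics: dx_i/dt = x_i * (f_i - mean).
    f = A @ x        # expected payoff of each strategy against the population
    phi = x @ f      # population mean fitness
    x = x + dt * x * (f - phi)
    return x / x.sum()  # renormalize to keep x a frequency vector

# Hypothetical payoffs: rows/columns are [truth, simple]. Truth's perception
# cost and second-mover penalty leave it strictly worse here.
A = np.array([[60.0, 55.0],
              [62.0, 68.0]])

x = np.array([0.5, 0.5])
for _ in range(10_000):
    x = replicator_step(x, A)
# x converges to the all-simple population (0, 1).
```

With a payoff matrix where truth survives in some regions, the same loop instead settles on a coexistence mixture or an all-truth population, matching the qualitative picture the authors report.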
To introduce an interface strategy, the authors consider a setting with three signals; let’s call them ‘low’, ‘med’, and ‘high’. In the figure below, they are shown by the colors red, yellow, and green. The authors also drop the assumption that more resources necessarily yield a greater increase in fitness. In particular, they assume that the increase in fitness is a Gaussian function (given by the black curves below) of the amount of resources, with a maximum at 50. In other words, going to a spot with 50 resources gives you the biggest fitness boost, while a spot with 30 or a spot with 70 gives you less.
- For a critical realist strategy, the signals reflect reality, so every spot that maps to ‘low’ has fewer resources than every spot that maps to ‘med’, which has fewer resources than every spot that maps to ‘high’. This is shown in the top graph of the figure at right. Preferences are set by the area under the curve in each signal’s region, hence a critical realist agent prefers ‘med’ over ‘high’ and ‘high’ over ‘low’. Unsurprisingly, this does not maximize fitness among three-signal strategies.
- The interface strategy maps ‘high’ to the chunk of the resource distribution with the highest payoff, ‘med’ to the middle payoff band, and ‘low’ to the lowest, as shown in the bottom graph of the figure. Just like the critical realist, the interface agent acts rationally on its signals, preferring ‘high’ over ‘med’ over ‘low’.
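To see the gap concretely, here is a small Monte Carlo sketch in Python. The Gaussian width, the exact signal boundaries, and the one-agent-three-spots setup are my own illustrative choices rather than the paper’s parameters; the point is only that payoff-aligned signals collect more fitness than resource-aligned ones:

```python
import math
import random

def payoff(r, mu=50.0, sigma=15.0):
    # Gaussian fitness curve: a spot with 50 resources pays the most.
    return math.exp(-(r - mu) ** 2 / (2 * sigma ** 2))

def realist_signal(r):
    # Critical realist: signals ordered by resource amount (thirds of 0-100).
    return 'low' if r < 100 / 3 else ('med' if r < 200 / 3 else 'high')

def interface_signal(r):
    # Interface: signals ordered by payoff; 'high' is the band around 50.
    d = abs(r - 50)
    return 'high' if d <= 100 / 6 else ('med' if d <= 100 / 3 else 'low')

# Both agents act rationally on their own signals: the realist prefers
# 'med' > 'high' > 'low', the interface agent 'high' > 'med' > 'low'.
REALIST_PREF = {'med': 0, 'high': 1, 'low': 2}
INTERFACE_PREF = {'high': 0, 'med': 1, 'low': 2}

def choose(spots, signal, pref):
    # Pick (at random) among the spots carrying the most-preferred signal.
    best = min(pref[signal(r)] for r in spots)
    return random.choice([r for r in spots if pref[signal(r)] == best])

random.seed(1)
trials = 20_000
realist_total = interface_total = 0.0
for _ in range(trials):
    spots = [random.uniform(0, 100) for _ in range(3)]
    realist_total += payoff(choose(spots, realist_signal, REALIST_PREF))
    interface_total += payoff(choose(spots, interface_signal, INTERFACE_PREF))

realist_mean = realist_total / trials
interface_mean = interface_total / trials
```

Both agents pick from the same middle band when it is on offer; the interface agent pulls ahead in the rounds where no spot falls near 50, because its fallback signals still rank the remaining spots by payoff rather than by raw resource amount.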
Of course, interface has the higher expected fitness, so it is not surprising that it always outcompetes the critical realist strategy, and also beats the naive realist strategy depending on how much the extra truthful perception costs.
This shows that fitness matters more than ‘truth’ to the agent: if perception is expensive, the agent will tune its perceptual coarse-graining to reflect the fitness distribution (something that depends on how the agent can interact with the environment, not just on the external environment) rather than the amount of resources (a property of the external environment alone). Although this is interesting, it is not surprising. In particular, the agent still acts rationally, myopically, and selfishly (although there is no social dilemma here) with respect to objective fitness. The interface theory overturns the rationality-fetish belief that our perception is always tuned to accurately reflect the external world. Instead, Mark, Marion, and Hoffman (2010) show that it is tuned to accurately reflect the interaction between the agent and the external world.
Marcel, Thomas, and I have extended beyond this to show that sometimes the tuning reflects not just the agent’s interaction with the world but society’s. Even without a penalty, in certain settings agents will evolve misrepresentations of the world that feed them incorrect fitness information. What’s even more mind-blowing is that these incorrect assessments of objective fitness actually help agents overcome their selfish tendencies and promote the social good. This happens despite the fact that the agents act completely rationally on what Hoffman would call their perceptions and what I call their subjective experience.
It is nice to know that we’ve been unknowingly extending an existing theory. Without Zach and this blog, I would have probably never learned of Hoffman’s work. In other words, today is a day of happiness at the usefulness of online communities from cstheory to TheEGG!
Brunswik, E. (1956). Perception and the representative design of psychological experiments. University of California Press, Berkeley.
Chater, N., Tenenbaum, J. B., & Yuille, A. (2006). Probabilistic models of cognition: Conceptual foundations. Trends in Cognitive Sciences, 10(7): 287-291.
Chater, N., & Oaksford, M. (Eds.). (2008). The probabilistic mind: Prospects for Bayesian cognitive science. Oxford University Press.
Feldman, J. (2013). Tuning your priors to the world. Topics in Cognitive Science, 5(1), 13-34.
Hoffman, D.D. (1983). The interpretation of visual illusions. Scientific American, 249: 154-162.
Hoffman, D.D. (1998). Visual intelligence: How we create what we see. W.W. Norton, New York.
Hoffman, D.D. (2009). The interface theory of perception. In: Dickinson, S., Tarr, M., Leonardis, A., & Schiele, B. (Eds.), Object categorization: Computer and human vision perspectives. Cambridge University Press, Cambridge.
Mark, J.T., Marion, B.B., & Hoffman, D.D. (2010). Natural selection and veridical perceptions. Journal of Theoretical Biology, 266(4): 504-515.
Marr, D. (1982). Vision: A computational investigation into the human representation and processing of visual information, Henry Holt and Co. Inc., New York, NY.
Palmer, S. E. (1999). Vision science: Photons to phenomenology. The MIT press.
Yuille, A., & Bulthoff, H. (1996). Bayesian decision theory and psychophysics. In Knill, D. C., & Richards, W. (Eds.). Perception as Bayesian inference. Cambridge University Press.