Interface theory of perception can overcome the rationality fetish

I might be preaching to the choir, but I think the web is transformative for science. In particular, I think blogging is a great form of pre-pre-publication (and what I use this blog for), and Q&A sites like MathOverflow and the cstheory StackExchange are an awesome alternative architecture for scientific dialogue and knowledge sharing. This is why I am heavily involved with these media, and why a couple of weeks ago, I nominated myself to be a cstheory moderator. Earlier today, the election ended and Lev Reyzin and I were announced as the two new moderators alongside Suresh Venkatasubramanian, who is staying on for continuity and to teach us the ropes. I am extremely excited to work alongside Suresh and Lev, and to do my part to continue developing the great community that we have nurtured over the last three and a half years.

[Figure: cube]

However, I do expect to face some challenges. The only critique raised against our outgoing moderators was that an argumentative attitude that is acceptable for a normal user can be unfitting for a mod. I definitely have an argumentative attitude, and so I will have to be extra careful to be on my best behavior.

Thankfully, being a moderator on cstheory does not change my status elsewhere on the website, so I can continue to be a normal argumentative member of the Cognitive Sciences StackExchange. That site is already home to one of my most heated debates against the rationality fetish. In particular, I was arguing against the statement that “a perfect Bayesian reasoner [is] a fixed point of Darwinian evolution”. This statement can be decomposed into two key assumptions: (1) a perfect Bayesian reasoner makes the most veridical decisions given its knowledge, and (2) veridicality has greater utility for an agent and will be selected for by natural selection. If we accept both premises then a perfect Bayesian reasoner is a fitness peak. Of course, as we learned before: even if something is a fitness peak, that doesn’t mean we can ever find it.

We can also challenge both of the assumptions (Feldman, 2013); the first on philosophical grounds, and the second on scientific ones. I want to concentrate on debunking the second assumption because it relates closely to our exploration of objective versus subjective rationality. To make the discussion more precise, I’ll approach the question from the point of view of perception — a perspective I discovered thanks to TheEGG blog; in particular, the comments of recent reader Zach M.

Assuming that the mind approximates the structure of the world, and describing this process through Bayesian models, is now a mainstay of theories of cognition and perception (Chater et al., 2006; Chater & Oaksford, 2008). The precise evolutionary justification for these sorts of models — they’re successful because they have a direct connection to the statistics of the natural world — dates back more than half a century (Brunswik, 1956), with some roots tracing back to Aristotle. The underlying philosophical stance is critical realism — perception resembles reality, although it doesn’t capture all of it. This is the orthodox view for vision scientists (Brunswik, 1956; Marr, 1982; Yuille & Bulthoff, 1996; Palmer, 1999). Even Donald D. Hoffman — our hero for this post — started in this orthodoxy over thirty years ago, endorsing it implicitly as a questioning recent PhD (Hoffman, 1983; emphasis mine):

First, why does the visual system need to organize and interpret the images formed on the retina? Second, how does it remain true to the real world in the process? Third, what rules of inference does it follow?

But toward the turn of the millennium, his argumentative attitude led him to start questioning some of these implicit assumptions and to develop the interface theory of perception (Hoffman, 1998; 2009). The name, and Hoffman’s favorite example, comes from computing machines: consider your desktop screen: what are the folders? They don’t resemble or approximate the complicated sequence of changes in magnetization in a thin film of ferromagnetic material inside a metal box called your hard-drive, not even at a coarse-grained level. Nor do they show or even hint at the complicated information processing that changes those magnetic fields into the photons that leave your screen. The desktop is an interface that hides the complexity that is unnecessary for your aims. In the case of evolution, the ‘aim’ is (roughly) maximizing fitness, and thus perception doesn’t need to be truthful, but has to provide an interface through which the agent can act to maximize its fitness. Hoffman explains this well in a segment of the following talk (unfortunately, I think that everything after the 13m28s mark is content-less speculation and I don’t recommend watching past that point):

The evolutionary game theory that Hoffman mentions was done by Mark, Marion, & Hoffman (2010) to show some conditions under which the interface theory is more adaptive than truthful perception. They did this by looking at a game where two players meet at random and are presented with three potential foraging spots. The first agent picks one of the three spots, and then the second agent picks one of the remaining two spots. The two basic realist strategies the authors consider are:

  • A naive realist strategy called truth that tells the agent exactly how many resources (from 0 to 100) are at each spot, and then the agent rationally selects the spot with the most resources of those available, or
  • a critical realist strategy called simple that has a fixed boundary, such that if the resources are above that boundary value it sees the spot as ‘high’ otherwise as ‘low’. The agent then acts rationally on this coarse-grained information: if spots are all ‘low’ or all ‘high’ then the agent chooses at random, if one spot is ‘high’ then the agent chooses that spot, and if two spots are ‘high’ but one is ‘low’ then it chooses one of the two ‘high’s at random.

Unfortunately, to get any interesting dynamics out of this model, the authors have to introduce cognitive costs. In particular, since the truth strategy requires more detailed perception, it is always second to act (unless two truths are competing against each other, in which case the order is random) and also sustains a small fitness penalty for the larger brain required for this perception. The authors solve the replicator dynamics for this model, and find that truth avoids extinction only if the simple strategy’s boundary is badly placed (either below ~33 or above ~77) and the cost per extra bit of information is low; in some of these cases truth drives simple to extinction, and in others they co-exist. However, note that even when the cost per bit of information is zero, truth still pays the penalty of acting second, so the results are not surprising.
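For concreteness, here is a minimal Monte Carlo sketch of this pairing. This is my own reconstruction from the description above, not the authors' code, and it leaves out the per-bit cognitive cost and the replicator dynamics:

```python
import random

def simple_pick(spots, boundary):
    """Critical realist: coarse-grain each spot to 'high'/'low', then act on that."""
    highs = [i for i, r in enumerate(spots) if r > boundary]
    if 0 < len(highs) < len(spots):
        return random.choice(highs)      # one or two 'high's: pick among them
    return random.randrange(len(spots))  # all 'low' or all 'high': pick at random

def truth_pick(spots, taken):
    """Naive realist: sees exact resource amounts, takes the best remaining spot."""
    remaining = [i for i in range(len(spots)) if i != taken]
    return max(remaining, key=lambda i: spots[i])

def expected_payoffs(boundary, trials=50_000):
    """simple acts first (cheaper perception), truth second; mean resources gathered."""
    s_total = t_total = 0.0
    for _ in range(trials):
        spots = [random.uniform(0, 100) for _ in range(3)]
        s = simple_pick(spots, boundary)
        t = truth_pick(spots, s)
        s_total += spots[s]
        t_total += spots[t]
    return s_total / trials, t_total / trials

# simple does markedly better with a boundary near 50 than with one near 90,
# which is roughly where the model gives truth its opening.
for b in (50, 90):
    s_pay, t_pay = expected_payoffs(b)
    print(f"boundary={b}: simple ≈ {s_pay:.1f}, truth ≈ {t_pay:.1f}")
```

Even this toy version shows why the boundary placement matters: a well-placed boundary lets the cheap strategy capture much of the value that exact perception would, leaving truth little to exploit.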

To introduce an interface strategy, the authors consider a setting with three signals; let’s call them ‘low’, ‘med’, and ‘high’. In the figure below, they are shown by the colors red, yellow, and green. The authors also introduce the assumption that more resources do not necessarily translate into a greater increase in fitness. In particular, they assume that the increase in fitness is a Gaussian function (given by the black curves below) of the amount of resource, with a maximum at 50. In other words, going to a spot with 50 resources gives you the greatest fitness gain, while a spot with 30 or a spot with 70 gives you less.

[Figure: the Gaussian fitness curve over resource amounts, partitioned into the three signal regions of the critical realist strategy (top) and the interface strategy (bottom)]

  • For a critical realist strategy, the signals reflect reality, so every spot that maps to ‘low’ has fewer resources than every spot that maps to ‘med’, which has fewer resources than every spot that maps to ‘high’. This is shown in the top graph of the figure above. Preferences are set by the area under the curve in the signal regions, hence a critical realist agent prefers ‘med’ over ‘high’ and ‘high’ over ‘low’. Unsurprisingly, this does not maximize fitness among three-signal strategies.
  • The interface strategy assigns ‘high’ to the chunk of the resource distribution that has the highest payoff, ‘med’ to the middle, and ‘low’ to the low end, as shown in the bottom graph of the figure above. Just like the critical realist, the interface agent is rational, so given its signals it prefers ‘high’ over ‘med’ over ‘low’.

Of course, interface has the higher expected fitness, so it is not surprising that it always out-competes the critical realist strategy; whether it also beats the naive realist strategy depends on how much the extra truthful perception costs.
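One way to see this concretely is to compare the average payoff each strategy's rational choice earns. The sketch below is my own illustration, assuming resources uniform on [0, 100] and a Gaussian payoff peaked at 50; the category boundaries and the Gaussian's width are illustrative guesses, not the paper's exact values:

```python
import math
import random

def payoff(r, mu=50.0, sigma=15.0):
    """Gaussian fitness from a spot holding r resources; peaks at mu (sigma is a guess)."""
    return math.exp(-((r - mu) ** 2) / (2 * sigma ** 2))

def realist_label(r):
    """Critical realist: labels are order-preserving in resources."""
    return 'low' if r < 100 / 3 else ('med' if r < 200 / 3 else 'high')

def interface_label(r):
    """Interface: 'high' marks the high-PAYOFF chunk around the peak, not high resources."""
    if 35 <= r <= 65:
        return 'high'
    return 'med' if 20 <= r < 80 else 'low'

def pick(spots, label, preference):
    """Rationally take an available spot with the most-preferred label."""
    labels = [label(r) for r in spots]
    for want in preference:
        candidates = [r for r, l in zip(spots, labels) if l == want]
        if candidates:
            return random.choice(candidates)

def mean_payoff(label, preference, trials=50_000):
    total = 0.0
    for _ in range(trials):
        spots = [random.uniform(0, 100) for _ in range(3)]
        total += payoff(pick(spots, label, preference))
    return total / trials

# Per the paper, the realist's best bet is 'med' (its region contains the payoff
# peak), while the interface agent straightforwardly prefers 'high'.
realist = mean_payoff(realist_label, ('med', 'high', 'low'))
interface = mean_payoff(interface_label, ('high', 'med', 'low'))
print(f"critical realist ≈ {realist:.3f}, interface ≈ {interface:.3f}")
```

The interface agent's 'high' hugs the payoff peak, so its most-preferred choice is worth more on average, even though the realist's labels are more truthful about resource amounts.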

This shows that fitness is more important than ‘truth’ to the agent, and if perception is expensive then the agent will tune its perceptive coarse-graining to reflect the fitness distribution (something that depends on how the agent can interact with the environment, not just the external environment), not the amount of resources (a property of only the external environment). Although this is interesting, it is not surprising. In particular, the agent still acts rationally, myopically, and selfishly (although there is no social dilemma here) with respect to objective fitness. The interface theory overturns the rationality-fetish belief that our perception is always tuned to accurately reflect the external world. Instead, Mark, Marion, and Hoffman (2010) show that it is tuned to accurately reflect the interaction between agent and external world.

Marcel, Thomas, and I have extended beyond this to show that sometimes the tuning reflects not just the agent’s but society’s interaction with the world. Even without a penalty, in certain settings agents will evolve misrepresentations of the world that tell them incorrect fitness information. What’s even more mind-blowing is that these incorrect assessments of objective fitness actually help agents overcome their selfish tendencies and promote the social good. This happens despite the fact that the agents are acting completely rationally on what Hoffman would call their perceptions and what I call their subjective experience.

It is nice to know that we’ve been unknowingly extending an existing theory. Without Zach and this blog, I would have probably never learned of Hoffman’s work. In other words, today is a day of happiness at the usefulness of online communities from cstheory to TheEGG!

References

Brunswik, E. (1956). Perception and the representative design of psychological experiments. University of California Press, Berkeley.

Chater, N., Tenenbaum, J. B., & Yuille, A. (2006). Probabilistic models of cognition: Conceptual foundations. Trends in Cognitive Sciences, 10(7): 287-291.

Chater, N., & Oaksford, M. (Eds.). (2008). The probabilistic mind: Prospects for Bayesian cognitive science. Oxford University Press.

Feldman, J. (2013). Tuning your priors to the world. Topics in Cognitive Science, 5(1), 13-34.

Hoffman, D.D. (1983). The interpretation of visual illusions. Scientific American, 249: 154-162.

Hoffman, D.D. (1998). Visual intelligence: How we create what we see. W.W. Norton, New York.

Hoffman, D.D. (2009). The interface theory of perception. In: Dickinson, S., Tarr, M., Leonardis, A., & Schiele, B. (Eds.), Object categorization: Computer and human vision perspectives. Cambridge University Press, Cambridge.

Mark, J.T., Marion, B.B., & Hoffman, D.D. (2010). Natural selection and veridical perceptions. Journal of Theoretical Biology, 266(4): 504-515.

Marr, D. (1982). Vision: A computational investigation into the human representation and processing of visual information, Henry Holt and Co. Inc., New York, NY.

Palmer, S. E. (1999). Vision science: Photons to phenomenology. The MIT press.

Yuille, A., & Bulthoff, H. (1996). Bayesian decision theory and psychophysics. In Knill, D. C., & Richards, W. (Eds.). Perception as Bayesian inference. Cambridge University Press.


About Artem Kaznatcheev
From the ivory tower of the School of Computer Science and Department of Psychology at McGill University, I marvel at the world through algorithmic lenses. My specific interests are in quantum computing, evolutionary game theory, modern evolutionary synthesis, and theoretical cognitive science. Previously I was at the Institute for Quantum Computing and Department of Combinatorics & Optimization at the University of Waterloo and a visitor to the Centre for Quantum Technologies at the National University of Singapore.

22 Responses to Interface theory of perception can overcome the rationality fetish

  1. Great post Artem, though I do have a question that’s nagging at me: In the model, you mention that Hoffman assumes that fitness is a Gaussian of the amount of resources, but then the critical realist is given a signal for “amount of resources” that clashes with the fitness function (causing it to choose high resources but poor fitness). I understand what he is trying to show, but how is the assumption that fitness is a Gaussian of resources realistic or even fair? Does this strike you as an “engineered” result, or is there some real-world precedent for this sort of scenario?

  2. Sergio Graziosi says:

    I see Marcel’s point, but think he missed yours: if a “med” level of resources gives better pay-off, the critical realist should still be able to recognise this fact, see the “truth” that some spots have more resources but less pay-off and learn to choose maximum pay-off. But hey, this is the point, isn’t it: it shows that a good and efficient approximation to the truth is not the optimal strategy if and when the connection between truth and fitness is not straightforward (e.g. in this case, “more resources” != “more gain”).
    On the other hand, the key argument is about computational cost, and I am absolutely bought by the interface idea (haven’t watched the video yet): we perceive what is useful to us, and don’t even have receptors for whatever else is out there. This is a solid observation across living organisms (applies to vegetables as well): bees visual reception extends to ultra-violet frequencies because flowers use them to advertise their presence, we (humans) don’t rely on this signal and don’t see UV light. The number of examples can continue forever, but it’s perfectly obvious: sensory systems of different organisms are tuned to their own specific needs.
    I see no reason not to extend this concept and admit that the interpretation of raw sensory information will also follow the same principle. Whatever generalisations are true frequently enough will have a chance to be “built-in” if they happen to lower the computational cost and/or speed up the response time. I’d go even further and say that our perception-interpretation systems are built to learn how to do this in an experience-informed way. If a particular stimulus is almost always generated by a particular source, we’ll learn to perceive the source and avoid considering all other possible explanations. [This line of thought leads to the one and only sensible solution to the frame problem that I can consider convincing.]
    I think it all boils down to the observation that it doesn’t make evolutionary sense to have a “simple” critical realist strategy as the one described here: what is selected for is the fitness value of perception and since the connection between truth and fitness is not always straightforward, some “wrong” perceptions will be selected for (that’s the wrong perceptions that happen to be useful heuristics or approximations), how could it be anything else?
    Anyway (apologies for the disorganised stream of consciousness!), that’s a great post Artem, and one that has the potential to push me out of my own blogging silence: I need to move on error-making and the evolutionary sources of bias will be my starting point.

    • if a “med” level of resources gives better pay-off, the critical realist should still be able to recognise this fact, see the “truth” that some spots have more resources but less pay-off and learn to choose maximum pay-off.

      Note that there is no learning in this model. It is a near-equilibrium model in this regard, since it assumes that the agents act absolutely optimally on their discretization (or coarse-graining) of the world. It is just that the critical realist is forced to have contiguous categories, while the interface strategy is allowed to have categories that cover different parts in different regions. In other words, the interface agents simply have more flexibility, and thus the result is not that surprising. In fact, I would agree with Marcel that it is rather engineered, but I’ll give a more nuanced discussion of that later.

      On the other hand, the key argument is about computational cost, and I am absolutely bought by the interface idea (haven’t watched the video yet): we perceive what is useful to us, and don’t even have receptors for whatever else is out there.

      They actually don’t make this argument explicit, but I really think it is extremely important to take this seriously! I don’t mean in the vague heuristic SFI sense of complexity either; I mean in the precise and rigorous cstheory sense. The coolest part is that this work has started already with Livnat & Pippenger (2006; 2008 — see this post for a summary). They did not directly relate their work to interface theory (because they were not aware of it), but I think they make a much more convincing case for it than Mark, Marion, & Hoffman do. I plan to look into this further.

      This is a solid observation across living organisms (applies to vegetables as well): bees visual reception extends to ultra-violet frequencies because flowers use them to advertise their presence, we (humans) don’t rely on this signal and don’t see UV light. The number of examples can continue forever, but it’s perfectly obvious: sensory systems of different organisms are tuned to their own specific needs.

      I think it all boils down to the observation that it doesn’t make evolutionary sense to have a “simple” critical realist strategy as the one described here: what is selected for is the fitness value of perception and since the connection between truth and fitness is not always straightforward, some “wrong” perceptions will be selected for (that’s the wrong perceptions that happen to be useful heuristics or approximations), how could it be anything else?

      I don’t think you are fully embracing the interface theory yet. The examples you give can still be consistent with critical realism; this orthodox view does not rule out “wrong” perceptions (else the cube at the start of my post would have caused people to abandon the theory long ago), but its proponents argue, as you do, that they are useful approximations. The interface theory, instead, says things are useful but aren’t even approximations.

      My results with Marcel take this even further! We show not only that perceptions need not resemble a reality that doesn’t correlate with fitness (as in this post). Sometimes the interface doesn’t even resemble or approximate individual fitness! Instead, in social dilemmas, the individual’s interface can serve society’s interest and be objectively irrational for the individual that holds it.

      that’s a great post Artem, and one that has the potential to push me out of my own blogging silence: I need to move on error-making and the evolutionary sources of bias will be my starting point.

      Thank you! I am glad it prompted a response from you. It is always nice to engage in discussions; they often bring the biggest revelations. I look forward to reading your post when it comes out.

      • Sergio Graziosi says:

        This is getting more and more interesting!
        I will have to carefully go through your own “Evolving useful delusions to promote cooperation” post one more time, as that’s the key for what we’re talking about here and important for what I plan to explore personally. I fear that I will come up with some naive questions wherever I’m not sure that I understood all the details properly.
        Anyway, for the current discussion, as my thoughts are slowly organising themselves, I think I’ve realised what my point is:
        I currently don’t see a difference between the critical realist and interface strategies; they are one and the same, at least to me. It may be because I don’t see the full depth of the interface idea, but I currently don’t think so (and I won’t lie to hide my possible limits!). I wouldn’t say that the purpose of perception is to hide complexity, and I don’t think it’s a reasonable way of describing the evolutionary story of perception. Receptors evolved from simple to complex, capturing more “details” of the world along the way, and diverging on what details are captured depending on the needs of a given species; that’s summarised in

        we perceive what is useful to us, and don’t even have receptors for whatever else is out there.

        Perception does not actively hide complexity because it never got access to it in the first place. However, the function of perception is to maximise fitness, not to model reality faithfully, making systematic mistakes and approximations the norm, not the exception.
        Having said this, I need to point out that there is a particular selective pressure that has become more and more significant throughout the whole of evolution, and is constantly accelerating because of a positive feedback loop. That is: as time passes, organisms on earth increasingly face the challenge of change. Change of conditions is becoming more and more frequent, and those who can adapt to rapid change more effectively are selected for, and their own adaptations contribute to speed up the pace of change (and drive down biodiversity along the way). I would need a long discussion to support this claim but I hope you will be able to fill in the gaps.
        The result in our case is that having a perception system that faithfully represents reality would actually be useful for those organisms that rely on learning to adapt to a changing environment. There is one driving force that tends to keep perceptions oversimplified, the “need to know” approach (at the centre of interface perception theory, as I understand it), and another that pushes in the other direction (for organisms that can learn across their own life-span, this force gets stronger with time, following the increase in learning powers): the more (~accurate) slices of reality you perceive, the more likely it is that you’ll learn a new useful behaviour. This second selective drive is what “critical realism” focuses on.
        To me, you can’t claim to have a half decent theory if you don’t acknowledge both drives, so, assuming that the proponents of both theories are not locked in a bad case of selective blindness, I would expect them to simply give different importance to both drives. I can live with this, but would strongly reject any theory that refuses to consider both selective pressures.
        Another way of putting my hunch is that interface theory isn’t really saying that false perceptions are normally selected for, but rather that perceptions can (and frequently do) map complex qualities of reality as if they were fundamental: they hide away complex causal chains and show a synthesised result instead. This is a viable and convenient strategy whenever the hidden complex causal chain leads to easily predictable results that follow static rules. E.g. a table mostly occupies empty space, but your hand will never pass through it if you hit it, hence it’s convenient to “see” the table as one unique solid object. Bouncing photons are common, and can be used as a proxy, so they are “measured” by our eyes, etc.

        A side note: Hoffman is spot-on about space-time (at 11m15s, what follows after your 13m28s mark is indeed wacky – had to watch it ’cause I have my own ideas on consciousness). The way we perceive space-time is the direct (and exclusive?) consequence of the domain and modality over which genetic information spreads. Does this make our perceptions of space-time false? Not necessarily, but that’s another long discussion that I won’t start today…

        • Sergio Graziosi says:

          Hi Artem,
          a short FYI one: I’ve finally published the post inspired by this discussion. You can find it here: http://wp.me/p3NcXb-52. Comments, criticism and especially corrections (I’ve tried to summarise some of your work in short-form) are more than welcome.
          Thanks again!

        • Perception does not actively hide complexity because it never got access to it in the first place.

          I am personally a little bit on the fence about this notion of objective complexity. To me, something is complex only once you define it as a problem, and that is an observer-imposed thing. A process by itself, or “the world” without any further detail, cannot be complex; it is only complex in light of questions that we ask of it. Hence, it is difficult for me to wrap my head around complexity that perception “never got access to”.

          That said, I am not set in my views on complexity; it definitely seems like there are some natural notions of it. However, I can’t be sure that I can separate these notions from the questions I find “natural to ask” of the world.

          Change of conditions is becoming more and more frequent, and those who can adapt to rapid change more effectively are selected for, and their own adaptations contribute to speed up the pace of change (and drive down biodiversity along the way). I would need a long discussion to support this claim but I hope you will be able to fill in the gaps.

          Unfortunately, I cannot fill in the gaps here, because I think your statement is false and tries to smuggle in the evolutionary ladder that is so dangerous and mind-polluting. It is tempting to view our times as more changing than others, but I would have a hard time believing that our environment is more dynamic than it was during, say, the Cambrian explosion or the extinction event that ended it.

          To me, you can’t claim to have a half decent theory if you don’t acknowledge both drives, so, assuming that the proponents of both theories are not locked in a bad case of selective blindness, I would expect them to simply give different importance to both drives. I can live with this, but would strongly reject any theory that refuses to consider both selective pressures.

          I agree, but I think the question becomes ‘which is dominant’; even hardline stances of either viewpoint have to acknowledge a little of the other, but they can do so in very dismissive ways. In particular, I would imagine a hardline critical realist saying: “sure, we have some fictions, but they are like the appendix: artefacts that will be ‘fixed’ by evolution”. On the other hand, a hardline interface theorist might say: “sure, we agree with objective reality sometimes, but that is only because any interface, when interpreted in enough depth, has to capture some part of reality to be relevant”. I can’t say I agree with either, but at this point I think I lean more toward the latter.

          • Sergio Graziosi says:

            I’ll try to leave complexity out of this: as usual, interesting discussions open too many questions!
            What I find intriguing is your position on change/evolutionary ladder.
            Forgive me, but I see a hint of contradiction in your position. The last paragraph implies that you agree that in a changing environment, accurate perceptions and representations provide a selective advantage. However, you tend to think that this advantage is relatively small, and that therefore a “simplified interface” is the norm. I don’t really see the tension. As I’ve said before (and unfortunately mentioned “complexity”), the simplified interface usually precedes more accurate perceptions for obvious evolutionary reasons; it may then evolve into blatantly “distorted” perception in some cases, but in others, those that happen to concern a domain where significant and unpredictable changes of conditions happen frequently, it will not. An example of the former is olfaction: our receptors rapidly adapt (in this case, that means they stop responding to a long-lasting stimulus) and so stop reporting the presence of persistent odours. This is distortive, but adaptive, because what is usually significant is change (surprise!) and not constant presence. Sight is another matter: it is designed to capture rapid dynamic change, and in fact it tells us pretty accurately where objects are and how they move.
            The whole of the above is to reiterate what I’ve said already: you need both considerations to understand what’s going on. Sure, perfectly accurate perceptions/representations are impossible, making the rationality fetish just a utopia that typically afflicts scientists (those that tend to believe it’s possible to know objective facts, but that’s another story). Conversely, all perceptions aim to capture some truth out there, so yes, they may be misleading, but they would still accurately correlate with some physical aspect of the world.
            Back on track: I don’t know if change now is faster than in other periods of natural history, but I don’t see the problem. Evolution is frequently additive: at each mass extinction, the organisms that could survive a rapid revolutionary change survived, and the others didn’t. This means that at the next occurrence, the probability that each species could survive another “revolution” was likely to be somewhat higher. It helps to understand why learning and culture (the ability to transmit learned behaviours) got selected for: they allow organisms to rapidly adapt to changing conditions, in ways that only mono-cellular organisms can match via genetic evolution. If this consideration risks polluting our minds with over-simplistic “ladder” views, I have to accept the risk, because it surely seems to be the case.
            As a consequence, one can see recent history (2-3000 years) in a new light: the appearance and widespread proliferation of a species that is specialised in cultural transmission has transformed the vast majority of the ecosystems on earth. This drives down biodiversity and favours organisms that are not overly specialised, as well as those that can adapt to change. And the latter group, when looking at multicellular creatures, are almost invariably good at learning new behaviours, and frequently capable of cultural transmission. This has been documented in urban birds, for example: http://psycnet.apa.org/index.cfm?fa=search.displayRecord&UID=1985-19403-001 (note that this is not imitation! it’s even simpler…).
            My point is that if the frequency of animals that can rapidly acquire and spread new behaviours increases, then necessarily the pace of change will rise, further favouring those able to rapidly adapt. How important this drive is may remain an open question, but I can’t negate it only because it may give us a dangerous sense of superiority. And still, I find it hard to negate the importance of this positive feedback, because I see its effects all around me in my everyday life. Urban wildlife reminds me of it every day: crows, rats, all garden birds, foxes, and now in London even parakeets thrive (and apparently, even leopards are learning to live in urban environments), and guess what? They can all learn and transmit their knowledge.
            There is another element in this already complex mix: adaptability is frequently obtained by aggregating more “simple” units, generating a more complex and bigger unit that can adapt because each of its original parts is expendable. You can read much of evolution through this lens: from pro- to eukaryotes, from uni- to multicellular, from solitary individuals to families and then multi-family groups, and then on to societies, armies, states, companies and multinationals.
            The work you are doing here is bound to shed some light on why/how this happens, and is the reason why I’m following you with a keen eye. Yes, of course, this suggests a dangerous “ladder” concept, but I have to accept the danger, because the evidence points in this direction, and in multiple domains: you can see the same pattern in social, political and economic settings (not to mention the written word). One of my main interests is in finding out why this is so, what the limits are, and where it leads.
            [As a side note, Taleb's Antifragility book has a lot to do with this. If you haven't read it, I'd suggest you do. Beware! You'll find it infuriating, I'm sure, but some concepts in there are valid just the same.]
            I hope you don’t find any of this confrontational! I’m testing my own views with you, because I know your counter-arguments will be informative: I expect and value some disagreement. Thanks again for the stimulating discussion!

            • I hope you don’t find any of this confrontational! I’m testing my own views with you, because I know your counter-arguments will be informative: I expect and value some disagreement. Thanks again for the stimulating discussion!

              No worries, my only complaint would be about formatting. These are really long walls of text; it would be nice to have some breaks into more clearly spaced paragraphs. However, as long as we don’t start debating in circles, I am happy!

              The last paragraph implies that you agree that in a changing environment, accurate perceptions and representations provide a selective advantage.

              Not exactly, I just think the question is less interesting for environments that change on a timescale that is incompatible with the mutation rate. My goal isn’t to make some huge proclamation about biology; that is far too easy to do and often devoid of content, especially when it comes from someone naive about biology like myself. My goal is to look at models that people think they understand and show results that they didn’t expect. I have a strong preference for simple models because that is the only way I see toward mathematizing biology. I think this is where we fundamentally differ, as will become clear in the rest of my response.

              An example of the former is olfaction: our receptors rapidly adapt (in this case it means they stop reporting a long-lasting stimulus), and stop reporting the presence of persistent odours. This is distortive, but adaptive, because what is usually significant is change (surprise!) and not constant presence. Sight is another matter: it’s designed to capture rapid dynamic change, and in fact it tells us pretty accurately where objects are and how they move.

              This is not what I meant by change at all. I meant change in the fitness landscape. For example, only red frogs were poisonous, but then a new frog was introduced that was blue and not poisonous. Of course, there is a fine line between change from the perspective of individuals and change from the perspective of the population; in the real world the line is completely blurred, especially with very rare events (say there are blue frogs in the environment already, but so rare that experiences with them are dominated by stochasticity over anything else). However, the goal of the modeler is to understand the ideal cases first (before we worry about blurry lines).

              Looked at the other way around, all perceptions aim to capture some truth out there, so yes, they may be misleading, but would still accurately correlate with some physical aspect of the world.

              Here it depends on what you mean by correlate. If you mean the most general sense of mutual information then I agree, and I might even argue that it is impossible to disagree. If we had something that was completely independent of the environment, I am not sure we would even be justified in calling it ‘perception’. However, if you mean correlate in the linear case or in some other simple functional relationship, then I think disagreeing with that is the point of the interface theory. But I might be mistaken.
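              To make that distinction concrete, here is a tiny toy example (my own illustration, not anything from the interface-theory papers): a deterministic “percept” that carries almost a full bit of information about the world state while being linearly uncorrelated with it.

```python
from math import log2
from collections import Counter

# Toy world: state X uniform on {-1, 0, 1}; the percept is Y = |X|.
# Y is informative about X (positive mutual information) yet has
# exactly zero linear correlation with X.
states = [-1, 0, 1]
percept = {x: abs(x) for x in states}

n = len(states)
ex = sum(states) / n
ey = sum(percept[x] for x in states) / n
# Covariance E[XY] - E[X]E[Y]; zero covariance means zero Pearson correlation
cov = sum(x * percept[x] for x in states) / n - ex * ey

# Mutual information I(X;Y) = H(Y) - H(Y|X); the map is deterministic, so H(Y|X) = 0
py = Counter(percept[x] for x in states)
h_y = -sum((c / n) * log2(c / n) for c in py.values())

print(cov)  # 0.0: no linear relationship at all
print(h_y)  # ~0.918 bits of information about the world state
```

              So “correlates with some physical aspect of the world” in the mutual-information sense is a much weaker (and much harder to escape) claim than correlation in the linear sense.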

              Evolution is frequently additive: at each mass extinction, the organisms that could survive a rapid revolutionary change survived, the others didn’t. This means that at the next occurrence, the probability that each species could survive another “revolution” was likely to be somewhat higher.

              I think that this is wrong for two reasons. (1) You are begging the question with “additive”; saying evolution is additive is almost the same thing as the ladder. (2) The probability that the species survives will not be the same or greater unless the environment changes in exactly the same way as it did during the last extinction and the species has undergone no evolutionary change since then. Both are completely unreasonable assumptions. You don’t know which way the environment changes; there is no overall “hardiness in all possible environments”. I think this can be shown formally using the no-free-lunch theorem, but we would have to set things up carefully to avoid some trivialities.
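              The careful setup would take real work, but the no-free-lunch intuition itself can be checked exhaustively in a toy setting (my own illustrative sketch, not a formal result): average the performance of two different fixed search strategies over every possible fitness assignment on a tiny domain, and the averages come out identical — no strategy is “hardier” over all possible environments.

```python
from itertools import product

def best_after(order, f, k):
    # Best fitness value seen after querying the first k points of `order`
    return max(f[x] for x in order[:k])

domain = [0, 1, 2]
strategies = ([0, 1, 2], [2, 0, 1])  # two distinct fixed query orders

# Average each strategy's performance over ALL fitness functions f: domain -> {0, 1}.
# The no-free-lunch theorem says the averages must be equal for every budget k.
for k in (1, 2, 3):
    avgs = [sum(best_after(order, f, k) for f in product([0, 1], repeat=len(domain)))
            / 2 ** len(domain)
            for order in strategies]
    assert avgs[0] == avgs[1], avgs
```

              Of course, real environments are not drawn uniformly from all possible fitness functions, which is exactly why the setup has to be done carefully to avoid trivialities.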

              As a consequence, one can see recent history (the last 2–3000 years) in a new light: the appearance and widespread proliferation of a species that is specialised in cultural transmission has transformed the vast majority of the ecosystems on earth.

              Yet the ants still make up more of the biomass, and plankton have much more of an impact on the environment as a species. In fact, I have some vague memory that the biggest drastic change in the chemical composition of the atmosphere, and a related mass extinction, was due to the rapid growth of some plankton. I fear that viewing recent history as somehow important on a geological scale might be a bit too anthropocentric. On the other hand, it does seem that what we do as a species has started to have much more of an impact on our ability to continue as a species; it is just not as obvious to me that it matters that much to the rest of life. As the story goes, the cockroaches will inherit our nuclear apocalypse.

              My point is that if the frequency of animals that can rapidly acquire and spread new behaviours increases, then necessarily the pace of change will rise, further favouring those able to rapidly adapt.

              It isn’t clear to me that this frequency is all that large, or affected all that much by what we consider to be “highly adaptive” behavior. I think that where this happens, it matters only for things that constitute a lot of the biomass. I also suspect that such rapid change would lead to big fluctuations, which lead to extinctions and back to a state of less adaptive organisms (at least by regression to the mean, but I suspect we could even find selective factors). This would lead to a more cyclic structure than the ladder suggests.

              There is another element in this already complex mix: adaptability is frequently obtained by aggregating more “simple” units, so as to generate a more complex and bigger unit that can adapt because each of its original parts is expendable. You can read much of evolution through this lens: from pro- to eukaryotes, from uni- to multicellular, from solitary individuals, to families and then multi-family groups, and then on, to societies, armies, states, companies and multinationals.

              At TheEGG, Julian is a big advocate of the approach you describe above: ratcheting effects, etc. It is a tempting view, but I am a bit on the fence about it. In particular, most of the models that I’ve seen that show this basically piggy-back the effect on entropy, by effectively having more ways to be complex than to be simple. This leads to a “complexity” that isn’t that interesting to me.

              Taleb’s Antifragility book has a lot to do with this. If you haven’t read it, I’d suggest you do. Beware! You’ll find it infuriating, I’m sure, but some concepts in there are valid just the same.

              I’ve commented on Taleb in passing before. I don’t think he is my kind of academic (I think he wants too much to be a showman), so I probably won’t read his books (there are so many books to read, and only a finite life to read them in!). However, I am vaguely familiar with antifragility from second-hand sources.

              (By the way, if you reply to this comment, could you do it as a top-level comment so we don’t get buried too deep, it gets hard to read as the margin widens)

            • Sergio Graziosi says:

              No worries, my only complaint would be about formatting. These are really long walls of text; it would be nice to have some breaks into more clearly spaced paragraphs. However, as long as we don’t start debating in circles, I am happy!

              Thanks. Lots of food for thought here. I certainly got what I was looking for: you are helping me to straighten out my positions, and it’s really useful (I don’t see how it helps you, but I do hope it’s not a waste of time for you either).

              Will take my time for a little thinking, and will certainly come back (following your formatting advice, apologies for the wall of text!). If you think we should move to an alternative medium, please let me know!

  3. vznvzn says:

    congrats on winning the mod job in an impressive turnout upset. maybe the “[under] new mgt” will be more flexible/transparent/engaged etc than the outgoing mods…. it took some cojones to quote Dilworth on meta and that crazy back-and-forth fireworks…. your blog/writing style reminds me of david brin, have you ever read him? check him out you might like it…. you already seem superbusy & am surprised you volunteered. also hope you might comment some time on what the job is like & how much time it takes so any new contestants/victims can be better informed about the possible implications wink :razz:

  4. vznvzn says:

    ps the hyperlink for “rationality fetish” is missing above. was hoping to click on it and figure out a teeny )( bit more what the heck youre talking about haha… yeah am one of those outlier readers that actually clicks on links haha … re tcs.se, it is like a modern cyber scientific society slash social networking for TCS experts although few there realize this & its implications…. hope to see it grow & prosper & think it has a great chance of that under your influence….

  5. Pingback: Sources of error: we are all biased, but why? | Writing my own user manual

  6. Pingback: Misleading models: “How learning can guide evolution” | Theory, Evolution, and Games Group

  7. Pingback: Evolution is a special kind of (machine) learning | Theory, Evolution, and Games Group

  8. Pingback: Recap #2: getting lost in my own muddle | Writing my own user manual

  9. Pingback: Why academics should blog and an update on readership | Theory, Evolution, and Games Group

  10. Sergio Graziosi says:

    I’ve just finished reading this:
    R. T. McKay & D. C. Dennett, “The evolution of misbelief”, Behavioral and Brain Sciences, 2009.
    It’s exquisitely anthropocentric, but well worth reading, I’m guessing it will be of interest (assuming it’s new for you).
    Enjoy!

    • I was not familiar with that article and it does indeed look very relevant. Thank you!

      Are you planning to review it on your blog? Alternatively, you are welcome to write a guest post on TheEGG about that article; if you’re interested then send me an email.

  11. Pingback: Useful delusions, interface theory of perception, and religion | Theory, Evolution, and Games Group
