Interdisciplinitis: Do entropic forces cause adaptive behavior?

Reinventing the square wheel. Art by Mark Fiore of the San Francisco Chronicle.

Physicists are notorious for infecting other disciplines. Sometimes this can be extremely rewarding, but most of the time it is silly. I’ve already featured an example where one of the founders of algorithmic information theory completely missed the point of Darwinism; researchers working in statistical mechanics and information theory seem particularly susceptible to interdisciplinitis. The disease is not new: it formed an abscess shortly after Shannon (1948) founded information theory. The clarity of Shannon’s work allowed metaphorical connections between entropy and pretty much anything, and researchers were quick to swell around the idea, publishing countless papers on “Information theory of X”, where X is your favorite field deemed in need of a more thorough mathematical grounding.

Ten years later, Elias (1958) drained the pus with surgically precise rhetoric:

The first paper has the generic title “Information Theory, Photosynthesis and Religion” (title courtesy of D. A. Huffman), and is written by an engineer or physicist. It discusses the surprisingly close relationship between the vocabulary and conceptual framework of information theory and that of psychology (or genetics, or linguistics, or psychiatry, or business organization). It is pointed out that the concepts of structure, pattern, entropy, noise, transmitter, receiver, and code are (when properly interpreted) central to both. Having placed the discipline of psychology for the first time on a sound scientific base, the author modestly leaves the filling in of the outline to the psychologists. He has, of course, read up on the field in preparation for writing the paper, and has a firm grasp of the essentials, but he has been anxious not to clutter his mind with such details as the state of knowledge in the field, what the central problems are, how they are being attacked, et cetera, et cetera, et cetera.

I highly recommend reading the whole editorial; it is only one page long and a delight of scientific sarcasm. Unfortunately, as any medical professional will tell you, draining the abscess only treats the symptoms; without a regimen of antibiotics, it is difficult to resolve the underlying cause of interdisciplinitis. Occasionally the symptoms flare up, most recently two days ago in the prestigious Physical Review Letters.

Wissner-Gross & Freer (2013) try to push the relationship between intelligence and entropy maximization further by suggesting that the human cognitive niche is explained by causal entropic forces. An entropic force is an apparent macroscopic force that depends on how you define the correspondence between microscopic and macroscopic states. Suppose that you have an ergodic system; in other words, every microscopic state is equally likely (or you have a well-behaved distribution over them) and the system transitions between microscopic states at random such that its long-term behavior mimics the state distribution (i.e. the ensemble-average and time-average distributions are the same). If you define a macroscopic variable such that some values of the variable correspond to more microscopic states than others, then when you talk about the system at the macroscopic level, it will seem like a force is pushing the system towards the macroscopic states with larger microscopic support. This force is called entropic because it is proportional to the entropy gradient.
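
In symbols (the textbook definition, nothing specific to this paper): if a macrostate X is compatible with \Omega(X) microstates, then

S(X) = k_B \ln \Omega(X), \qquad F(X) = T \, \nabla_X S(X),

so the apparent force points up the entropy gradient, towards macrostates with more microscopic support.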

Instead of defining their microstates as configurations of their system, the authors focus on the possible paths the system can follow for a time \tau into the future. The macroscopic states are then the initial configurations of those paths. They calculate the force corresponding to this micro-macro split and treat it as a real force acting on the macrosystem. The result is a dynamics that tends towards configurations from which the system has the most freedom for future paths; the physics way of saying that “intelligence is keeping your options open”.
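
Concretely (this is my transcription of equations (1) and (2) of the paper, so check the original before relying on it), they define a causal path entropy S_c over the paths x(t) available during the next \tau units of time from macrostate X, and take its gradient at the present configuration X_0:

S_c(X, \tau) = -k_B \int_{x(t)} \Pr(x(t) \mid x(0)) \ln \Pr(x(t) \mid x(0)) \, \mathcal{D}x(t), \qquad F(X_0, \tau) = T_c \, \nabla_X S_c(X, \tau) \Big|_{X = X_0},

where T_c is a “causal path temperature” that sets the strength of the force.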

In most cases, directly invoking the entropic force as a real force would be unreasonable, but the authors use a cognitive justification. Suppose that the agent runs a Monte Carlo simulation of paths out to a time horizon \tau and then moves in accordance with the expected results of its simulation; the agent’s motion would then be guided by the entropic force. The authors study the behavior of such an agent in four models: a particle in a box, an inverted pendulum, a tool use puzzle, and a “social cooperation” puzzle. Unfortunately, these tasks are enough to both falsify the authors’ theory and show that they do not understand the sort of questions behavioral scientists are asking.
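
To make that recipe concrete, here is a minimal sketch of such an agent on the particle-in-a-box task. This is my own toy reconstruction, not the authors’ code: the random-walk path model, the endpoint-histogram stand-in for the full path entropy, and all the step sizes are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def path_entropy(x0, n_paths=2000, horizon=50, step=0.05, box=(0.0, 1.0), bins=20):
    """Crude Monte Carlo stand-in for the causal path entropy of a particle
    at x0 in a 1D box: sample random-walk futures, histogram where they end
    up, and return the Shannon entropy of that endpoint distribution.
    (The paper uses the entropy over whole paths; endpoints keep the toy short.)"""
    lo, hi = box
    steps = rng.choice([-step, step], size=(n_paths, horizon))
    endpoints = np.clip(x0 + steps.sum(axis=1), lo, hi)  # walls soak up overshoot
    counts, _ = np.histogram(endpoints, bins=bins, range=box)
    p = counts[counts > 0] / n_paths
    return -np.sum(p * np.log(p))

def entropic_step(x, eps=0.05, rate=0.01):
    """Nudge the particle along a finite-difference estimate of the
    path-entropy gradient, i.e. follow the 'causal entropic force'."""
    grad = path_entropy(x + eps) - path_entropy(x - eps)
    return float(np.clip(x + rate * np.sign(grad), 0.0, 1.0))

x = 0.1  # start near a wall
for _ in range(200):
    x = entropic_step(x)
print(f"final position: {x:.2f}")  # ends up jittering around the middle
```

Started near a wall, this agent drifts towards the middle of the box and jitters there, which is exactly the behavior I pick on below.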

If you were locked in a small empty (maybe padded, after reading this blog too much) room for an extended amount of time, where would you choose to sit? I suspect most people would sit in the corner or near one of the walls, where they can rest. That is where I would sit. However, if adaptive behavior followed Wissner-Gross & Freer (2013) then, like the particle in their first model, you would be expected to remain in the middle of the room. More generally, you could modify any of the authors’ tasks by having the experimenter remove two random objects from the agents’ environment whenever they complete the task of securing a goal object. If these objects are manipulable by the agents, then the authors would predict that the agents never complete their task, regardless of what the objects are, since there are more future paths with the option to manipulate two objects instead of one. Of course, in a real setting, whether the agents prefer those objects would depend on what they are (food versus neutral). None of this is built into the theory, so it is hard to take it as the claimed general theory of adaptive behavior. Of course, it could be that the authors leave “the filling in of the outline to the psychologists”.

Do their experiments address any questions psychologists are actually interested in? This is most clearly illustrated by their social cooperation task, which is meant to be an idealization of the following task that we can see bonobos accomplishing (first minute of the video):

Yay, bonobos! Is the salient feature of this task that the apes figure out how to get the reward? No, it is actually that bonobos will cooperate in getting the reward regardless of whether it is in the central bin (to be shared between them) or in the side bins (for each to grab their own). Chimpanzees, however, will work together only if the food is in separate bins, not if it is available in the central bin to be split. In the Wissner-Gross & Freer (2013) approach, both conditions would result in the same behavior. The authors are throwing away the relevant details of the model and keeping the ones that psychologists don’t care about.

The paper seems to be an obtuse way of saying that “agents prefer to maximize their future possibilities”. This is definitely true in some cases, but false in others; either way, it is not news to psychologists. Further, the authors’ abstraction misses the features psychologists care about while stressing irrelevant ones. It is a prime example of interdisciplinitis, and raises the main question: how can we avoid making the same mistake?

Since I am a computer scientist (and to some extent, physicist) working on interdisciplinary questions, this is particularly important for me. How can I be a good connector of disciplines? The first step seems to be publishing in journals relevant to the domain of the questions being asked, instead of the domain from which the tools being used originate. Although mathematical tools tend to be more developed in physics than in biology or psychology, the ones used in Wissner-Gross & Freer (2013) are not beyond what you would see in the Journal of Mathematical Psychology. Mathematical psychologists tend to be well versed in the basics of information theory, since it is important for understanding Bayesian inference and machine learning. As such, entropic forces could have been presented to them in much the same way as I presented them in this post.

By publishing in a journal specific to the field you are trying to make an impact on, you get feedback on whether you are addressing the right questions for your target field, instead of merely whether others in your field (i.e. other physicists) think you are. If your results are accepted, then you also have more impact, since they appear in a journal that your target audience reads instead of one only your field follows. Lastly, it is a show of respect for the existing work done in your target field. Since the goal is to set up a fruitful collaboration between disciplines, it is important to avoid E.O. Wilson’s mistake of treating researchers in other fields as expendable or irrelevant.

References

Elias, P. (1958). Two Famous Papers. IRE Transactions on Information Theory, 4(3): 99.

Shannon, C.E. (1948). A Mathematical Theory of Communication. Bell System Technical Journal, 27(3): 379–423.

Wissner-Gross, A.D., & Freer, C.E. (2013). Causal Entropic Forces. Physical Review Letters, 110(16): 168702. doi:10.1103/PhysRevLett.110.168702



25 Responses to Interdisciplinitis: Do entropic forces cause adaptive behavior?

  1. Daniel says:

    The richness of future behaviour is indeed, in some cases (of course, not in all), a good indicator of preferable behaviour, and there are quite a few studies on why that can make sense in physically embodied agents; check out e.g. predictive information or perception-actuation channel capacity (sometimes called “empowerment”): Prokopenko, Gerasimov, Tanev 2006, Ay & Der 2007 and later publications, and work by Klyubin et al. 2008. These more refined models are linked into the biological/robotic literature with some detail as to when and how such principles may actually provide unguided behavioural trends. Wissner-Gross & Freer seem not to be aware of this prior work, nor of Touchette/Lloyd’s information-theoretic constraints for control, Susanne Still’s work, or Todorov’s information-theoretic control models, all of which predate this work. The same goes for the more general field of “intrinsic motivation”, which has already seen quite a few other approaches (this is by far not the first one, contrary to the authors’ claims): not only the early classic work by Ashby, but also major contributions by Schmidhuber, Steels, Oudeyer & Kaplan, and Friston.

    I haven’t yet understood the authors’ approach well enough to be sure whether it is a subset of the existing models or a relevant addition to the existing toolbox of methods, but clearly a lot of context needs to be added to evaluate this. There is a line of thought in this work which comes from the quite interesting direction of the Maximum Entropy Production Principle (MEPP); however, the most important arguments in favour of deriving the MEPP from thermodynamic axioms, made by Dewar (2003, 2005), have unfortunately been found faulty by Grinstein & Linsker (2007). Wissner-Gross & Freer cite the 2003 and 2005 papers by Dewar without reference to that major problem; they may not be aware of it. This does not necessarily invalidate their approach, but the fact that they miss much relevant literature and quote outdated sources without being aware of their limitations indicates that, strictly speaking, the status of their results should probably be considered premature in a field that is not exactly premature anymore: the beginning of modern “intrinsic curiosity” research can probably be traced back to Schmidhuber in the early 90s, and there is a whole Artificial Life community interested in useful objective criteria for characterizing life. It is certainly the case that sometimes some naivety is needed to bring fresh air to a field, but given the limited overview that the paper demonstrates of the existing relevant work, and the fact that many of the ideas are clearly related to, and probably less general versions of, the earlier work mentioned above, the novelty, if it can be sustained at all, is probably not as high as claimed.

    • Thank you for this thorough comment! I hadn’t thought about connections to the ALife literature, with which I am not very familiar. It would also be interesting to compare this to what minimal cognition folks are doing.

      Would you be interested in expanding your comment into a blog post? You are welcome to contribute a guest post here. Otherwise, do you mind if I use your comment as a launching point for a future post? I would really have to track down and look into the references you mention more carefully.

      • Daniel says:

        I am not experienced with blogging and am not sure what style would be appropriate. Of course, I could try, but alternatively I am happy for you, as the more experienced blogger, to use my comment as a starting point for your own expansions and, if you like, to get in contact with me directly (you should have my email) if you need to check/verify information or want more clarification about the literature.

      • Daniel says:

        Concerning minimal cognition, there is some very nice work, e.g. by Beer and collaborators, but the classic work on Braitenberg agents and, of course, the whole branch in the wake of Brooks’ subsumption-architecture philosophy and of Pfeifer’s and Paul’s morphological-computation philosophy are also very important. These are doubly important because the success of entropy-based methods in providing “sensible” behaviours crucially depends on the embodiment, i.e. the way the agent’s body is linked into the world; there are some striking demonstrations of that phenomenon. All these various aspects are interlinked, with interesting consequences for descriptions of, and hypotheses about, the drivers of evolution. Clearly, the authors of the paper we discuss had the right instinct about where to prod, but their bad luck is that they do not operate in a vacuum.

  2. Two comments.

    1. I didn’t think of Chaitin as a physicist (as implied in the opening paragraph).

    2. Submission to a physics journal today doesn’t rule out submissions to other journals later. I think a physicist may find this interesting.

    Thanks for your other pointers!

    S

    • I didn’t mean to imply that Chaitin is a physicist; he is very much a computer scientist! I just wanted to mention how statistical mechanics and information theory (which for me are two sides of the same coin) are particularly susceptible to wandering off into other disciplines. Chaitin’s work was simply salient because I had blogged about it before. As for the authors of this paper: the first is a physicist only by training, and both are now at an AI lab, so I really meant the term loosely, in keeping with popular conceptions and the venue.

      I really hope there isn’t a submission to another journal, at least not until the authors take the time to thoroughly familiarize themselves with the fields they are trying to build a “general theory” of. The reason I chose to critique this paper is mostly that I think it encourages the common physics disrespect for other fields. I don’t think the paper is of physics interest, and it is only of interest to readers of PRL in that it panders to their misconceptions of what other fields study.

      If you only look at the headline, or ignore the disconnect between their models and those that would interest behavioral scientists, then it is definitely a catchy result. The problem is that I suspect the readers of PRL won’t see the disconnect with behavioral science :( and the general internet audience won’t look into the math to see what exactly the authors did and didn’t do, but will rely on their claims of originality.

      • Let’s check back in a few years on how this paper is referenced in future physics papers.

        Also, I noticed reference 8: E. Verlinde, Journal of High Energy Physics 2011. I’m guessing it is http://arxiv.org/abs/1001.0785. I couldn’t make the connection to cosmology, relativity, and all the big questions and trendy emergent-gravity research in an AI paper.

        S

        • Daniel says:

          Actually, while I think the authors missed a lot of references, I can kind of see where the connection may lie. If they argue via the MEPP (maximum entropy production principle), for which there is, of course, only a tenuous line of justification from first principles at this point, then Verlinde’s interpretation of gravity as a kind of force induced by expanding phase space is not so far off, in principle. However, I do think the connection is brushed with a very broad brush and is more imagination-enhancing than concrete, and I believe a much more promising line of argument is an evolutionary one. I mentioned to Artem that we are trying to prepare a response, and we’ll try to make that point a bit clearer there, because why and how that would work is actually quite a long story.

          The evolutionary argument also doesn’t touch the probable motive of Wissner-Gross and Freer in developing their model, namely trying to get at a kind of origin-of-life mechanism from physical first principles. Again, there might be a route there that relies on weaker assumptions than a MEPP-type argument requires, but this is very speculative at this point.

  3. metalliska says:

    It wouldn’t surprise me if this happens more in the future. I know I’m guilty of it: once you start looking at different piles of data, any shortcuts that might be drawn between two disciplines seem to look ‘correct’, or ‘one-to-one’, without necessarily being so.

    There’s a sense of over-eagerness in assuming that the findings in one theatre always correlate with those in all others, and that the bridge-building between theatres is somewhat ‘optional’.

    • I agree. I am always cautious that I might have a case of interdisciplinitis myself. I am currently trying to introduce some tools from computational complexity (of the sort typically practiced by Theory A folks in TCS) into reasoning about evolutionary dynamics. I am petrified that I might not be connecting to what theoretical biologists are actually talking about, and so end up writing a paper like the one I critique here. However, I do plan to follow my own advice and submit to a biology venue and not a CS or physics one, even though it would be easier to publish in the latter.

  4. Pingback: Evolution explains the fundamental constants of physics | Theory, Evolution, and Games Group

  5. otakucode says:

    Why did you take a paper which is about tactics for achieving intelligent-looking systems in artificial intelligence and pretend like it was about human behavior? Where in the paper did they say that a human being would choose to balance the rod on the crate, or try to get the ball out of the box by using the other ball? They never said any such thing. They said that if you want an artificially intelligent system to have a motivation to do things, capturing the largest number of futures results in behavior that makes sense. This is very important in artificial intelligence because it either solves or makes irrelevant the initial problem of motivation. How do you take a system of joints and limbs and train it to walk? By default it’s going to judge walking properly as equivalent to simply lying around. You can define a fitness landscape for it and select for systems that tend to move furthest from their starting point, but this is both difficult to do and usually very limited in its consequences. Optimizing the way they did in the paper is much more effective.

    • Thanks for the comment, and sorry that it’s taken me a while to get back to you: I’ve been away at a workshop. Right in their abstract the authors write of a “possible deep connection between intelligence and entropy maximization, but no formal physical relationship between them has yet been established. Here, we explicitly propose a first step toward such a relationship”. These sorts of claims are further advertised on their website and in most other popular media coverage of their paper. The goal of my post was specifically to cut down this hype.

      I am very glad that you raised the point about the difficulty of defining interesting fitness landscapes, and how it is often problem dependent. What Wissner-Gross & Freer do is not fundamentally different from this approach, except that instead of defining their fitness function as “systems that tend to move furthest from their starting point” they consider “configurations that have the most future paths out to some finite time-horizon”. This is definitely a useful utility function in some settings (and one that has been heavily studied in the AI literature without such claims of grandeur), but it is also limited in other settings.

      • otakucode says:

        Thanks for your response. I went back and re-read the paper some time after I’d written my comment, and I see what you were addressing. I had glossed over their grandiose claims as simple speculation (which probably didn’t need to be in the paper at all), and I can see how it might obscure their findings. I was just confused when you mentioned things like the idea that if you were in a room you would sit against a wall for support, while their system would stay in the center of the room to maximize future possibilities. As their paper addressed very basic systems, I didn’t think it was appropriate to presume that the ‘capture maximum futures’ technique they were using would be operated at such a high level. An actual simulated human being would be made up of trillions of systems using the same entropy-maximizing technique, and would take into account things like physiology, comfort, etc. I never took their paper, even though they claim that the mechanism might underlie intelligence, to apply to such macroscopic systems. I just assumed, without real cause, that they were talking about a low-level mechanism that would operate at the level of cells or similar.

        You are clearly much better versed in this subject area than I am, and I am not aware of the literature you mention that evaluates the same or similar approaches. Could I trouble you to suggest some papers/books or authors to look into? The paper that was the subject of your post interested me as it seems to jibe with similar ideas of increased complexity driving evolution even in situations with no survival or reproductive advantage (such as the presence of null mutations conferring survival benefits solely in situations of abrupt environmental change), and I’ve not seen such a thing formalized before. Much of what I’ve read about AI and genetic algorithms is a bit dated at this point, I’m sure.

  6. Yeah, I agree: your critique is a bit harsh; you’re putting words in their mouths, making a straw man and knocking it down.

    Here’s maybe a better AI example. Suppose we have to program a baseball-catching robot, and suppose the pitcher and batter are symmetric, so that the most likely path for the baseball is to the middle of center field. So, obviously, the robot should stand in center field (at the expectation value, the mean path location).

    Suppose now that the baseball diamond has a hedgerow and some trees surrounding first base. Where should the robot stand? Well, the single most likely path for the baseball is unaffected: it still goes to the middle of center field. But standing there would now be wrong: instead, one should consider *all possible paths* for the baseball (including those that get lost in the bushes), and stand in that place that maximizes access to the greatest number of paths.

    If you are not standing at the point that maximizes access to the greatest number of paths, then you should move towards it: this is your gradient, your ‘force’.

    That, I think, is essentially the point: in decision-making, one should take care in writing down one’s probabilities, sums, and (path!) integrals. Some formula-fiddling allows one to call these things entropies and forces, but the formulas still work correctly even if you didn’t know they were called “entropy” and “force”.

    • I agree with your example.

      However, I am not clear on where I built a straw man. My whole concern with the paper was not the validity of their basic idea (i.e. maximize access to the greatest number of paths); my concern was with their claim that this idea (a) is new to psychologists, (b) provides a unifying framework for the behavioral sciences, or (c) is a significant step forward for AI.

      Maybe I am misreading their paper and attributing too much of the popular media’s and their own website’s hype to the publication. However, in the second-to-last paragraph, they do claim that their paper has broad and significant relevance to condensed matter physics, particle theory, econophysics, cosmology, and biophysics. They close their paper with:

      We found that some of these systems exhibited sophisticated spontaneous behaviors associated with the human ‘‘cognitive niche,’’ including tool use and social cooperation, suggesting a potentially general thermodynamic model of adaptive behavior as a nonequilibrium process in open systems.

      I just wanted to point out that they did not actually address the questions they think they did (such as social cooperation), since they abstracted away the parts relevant to behavioral scientists.

  7. Pingback: EGT Reading Group 41 – 45 and a photo | Theory, Evolution, and Games Group

  8. Pingback: Infographic history of evolutionary thought | Theory, Evolution, and Games Group

  9. Pingback: Bounded rationality: systematic mistakes and conflicting agents of mind | Theory, Evolution, and Games Group

  10. Pingback: Cataloging a year of blogging: from behavior to society and mind | Theory, Evolution, and Games Group

  11. Jared Mimms says:

    Nah they won’t win the Nobel for the Entropic Theory of Evolution from this. Coming at this story from the wrong angle.

  12. Pingback: Big data, prediction, and scientism in the social sciences | Theory, Evolution, and Games Group

  13. Sergio HC says:

    I am a mathematician with a good background in psychology (my wife is a psychologist), and I have been playing with the paper since it came out. I have found a profound relationship between the two fields. I also work as a programmer, so I made my own implementation of the ideas and pushed it into a fully behaviour-generating machine.

    It works remarkably well in simulating complex behaviours, but instead of unfolding it all here, I would recommend visiting my blog and looking at the most recent videos... you will be amazed, I think!

    http://entropicai.blogspot.com

    In fact, I believe this approach can join quite nicely with deep learning algorithms to form a more natural intelligence, but it doesn’t change the fact that, without the use of neural networks, you get organic-looking intelligent behaviour out of the box.

  14. There is a whole set of related methodologies (the most common term for this field is “intrinsic motivation”) that attempt to do just that. Probably the closest in spirit are predictive information maximization (related to Der’s homeokinesis) and “empowerment” (actuation-perception channel capacity) maximization, where empowerment is a very similar concept to Wissner-Gross and Freer’s causal entropic force. The last decade of studying empowerment-based dynamics has made it clear that it is indeed possible to get a lot of interesting behaviours from such principles, including game playing, survival scenarios, pole balancing, bicycle riding, acrobot, obstacle avoidance, sensor evolution, context construction, and sensorimotor contingency exploration. Furthermore, there are very good arguments from evolutionary perspectives for why that could be the case.

    The open question is how, as postulated by the authors, a causal physical dynamics would turn into such a “future-looking” model as the CEF. We have no indication that the Maximum Entropy Production Principle can indeed arise from first principles (Dewar’s derivation was disproved by Linsker). I strongly suspect that we need some additional property or postulate about the physical world for this to work; the evolutionary perspective leaves this gap unexplained.

  15. Pingback: A Theorist’s Apology | Theory, Evolution, and Games Group
