Local maxima and the fallacy of jumping to fixed-points

An economist and a computer scientist are walking through the University of Chicago campus discussing the efficient markets hypothesis. The computer scientist spots something on the pavement and exclaims: “Look at that $20 on the ground — seems we’ll be getting a free lunch today!”

The economist turns to her without looking down and replies: “Don’t be silly, that’s impossible. If there were a $20 bill there, it would have been picked up already.”

This is the fallacy of jumping to fixed-points.

In this post I want to discuss both the importance and power of local maxima, and the dangers of simply assuming that our system is at a local maximum.

So before we dismiss the economist’s remark with laughter, let’s look at a more convincing discussion of local maxima that falls prey to the same fallacy. I’ll pick on one of my favourite YouTubers, THUNK:

In his video, THUNK discusses a wide range of local maxima and contrasts them with the intended global maximum (or more desired local maxima). He first considers a Roomba vacuum cleaner that is trying to maximize the area that it cleans but gets stuck in the local maximum of his chair’s legs. And then he goes on to discuss similar cases in physics, chemistry, evolution, psychology, and culture.

It is a wonderful set of examples and a nice illustration of the power of fixed-points.

But given that I write so much about algorithmic biology, let’s focus on his discussion of evolution. THUNK describes evolution as follows:

Evolution is a sort of hill-climbing algorithm. One that has identified local maxima of survival and replication.

This is a common characterization of evolution. And it seems much less silly than the economist passing up $20. But it is still an example of the fallacy of jumping to fixed-points.

My goal in this post is to convince you that THUNK describing evolution and the economist passing up $20 are actually using the same kind of argument. Sometimes this is a very useful argument, but sometimes it is just a starting point that, without further elaboration, becomes a fallacy.

The Fallacy of Jumping to Fixed-Points

Let’s start with a discussion of the fallacy itself.

As with many fallacies, the fallacy of jumping to fixed-points starts from a good place: it takes a good idea but uses it uncritically. That good idea is fixed-point theorems.

Fixed-point theory is some of the most useful theory in mathematics. The typical result has the form: take some function F and, if it has certain general properties, then there exists at least one point x such that F(x) = x. The most famous example of this is Brouwer’s fixed-point theorem. In the case of Brouwer’s theorem, the conditions on F are that it is a continuous function and that it maps a compact convex set to itself. If those conditions are met then there exists an x in the domain of F such that F(x) = x.
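
To make the one-dimensional case concrete, here is a small Python sketch (my own toy example, not part of any result discussed here): a continuous f that maps [0, 1] into itself satisfies f(0) - 0 >= 0 and f(1) - 1 <= 0, so bisection on f(x) - x is guaranteed to home in on a fixed point.

```python
# A minimal sketch (toy example) of the one-dimensional case of Brouwer's
# theorem: a continuous f that maps [0, 1] into itself must have a fixed
# point, and bisection on g(x) = f(x) - x will locate one.
import math

def fixed_point(f, lo=0.0, hi=1.0, tol=1e-10):
    """Find x with f(x) = x, assuming f is continuous and maps [lo, hi] into itself."""
    g = lambda x: f(x) - x       # g(lo) >= 0 and g(hi) <= 0, so g changes sign
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:  # the sign change (and a fixed point) is in the left half
            hi = mid
        else:                    # otherwise it is in the right half
            lo = mid
    return 0.5 * (lo + hi)

# Example map: f(x) = cos(x)/2 + 0.3 sends [0, 1] into [0.57, 0.8], i.e. into itself.
print(fixed_point(lambda x: 0.5 * math.cos(x) + 0.3))  # roughly 0.687
```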

For a simple consequence of this theorem, consider any accurate map of a city. If you lay it on a table somewhere in that city then there will be a point on that map that corresponds exactly to “You are Here”. In other words, a point in the real world will coincide with the physical location of its representation on the map.

An obvious but fun fact to add to our repertoire of knowledge on maps and models.

Of course, just because that point exists, doesn’t mean that you will actually know which point it is. Otherwise, you’d never need a GPS.

The fallacy comes from taking this point for granted: from treating its mere existence as licence to use it as the starting point of further reasoning.

A less trivial and much more interesting consequence of Brouwer’s fixed-point theorem — actually, it is usually proved from the more powerful (but less famous) Kakutani fixed-point theorem — is Nash’s result on the existence of best-response equilibria (i.e. Nash equilibria) in games with a finite set of pure strategies.

But as the dynamic connotations of ‘best-response’ and ‘equilibrium’ suggest, this will make more sense if we focus on dynamic systems.

Dynamic systems and equilibrium

Often, F is not just some function but a dynamic process that maps a domain to itself. It is an update rule, or time-step, or generator of motion. In that case, the fixed-point is a point that remains at equilibrium and does not change through time.

A simple example of Brouwer’s fixed-point theorem for a dynamic system is stirring a cup of coffee. As you stir in your lump of sugar, there will always be one point on the surface of the liquid that isn’t rotating.

Now, this doesn’t mean that the dynamics will ‘lead to’ this point. In particular, you could have closed orbits or cycles around the fixed points. On these cycles, repeated application of F moves points around in a loop without ever reaching the fixed point.

In the case of game theory: think about Rock-Paper-Scissors. There is a fixed-point or Nash equilibrium strategy: the mixed strategy that says to play each of rock, paper, scissors with probability 1/3. But there are also lots of cycles where rock gets replaced by paper, which gets replaced by scissors, which gets replaced by rock.
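
To see the cycling concretely, here is a small sketch (the zero-sum Rock-Paper-Scissors payoff matrix and the Euler-discretized replicator dynamics are my own illustrative choices, not anything from THUNK or Nash): the uniform mix is a fixed point, but a population started anywhere else just keeps orbiting it.

```python
# A sketch of replicator dynamics on Rock-Paper-Scissors (the zero-sum payoffs
# and Euler step are illustrative choices). The uniform mix (1/3, 1/3, 1/3) is
# a fixed point, but other starting points orbit around it instead of
# converging to it.
import numpy as np

A = np.array([[ 0, -1,  1],    # Rock:     loses to Paper, beats Scissors
              [ 1,  0, -1],    # Paper:    beats Rock, loses to Scissors
              [-1,  1,  0]])   # Scissors: loses to Rock, beats Paper

def replicator_step(x, dt=0.01):
    """One Euler step of dx_i/dt = x_i * ((A x)_i - x . (A x))."""
    fitness = A @ x
    average = x @ fitness
    return x + dt * x * (fitness - average)

x = np.array([0.5, 0.3, 0.2])      # start away from the equilibrium
for t in range(1, 5001):
    x = replicator_step(x)
    if t % 1000 == 0:
        print(t, np.round(x, 3))   # the shares keep oscillating around 1/3 each,
                                   # never settling at the fixed point
```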

As such, we often care about particular kinds of dynamics F where we can guarantee that cycles don’t occur. In these cases, it can be helpful to rename our fixed-point from equilibrium to local maximum.

A classic example of this would be adaptive strong-selection weak-mutation dynamics on a finite static fitness landscape. In this case, we can — at most times — imagine our population as monomorphic and thus representable by a single point in genotype space. Evolution is then the update rule that at each ‘step’ picks a genotype y of higher fitness adjacent to the genotype x corresponding to our population and moves the population there. This dynamic will have fixed points at ‘local peaks’ of the fitness landscape: genotypes where no adjacent mutation is adaptive.
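
Here is a minimal sketch of that update rule (the i.i.d. uniform fitness values and the random choice among fitter neighbours are my own simplifying assumptions, not a specific model from the literature): an adaptive walk over binary genotypes that keeps fixing adaptive point mutations until it reaches a local peak.

```python
# A sketch of strong-selection weak-mutation dynamics: a monomorphic population
# is a single genotype, and each step moves it to a fitter one-mutant neighbour
# until no such neighbour exists. The i.i.d. uniform fitness values (a
# "house of cards" style landscape) are an illustrative assumption.
import random

random.seed(1)
n = 10
fitness = {}                      # genotype (tuple of bits) -> fitness, filled lazily

def f(g):
    if g not in fitness:
        fitness[g] = random.random()
    return fitness[g]

def neighbours(g):
    """All genotypes one point mutation away from g."""
    return [g[:i] + (1 - g[i],) + g[i + 1:] for i in range(len(g))]

g = tuple(random.randint(0, 1) for _ in range(n))    # initial population
steps = 0
while True:
    fitter = [h for h in neighbours(g) if f(h) > f(g)]
    if not fitter:
        break                     # fixed point: a local peak of the landscape
    g = random.choice(fitter)     # selection fixes one adaptive mutation
    steps += 1

print(steps, g, round(f(g), 3))   # a local peak, not necessarily the global maximum
```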

This was what THUNK was referencing with his description of evolution.

Arrow-Debreu model and the efficient market hypothesis

When I think of economics 101, I think of supply and demand. Supply and demand is a concept so embedded in the popular psyche that it can even serve as a good anchor for metaphors about evolution.

Let me briefly remind you of supply and demand.

The popular conception of it is a two-dimensional graph with the quantity of the product on the x-axis and the price on the y-axis. The demand curve proceeds from the top left of the graph towards the bottom right and represents how much product customers are willing to purchase at a given price. The supply curve proceeds from the bottom left to the top right and represents how much product producers are willing to supply at a given price. Where these two curves intersect, we have a fixed point at which the market has ‘solved’ the problem of supply and demand.
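
As a toy illustration of that intersection (the linear curves and their coefficients are made up for this sketch), solving demand(p) = supply(p) gives the fixed-point price at which the market clears.

```python
# A toy supply-and-demand example (the linear curves and coefficients are
# made up for illustration). The market-clearing price is the fixed point
# where the two curves intersect.
def demand(p):              # quantity customers want to buy at price p
    return 100 - 2 * p

def supply(p):              # quantity producers are willing to sell at price p
    return -20 + 4 * p

# demand(p) = supply(p)  =>  100 - 2p = -20 + 4p  =>  p* = 20, q* = 60
p_star = 120 / 6
print(p_star, demand(p_star), supply(p_star))   # 20.0 60.0 60.0
```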

But economics doesn’t stop at 101, and considers markets with many more dimensions. In this case, economists often turn to the Arrow-Debreu model, which generalizes the above intuitions to the case of a large number n of commodities with various, potentially co-varying, supply and demand functions.

Unsurprisingly — given the context of this blog post — the Arrow-Debreu model has a fixed-point theorem. Proving it is usually an application of the existence of Nash equilibria. The fixed points of the Arrow-Debreu model are competitive (or Walrasian) equilibria and correspond to a set of prices such that aggregate supply equals aggregate demand for every commodity in the economy.

Knowing that such a set of prices exists is very interesting, and a wonderful application of fixed-point theorems.

Believing that such an equilibrium (or something similar) is achieved in practice — at least when thinking about financial products — is the efficient-market hypothesis. It is usually associated with the Chicago school of economics. In such an equilibrium, there is no reliable opportunity for arbitrage and an investor cannot consistently outperform the market.

That is why the economist doesn’t believe that a $20 bill is just lying there on the ground.

PPAD and the economist’s mistake

There are many possible and reasonable criticisms of the efficient-market hypothesis. But given my interest in theoretical computer science, I want to offer the algorithmic critique.

Around 1994, Christos Papadimitriou introduced the PPAD complexity class. PPAD stands for Polynomial Parity Arguments on Directed graphs. He used this to argue that the existence proofs of certain fixed-point theorems like Brouwer’s cannot be transformed into efficient algorithms for finding the guaranteed fixed-points. Since then, we have come to believe that PPAD-complete problems are not solvable in polynomial time — although as with most cstheory conjectures (like P != NP), this remains open.

Chen & Deng (2006) then proved that finding a Nash equilibrium (even in a two-player game) is PPAD-complete. And in the years that followed, this was extended into results showing that it is PPAD-complete to find a competitive equilibrium in the Arrow-Debreu model (even with relatively simple preference functions; Chen et al., 2009), or even to approximate one (Deng & Du, 2008). Thus, it is reasonable to believe that a competitive equilibrium cannot — in general — be found in polynomial time. And since a market has a polynomial number of agents, each with polynomially bounded computational power, the market itself cannot — in general — find a competitive equilibrium.

This means that there can always be relatively easy-to-spot but constantly shifting opportunities for arbitrage. And you might in fact see a $20 bill just lying on the ground.

PLS and evolution

When Papadimitriou was defining PPAD, he was actually building on previous work of his that defined a similar complexity class: Polynomial Local Search (PLS; Johnson, Papadimitriou & Yannakakis, 1988). This class corresponds roughly to the difficulty of finding local maxima. Recently, I’ve used this to argue that on hard fitness landscapes evolution — no matter what dynamic it follows — won’t be able to find even a local fitness peak in polynomial time (Kaznatcheev, 2013; 2019). In other words, on hard fitness landscapes, an evolving population will be in perpetual maladaptive disequilibrium. There will always be nearby mutants that have higher fitness. For a more careful discussion of both PLS and my results, see my old post on the computational complexity of evolutionary equilibria.

But let us return to THUNK’s description of evolution.

When he says that evolution has identified a local maximum, he is saying that there isn’t a small mutation that will improve fitness. In the analogy to the economist, he is saying that there cannot be a $20 bill on the ground. Now, it might certainly be the case that there are no adaptive mutations available, but this is not something that we should simply take as following from the definition of evolution. That would be the fallacy of jumping to fixed-points. Instead, we should look at particular populations on particular fitness landscapes and see if any nearby advantageous mutations are available. This is the same as the economist not assuming the efficient-market hypothesis but instead looking down at the ground and checking whether the $20 bill is there or not.

After all, a free lunch is on the line.

References

Chen, X., Dai, D., Du, Y., & Teng, S. H. (2009). Settling the complexity of Arrow-Debreu equilibria in markets with additively separable utilities. 50th Annual IEEE Symposium on Foundations of Computer Science (FOCS): pp. 273-282.

Chen, X., & Deng, X. (2006). Settling the complexity of two-player Nash equilibrium. 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS): pp. 261-272.

Deng, X., & Du, Y. (2008). The computation of approximate competitive equilibrium is PPAD-hard. Information Processing Letters, 108(6), 369-373.

Johnson, D. S., Papadimitriou, C. H., & Yannakakis, M. (1988). How easy is local search? Journal of Computer and System Sciences, 37(1), 79-100.

Kaznatcheev, A. (2013). Complexity of evolutionary equilibria in static fitness landscapes. arXiv preprint: 1308.5094.

Kaznatcheev, A. (2019). Computational complexity as an ultimate constraint on evolution. Genetics: genetics.119.302000.

Papadimitriou, C. H. (1994). On the complexity of the parity argument and other inefficient proofs of existence. Journal of Computer and System Sciences, 48(3): 498-532.

6 Responses to Local maxima and the fallacy of jumping to fixed-points

  1. Andrew Krause says:

    There are some interesting analogies here to problems where one neglects the dynamics a priori, but the actual dynamical system never reaches an equilibrium. We can sometimes dismiss cycles and other things in low-dimensional systems, but depending on the structure of the model, arbitrarily long chaotic transients (chimeras), actual asymptotic chaos, and other complex behaviours become more and more generic as the dimensionality of the system increases, and in particular we learn less about the system by analyzing fixed points. Offhand I don’t know if economic or evolutionary systems would necessarily have these behaviours, but I can imagine that some phenomena in these fields really are high-dimensional and not readily well-described by any notion of fixed point.

    • This is a good point, Andrew.

      I think that a particularly interesting question to think about with regard to this is: given some suitable language of models (say the NK-model for fitness landscapes, or convex, monotonic utilities for markets), can we cut up the space of models in a useful way so that on one side we have the region where fixed points are useful information for making sense of the system, and on the other side the region where they are not very informative? Then maybe we can start to reason empirically about which of these two subspaces has more probability in the ‘real world’.

      My usual issue with chaotic systems is whether we care about exact quantitative predictions (which become impossible given error in initial condition measurements) or qualitative features of the system (which, at least according to Mark Braverman, are often easy to arrive at). It is nice to have a notion of difficulty of prediction that can abstract over reasonable classes of things we might want to predict about the system.

  2. Tracy W says:

    What you don’t mention in this discussion of the efficient markets hypothesis is that from the 1900s to the 1960s there was a mass of empirical work indicating that stock prices followed a random walk (see, for example, econlib on the EMH). The EMH was formed as a way of explaining this empirical evidence.

    In other words, economists don’t assume that there are no $20 bills on the ground because of the EMH; they argue for the EMH because of all the time that people spent looking for $20 bills and not finding them reliably. It’s hardly a fallacy to use real world data to choose between theories.

    Of course since the EMH became the hypothesis to beat, there’s been a bunch of work finding anomalies and trying to disprove it. But any replacement theory for the EMH is also going to have to explain why stock prices follow something that’s so close to a random walk and why it’s so seldom that people pick up $20 bills.

    • We should totally use real world data to pick between models. And that is certainly not a fallacy. We just need to know that we have options to pick between. In other words, we need to first divide the world of conceivable markets into easy vs hard and then we can argue that hard markets don’t occur in nature (because it is inconsistent with our experience of not seeing $20 bills laying around). This would then be a useful bit of information in shaping future models.

      Where I think the fallacy of jumping to fixed points comes in is in not realizing that hard families of instances are a possibility within our models, and thus passing up this opportunity to improve our knowledge. I’m not accusing anybody of having committed this fallacy (well, maybe I am accusing THUNK? But in good fun), I am just using evolution and economics as two examples to illustrate this distinction (because these are the areas where I am aware of some fun work on this).

