Local maxima and the fallacy of jumping to fixed-points

An economist and a computer scientist are walking through the University of Chicago campus discussing the efficient markets hypothesis. The computer scientist spots something on the pavement and exclaims: “Look at that $20 on the ground — seems we’ll be getting a free lunch today!”

The economist turns to her without looking down and replies: “Don’t be silly, that’s impossible. If there was a $20 bill there then it would have been picked up already.”

This is the fallacy of jumping to fixed-points.

In this post I want to discuss both the importance and power of local maxima, and the dangers of simply assuming that our system is at a local maximum.

So before we dismiss the economist’s remark with laughter, let’s look at a more convincing discussion of local maxima that falls prey to the same fallacy. I’ll pick on one of my favourite YouTubers, THUNK:

In his video, THUNK discusses a wide range of local maxima and contrasts them with the intended global maximum (or more desired local maxima). He first considers a Roomba vacuum cleaner that is trying to maximize the area that it cleans but gets stuck in the local maximum of his chair’s legs. And then he goes on to discuss similar cases in physics, chemistry, evolution, psychology, and culture.

It is a wonderful set of examples and a nice illustration of the power of fixed-points.

But given that I write so much about algorithmic biology, let’s focus on his discussion of evolution. THUNK describes evolution as follows:

Evolution is a sort of hill-climbing algorithm. One that has identified local maxima of survival and replication.

This is a common characterization of evolution. And it seems much less silly than the economist passing up $20. But it is still an example of the fallacy of jumping to fixed-points.
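
To see what the hill-climbing picture actually commits us to, here is a minimal sketch in Python (the two-peaked fitness function, starting point, and step size are my own toy assumptions): a greedy climber started in the wrong basin settles on the lower peak and never finds the higher one.

```python
import random

def fitness(x):
    # A toy landscape: a local peak near x = -1 and a higher global peak near x = +1.
    return -(x**2 - 1)**2 + 0.5 * x

def hill_climb(x, step=0.05, tries=1000):
    for _ in range(tries):
        candidate = x + random.choice([-step, step])
        if fitness(candidate) > fitness(x):
            x = candidate  # accept only strictly uphill moves
    return x

random.seed(1)
peak = hill_climb(x=-1.5)
print(peak, fitness(peak))  # stalls near the lower peak at x ≈ -1; never crosses the valley
```

Whether the climber’s resting point tells us anything depends on how long it has run and which basin it started in; that is exactly the elaboration the fixed-point shortcut skips.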

My goal in this post is to convince you that THUNK’s description of evolution and the economist’s dismissal of the $20 bill rest on the same kind of argument. Sometimes this is a very useful argument, but sometimes it is just a starting point that, without further elaboration, becomes a fallacy.

Read more of this post

Supply and demand as driving forces behind biological evolution

Recently I was revisiting Xue et al. (2016) and Julian Xue’s thought on supply-driven evolution more generally. I’ve been fascinated by this work since Julian first told me about it. But only now did I realize the economic analogy that Julian is making. So I want to go through this Mutants as Economic Goods metaphor in a bit of detail. A sort of long-delayed follow-up to my post on evolution as a risk-averse investor (and another among many links between evolution and economics).

Let us start by viewing the evolving population as a market — focusing on the genetic variation in the population, in particular. From this view, each variant or mutant trait is a good. Natural selection is the demand. It prefers certain goods over others and ‘pays more’ for them in the currency of fitness. Mutation and the genotype-phenotype map that translates individual genetic changes into selected traits are the supply. Both demand and supply matter to the evolutionary economy. But as a field, we’ve put too much emphasis on the demand — survival of the fittest — and not enough emphasis on the supply — arrival of the fittest. This accusation of too much emphasis on demand has usually been raised against the adaptationist program.

The easiest justification for the demand focus of the adaptationist program has been one of model simplicity — similar to the complete market models in economics. If we assume isotropic mutations — i.e. the same unbiased chance of a trait mutating in any direction on the fitness landscape — then surely mutation isn’t an important force in evolution. As long as the right genetic variance is available, nature will be able to select it, and we can ignore further properties of the mutation operator. We can make a demand-based theory of evolution.
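
As a toy illustration of why supply can matter even when demand is indifferent, here is a minimal sketch in Python (the mutation rates are my own assumptions): two equally fit beneficial mutants, one of which is simply produced more often. Selection would ‘pay’ the same for either, but supply decides which one usually arrives first.

```python
import random

def first_arrival(p_a=0.009, p_b=0.001, trials=10000):
    """Two equally fit beneficial mutants; A is supplied nine times as often as B.
    Returns the fraction of runs in which A is the first to arrive."""
    a_first = 0
    for _ in range(trials):
        while True:
            r = random.random()
            if r < p_a:          # mutant A arrives this generation
                a_first += 1
                break
            elif r < p_a + p_b:  # mutant B arrives this generation
                break
    return a_first / trials

random.seed(1)
print(first_arrival())  # ≈ 0.9: which good 'wins' is set by supply, not by demand
```

Since both mutants are equally demanded, the nine-to-one outcome is pure arrival of the fittest.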

But if only life was so simple.
Read more of this post

A year in books: Neanderthals to the National Cancer Act to now

A tradition I started a couple of years ago is to read at least one non-fiction book per month and then to share my thoughts on the reading at the start of the following year. Last year, my dozen books were mostly on philosophy, psychology, and political economy. My brief comments on them ended up running to a lengthy 3.2 thousand words. This time the list has expanded to around 19 books. So I will divide the summaries into thematic sets. For the first theme, I will start with a subject that is new for my idle reading: cancer.

As a new researcher in mathematical oncology — and even though I am located in a cancer hospital — my experience with cancer has been mostly confined to the remote distance of replicator dynamics. So above all else these three books — Nelson’s (2013) Anarchy in the Organism, Mukherjee’s (2010) The Emperor of All Maladies, and Leaf’s (2014) The Truth in Small Doses — have provided me with insights into the personal experiences of the patient and doctor.

I hope that based on these reviews and the ones to follow, you can suggest more books for me to read in 2016. Better yet, maybe my comments will help you choose your next book. Much of what I read in 2015 came from suggestions made by my friends and readers, as well as articles, blogs, and reviews I’ve stumbled across.[1] In fact, each of these cancer books was picked for me by someone else.

If you’ve been to a restaurant with me then you know that I hate choosing between close-to-equivalent options. To avoid such discomfort, I outsourced the choosing of my February book to G+ and Nelson’s Anarchy in the Organism beat out Problems of the Self by a narrow margin to claim a spot on the reading list. As I was finishing up Nelson’s book — which I will review last in this post — David Basanta dropped off The Emperor of All Maladies on my desk. So I continued my reading on cancer. Finally, Leaf’s book came towards the end of the year based on a recommendation from Jacob Scott. It helped reinvigorate me after a summer away from the Moffitt Cancer Center.
Read more of this post

Double public goods games and acid-mediated tumor invasion

Although I’ve spent more time thinking about pairwise games, I’ve recently expanded my horizons to more serious considerations of public-goods games. They crop up frequently when we are modeling agents at the cellular level, since interactions are often indirect, through the production of some sort of common extra-cellular signal. Unlike the trivial-to-characterize two-strategy pairwise games, two-strategy public-goods games have a more sophisticated range of possible dynamics. However, through a nice trick using the properties of Bernstein polynomials, Archetti (2013, 2014) and Peña et al. (2014a) have greatly increased our understanding of public goods, and I will be borrowing heavily from their toolbag and extending it slightly in this post. I will discuss the obvious continuation of this work by considering more than two strategies and several public goods together. Unfortunately, the use of public-goods games here — and of evolutionary game theory (EGT) more generally — is not without controversy. This extension is not meant to address the controversy of spatial structure (although for progress on this, see Peña et al., 2014b), but the rigorous qualitative analysis that I’ll use (mostly in the next post on this project) will allow me to side-step much of the parameter-fitting issues.
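
For readers unfamiliar with the trick, here it is in the notation I find convenient (a sketch, not the full treatment in Archetti or Peña et al.). If a fraction x of the population cooperates and groups of size n are assembled at random, then the gain from switching to cooperation averages over a binomial number of cooperating co-players:

```latex
f_C(x) - f_D(x) \;=\; \sum_{k=0}^{n-1} \binom{n-1}{k}\, x^k (1-x)^{n-1-k}\, d_k ,
```

where d_k is the payoff difference between cooperating and defecting when k of your n - 1 co-players cooperate. The right-hand side is a Bernstein polynomial in x with coefficients d_k, so the sign pattern of the sequence (d_k) bounds the number of interior equilibria; this variation-diminishing property is what makes the qualitative analysis tractable.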

Of course, having two public goods games is only interesting if we couple them to each other. In this case, we will have one public good from which everyone benefits, but the second good is anti-correlated in the sense that only those that don’t contribute to the first can benefit from the second. A more general analysis of all possible ways to correlate two public-goods games might be a fun future direction, but at this point it is not clear what other correlations would be useful for modeling; at least in mathematical oncology.
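
As a minimal sketch of such a coupling (the payoff structure and parameters below are my own toy assumptions, not the exact model from our project), consider two types in groups of size n: type A produces a good shared by the whole group, while type B produces a good shared only among the Bs, i.e. only among those not contributing to the first good. The gain function is again a Bernstein polynomial:

```python
from math import comb

def gain(x, n=10, b1=5.0, c1=1.0, b2=2.0, c2=1.0):
    """Average of payoff(A) - payoff(B) when a fraction x of the population is
    type A and the n - 1 co-players are sampled binomially. Type A pays c1 to
    produce a good worth b1 split over the whole group; type B pays c2 to
    produce a good worth b2 split only among the B players."""
    def d(k):
        # With k A-type co-players: the group has k + 1 producers of good 1
        # if the focal is A, and n - k consumers of good 2 if the focal is B.
        return (b1 / n - c1) + c2 - b2 * (n - k) / n
    return sum(comb(n - 1, k) * x**k * (1 - x)**(n - 1 - k) * d(k)
               for k in range(n))

for x in (0.1, 0.5, 0.9):
    print(x, round(gain(x), 2))  # -1.32, -0.6, 0.12: bistable dynamics
```

With these toy numbers the gain function changes sign from negative to positive, so the interior equilibrium is unstable: below it the B-good takes over the population, above it the shared good does.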

By the way, if you are curious what mathematical oncology research looks like, it is often just scribbles like this emailed back and forth:

[Photo: a page of handwritten equations.]

I’ll use the rest of this post to guide you through the ideas behind the above sketch, and thus introduce you to the joint project that I am working on with Robert Vander Velde, David Basanta, and Jacob Scott. Treat this as a page from my open research notebook.

Read more of this post

A year in books: philosophy, psychology, and political economy

If you follow the Julian calendar — which I do when I need a two-week extension on overdue work — then today is the first day of 2015.

Happy Old New Year!

This also means that today is my last day to be timely with yet another year-in-review post; although I guess I could also celebrate the Lunar New Year on February 19th. Last year, I made a resolution to read one not-directly-work-related book a month, and only satisfied it in an amortized analysis; I am repeating the resolution this year. Since I only needed two posts to catalog the practical and philosophical articles on TheEGG, I will try something new with this one: a list and mini-review of the books I read last year to meet my resolution. I hope that based on this, you can suggest some books for me to read in 2015; or maybe my comments will help you choose your next book to read. I know that articles and blogs I’ve stumbled across have helped guide my selection. If you want to support TheEGG directly and help me select the books that I will read this year then consider donating something from TheEGG wishlist.

Read more of this post

Memes, compound strategies, and factoring the replicator equation

When you work with evolutionary game theory for a while, you end up accumulating an arsenal of cute tools and tricks. A lot of them are obvious once you’ve seen them, but you usually wouldn’t bother looking for them if you hadn’t known they existed. In particular, you become very good friends with the replicator equation. A trick that I find useful at times — and that has come up recently in my on-going project with Robert Vander Velde, David Basanta, and Jacob Scott — is nesting replicator dynamics (or the dual notion of factoring the replicator equation). I wanted to share a relatively general version of this trick with you, and provide an interpretation of it that is of interest to people — like me — who care about the interaction of evolution and learning. In particular, we will consider a world of evolving agents where each agent is complex enough to learn through reinforcement and pass its knowledge to its offspring. We will see that in this setting, the dynamics of the basic ideas — or memes — that the agents consider can be studied in a world of selfish memes independent of the agents that host them.
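
To give a taste of the trick before the fold, here is the factoring in the notation I’ll use (a sketch under the simplest assumptions, with the derivation saved for the post itself). Index composite strategies by pairs (i, j), an agent of type i hosting meme j, with frequency x_{ij} = p_i q_{j|i} and fitness f_{ij}. Then the replicator equation

```latex
\dot{x}_{ij} = x_{ij}\,(f_{ij} - \phi), \qquad \phi = \sum_{i,j} x_{ij} f_{ij}
```

factors into replicator dynamics for the agent types and nested replicator dynamics for the memes within each type:

```latex
\dot{p}_i = p_i\,(f_i - \phi), \qquad
\dot{q}_{j|i} = q_{j|i}\,(f_{ij} - f_i), \qquad
f_i = \sum_j q_{j|i} f_{ij} .
```

Note that the inner meme dynamics never reference the global average fitness, which is what lets us study the memes independently of the agents that host them.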
Read more of this post

Evolution as a risk-averse investor

I don’t know about you, but most of my money is in my savings account and not in more volatile assets like property, bonds, or stocks. This is a consequence of either laziness to explore my options, or — the more comforting alternative — extreme risk-aversion. Although it would be nice to have a few thousand dollars more to my name, it would be devastating to have a few thousand dollars less. As such, if I were given a lottery where I had a 50% chance of losing $990 or a 50% chance of winning $1000 then I would probably choose not to play, even though there is an expected gain of $5; I am risk-averse, and the extra variance of the bet versus the certainty of maintaining my current holdings is not worth $5 to me. Most investors are similarly risk-averse, although the degree of the expected-profit-to-variance trade-off differs between agents.
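
To see how Bernoulli’s resolution (introduced below) turns down this positive-expected-value bet, here is a quick back-of-the-envelope check in Python; the logarithmic utility and the $2000 starting wealth are my own illustrative assumptions:

```python
from math import log, exp

wealth = 2000.0                          # assumed current holdings
lose, win = wealth - 990, wealth + 1000  # the 50/50 lottery from above

expected_value = 0.5 * lose + 0.5 * win
expected_utility = 0.5 * log(lose) + 0.5 * log(win)

print(expected_value - wealth)         # 5.0: the bet gains $5 in expectation
print(expected_utility > log(wealth))  # False: a log-utility agent declines it
print(round(exp(expected_utility)))    # 1741: certainty equivalent of the bet
```

Under logarithmic utility the gamble is worth only about $1741 of certain wealth, so declining it is the rational choice for such an agent despite the positive expected dollar value.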

Daniel Bernoulli (8 February 1700 – 17 March 1782) was one of the mathematicians in the famous Bernoulli family of Basel, Switzerland, and a contemporary and friend of Euler and Goldbach. He is probably most famous for Bernoulli’s principle in hydrodynamics, which his hyper-competitive father Johann tried to claim credit for by publishing a book that he pre-dated by ten years. One of Daniel’s most productive times was working alongside Euler and Goldbach in the golden days (1724-1732) of the St. Petersburg Academy. It was in Russia that he developed his solution to the St. Petersburg paradox by introducing risk-aversion, and made his contribution to probability, finance, and — as we will see — evolution.
Read more of this post

Liquidity hoarding and systemic failure in the ecology of banks

As you might have guessed from my recent posts, I am cautious in trying to use mathematics to build insilications for predicting, profiting from, or controlling financial markets. However, I realize the wealth of data available on financial networks and interactions (compared to similar resources in ecology, for example) and the myriad of interesting questions about both economics and humans (and their institutions) more generally that understanding finance can answer. As such, I am more than happy to look at heuristics and other toy models in order to learn about financial systems. I am particularly interested in understanding the interplay between individual versus systemic risk because of analogies to social dilemmas in evolutionary game theory (and the related discussions of individual vs. inclusive vs. group fitness) and recently developed connections with modeling in ecology.

Three-month Libor-overnight Interest Swap based on data from Bloomberg and figure 1 of Domanski & Turner (2011). The vertical line marks 15 September 2008 — the day Lehman Brothers filed for bankruptcy.

A particularly interesting phenomenon to understand is the sudden liquidity freeze during the recent financial crisis — interbank lending beyond very short maturities virtually disappeared, three-month Libor (a key benchmark for interest rates on interbank loans) skyrocketed, and the world banking system ground to a halt. The proximate cause for this phase transition was the bankruptcy of Lehman Brothers — the fourth largest investment bank in the US — at 1:45 am on 15 September 2008, but the real culprit lay in the build-up of unchecked systemic risk (Ivashina & Scharfstein, 2010; Domanski & Turner, 2011; Gorton & Metrick, 2012). Since I am no economist, banker, or trader, the connections and simple mathematical models that Robert May has been advocating (e.g. May, Levin, & Sugihara (2008)) serve as my window into this foreign land. The idea of a good heuristic model is to cut away all non-essential features and try to capture the essence of the complicated phenomenon needed for our insight. In this case, we need to keep around an idealized version of banks, their loan network, some external assets with which to trigger an initial failure, and a way to represent confidence. The question then becomes: under what conditions is the initial failure contained to one or a few banks, and when does it paralyze or — without intervention — destroy the whole financial system?
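
To make the question concrete, here is a minimal cascade sketch in Python, in the spirit of the May-style heuristics (the network, capital, and exposure numbers are my own toy assumptions, not calibrated to any real banking system):

```python
import random

def cascade(n_banks=50, n_loans=200, capital=0.04, exposure=0.2, seed=0):
    """Wipe out one bank and count how many insolvencies follow when each
    failure passes its interbank losses evenly on to its creditors."""
    rng = random.Random(seed)
    lenders = {b: set() for b in range(n_banks)}  # borrower -> its creditors
    while sum(len(s) for s in lenders.values()) < n_loans:
        a, b = rng.randrange(n_banks), rng.randrange(n_banks)
        if a != b:
            lenders[b].add(a)
    net_worth = [capital] * n_banks
    failed, frontier = {0}, [0]  # an external shock destroys bank 0
    while frontier:
        bank = frontier.pop()
        if not lenders[bank]:
            continue
        loss = exposure / len(lenders[bank])  # default splits losses evenly
        for creditor in lenders[bank]:
            if creditor in failed:
                continue
            net_worth[creditor] -= loss
            if net_worth[creditor] <= 0:      # insolvent: joins the cascade
                failed.add(creditor)
                frontier.append(creditor)
    return len(failed)

print(cascade())  # number of banks that the single initial failure takes down
```

Varying the capital buffer and the density of the loan network in a sketch like this already hints at the phase transition the question asks about: below a threshold the failure stays local, above it the whole system unravels.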
Read more of this post

Cooperation through useful delusions: quasi-magical thinking and subjective utility

Economists that take bounded rationality seriously treat their research like a chess game and follow the reductive approach: start with all the pieces — a fully rational agent — and kill/capture/remove pieces until the game ends, i.e. see what sort of restrictions can be placed on the agents to deviate from rationality and better reflect human behavior. Sometimes these restrictions can be linked to evolution, but usually the models are independent of evolutionary arguments. In contrast, evolutionary game theory has traditionally played Go and concerned itself with the simplest agents that are only capable of behaving according to a fixed strategy specified by their genes — no learning, no reasoning, no built-in rationality. If egtheorists want to approximate human behavior then they have to play new stones and take a constructive approach: start with genetically predetermined agents and build them up to better reflect the richness and variety of human (or even other animal) behaviors (McNamara, 2013). I’ve always preferred Go over chess, and so I am partial to the constructive approach toward rationality. I like to start with replicator dynamics and work my way up, adding agency, perception and deception, ethnocentrism, or emotional profiles and conditional behavior.

Most recently, my colleagues and I have been interested in the relationship between evolution and learning, both individual and social. A key realization has been that evolution takes cues from an external reality, while learning is guided by a subjective utility, and there is no a priori reason for those two incentives to align. As such, we can have agents acting rationally on their genetically specified subjective perception of the objective game. To avoid making assumptions about how agents might deal with risk, we want them to know the probability that others will cooperate with them. However, this depends on the agent’s history and local environment, so each agent should learn these probabilities for itself. In our previous presentation of results we concentrated on the case where the agents were rational Bayesian learners, but we know that this is an assumption not justified by evolutionary models or observations of human behavior. Hence, in this post we will explore the possibility that agents can have learning peculiarities like quasi-magical thinking, and how these peculiarities can co-evolve with subjective utilities.
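
As a concrete sketch of the two kinds of learners (the Beta-prior bookkeeping is standard, but the specific quasi-magical update rule and its weight are my own illustrative choices, not the exact rule from our model):

```python
class BayesianLearner:
    """Tracks the probability that a partner cooperates with a Beta prior."""
    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha, self.beta = alpha, beta  # pseudo-counts of C and D observed

    def estimate(self):
        return self.alpha / (self.alpha + self.beta)

    def update(self, partner_cooperated):
        if partner_cooperated:
            self.alpha += 1
        else:
            self.beta += 1

class QuasiMagicalLearner(BayesianLearner):
    """Quasi-magical thinking: the agent also counts its own action as weak
    evidence about what others will do, as if its choice influenced theirs."""
    def __init__(self, w=0.5, **kwargs):
        super().__init__(**kwargs)
        self.w = w  # weight given to one's own action as 'evidence'

    def update(self, partner_cooperated, own_cooperated=True):
        if own_cooperated:
            self.alpha += self.w
        else:
            self.beta += self.w
        super().update(partner_cooperated)
```

A quasi-magical agent that keeps cooperating ends up more optimistic about others’ cooperation than the evidence alone warrants, and it is this useful delusion whose co-evolution with subjective utility the post explores.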
Read more of this post

Mathematics in finance and hiding lies in complexity

Sir Andrew Wiles

Mathematics has a deep and rich history, extending well beyond the 16th century start of the scientific revolution. Much like literature, mathematics has a timeless quality; although its trends wax and wane, no part of it becomes outdated or wrong. What Diophantus of Alexandria wrote on solving algebraic equations in the 3rd century was just as true in the 16th or 17th century as it is today. In fact, it was in 1637 in the margins of Diophantus’ Arithmetica that Pierre de Fermat scribbled the statement of his Last Theorem, which the margin was too narrow to contain[1]. In modern notation, it concerns probably the most famous of Diophantine equations, a^n + b^n = c^n, with the assertion that it has no solutions for n > 2 and a, b, c positive integers. A statement that almost anybody can understand, but one that is far from easy to prove or even approach[2].
Read more of this post