Enriching evolutionary games with trust and trustworthiness

Fairly early in my course on Computational Psychology, I like to discuss Box’s (1979) famous aphorism about models: “All models are wrong, but some are useful.” Although Box was referring to statistical models, his comment on truth and utility applies equally well to computational models attempting to simulate complex empirical phenomena. I want my students to appreciate this disclaimer from the start because it avoids endless debate about whether a model is true. Once we agree to focus on utility, we can take a more relaxed and objective view of modeling, with appropriate humility in discussing our own models. Historical consideration of models, and theories as well, should provide a strong clue that replacement by better and more useful models (or theories) is inevitable, and indeed is a standard way for science to progress. In the rapid turnover of computational modeling, this means that the best one could hope for is to have the best (most useful) model for a while, before it is pushed aside or incorporated by a more comprehensive and often more abstract model. In his recent post on three types of mathematical models, Artem characterized such models as heuristic. It is worth adding that the most useful models are often those that best cover (simulate) the empirical phenomena of interest, bringing a model closer to what Artem called insilications.
Read more of this post


Software through the lens of evolutionary biology

My preferred job title is ‘theorist’, but that is often too ambiguous in casual and non-academic conversation, so I often settle for ‘computer scientist’. Unfortunately, it seems that the overwhelming majority of people equate computer scientists with programmers or some general ‘tech person’, forgetting M.R. Fellows’ rallying cry: “Computer science is not about machines, in the same way that astronomy is not about telescopes.” Although — like most theorists — I know how to program, the programming I do is nothing like what (I hear) goes on in industry. In particular, all of my code is relatively small, and with concentration, or maybe a single sheet of paper, I can usually keep the whole thing in my head. In fact, the only time I’ve worked in a large code base was developing extensions for MediaWiki during my first summer of college, to be used by some groups at the Canadian Light Source. Combined with the preceding semester of drawing UML diagrams and writing up req&spec documents, I was convinced that I would never be a software engineer. However, I did learn some valuable lessons: real-world projects are big and unwieldy, logistics have to be taken seriously, comments and documentation are your friends, and for a sufficiently large software project there is no single person who knows the whole code.

With that much unknown, it is not surprising that bugs abound. Even back in 2002, software bugs cost the US $59.5 billion annually, or 0.6% of GDP, and I imagine the cost has only gone up. If you count ultrafast extreme events or flash crashes of automated high-frequency traders as bugs, then some argue that part of our recent financial crises can be blamed on software errors (Johnson et al., 2013). To get a feel for the numbers, a big project like Mozilla Firefox can easily accumulate 2000 new bugs in a year (see figure at left). Yet most of these bugs are not particularly difficult, and don’t require major overhauls to fix. If even the most serious failures can be fixed by a 12-year-old, why not let evolution have a go?
Read more of this post

Liquidity hoarding and systemic failure in the ecology of banks

As you might have guessed from my recent posts, I am cautious in trying to use mathematics to build insilications for predicting, profiting from, or controlling financial markets. However, I recognize the wealth of data available on financial networks and interactions (compared to similar resources in ecology, for example), and the myriad of interesting questions about economics and about humans (and their institutions) more generally that understanding finance can answer. As such, I am more than happy to look at heuristics and other toy models in order to learn about financial systems. I am particularly interested in understanding the interplay between individual versus systemic risk because of analogies to social dilemmas in evolutionary game theory (and the related discussions of individual vs. inclusive vs. group fitness) and recently developed connections with modeling in ecology.

Three-month Libor-overnight Interest Swap based on data from Bloomberg and figure 1 of Domanski & Turner (2011). The vertical line marks 15 September 2008 — the day Lehman Brothers filed for bankruptcy.

A particularly interesting phenomenon to understand is the sudden liquidity freeze during the recent financial crisis — interbank lending beyond very short maturities virtually disappeared, three-month Libor (a key benchmark for interest rates on interbank loans) skyrocketed, and the world banking system ground to a halt. The proximate cause for this phase transition was the bankruptcy of Lehman Brothers — the fourth largest investment bank in the US — at 1:45 am on 15 September 2008, but the real culprit lay in the build-up of unchecked systemic risk (Ivashina & Scharfstein, 2010; Domanski & Turner, 2011; Gorton & Metrick, 2012). Since I am no economist, banker, or trader, the connections and simple mathematical models that Robert May has been advocating (e.g. May, Levin, & Sugihara (2008)) serve as my window into this foreign land. The idea of a good heuristic model is to cut away all non-essential features and try to capture the essence of the complicated phenomenon needed for our insight. In this case, we need to keep around an idealized version of banks, their loan network, some external assets with which to trigger an initial failure, and a way to represent confidence. The question then becomes: under what conditions is the initial failure contained to one or a few banks, and when does it paralyze or — without intervention — destroy the whole financial system?
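To make the flavour of such heuristic models concrete, here is a minimal sketch of a loss-propagation cascade on a random loan network. It is not the model from the papers cited above: the balance-sheet numbers, the network, and the shock are all assumptions chosen for illustration, and confidence is left out entirely.

```python
import random

def simulate_cascade(n_banks=50, n_loans=150, capital_ratio=0.05,
                     interbank_fraction=0.2, seed=1):
    """Toy interbank-contagion sketch; every parameter here is an assumption."""
    rng = random.Random(seed)
    assets = 100.0
    interbank = assets * interbank_fraction      # portion lent to other banks
    external = assets - interbank                # e.g. mortgages and bonds
    capital = assets * capital_ratio             # loss-absorbing buffer
    per_loan = interbank * n_banks / n_loans     # size of a single loan

    # Random loan network: creditors_of[b] lists (creditor, amount) exposed to bank b.
    creditors_of = {b: [] for b in range(n_banks)}
    for _ in range(n_loans):
        creditor, debtor = rng.sample(range(n_banks), 2)
        creditors_of[debtor].append((creditor, per_loan))

    losses = {b: 0.0 for b in range(n_banks)}
    losses[0] = external                         # shock: bank 0's external assets vanish
    failed, frontier = set(), [0]
    while frontier:
        nxt = []
        for bank in frontier:
            if bank in failed or losses[bank] < capital:
                continue
            failed.add(bank)                     # the bank defaults on its interbank borrowing
            for creditor, amount in creditors_of[bank]:
                if creditor not in failed:
                    losses[creditor] += amount   # creditors write off their loans
                    nxt.append(creditor)
        frontier = nxt
    return len(failed)

# How far the initial failure spreads depends on the size of the capital buffer.
for ratio in (0.02, 0.05, 0.10):
    print(ratio, simulate_cascade(capital_ratio=ratio))
```

Even with these made-up numbers the key qualitative behaviour shows up: when buffers are thin relative to a single interbank exposure, the failure of one bank cascades through most of the network, while a modestly thicker buffer confines it.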
Read more of this post

Cooperation through useful delusions: quasi-magical thinking and subjective utility

Economists that take bounded rationality seriously treat their research like a chess game and follow the reductive approach: start with all the pieces — a fully rational agent — and kill/capture/remove pieces until the game ends, i.e. see what sort of restrictions can be placed on the agents to deviate from rationality and better reflect human behavior. Sometimes these restrictions can be linked to evolution, but usually the models are independent of evolutionary arguments. In contrast, evolutionary game theory has traditionally played Go and concerned itself with the simplest agents that are only capable of behaving according to a fixed strategy specified by their genes — no learning, no reasoning, no built-in rationality. If egtheorists want to approximate human behavior then they have to play new stones and take a constructive approach: start with genetically predetermined agents and build them up to better reflect the richness and variety of human (or even other animal) behaviors (McNamara, 2013). I’ve always preferred Go over chess, and so I am partial to the constructive approach toward rationality. I like to start with replicator dynamics and work my way up, adding agency, perception and deception, ethnocentrism, or emotional profiles and general conditional behavior.

Most recently, my colleagues and I have been interested in the relationship between evolution and learning, both individual and social. A key realization has been that evolution takes cues from an external reality, while learning is guided by a subjective utility, and there is no a priori reason for those two incentives to align. As such, we can have agents acting rationally on their genetically specified subjective perception of the objective game. To avoid making assumptions about how agents might deal with risk, we want them to know the probability that others will cooperate with them. However, this probability depends on the agent’s history and local environment, so each agent should learn it for itself. In our previous presentation of results we concentrated on the case where the agents were rational Bayesian learners, but we know that this is an assumption not justified by evolutionary models or observations of human behavior. Hence, in this post we will explore the possibility that agents can have learning peculiarities like quasi-magical thinking, and how these peculiarities can co-evolve with subjective utilities.
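As a toy illustration of these ingredients (and only an illustration — this is not the model from our paper, and the particular form of the distortion below is invented for the example), consider an agent that keeps a Beta belief over the probability that its partners cooperate, next to a 'quasi-magical' variant that also counts its own cooperation as weak evidence that others cooperate:

```python
import random

class Learner:
    """Tracks a Beta(alpha, beta) belief about the probability that partners cooperate.

    `magic` is an invented stand-in for quasi-magical thinking: after cooperating
    itself, the agent also counts a fraction of that act as evidence of cooperation.
    """
    def __init__(self, magic=0.0):
        self.alpha, self.beta = 1.0, 1.0   # uniform prior over the cooperation probability
        self.magic = magic

    def p_cooperate(self):
        return self.alpha / (self.alpha + self.beta)

    def observe(self, my_move, partner_move):
        # Standard Beta-Bernoulli update from the partner's observed move.
        if partner_move == 'C':
            self.alpha += 1
        else:
            self.beta += 1
        # Quasi-magical distortion: my own cooperation nudges the belief upward.
        if my_move == 'C':
            self.alpha += self.magic

random.seed(0)
bayesian, magical = Learner(magic=0.0), Learner(magic=0.5)
for _ in range(200):
    partner = 'C' if random.random() < 0.4 else 'D'  # partners cooperate 40% of the time
    bayesian.observe('C', partner)
    magical.observe('C', partner)
print(round(bayesian.p_cooperate(), 2))  # close to the true 0.4
print(round(magical.p_cooperate(), 2))   # biased upward by the agent's own cooperation
```

The point of the sketch is only the separation of the two channels: an objective world that generates the partner's moves, and a subjective update rule that need not track it faithfully.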
Read more of this post

Mathematics in finance and hiding lies in complexity

Sir Andrew Wiles

Mathematics has a deep and rich history, extending well beyond the 16th century start of the scientific revolution. Much like literature, mathematics has a timeless quality; although its trends wax and wane, no part of it becomes outdated or wrong. What Diophantus of Alexandria wrote on solving algebraic equations in the 3rd century was as true in the 16th and 17th centuries as it is today. In fact, it was in 1637, in the margin of Diophantus’ Arithmetica, that Pierre de Fermat scribbled the statement of his Last Theorem, along with the claim of a proof that the margin was too narrow to contain[1]. In modern notation, it is probably the most famous Diophantine equation: a^n + b^n = c^n, with the assertion that it has no solutions for n > 2 and positive integers a, b, c. A statement that almost anybody can understand, but one that is far from easy to prove or even approach[2].
Read more of this post

Computational complexity of evolutionary stable strategies

John Maynard Smith and John Forbes Nash, Jr.

Yesterday, I shared a video of John Maynard Smith introducing evolutionary game theory (EGT) to the London Mathematical Society. I suggested that at its foundation, EGT was just like classical game theory, and based on equilibrium analysis — the evolutionary stable strategy (Maynard Smith & Price, 1973). Given a utility function U(\sigma,\sigma') that gives the expected payoff for mixed strategy \sigma interacting with mixed strategy \sigma', we say that \sigma^* is an evolutionary stable strategy (ESS) if:

  1. For all \sigma' \neq \sigma^* we have \quad U(\sigma^*,\sigma^*) \geq U(\sigma',\sigma^*), and
  2. If U(\sigma^*,\sigma^*) = U(\sigma',\sigma^*) then U(\sigma^*,\sigma') > U(\sigma',\sigma').

Or we can look at the contrapositive: a strategy \sigma' can invade a host strategy \sigma if (1) it has a better payoff against a population of \sigmas than \sigmas have against themselves, or (2) if the two do equally well against the host, then a small population of \sigma' benefits themselves at least as much as they benefit the host population. If some strategy \sigma^* has no invading strategy \sigma' then it is an evolutionary stable strategy.
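For concreteness, here is a small numerical sketch of these two conditions (my example, not from the post): the Hawk-Dove game with assumed payoffs V = 2 and C = 3, where the mixed strategy playing Hawk with probability V/C is the unique ESS.

```python
import numpy as np

# Hawk-Dove payoffs with assumed V = 2, C = 3; the row is the focal strategy.
payoff = np.array([[(2 - 3) / 2, 2.0],   # Hawk vs (Hawk, Dove)
                   [0.0,         1.0]])  # Dove vs (Hawk, Dove)

def utility(p, q, A=payoff):
    """Expected payoff U(p, q) of mixed strategy p against mixed strategy q."""
    return p @ A @ q

def is_ess(p, A=payoff, n_mutants=1000, tol=1e-9):
    """Numerically test the two ESS conditions against random mutant strategies."""
    rng = np.random.default_rng(0)
    for _ in range(n_mutants):
        q = rng.dirichlet(np.ones(len(p)))           # random mutant mixed strategy
        if np.allclose(q, p, atol=1e-3):
            continue                                 # too close to the host to distinguish
        u_pp, u_qp = utility(p, p, A), utility(q, p, A)
        if u_qp > u_pp + tol:                        # condition 1 violated
            return False
        if abs(u_qp - u_pp) <= tol:                  # neutral against the host, so check:
            if utility(p, q, A) <= utility(q, q, A) + tol:   # condition 2 violated
                return False
    return True

print(is_ess(np.array([2/3, 1/3])))  # mixed strategy with V/C Hawk: True
print(is_ess(np.array([1.0, 0.0])))  # pure Hawk: False, less aggressive mutants invade
```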

If you are an observant reader and familiar with classical game theory, then you probably noticed that the first condition of the direct definition is equivalent to the definition of a Nash equilibrium (NE). The second clause, joined with an “and”, means that every ESS is a NE, but not every NE is an ESS. However, that second condition seems innocuous enough; surely it can’t change the qualitative nature of the equilibrium concept?
Read more of this post

John Maynard Smith: games animals play

Although this blog has recently been focused on static fitness landscapes and the algorithmic lens, its URL and a big chunk of the content focus on evolutionary game theory (EGT). Heck, I even run a G+ community on the topic. If you are a biologist and asked me to define EGT, then I would say it is a general treatment of frequency-dependent selection. If you were a mathematician then I might say that it is classical game theory done backwards: instead of assuming fully rational decision makers, imagine simple agents whose behavior is determined by their genes, and instead of analyzing equilibrium, look at the dynamics. If you are a computer scientist, I might even say to look at chapter 29 of your Algorithmic Game Theory book: it’s just a special case of AGT. However, all of these answers would be historically inaccurate.
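To make 'look at the dynamics' concrete, here is a minimal sketch (mine, not from the post) of the replicator dynamics \dot{x}_i = x_i(f_i(x) - \bar{f}(x)) on a two-strategy Hawk-Dove game with assumed payoffs:

```python
import numpy as np

# Hawk-Dove payoff matrix with assumed V = 2, C = 3.
A = np.array([[-0.5, 2.0],    # Hawk vs (Hawk, Dove)
              [ 0.0, 1.0]])   # Dove vs (Hawk, Dove)

x = np.array([0.1, 0.9])      # start with 10% Hawks, 90% Doves
dt = 0.1
for _ in range(2000):
    fitness = A @ x                             # frequency-dependent payoffs
    mean_fitness = x @ fitness
    x = x + dt * x * (fitness - mean_fitness)   # Euler step of the replicator equation
print(np.round(x, 3))  # settles at the mixed rest point (2/3, 1/3) rather than a corner
```

No agent in this sketch reasons about anything; whatever equilibrium appears, it emerges from the dynamics of gene frequencies.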
Read more of this post

Baldwin effect and overcoming the rationality fetish

G.G. Simpson and J.M. Baldwin

As I’ve mentioned previously, one of the amazing features of the internet is that you can take almost any idea and find a community obsessed with it. Thus, it isn’t surprising that there is a prominent subculture that fetishizes rationality and Bayesian learning. They tend to accumulate around forums with promising titles like OvercomingBias and Less Wrong. Since these communities like to stay abreast of science, they often offer evolutionary justifications for why humans might be Bayesian learners and claim a “perfect Bayesian reasoner as a fixed point of Darwinian evolution”. This lets them side-step observed non-Bayesian behavior in humans by saying that we are evolving towards, but haven’t yet reached, this (potentially unreachable, but approximable) fixed point. Unfortunately, even the fixed-point argument overlooks critiques like the Simpson-Baldwin effect.

Introduced in 1896 by psychologist J.M. Baldwin, and then named and reconciled with the modern synthesis by leading paleontologist G.G. Simpson (1953), the Simpson-Baldwin effect posits that “[c]haracters individually acquired by members of a group of organisms may eventually, under the influence of selection, be reenforced or replaced by similar hereditary characters” (Simpson, 1953). More explicitly, it consists of a three-step process (some of which can occur in parallel or partially so):

  1. Organisms adapt to the environment individually.
  2. Genetic factors produce hereditary characteristics similar to the ones made available by individual adaptation.
  3. These hereditary traits are favoured by natural selection and spread in the population.

The overall result is that adaptations which were originally individual and non-hereditary become hereditary. For Baldwin (1896, 1902) and other early proponents (Morgan 1896; Osborn 1896, 1897) this was a way to reconcile Darwinian and strong Lamarckian evolution. With the latter model of evolution exorcised from the modern synthesis, Simpson’s restatement became a paradox: why do we observe the costly mechanism and associated errors of individual learning, if learning does not enhance individual fitness at equilibrium and will be replaced by simpler non-adaptive strategies? This encompasses more specific cases like Rogers’ paradox (Boyd & Richerson, 1985; Rogers, 1988) of social learning.
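The three steps above are easy to watch in a compressed simulation in the spirit of Hinton and Nowlan's (1987) classic model of the Baldwin effect. This is my own sketch with made-up parameters, not anything from Simpson or the post: genomes mix fixed and plastic loci, learning lets lucky genomes find a needle-in-a-haystack target, and selection then replaces plasticity with correctly fixed alleles.

```python
import random

random.seed(2)
L, POP, GENS, TRIALS = 10, 200, 60, 100
TARGET = [1] * L                               # the single 'good' phenotype

def random_genome():
    # Each locus is fixed at 0, fixed at 1, or plastic ('?') and settable by learning.
    return [random.choice([0, 1, '?']) for _ in range(L)]

def fitness(genome):
    """Plastic loci are guessed at random for up to TRIALS attempts; finding the
    target earlier gives higher fitness, so plasticity only pays off near the target."""
    if any(locus not in ('?', t) for locus, t in zip(genome, TARGET)):
        return 1.0                             # a wrongly fixed locus cannot be learned around
    plastic = genome.count('?')
    for trial in range(TRIALS):
        if all(random.random() < 0.5 for _ in range(plastic)):
            return 1.0 + 19.0 * (TRIALS - trial) / TRIALS
    return 1.0

population = [random_genome() for _ in range(POP)]
for gen in range(GENS):
    weights = [fitness(g) for g in population]
    parents = random.choices(population, weights=weights, k=POP)
    # Reproduction with a little mutation; crossover is omitted for brevity.
    population = [[random.choice([0, 1, '?']) if random.random() < 0.01 else locus
                   for locus in parent] for parent in parents]
    if gen % 20 == 0:
        correct = sum(g.count(1) for g in population) / (POP * L)
        plastic = sum(g.count('?') for g in population) / (POP * L)
        print(f"gen {gen:3d}: correctly fixed {correct:.2f}, plastic {plastic:.2f}")
```

In runs of this sketch, correctly fixed alleles spread once learning makes the target findable, while plastic loci decline only slowly: individual adaptation comes first (step 1), heredity catches up afterwards (steps 2 and 3).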
Read more of this post

Parallelism, products, and automata

I have an unreasonable love of alliterations, so I wish I knew a common word for automata that started with P. I even considered calling this post “Parallelism, Products, Producers”, but feared introducing non-standard nomenclature and confusion into Professor Prakash Panangaden’s class. Hence, the current title; not too bad? If we count “and automata” as an alliteration then I can claim to have introduced an example of parallelism as used in rhetoric right in the title. Unfortunately, the post is on parallelism in processing; sorry, I’m having too much fun.

Proving that the left half of a regular language is regular was the hardest question on the first assignment. It was also a challenge for me as a TA, because I couldn’t find many avenues for hints and advice that didn’t reveal the answer directly. From grading, however, I was impressed by the number of students that solved it, but a bit disappointed by those that Googled “half of a regular language” and clicked on the first result, like Ben Reichardt’s notes. Since there are solutions already online, I decided that I can give answers, too. Although Prakash provided a full solution set, I thought I would sketch a pedantic treatment of question 5.

One of the best tools from the theory of computing is showing that seemingly distant languages, or even models of computation, actually have the same or similar complexity. The regular languages serve as a microcosm in which to learn and hone these tools. When you first see the boringly serial finite state machines presented, a natural question is: what if I run two DFAs in parallel, is that still a DFA? Well, pedantically no, it doesn’t match the definitions, but in reality — yes, we just have to be more precise.
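To spell out the 'more precise' part without giving away anything about question 5: the standard fix is the product construction, which packs the states of the two machines into pairs. A rough sketch (the toy even-0s/even-1s DFAs below are my own, not from the assignment):

```python
from itertools import product

def product_dfa(dfa1, dfa2, accept_rule=all):
    """Run two DFAs 'in parallel' by building their product automaton.

    Each DFA is a tuple (states, alphabet, delta, start, accepting) where delta
    maps (state, symbol) -> state. With accept_rule=all the product recognises
    the intersection of the two languages; with accept_rule=any, the union.
    """
    states1, sigma, delta1, start1, acc1 = dfa1
    states2, sigma2, delta2, start2, acc2 = dfa2
    assert sigma == sigma2, "the construction assumes a shared alphabet"

    states = set(product(states1, states2))
    delta = {((p, q), a): (delta1[(p, a)], delta2[(q, a)])
             for (p, q) in states for a in sigma}
    accepting = {(p, q) for (p, q) in states
                 if accept_rule([p in acc1, q in acc2])}
    return states, sigma, delta, (start1, start2), accepting

def accepts(dfa, word):
    _, _, delta, state, accepting = dfa
    for a in word:
        state = delta[(state, a)]
    return state in accepting

# Toy example: words over {0, 1} with an even number of 0s (D1) and of 1s (D2).
sigma = {'0', '1'}
D1 = ({'e', 'o'}, sigma,
      {('e', '0'): 'o', ('o', '0'): 'e', ('e', '1'): 'e', ('o', '1'): 'o'},
      'e', {'e'})
D2 = ({'e', 'o'}, sigma,
      {('e', '1'): 'o', ('o', '1'): 'e', ('e', '0'): 'e', ('o', '0'): 'o'},
      'e', {'e'})
D = product_dfa(D1, D2)       # even number of 0s AND even number of 1s
print(accepts(D, '0110'))     # True
print(accepts(D, '010'))      # False
```

With accept_rule=any the same product recognises the union, which is one way to see that the regular languages are closed under both operations.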
Read more of this post

Programming language for biochemistry

Computer scientists that think of nature as literally computing often take the stance that biological organisms are nothing more than protein interaction networks. For example, this is the stance that Leslie Valiant (2009) takes when defining ecorithms: biology is just a specialization of computer science focused on evolvable circuits. User @exploderator summarized this realist computational view of biology on Reddit while answering what theoretical computer science can offer biology:

[B]iology is primarily chemo-computation, chemical information systems and computational hardware.
Theoretical comp sci is the only field that is actually specifically dedicated to studying the mathematics / logic of computation. Therefore, although biology is an incredibly hard programming problem (only a fool thinks nature simple), it is indeed more about programming and less about the hardware it’s running on.

Although it is an easy stance for a theoretician to take, it is a little bit more involved for a molecular biologist, chemist, or engineer. Yet for the last 30 years, even experimentalists have been captivated by this computational realism and the promise of engineering molecular devices (Drexler, 1981). Half a year ago, I even reviewed Bonnet et al. (2013) taking steps towards building transcriptors. They are focusing on the hardware side of biological computation and building a DNA analogue of the von Neumann architecture. However, what we really need is a level of abstraction: a chemical programming language that can be compiled into biocompatible reactions.
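As a cartoon of what 'programming with reactions' could look like at that level of abstraction (my own toy, not the transcriptor work or any proposed language), here are two tiny reaction 'programs' run on a bare-bones stochastic simulator: A → C together with B → C computes the sum of the input counts, while A + B → C computes their minimum.

```python
import random

def run(reactions, counts, seed=0, max_steps=100000):
    """Gillespie-style simulation with unit rate constants.

    reactions: list of (reactants, products), each a list of species names.
    counts:    dict mapping species name to an integer molecule count.
    """
    rng = random.Random(seed)
    counts = dict(counts)
    for _ in range(max_steps):
        # Propensity of a reaction is the product of its reactant counts.
        props = []
        for reactants, _ in reactions:
            prop = 1
            for sp in reactants:
                prop *= counts[sp]
            props.append(prop)
        total = sum(props)
        if total == 0:
            return counts                # nothing can fire: the 'program' has halted
        pick = rng.uniform(0, total)
        i = 0
        while pick > props[i]:           # roulette-wheel choice of the next reaction
            pick -= props[i]
            i += 1
        reactants, products = reactions[i]
        for sp in reactants:
            counts[sp] -= 1
        for sp in products:
            counts[sp] = counts.get(sp, 0) + 1
    return counts

# Two tiny 'programs' over input species A and B, output species C.
add = [(['A'], ['C']), (['B'], ['C'])]       # C ends at A + B
minimum = [(['A', 'B'], ['C'])]              # C ends at min(A, B)
print(run(add,     {'A': 7, 'B': 5, 'C': 0}))
print(run(minimum, {'A': 7, 'B': 5, 'C': 0}))
```

A real chemical programming language would, of course, have to compile constructs like these into actual biocompatible species and rate constants; the point of the sketch is only the separation between the 'program' (the reaction list) and the 'hardware' that executes it.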
Read more of this post