Presentation on evolutionary game theory and cognition

Last week Julian sent me an encouraging email:

did you know that you come up first in google for a search “evolutionary theory mcgill university”? You beat all the profs!

The specific link he was talking about was to my slides from the first time I gave a guest lecture for Tom’s Cognitive Science course in 2009. Today, I gave a similar lecture again; my 3rd year in a row giving a guest lecture for PSYC532. The slides are available here.

I am very happy Tom invited me. It is always fun to share my passion for EGT with students, and I like motivating the connections to cognition. As is often the case, some of the questions during the presentation got me thinking. A particular question I enjoyed was along the lines of:

If humanitarians cooperate with everyone, and ethnocentrics only cooperate with in-group, then how can we have lower levels of cooperation when the world is dominated by humanitarians?

This was in reference to a result I presented in [Kaz10] about the decrease in cooperative interactions as the cognitive cost of ethnocentrism increases. In particular, even though ethnocentrics are replaced by humanitarians in the population, we don’t see an increase in the proportion of cooperative interactions. In fact, the replacement triggers a decrease in that proportion.

I started with my usual answer of the humanitarians allowing more selfish agents to survive, but then realized a second important factor. When the ethnocentric agents are a minority, they no longer form giant same-tag clusters, and are thus much more likely to be defecting (since they are meeting agents of other tags) than cooperating. Thus, the sizable minority of ethnocentrics tend to defect and decrease the proportion of cooperation when living among a majority of humanitarians. On the other hand, when the ethnocentrics are in the majority they are in same-tag clumps and thus tend to cooperate. Of course, I should more closely analyze the simulation data to test this story.
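To make this story concrete, here is a minimal back-of-envelope sketch in Python. The strategy fractions and the in-group encounter probability below are made-up numbers, purely for illustration (they are not output of the simulations in [Kaz10]), and the sketch only counts directed donor-recipient interactions, ignoring the spatial dynamics that actually produce the clustering.

# Fraction of directed interactions in which the donor cooperates.
# A humanitarian donor cooperates with every partner; an ethnocentric donor
# cooperates only with same-tag partners; a selfish donor never cooperates.
# q_eth is the probability that an ethnocentric's partner shares its tag --
# this is where clustering enters: big same-tag clumps mean a high q_eth.
def coop_proportion(f_hum, f_eth, f_selfish, q_eth):
    assert abs(f_hum + f_eth + f_selfish - 1.0) < 1e-9
    return f_hum * 1.0 + f_eth * q_eth + f_selfish * 0.0

# Ethnocentric majority sitting in same-tag clusters: in-group meetings dominate.
print(coop_proportion(f_hum=0.2, f_eth=0.7, f_selfish=0.1, q_eth=0.9))    # 0.83

# Humanitarian majority with a dispersed ethnocentric minority (and a few more
# surviving selfish agents): the ethnocentrics mostly meet other tags and defect.
print(coop_proportion(f_hum=0.55, f_eth=0.3, f_selfish=0.15, q_eth=0.2))  # 0.61

Even though the second scenario has more humanitarians, the level of cooperation drops, which is the qualitative pattern the question was about.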

Another attentive student caught a mistake on slide 14 (page 19 of the pdf). I have left it for consistency, but hopefully other attentive readers will also notice it. Thank you for being sharp and catching an error I’ve had in my slides for 2 or 3 years now!

To all the students that listened and asked great questions: thank you! If you have any more queries please ask them in the comments. To everyone: how often do you get new insights from the questions you receive during your presentations?

References

[Kaz10] Kaznatcheev, A. (2010). The cognitive cost of ethnocentrism. In Proceedings of the 32nd Annual Conference of the Cognitive Science Society. [pdf]

Irreversible evolution

Nine for mortal men doomed to die.

In the last post I wrote about the evolution of complexity and Gould’s and McShea’s approaches to explaining the patterns of increasing complexity in evolution. That hardly exhausts the vast multitude of theories out there, but I’d like to put down some of my own thoughts on the matter, as immature as they may seem.

My intuition is that if we chose a random life-form out of all possible life-forms, a truly random one (without respect to history, the time it takes to evolve that life-form, and so on), then this randomly chosen life-form would be inordinately complex, with a vast array of structure and hierarchy. I believe this because there are simply many more ways to be alive if one is incredibly complex: there are more ways to arrange oneself. This intuition gives me a way to define an entropic background such that evolution is always tempted, whether along with or against fitness considerations, to relax to high entropy and become this highly complex form of life.

I think this idea is original, or at least I haven’t come across it elsewhere; but given my impoverished reading I might be very wrong, as wrong as when I thought I had discovered that natural selection can’t optimize mutation rates or evolvability (something well known to at least three groups of researchers before me, as I realized much later). If anyone knows someone who had this idea before, let me know!

I will try to describe how I think this process might come about.

Consider the space of all possible systems, \mathbf{S}. Any system S\in\mathbf{S} is made up of components chosen out of the large space \mathbf{C}. A system made out of n components I shall call an n-system. Let some members of \mathbf{S} have a special property called “viability”. We will worry later about what exactly viability means; for now, let us simply make it an extremely rare property, satisfied by only a tiny fraction, 0 < p_v \ll 1, of \mathbf{S}.

At the beginning of the process, let there be only 1-systems, or systems of one component. If \mathbf{C} is large enough, then somewhere in this space there is at least one viable component; call this special component C_v. Somehow, through sheer luck, the process stumbles on C_v. The process then dictates some operations that can happen to C_v. For now, let us consider three operations: addition of a new component, mutation of the existing component, and removal of an existing component. The goal is to understand how these three operations affect the evolution of the system while preserving viability.

Let us say that viability is a highly correlated attribute: systems close to a viable system are much more likely to be viable than a randomly chosen system. We could introduce three probabilities here, one for the probability of viability upon the addition of a new component, one upon the removal of an existing component, and one upon the mutation of an existing component. For now, however, since the process is at a 1-system, removal of components cannot preserve viability (as Gould astutely observed). Thus, we need only consider additions and mutations. For simplicity I will consider a single probability, p_e, the probability of viability upon an edit.
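To keep these definitions concrete, here is one way I could encode the pieces so far in Python. Everything in the sketch is an illustrative choice of mine, not part of the argument: components are integers drawn from a huge set standing in for \mathbf{C}, a system is a tuple of components, and viability is an arbitrary but fixed rare property obtained by hashing. Note that this naive choice makes viability uncorrelated across neighbouring systems (so p_e would simply equal p_v); building in the correlation assumed above is exactly the part that needs more care.

import hashlib
import random

C_SIZE = 10**6     # stands in for |C|, the number of possible components
P_V = 1e-4         # stands in for p_v, the fraction of systems that are viable

def viable(system):
    # An arbitrary but fixed rare property of a system (a tuple of components):
    # hash the system and call it viable if the hash lands in a small window.
    h = hashlib.sha256(repr(tuple(system)).encode()).hexdigest()
    return int(h, 16) / float(16**64) < P_V

def add_component(system):
    # Addition: append a component chosen from the huge space C.
    return system + (random.randrange(C_SIZE),)

def remove_component(system):
    # Removal: drop one existing component (only n choices for an n-system).
    i = random.randrange(len(system))
    return system[:i] + system[i + 1:]

def mutate_component(system):
    # Mutation: replace one existing component by another member of C.
    i = random.randrange(len(system))
    return system[:i] + (random.randrange(C_SIZE),) + system[i + 1:]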

It turns out that two parameters, |\mathbf{C}| (the size or cardinality of \mathbf{C}) and p_e, are critical to the evolution of the system. There are two types of processes that I’m interested in, although there are more than what I list below:

1) “Easy” processes: |\mathbf{C}| is small and p_e is large. There are only a few edits / additions we can make to the system, and most of them are viable.

2) “Hard” processes: |\mathbf{C}| is very large and p_e is small, but not too small. There are many possible edits and only a very small fraction of them are viable. However, p_e is not so small that none of these edits are viable. In fact, p_e is large enough that not only are some edits viable, but these viable edits can also be discovered in reasonable time and with a reasonable population size, once we add those ingredients to the model (not yet); see the quick calculation after this list.
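As a quick sanity check on what “small, but not too small” means: from any given system there are |\mathbf{C}| possible additions, so the expected number of viable additions is roughly p_e \cdot |\mathbf{C}|, and the hard regime needs p_e \cdot |\mathbf{C}| \gg 1 even while p_e itself is tiny. The numbers below are placeholders I picked purely for illustration.

# In the hard regime, viable edits are rare as a fraction of all edits but
# plentiful in absolute number, because |C| is enormous.
C_size = 10**9    # illustrative |C|
p_e = 10**-5      # illustrative probability that a single edit stays viable
print(p_e * C_size)   # ~10^4 expected viable additions: rare, yet findable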

The key point is that easy processes are reversible and hard processes are not. Most existing evolutionary theory has so far dealt with easy processes, which lead to a stable optimum driven only by environmental dictates of what is fittest, because the viable system space is strongly connected. Hard processes, on the other hand, have a viable system space that is connected, but very sparsely so. This model really is an extension of Gavrilets’ models, which is why I spent so much time reviewing them!

Now let’s see how a hard process proceeds. It’s actually very simple: C_v either mutates around to other viable 1-systems, or adds a component to become a viable 2-system. By the definition of a hard process, these two events are possible, but might take a bit of time. Let’s say we are at a 2-system, C_vC_2. Mutations of the 2-system might also hit a viable system. Sooner or later, we will hit a viable C_3C_2 as a mutation of C_vC_2. At this point, it’s really hard for C_3C_2 to become a 1-system: it needs a mutation back to C_vC_2 and then a loss to C_v. The difficulty is magnified if we hit C_iC_2 as C_3C_2 continues to mutate: C_i might be a mutational neighbour of C_3 but not of C_v. Due to the large size of the set \mathbf{C}, reverse mutation to C_v becomes virtually impossible. On the other hand, let’s say we have reached C_iC_j. Removing a component results in either C_i or C_j. The probability that at least one of them is viable is 1-(1-p_e)^2, which, for very small p_e, is still small. Thus, while growth in size is possible, because a system can grow into many, many different things, reduction in size is much more difficult, because one can only reduce to a limited number of things. Since most things are not viable, reduction is much more likely to result in an unviable system. This isn’t to say that reduction never happens or is impossible, but overall there is a very strong trend upwards.

All this is very hand-wavy, and in fact a naive formalization of it doesn’t work, as I will show in the next post. But the main idea should be sound: reduction of components is very easy in the time right after the addition of a component (we can just lose the newly added component), but if no reduction happens for a while (say by chance), then mutations lock the number of components in. Since the mutation happened in a particular background of components, the viability property after mutation holds only with respect to that background. Changing that background through mutation or addition is occasionally okay, because there is a very large space of things that one can grow or mutate into, but all of the possible systems that one can reduce down to may be unviable. For an n-system, there are n possible reductions, but |\mathbf{C}| possible additions and (|\mathbf{C}|-1)\cdot n possible mutations. This line of reasoning holds for as long as |\mathbf{C}| \gg n. In fact, it holds until 1-(1-p_e)^n becomes large, at which point the probability that a system can lose a component and remain viable becomes significant.
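To see roughly where the argument stops, it helps to plug numbers into the counts above. The value of p_e below is again a placeholder of mine; the sketch just evaluates the formulas from the previous paragraph: n possible reductions versus |\mathbf{C}| additions and (|\mathbf{C}|-1)\cdot n mutations, and the probability 1-(1-p_e)^n that at least one reduction stays viable.

C_size = 10**9    # illustrative |C|
p_e = 0.01        # illustrative probability that a single edit is viable

for n in (2, 10, 100, 1000):
    reductions = n
    additions = C_size
    mutations = (C_size - 1) * n
    p_viable_reduction = 1 - (1 - p_e) ** n
    print(n, reductions, additions, mutations, round(p_viable_reduction, 3))

# For small n the chance that any reduction is viable is tiny (about 0.02 at
# n = 2), so the ratchet points upward; by n of order 1/p_e (a few hundred
# components here) 1-(1-p_e)^n approaches 1 and losing a component stops
# being a barrier.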

Phew. In the next post I shall try to tighten this argument.