Replicator dynamics of cooperation and deception

In my last post, I mentioned that conditional behavior usually implies a transfer of information from one agent to another, and that conditional cooperation is therefore vulnerable to exploitation through misrepresentation (deception). Little did I know that an analytic treatment of that point had been published a couple of months before.

McNally & Jackson (2013), the same authors who used neural networks to study the social brain hypothesis, present a simple game-theoretic model to show that the existence of cooperation creates selection for tactical deception. As other commentators have pointed out, this is a rather intuitive conclusion; the real interest here lies in how the relationship is formalized and in whether the model maps onto reality in any convincing way. Interestingly, the target model is reminiscent of Artem’s perception and deception models, so it’s worth bringing them up for comparison; I’ll refer to them as Model 1 and Model 2.
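For readers who haven’t played with replicator dynamics before, here is a minimal sketch of the machinery in Python. The payoff matrix is purely illustrative (it is not the matrix from McNally & Jackson, 2013): I simply pit unconditional cooperators, defectors, and deceivers against each other under the standard replicator equation, where each strategy’s share grows in proportion to how far its payoff sits above the population average.

```python
import numpy as np

# Illustrative payoff matrix -- NOT the one from McNally & Jackson (2013).
# Rows/columns: 0 = unconditional cooperator, 1 = defector, 2 = deceiver.
A = np.array([
    [3.0, 0.0, 1.0],
    [5.0, 1.0, 1.0],
    [4.0, 1.0, 2.0],
])

def replicator_step(x, A, dt=0.01):
    """One Euler step of the replicator equation dx_i/dt = x_i (f_i - f_bar)."""
    f = A @ x               # expected payoff of each strategy
    f_bar = x @ f           # mean payoff in the population
    x = x + dt * x * (f - f_bar)
    return x / x.sum()      # renormalize to stay on the simplex

x = np.array([0.6, 0.3, 0.1])   # initial strategy frequencies
for _ in range(5000):
    x = replicator_step(x, A)
print(x)                        # long-run frequencies under these toy payoffs
```

With different (equally made-up) entries for how deceivers fare against conditional cooperation, the same loop will carry deceivers to fixation or extinction, which is the kind of question the paper’s model is built to answer analytically.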

Conditional cooperation and emotional profiles

I haven’t been delving into evolutionary game theory and agent-based modeling for very long, and yet I find that in that little time something quite eerie happens once I’m immersed in these models and simulations: I find myself oscillating between two diametrically opposed points of view. As I watch all of these little agents play their games using some all-too-simplistic strategy, I feel like a small God*. I watch cooperators cooperate, and defectors defect, oblivious to what’s in their best interest at the moment. Of course, in the end, my heart goes out to the cooperators, who unfortunately can’t understand that they are being exploited by the defectors. That is what pushes me to the other end of the spectrum of omniscience, and with a nudge of empathy I find myself trying to be a simpleton agent in my over-simplified world.

In that state of mind, I begin to wonder what information exists in the environment, in particular information about the agents I am going to play against. Suppose I’m able to access that information and use it to condition my move. Admittedly, that makes me a bit more complex than my original simpleton, and that complexity is likely to come at a cost, but I leave it to evolution to figure out whether the trade-off is worthwhile.
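To make that trade-off concrete, here is a toy calculation (all numbers hypothetical): a conditional strategy pays a flat cost c for the machinery needed to read a cue about its opponent, and the cue is only right with probability q. Whether conditioning pays then depends on c, on q, and on who else is in the population.

```python
# Sketch of the trade-off above: a conditional agent pays a complexity
# cost c to read a noisy cue about its opponent before choosing a move.
# Payoff values and parameters are hypothetical, chosen only to make
# the trade-off concrete.

R, S, T, P = 3, 0, 5, 1   # standard Prisoner's Dilemma payoffs

def payoffs(p_coop, q=0.9, c=0.25):
    """Expected payoffs against a population with a fraction p_coop of
    unconditional cooperators (the rest are defectors).
    q = probability the cue correctly identifies the opponent,
    c = cost of the extra machinery needed to read the cue."""
    allc = p_coop * R + (1 - p_coop) * S
    alld = p_coop * T + (1 - p_coop) * P
    # Conditional: cooperate when the cue says 'cooperator', else defect.
    vs_c = q * R + (1 - q) * T        # against a cooperator
    vs_d = q * P + (1 - q) * S        # against a defector
    cond = p_coop * vs_c + (1 - p_coop) * vs_d - c
    return allc, alld, cond

for p in (0.2, 0.5, 0.8):
    print(p, payoffs(p))
```

With these made-up numbers, comparing the conditional payoff to the blind cooperator’s shows that reading the cue is worth its cost whenever defectors are common enough, which is exactly the kind of verdict I’d rather leave to evolution.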

Cooperation and the evolution of intelligence

One of the puzzles of evolutionary anthropology is to understand how our brains got to grow so big. At first sight, the question seems like a no-brainer (pause for eye-roll): big brains make us smarter and more adaptable, and thus result in an obvious increase in fitness, right? The problem is that brains need calories, and lots of them. Though it accounts for only about 2% of your total body weight, your brain consumes about 20-25% of your energy intake. Furthermore, sitting behind the blood-brain barrier, the brain doesn’t have access to the same energy resources as the rest of your body, which is part of the reason why you can’t safely starve yourself thin (if it ever crossed your mind).
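To see just how lopsided that allocation is, here is a back-of-envelope calculation with round, purely illustrative numbers: a 70 kg adult with a 1.4 kg brain and a 2000 kcal/day intake, taking the low-end 20% figure for the brain’s share.

```latex
\frac{\text{brain}}{\text{rest of body}}
  = \frac{0.20 \times 2000~\text{kcal/day} \,/\, 1.4~\text{kg}}
         {0.80 \times 2000~\text{kcal/day} \,/\, 68.6~\text{kg}}
  \approx \frac{286~\text{kcal/kg/day}}{23~\text{kcal/kg/day}}
  \approx 12
```

Gram for gram, under these rough numbers, brain tissue burns roughly an order of magnitude more energy than the rest of the body.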

So maintaining a big brain requires time and resources. For us, the trade-off is obvious, but if you’re interested in human evolutionary history, you must keep in mind that our ancestors did not have access to chain food stores or high-fructose corn syrup, nor were they concerned with getting a college degree. They were dealing with a different set of trade-offs, and that set is what evolutionary anthropologists are after. What is it that our ancestors’ brains allowed them to do so well that it warranted such an unequal energy allocation?