Defining empathy, sympathy, and compassion

When discussing the evolution of cooperation, questions about empathy, sympathy, and compassion are often close to mind. In my computational work, I used to operationalize away these emotive concepts and replace them with a simple number like the proportion of cooperative interactions. This is all well and good if I want to confine myself to a behaviorist perspective, but my colleagues and I have been trying to move to a richer cognitive science viewpoint on cooperation. This has confronted me with the need to think seriously about empathy, sympathy, and compassion. In particular, Paul Bloom's article against empathy, and a Reddit discussion on the usefulness of empathy as a word, have reminded me that my understanding of the topic is not very clear or critical. As such, I was hoping to use this opportunity to write down definitions for these three concepts and at the end of the post sketch a brief idea of how to approach some of them with evolutionary modeling. My hope is that you, dear reader, would point out any confusion or disagreement that lingers.


The evolution of compassion by Robert Wright at TED

An enjoyable video from Robert Wright about the evolution of compassion:

How would you model the evolution of compassion? How would your model differ from standard models of the evolution of cooperation? Does a model of compassion necessarily need agents with modeled minds/emotions to feel compassion, or can we address it purely operationally, like cooperation?

Methods and morals for mathematical modeling

About a year ago, Vincent Cannataro emailed me asking about any resources that I might have on the philosophy and etiquette of mathematical modeling and inference. As regular readers of TheEGG know, this topic fascinates me. But as I was writing a reply to Vincent, I realized that I don’t have a single post that could serve as an entry point to my musings on the topic. Instead, I ended up sending him an annotated list of eleven links and a couple of book recommendations. As I scrambled for a post for this week, I realized that such an analytic linkdex should exist on TheEGG. So, in case others have interests similar to Vincent and me, I thought that it might be good to put together in one place some of the resources about metamodeling and related philosophy available on this blog.

This is not an exhaustive list, but it might still be relatively exhausting to read.

I’ve expanded slightly past the original 11 links (to 14) to highlight some more recent posts. The free association of the posts is structured slightly, with three sections: (1) classifying mathematical models, (2) pros and cons of computational models, and (3) ethics of models.


Systemic change, effective altruism and philanthropy

The topics of effective altruism and social (in)justice have weighed heavy on my mind for several years. I’ve even touched on the latter occasionally on TheEGG, but usually in specific domains closer to my expertise, such as in my post on the ethics of big data. Recently, I started reading more thoroughly about effective altruism. I had known about the movement[1] for some time, but had conflicting feelings towards it. My mind is still in disarray on the topic, but I thought I would share an analytic linkdex of some texts that have caught my attention. This is motivated by a hope to get some guidance from you, dear reader. Below are three videos, two articles, two book reviews and one paper alongside my summaries and comments. The methods range from philosophy to comedy and from critical theory to social psychology. I reach no conclusions.


A detailed update on readership for the first 200 posts

It is time — this is the 201st article on TheEGG — to get an update on readership since our 151st post and lament on why academics should blog. I apologize for this navel-gazing post, and it is probably of no interest to you unless you are really excited about blog statistics. I am writing this post largely for future reference and to celebrate this arbitrary milestone.

The statistics in this article are largely superficial proxies — what does a view even mean? — and are only notable because of how easy they are to track. These proxies should never be used to seriously judge academics, but I do think they can serve as a useful self-tracking tool. Making your blog’s statistics available publicly can be a useful comparison for other bloggers to get an idea of what sort of readership and posting habits are typical. In keeping with this rough and lighthearted comparison, according to Jeromy Anglim’s order-of-magnitude rules of thumb, in the year since the last update the blog has been popular in terms of RSS subscribers and relatively popular in terms of annual page views.

As before, I’ll start with the public self-metrics of the viewership graph for the last 6 and a half months:

Columns are views per week at TheEGG blog since the end of August 2014. The vertical lines separate months, and the black line is the average views per day for each month. The scale for weeks is on the left; it is different from the scale for the daily averages, which are labeled at each height.

If you’d like to know more, dear reader, then keep reading. Otherwise, I will see you on the next post!

Seeing edge effects in tumour histology

Some of the hardest parts of working towards the ideal of a theorist, at least for me, are: (1) making sure that I engage with problems that can be made interesting to the new domain I enter and not just me; (2) engaging with these problems in a way and using tools that can be made compelling and useful to the domain’s existing community, and (3) not being dismissive of and genuinely immersing myself in the background knowledge and achievements of the domain, at least around the problems I am engaging with. Ignoring these three points, especially the first, is one of the easiest ways to succumb to interdisciplinitis; a disease that catches me at times. For example, in one of the few references to TheEGG in the traditional academic literature, Karel Mulder writes on the danger of ignoring the second and third points:

Sometimes scientists are offering a helping hand to another discipline, which is all but a sign of compassion and charity… It is an expression of disdain for the poor colleagues that can use some superior brains.

The footnote that highlights an example of such “disciplinary arrogance/pride” is a choice quote from the introduction of my post on what theoretical computer science can offer biology. Mulder exposes my natural tendency toward condescension. Thus, to be a competent theorist, I need to actively work on inoculating myself against interdisciplinitis.

One of the best ways I know to learn humility is to work with great people from different backgrounds. In the domain of oncology, I found two such collaborators in Jacob Scott and David Basanta. Recently we updated our paper on edge effects in game theoretic dynamics of spatially structured tumours (Kaznatcheev et al., 2015); as always that link leads to the arXiv preprint, but this time — in a first for me — we have also posted the paper to the bioRxiv[1]. I’ve already blogged about the Basanta et al. (2008) work that inspired this and our new technical contribution[2], including the alternative interpretation of the transform of Ohtsuki & Nowak (2006) that we used along the way. So today I want to discuss some of the clinical and biological content of our paper; much of it was greatly expanded upon in this version of the paper. In the process, I want to reflect on the theorist’s challenge of learning the language and customs of a newly entered domain.


Cataloging a year of blogging: the philosophical turn

Passion and motivation are strange and confusing facets of being. Many things about them feel paradoxical. For example, I really enjoy writing, categorizing, and — obviously, if you’ve read many of the introductory paragraphs on TheEGG — blabbing on far too long about myself. So you’d expect that I would have been extremely motivated to write up this index of posts from the last year. Yet I procrastinated — although in a mildly structured way — on it for most of last week, and beat myself up all weekend trying to force words into this textbox. A rather unpleasant experience, although it did let me catch up on some Batman cartoons from my childhood. Since you’re reading this now, I’ve succeeded and received my hit of satisfaction, but the high variance in my motivation to write baffles me.

More fundamentally, there is the paradox of agency. It feels like my motivations and passions are aspects of my character, deeply personal and defining. Yet, it is naive to assume that they are determined by my ego; if I take a step back, I can see how my friends, colleagues, and even complete strangers push and pull the passions and motivations that push and pull me. For example, I feel like TheEGG largely reflects my deep-seated personal interests, but my thoughts do not come from me alone, they are shaped by my social milieu — or more dangerously by Pavlov’s buzzer of my stats page, each view and comment and +1 conditioning my tastes. Is the heavy presence of philosophical content because I am interested in philosophy, or am I interested in philosophy because that is what people want to read? That is the tension that bothers me, but it is clear that my more philosophical posts are much more popular than the practical ones. If we measure in terms of views then in 2014 new cancer-related posts accounted for only 4.7% of the traffic (with 15 posts), the more abstract cstheory perspective on evolution accounted for 6.6% (with 5 posts), while the posts I discuss below accounted for 57.4% (the missing chunk of unity went to 2014 views of posts from 2012 and 2013). Maybe this is part of the reason why there were 24 philosophical posts, compared to the 20 practical posts I highlighted in the first part of this catalog.

Of course, this example is a little artificial, since although readership statistics are a fun distraction, they are not particularly relevant, just easy to quantify. Seeing the influence of the ideas I read is much more difficult, although I think these exercises in categorization can help uncover it. In this post, I review the more philosophical posts from last year, breaking them down less autobiographically and more thematically: interfaces and useful delusions; philosophy of the Church-Turing thesis; limits of science and dangers of mathematics; and personal reflections on philosophy and science. Let me know if you can find some coherent set of influences.


Where is this empathy that we are so proud of?

This post will be a momentary departure from the usual theme of mathematical and computational modeling that has defined this blog. I apologize for discussing non-scientific news and pushing this medium outside its comfortable academic niche. This post has also been edited after its original posting to reflect a comment discussion.

The regular reader might notice that some of the models I try to examine concern themselves with topics like evolution of cooperation, compassion, and empathy. In fact, in my last lecture half of the questions I posed were about compassion:

2. What does Wright say compassion is from a biological point of view? Do you think this is a reasonable definition?
3. Can a rational agent be compassionate? Is understanding the indirect benefits (to yourself or your genes) that your actions produce essential for compassion?
5. Can compassion or cooperation evolve in an inviscid environment? What about a spatially structured one?
7. What is a zero-sum game? Does a non-zero-sum relationship guarantee that compassion will emerge?

As cynical as I am and as cold as the stereotype of a mathematician is, I have always functioned under the assumption that humans are empathetic, caring, compassionate beings. I did not ask if that is a reasonable axiom, but was instead more interested in questions like: what were the earliest compassionate ancestors of modern humans? But as I reflect on misdeeds in my own personal life (don’t worry, the blog won’t get that personal) and stories in the news… I can’t help but wonder: Is compassion just a story we tell ourselves to sleep at night? Arm-chair hypothesizing on the beauty of our own morality? Where is this empathy that we are so proud of?

Yesterday afternoon, a deranged 29-year-old man pushed 58-year-old Ki Suk Han onto the tracks in a Midtown, NYC subway station. There was not enough time for the train to come to a halt before Han was caught between it and the platform. The husband and father-of-one was pronounced dead on arrival at Roosevelt Hospital. Mayor Bloomberg commented: “It’s one of those great tragedies, it’s a blot on all of us, and if you could do anything to stop it, you would.”

What happened on the platform? One of the grisly developments of the story (and unfortunately the reason it has become big) is a ghastly photo by NY Post photographer R. Umar Abbasi. I have not reproduced the photo here out of respect for Han: it captures the moment before his death, as he tried to scramble back onto the platform with the train bearing down upon him. The Post ran the image as its cover, with the caption: “Doomed: pushed on the subway track, this man is about to die”.

As you can tell from this post, I am not a journalist and it is not my place to question the Post’s editorial decisions. I am also not writing to join the attacks on photographer Abbasi. Actually, I think that (with the available evidence) Twitter and the media are putting too much blame and guilt on him already. My concern is with everybody on the platform.

At that time on Monday, I had just returned to Montreal from a trip to New York City. In my time in the city (and my earlier years of visits and living there) I have almost never seen a platform that deserted. Empty. Abandoned. There is nobody trying to reach over the edge to help Han; there is nobody running towards him. All you can see in the photo is a crowd of people standing nearly 10 meters down the platform simply looking. Watching. Paralyzed.

Can we continue to think that we are a compassionate species? Can we claim that we feel empathy? Is our groupthink so overbearing that it can paralyze us from helping a man in distress? Would I have had the strength to run over and try to help? Even knowing the physics of the situation, low likelihood of my success, and high chance of personal injury? Or would I have just remained motionless? As if robbed of my agency. What about you?

As pointed out by Joe Fitzsimons, the situation is not that simple:

When something like this happens right out of the blue, it takes most people a considerable amount of time to parse what has happened. Not through a lack of empathy, but through a mixture of incomplete information (presumably most people only look *after* something has happened), a lack of context, and so on. People instinctively hesitate. That’s why we train those people we expect to have to deal with such situations (for example soldiers, police, fire fighters, etc.) so that they don’t lock-up. Without training or prior experience, people do lock up, simply not react at all, or make the situation worse by doing something stupid. It’s nothing to do with a lack of empathy.

In such a situation, the confusion and surprise can be paralyzing. I have definitely felt this before and in much less stressful situations. In such a context, it is meaningless to discuss empathy since the stress robs humans of their agency. It is this setting that is most likely. The question becomes: what can we do to prevent ourselves from such paralysis? Is it reasonable to define empathy on such short timescales? Is compassion meaningful without the agency to act on it? How would I feel in a situation where I was witnessing a horror that I could not will my body to prevent? Would I feel robbed of the sense of agency that defines my humanity? What about you?

I was not there. My questions should be secondary to the testimony of witnesses. I cannot reflect on (or even understand) the mental state of the people caught up in this awful situation. After the incident, it is reported that Dr. Laura Kaplan (who did not see the incident occur) and a security guard rushed over to try to administer CPR, but “there was no pulse, never, no reflexes.”

My heart goes out to the widow and child of Ki Suk Han. Although they will never read these words, I hope that there is still some cosmic comfort in them. In no way do I want to assign guilt to the bystanders on the platform; I am not in a position where I can glean understanding. The only clarity to me is that the assumption of humans as empathetic, compassionate, and caring is much more complicated than I had believed. Empathy needs to be operationalized and carefully studied. Its connection to agency, fear, and rationality needs to be carefully examined. We need to understand what makes humans seem so inhumane in such tragic situations.

PSYC 532 (2012): Evolutionary Game Theory and Cognition

This past Thursday was my fourth time guest lecturing for Tom’s Cognitive Science course. I owe Tom and the students a big thank you. I had a great time presenting, and hope I was able to share some of my enthusiasm for evolutionary games.

I modified the presentation (pdf slides) by combining lecture and discussion. Before the lecture the students read my evolving cooperation post and watched Robert Wright’s “Evolution of compassion”. Based on this, they prepared discussion points and answers to:

  1. What is kin selection? What is the green-beard effect or ethnocentrism? How do you think kin selection could be related to the green-beard effect or ethnocentrism?
  2. What does Wright say compassion is from a biological point of view? Do you think this is a reasonable definition?
  3. Can a rational agent be compassionate? Is understanding the indirect benefits (to yourself or your genes) that your actions produce essential for compassion?
  4. What simplifying assumptions does evolutionary game theory make when modeling agents? Are these assumptions reasonable?
  5. Can compassion or cooperation evolve in an inviscid environment? What about a spatially structured one?
  6. What is reciprocal altruism, direct reciprocity and indirect reciprocity?
  7. What is a zero-sum game? Does a non-zero-sum relationship guarantee that compassion will emerge?
  8. Is the Prisoner’s dilemma a zero-sum game? Can you have a competitive environment that is non-zero sum?

During the lecture, we would pause to discuss these questions. As always, the class was enthusiastic and shared many unique viewpoints on the topics. Unfortunately, I did not sufficiently reduce the material from last year and with the discussion we ran out of time. This means that we did not get to the ethnocentrism section of the slides. For students that want to understand that section, I recommend: Evolution of ethnocentrism in the Hammond and Axelrod model.

To the students: thank you for being a great audience and I encourage you to continue discussing the questions above in the comments of this post.

Introduction to evolving cooperation

Since 2009, I’ve had a yearly routine of guest lecturing for Tom’s Cognitive Science course. The way I’ve structured the class is by assigning videos to watch before the lecture so that I could build on them. Last year, I started posting the video ahead of time on the blog: my 2009 TEDxMcGill talk, Robert Wright’s evolution of compassion, and Howard Rheingold’s new power of collaboration. However, instead of just presenting a link with very little commentary, this time I decided to write a transcript of my talk, seeded with references and links for the curious. The text is not an exact recreation of the words, but a pretty close fit that is meant to serve as a gentle introduction to the evolution of cooperation.

Earlier today, we heard about the social evolution of language and to a certain extent we heard about the emergence and evolution of zero. We even heard about our current economic affairs and such. I am going to talk about all of these things and, in particular, continue the evolutionary theme and talk about the evolution of cooperation in society and elsewhere.

We’ve all come across ideas of the greater good, altruism, cooperation or the sacrifice of an individual for the good of others. In biology, we have an analogous concept where we look at the willingness of certain individuals to give up some of their reproductive potential to increase the reproductive potential of others. This paradoxical concept in the social sciences is grappled with by philosophers, sociologists, and political scientists. In the biological context, it is obviously an important question to biologists.

Now, the question really becomes: how and why does this cooperation emerge? First, we are going to look at this from the biological point of view, connect it to the social sciences, and then to everything else.

Currently, biology is really shaped by Darwin, Wallace and their theory of evolution by natural selection. It is a unifying theme and tie of modern biology. The interesting feature of biology is that it is an explicitly competitive framework: organisms compete against other organisms for their reproduction. Our question becomes: how does cooperation emerge in such a competitive environment?

We know this cooperation does emerge because it is essential for all the complexity we see. It is essential for single cells to come together into multi-cellular organisms, for the emergence of ant colonies, and even human society. We want to study this and try to answer these questions. But how do you create a competitive environment in a mathematical framework? We borrow from game theory the idea of Prisoner’s dilemma, or in my case I prefer the Knitter’s dilemma. This is one of many possible models of a competitive environment, and the most used in the literature.

In the Knitter’s dilemma there are two players. One of them is Alice. Alice produces yarn, but she doesn’t have any needles, and she wants to knit a sweater. In the society that she lives in, knitting sweaters is frowned upon, so she can’t go ask for needles publicly. Bob, on the other hand, produces needles but not yarn. He also wants to knit a sweater. So they decide: “okay, let’s go out into the woods late at night, bring briefcases with our respective goods, and trade”.

Alice has a dilemma: should she include yarn in her briefcase (indicated by the green briefcase in the figure below)? Or should she not (signified by the red)? If Bob includes needles (first column) and Alice includes yarn, then she gets the benefit b of going home and knitting a sweater, but she does pay a small cost c for giving away some of her yarn. Alternatively, if Bob brings needles, but she’s tricky and doesn’t bring her yarn, then she gets all the benefit of going home and making a sweater without paying even the marginal cost of giving away some of her yarn. If Bob brings an empty briefcase (second column) and Alice brings yarn as she said she would, then Alice pays a small cost in giving some of her yarn away without the benefit of being able to make a sweater. Alternatively, if she also brings an empty briefcase, then they just met in the middle of the night, traded empty briefcases, and everybody goes back with no payoff.

Knitter's dilemma

It seems that no matter what Bob does, it is better for Alice to bring an empty briefcase, what we call defection, than to cooperate by bringing a full briefcase. This sets up the basic idea of a competitive environment. The rational strategy, or the Nash equilibrium, for this game is for both individuals to defect and bring empty briefcases. However, from outside the game we can see that if they both do what they said they would and cooperate, then they are both better off. That is captured by the Pareto optimum in green.
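
To make the payoffs concrete, here is a minimal sketch in Python (my choice of language; the talk itself had no code) using hypothetical values b = 3 and c = 1. It checks that defection dominates for Alice no matter what Bob does, while mutual cooperation still beats mutual defection.

```python
# Knitter's dilemma payoffs from Alice's perspective.
# b and c are illustrative values for this sketch; any b > c > 0 works.
b, c = 3.0, 1.0

# payoff[(alice, bob)] -> Alice's payoff; 'C' = bring your goods, 'D' = empty briefcase.
payoff = {
    ('C', 'C'): b - c,  # both trade: a sweater minus some yarn given away
    ('D', 'C'): b,      # Alice tricks Bob: a sweater at no cost
    ('C', 'D'): -c,     # Alice gives away yarn and gets nothing
    ('D', 'D'): 0.0,    # two empty briefcases traded in the woods
}

# No matter what Bob does, Alice earns more by defecting (the Nash equilibrium) ...
for bob in ('C', 'D'):
    assert payoff[('D', bob)] > payoff[('C', bob)]

# ... yet both bringing their goods (the Pareto optimum) beats both defecting.
assert payoff[('C', 'C')] > payoff[('D', 'D')]
print("Defection dominates, but mutual cooperation is better for both players.")
```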

Of course, as mentioned earlier by Andy, we cannot always expect people to be rational and make all these decisions based on reasoning. Evolutionary game theory comes from the perspective of modeling Alice and Bob as simple agents that have a trait that is passed down to their offspring. This is shown below by green circles for players that cooperate and red circles for ones that don’t. In the standard model, we will pair them off randomly and they will play the game. So a green and a green is two cooperators; they both went home and made a sweater. Two reds both went empty-handed. After the interaction, we disseminate them through the population and let them reproduce according to how the game affected their potential: a higher chance to reproduce for those that received a large benefit, and a lower chance for those who only paid costs. We cycle this for a while, and what we observe is more and more red emerging. All the green cooperation starts to go away. This captures the basic intuition that a competitive environment breeds defection.
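
Here is a minimal sketch of that inviscid dynamic, under my own illustrative assumptions (Python, a population of 200, fifty cycles, payoff-proportional reproduction); it is not the exact model behind the talk, but running it shows the cooperators being displaced by defectors.

```python
import random

b, c = 3.0, 1.0          # same hypothetical benefit and cost as above
N, CYCLES = 200, 50      # illustrative population size and number of evolutionary cycles

def my_payoff(me, partner):
    """Knitter's dilemma payoff for the focal player ('C' cooperates, 'D' defects)."""
    return (b if partner == 'C' else 0.0) - (c if me == 'C' else 0.0)

population = ['C'] * (N // 2) + ['D'] * (N // 2)  # start half cooperators, half defectors

for cycle in range(CYCLES):
    random.shuffle(population)                     # pair agents off at random
    fitness = []
    for i in range(0, N, 2):
        a, p = population[i], population[i + 1]
        fitness += [my_payoff(a, p), my_payoff(p, a)]
    # Reproduce in proportion to payoff (shifted so every weight is positive).
    weights = [f + c + 1.0 for f in fitness]
    population = random.choices(population, weights=weights, k=N)

print("Cooperators remaining after", CYCLES, "cycles:", population.count('C'), "out of", N)
```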

Of course, you and I can think of some ways to overcome this dilemma. Evolutionary game theorists have also been there and thought of it (Nowak, 2006). They thought of three models of how to avoid it. The first is Hamilton’s (1964) kin selection: Bob’s actually your uncle, and you’re willing to work with him. You’ll bring the yarn as you said you would. Alternatively, you’ve encountered Bob many times before and he has always included needles in his briefcase. You are much more willing to work with him. This is Trivers’ (1971) direct reciprocity, and you’ll include your yarn. Finally, indirect reciprocity (Nowak & Sigmund, 1998): you’ve heard that Bob is an honest man that always brings needles as he says he will. So you are much more likely to cooperate with him.

All these things seem pretty simple to us, but if we’re an amoeba floating around in some soup (and microbes do play games; Lenski & Velicer, 2001) then it’s not quite as obvious that we can do any of these things. Recognizing kin, remembering past interactions, or maintaining social constructs like reputation all become very difficult. Hence, I look at the more primitive methods such as spatial/network reciprocity or viscosity.

Earlier, Paul mentioned that if we have a turbulent environment it becomes very hard for us to live. Hence the idea that we introduce some structure into our environment. We populate all our agents inside a small grid where they can interact with their neighbors and reproduce into neighboring squares.
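
As a hedged sketch of what this structure looks like in code (Python again; the grid size and four-neighbor choice are my own illustrative assumptions, not details from the talk), agents sit on a toroidal lattice, interact only with adjacent squares, and place offspring into empty neighboring squares.

```python
import random

SIZE = 20  # illustrative 20x20 toroidal grid; each cell holds 'C', 'D', or None (empty)
grid = [[random.choice(['C', 'D', None]) for _ in range(SIZE)] for _ in range(SIZE)]

def neighbors(x, y):
    """The four adjacent squares of (x, y), wrapping around the edges of the grid."""
    return [((x + dx) % SIZE, (y + dy) % SIZE)
            for dx, dy in ((-1, 0), (1, 0), (0, -1), (0, 1))]

def reproduce(x, y):
    """Copy the agent at (x, y) into a random empty neighboring square, if one exists."""
    if grid[y][x] is None:
        return
    empty = [(nx, ny) for nx, ny in neighbors(x, y) if grid[ny][nx] is None]
    if empty:
        nx, ny = random.choice(empty)
        grid[ny][nx] = grid[y][x]

# Interactions and payoffs would be restricted to these same local neighborhoods.
reproduce(0, 0)
```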

Alternatively, we can borrow an idea from the selfish gene approach to evolution called the green-beard effect. This was introduced by Hamilton (1964) and popularized in Dawkins’ The Selfish Gene. This is a gene that produces three phenotypical effects: (1) it produces an arbitrary marker which we call the beard (or in our case circles and squares), (2) it allows you to recognize this trait in others, not their strategy, just the trait/beard, and (3) it allows you to change your strategy depending on what trait/beard you observe. As before, you can cooperate or defect with other circles, or if you meet a square then you can also choose to cooperate or defect. You have four possible strategies that are drawn in the figure below. In human culture, cooperating with those that are like you (i.e. other circles) and defecting against those that are squares is the idea of ethnocentrism. Here we bring back the social context a little bit by looking at this as a simple model of human evolution, too.
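
One way to encode the four strategies is as a minimal Python sketch: each agent carries an arbitrary tag plus two bits, one for how it treats its own tag and one for how it treats the other tag. The strategy names follow common usage in this literature; the rest of the encoding is my own illustrative choice, not the code from our model.

```python
from collections import namedtuple

# An agent has an arbitrary marker (the "beard": 'circle' or 'square') and two strategy
# bits: cooperate with same-tag partners? cooperate with different-tag partners?
Agent = namedtuple('Agent', ['tag', 'coop_in', 'coop_out'])

STRATEGY_NAMES = {
    (True, True):   'humanitarian',  # cooperate with everybody
    (True, False):  'ethnocentric',  # cooperate only with your own tag
    (False, True):  'traitorous',    # cooperate only with the other tag
    (False, False): 'selfish',       # defect against everybody
}

def move(agent, partner):
    """Return 'C' or 'D' depending on whether the partner shares the agent's tag."""
    same_tag = agent.tag == partner.tag
    return 'C' if (agent.coop_in if same_tag else agent.coop_out) else 'D'

alice = Agent('circle', coop_in=True, coop_out=False)  # an ethnocentric circle
bob = Agent('square', coop_in=True, coop_out=True)     # a humanitarian square
print(STRATEGY_NAMES[(alice.coop_in, alice.coop_out)], 'plays', move(alice, bob), 'against a square')
```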

We can combine the two models by looking at little circles and squares of different colors inside a grid and seeing how the population evolves with time. The results we observe are that we do see cooperation emerge, but sadly it is an ethnocentric sort of cooperation. We can see it from the graph below, where the y-axis is the proportion of cooperative interactions: the higher up you are in the graph, the more cooperation is happening, so the better it is. In the blue model we have agents that can distinguish between circles and squares living inside a spatial lattice. In the green we see a model with spatial structure, but no cognitive ability to adjust based on tags. In the red and the yellow you can see models where there is no spatial structure, or there is no ability to recognize people based on whether they are a circle or a square. In these restricted models, cooperation does not consistently emerge, although in the tags-without-space model (in yellow) there is an occasional bifurcation to cooperation, highlighted by the black circle and arrow.

Annotated reproduction of figure from Kaznatcheev & Shultz 2011

Proportion of cooperation versus evolutionary cycle for four different conditions. In blue is the standard H&A model; green preserves local child placement but eliminates tags; yellow has tags but no local child placement; red is both inviscid and tag-less. The lines are from averaging 30 simulations for each condition, and thickness represents standard error. Figure appeared in Kaznatcheev & Shultz (2011).
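
For reference, the y-axis of this kind of figure is simple to compute: record every move made in a cycle and report the fraction that were cooperative. Below is my own minimal illustration in Python, not the actual analysis code behind Kaznatcheev & Shultz (2011).

```python
def proportion_of_cooperation(moves):
    """moves: one 'C' or 'D' per agent per pairwise interaction in a single cycle.

    Returns the fraction of cooperative moves, i.e. the quantity on the y-axis above.
    """
    return moves.count('C') / len(moves) if moves else 0.0

# Hypothetical tally from one evolutionary cycle: 7 cooperative moves out of 10.
print(proportion_of_cooperation(['C'] * 7 + ['D'] * 3))  # 0.7
```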

This gives us a suggestion of how evolution could have shaped the way we are today, and how evolution could have shaped the common trend of ethnocentrism in humans. The model doesn’t propose ways to overcome ethnocentrism, but one thing it does is at least create cooperation among scientists who use it. In particular, consider the number of different fields (represented in one of my favorite xkcd comics, below) that use these sorts of models.

Sociologists and political scientists use these models for peace building and conflict resolution (e.g. Hammond & Axelrod, 2006). In this case cooperation would be working towards peace, and defection could be sending a mortar round into the neighboring village. Psychologists look at games like the Prisoner’s dilemma (or the Knitter’s dilemma in my case) and say “well, humans tend to cooperate in certain settings. Why is that? Can we find an evolutionary backing for that?” In our running example, this means looking at ethnocentrism (e.g. Shultz, Hartshorn, & Kaznatcheev, 2009). Biologists look at how the first molecules came together to form life, or how single cells started to form multi-cellular organisms. These models even appear in cancer research (e.g. Axelrod, Axelrod, & Pienta, 2006) and in the spread of infectious diseases such as the swine flu (e.g. Read & Keeling, 2003). Even chemists and physicists use this as a model of self-organizing behavior and a toy model of non-linear dynamics (e.g. Szabo & Fath, 2007). Of course, it comes back to computer scientists and mathematicians, who use this for studying network structure and distributed computing. It might seem strange that all these fields can be unified by the mathematical idea underlying evolution, but this is possible because of the simple nature of evolution. Evolution can occur in any system where information is copied in a noisy environment. Thus, all these fields can cooperate together in working on finding answers to the emergence and evolution of cooperation. Hopefully, starting with the scientists working together on these questions, we can get people around the world to also cooperate.

References

Axelrod, R., Axelrod, D. E., & Pienta, K. J. (2006). Evolution of cooperation among tumor cells. Proceedings of the National Academy of Sciences, 103(36), 13474-13479.

Hamilton, W. D. (1964). The genetical evolution of social behavior. Journal of Theoretical Biology, 7(1), 1-16.

Hammond, R. A., & Axelrod, R. (2006). The evolution of ethnocentrism. Journal of Conflict Resolution, 50(6), 926-936.

Kaznatcheev, A., & Shultz, T. R. (2011). Ethnocentrism maintains cooperation, but keeping one’s children close fuels it. Proceedings of the 33rd Annual Conference of the Cognitive Science Society, 3174-3179.

Lenski, R. E., & Velicer, G. J. (2001). Games microbes play. Selection, 1(1), 89-96.

Nowak, M. A. (2006). Five rules for the evolution of cooperation. Science, 314(5805), 1560-1563.

Nowak, M. A., & Sigmund, K. (1998). Evolution of indirect reciprocity by image scoring. Nature, 393(6685), 573-577.

Read, J. M., & Keeling, M. J. (2003). Disease evolution on networks: the role of contact structure. Proceedings of the Royal Society of London. Series B: Biological Sciences, 270(1516), 699-708.

Shultz, T. R., Hartshorn, M., & Kaznatcheev, A. (2009). Why is ethnocentrism more common than humanitarianism? Proceedings of the 31st Annual Conference of the Cognitive Science Society, 2100-2105.

Szabo, G., & Fath, G. (2007). Evolutionary games on graphs. Physics Reports, 446(4-6), 97-216.

Trivers, R. L. (1971). The evolution of reciprocal altruism. Quarterly Review of Biology, 46(1), 35-57.