Personification and pseudoscience

If you study the philosophy of science — and sometimes even if you just study science — then at some point you might get the urge to figure out what you mean when you say ‘science’. Can you distinguish the scientific from the non-scientific or the pseudoscientific? If you can then how? Does science have a defining method? If it does, then does following the steps of that method guarantee science, or are some cases just rhetorical performances? If you cannot distinguish science and pseudoscience then why do some fields seem clearly scientific and others clearly non-scientific? If you believe that these questions have simple answers then I would wager that you have not thought carefully enough about them.

Karl Popper did think very carefully about these questions, and in the process introduced the problem of demarcation:

The problem of finding a criterion which would enable us to distinguish between the empirical sciences on the one hand, and mathematics and logic as well as ‘metaphysical’ systems on the other

Popper believed that his falsification criterion solved (or was an important step toward solving) this problem. Unfortunately, because Popper discussed Freud and Marx as examples of the non-scientific, many now misread the demarcation problem as a quest to separate epistemologically justifiable science from epistemologically unjustifiable pseudoscience, with a moral judgement of Good attached to the former and Bad to the latter. Toward this goal, I don't think falsifiability makes much headway. On this (mis)reading, falsifiability excludes too many reasonable perspectives, like mathematics or even non-mathematical beliefs such as Gandy's variant of the Church-Turing thesis, while including much in-principle-testable pseudoscience. Hence, on this version of the demarcation problem, I would side with Feyerabend and argue that a clear separation between science and pseudoscience is impossible.

However, this does not mean that I don’t find certain traditions of thought to be pseudoscientific. In fact, I think there is a lot to be learned from thinking about features of pseudoscience. A particular question that struck me as interesting was: What makes people easily subscribe to pseudoscientific theories? Why are some kinds of pseudoscience so much easier or more tempting to believe than science? I think that answering these questions can teach us something not only about culture and the human mind, but also about how to do good science. Here, I will repost (with some expansions) my answer to this question.

Asking Amanda Palmer about cooperation in the public goods game

In the late summer of 2010 I was homeless: living in hostels, dorms, and on the couches of friends as I toured academic events, a total of two summer schools and four conferences over a two-and-a-half-month period. By early September I was ready to return to a sedentary life of research. I had just settled into my new office in the Department of Combinatorics & Optimization at the University of Waterloo and made myself comfortable with a manic 60-hour research spree. This meant no food or sleep: just sunflower seeds, Arizona iced tea, and leaving my desk only to use the washroom. I was committing all the inspiration of the summer to paper, finishing up old articles, and launching new projects.

A key ingredient to inducing insomnia and hushing hunger was the steady rhythm of music. In this case, it was a song that a burlesque dancer (also, good fencer and friend) had just introduced me to: “Runs in the Family” by Amanda Palmer. The computer pumped out the consistent staccato rhythm on loop as it ran my stochastic models in the background.

After finishing my research spree, I hunted down more of Palmer’s music and realized that I enjoyed all her work and the story behind her art. For two and a half years, I thought that the connection between the artist and my research would be confined to the motivational power of her music. Today, I watched her TED talk and realized the connection is much deeper:

As Amanda Palmer tells her story, she stresses the importance of human connection, intimacy, trust, fairness, and cooperation. All of these are key topics for an evolutionary game theorist. We study cooperation by looking at the prisoner's dilemma and the public goods game (Nowak, 2006). We look at fairness through the ultimatum and dictator games (Henrich et al., 2001). We explore trust with direct and indirect reciprocity (Axelrod, 1981; Nowak & Sigmund, 1998). We look at human connections and intimacy through games on graphs and social networks (Szabo & Fath, 2007).

As a musician who promotes music 'piracy' and crowdfunding, she raises a question that is a perfect candidate for being modeled as a variant of the public goods game. A musician that I enjoy is an amplifier of utility: if I give the musician ten dollars then I receive back a performance or record that provides me more than ten dollars worth of enjoyment. It used to be that you could force me to always pay before receiving music; this is equivalent to not allowing the agents to defect. However, with the ease of free access to music, the record industry can no longer forbid defection. I can choose to pay or not pay for my music, and the industry fears that people will always tend to the Nash equilibrium: defecting by not paying for music.

At the population level, this is a public goods game. Every fan of Amanda Palmer has a choice to either pay (cooperate) or not (defect) for her music. If we all pay then she can turn that money into music that all the fans can enjoy. However, if not enough of us pay then she has to go back to her day job as a human statue, which will decrease the time she can devote to music and result in less enjoyable songs, or at least less frequent releases of new songs. If none of us pay her then it becomes impossible for Palmer and her band to record and distribute their music, and none of the fans gain utility.
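The fan-base scenario above can be sketched as a one-shot public goods game. A minimal illustration in Python; the contribution size, multiplier, and number of fans are arbitrary choices of mine, not numbers from the post:

```python
def public_goods_payoffs(contributes, contribution=10.0, multiplier=3.0):
    """Payoff to each fan in a one-shot public goods game.

    contributes: list of booleans, one per fan (True = pay for the music).
    Contributions are pooled, multiplied (the musician amplifies utility),
    and the resulting benefit is shared among all fans alike.
    """
    pot = multiplier * contribution * sum(contributes)
    share = pot / len(contributes)
    # Everyone receives the share; contributors also pay their contribution.
    return [share - (contribution if c else 0.0) for c in contributes]

# Four fans: three pay, one free-rides.
print(public_goods_payoffs([True, True, True, False]))
# [12.5, 12.5, 12.5, 22.5] -- the free-rider does best individually,
# yet universal paying (20.0 each) beats universal free-riding (0.0 each).
```

As long as the multiplier is smaller than the number of fans, each individual fan is better off not paying, even though everyone paying beats no one paying.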

The record industry believes in homo economicus and concludes that the population will converge to all-defection. The industry fears that, left to their own devices, no fans will choose to pay for music. For the highly inviscid environment of detached, mass-produced pop music, I would not be surprised if this were true.
The record industry has come up with only one mechanism to overcome this: punishment. If I do not pay (that is, if I defect) then an external agent will punish me, reducing my net utility below what it would have been had I simply paid for the music. Fehr & Gächter (2000) showed that this is one way to establish cooperation. If the industry can produce a proper punishment scheme then it can make people pay for music. However, as evolutionary game theorists, we know that there are many other mechanisms for promoting cooperation in the public goods game. Amanda Palmer realizes this, too, and closes her talk with:

I think people have been obsessed with the wrong question, which is: “how do we make people pay for music?” What if we started asking: “how do we let people pay for music?”
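The punishment mechanism of Fehr & Gächter (2000) discussed above can be bolted onto the same public goods setup as a fine levied on defectors, at a small cost to the punishers. All numbers here are illustrative, not from their experiments:

```python
def payoffs_with_punishment(contributes, contribution=10.0, multiplier=3.0,
                            fine=15.0, punish_cost=2.0):
    """Public goods payoffs when every cooperator fines every defector.

    Each defector pays `fine` per punishing cooperator; each cooperator
    pays `punish_cost` per defector punished. All values are illustrative.
    """
    n = len(contributes)
    share = multiplier * contribution * sum(contributes) / n
    n_coop = sum(contributes)
    n_defect = n - n_coop
    return [share - contribution - punish_cost * n_defect if c
            else share - fine * n_coop
            for c in contributes]

# With a stiff enough fine, free-riding no longer pays.
print(payoffs_with_punishment([True, True, True, False]))
# [10.5, 10.5, 10.5, -22.5]
```

Once the fine per punisher exceeds the saved contribution, defecting is strictly worse than paying, which is exactly the industry's enforcement logic.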

As a modeler of cooperation, my work is in some ways that of an engineer: in order to publish, I need to design novel mechanisms that allow cooperation to emerge in a population. In this way, there is a much deeper connection between my research and the question Amanda Palmer asks. So I ask you: what are your favorite non-punishment mechanisms for allowing cooperation in the public goods game?

References

Axelrod, R. (1981). The emergence of cooperation among egoists. The American Political Science Review, 306-318.

Fehr, E., & Gächter, S. (2000). Cooperation and Punishment in Public Goods Experiments. American Economic Review, 90(4), 980-994. DOI: 10.1257/aer.90.4.980

Henrich, J., Boyd, R., Bowles, S., Camerer, C., Fehr, E., Gintis, H., & McElreath, R. (2001). In Search of Homo Economicus: Behavioral Experiments in 15 Small-Scale Societies. American Economic Review, 91(2), 73-78.

Nowak, M. A., & Sigmund, K. (1998). Evolution of indirect reciprocity by image scoring. Nature, 393(6685), 573-577.

Nowak, M. A. (2006). Five rules for the evolution of cooperation. Science, 314(5805), 1560-1563.

Szabo, G., & Fath, G. (2007). Evolutionary games on graphs. Physics Reports, 446(4-6), 97-216.

Theorists as connectors: from Poincaré to mathematical medicine

Henri Poincaré (29 April 1854 – 17 July 1912) is often considered the last universalist among mathematicians. He excelled in every part of theoretical physics, applied mathematics, and pure mathematics that existed in his time. Since him, top mathematicians, like scientists in general, have become increasingly specialized. Poincaré was part pure mathematician, part engineer; he advocated the importance of intuition over formality in mathematics. This put him at odds with the likes of Frege, Hilbert, and Russell, the men typically considered the grandfathers of theoretical computer science. As an aspiring CSTheorist, I think we are misplaced in tracing our intellectual roots to the surgical and sterile philosophies of logicism and formalism.

A computer scientist, at least one that embraces the algorithmic lens, is part scientist/engineer and part logician/mathematician. Although there is great technical merit to be had in proving that recently defined complexity class X is equal (or not) to a not-so-recently defined complexity class Y, my hope is that this is a means to a deeper understanding of something other than arbitrarily defined complexity classes. The mark of a great theorist is looking at a problem in science (or some other field) and figuring out how to properly frame it in such a way that the formal tools of mathematics at her disposal become applicable to the formulation. I think Scott Aaronson said it clearly (his emphasis):

A huge part of our job description as theoretical computer scientists is finding formal ways to model informally-specified notions! (What is it that Turing did when he defined Turing machines in the first place?) For that reason, I don’t think it’s possible to define “modeling questions” as outside the scope of TCS, without more-or-less eviscerating the subject.

As experimental science becomes more and more specialized, I believe it is increasingly important to have universal theorists, or connectors: people whose mission is to find connections between disparate fields and to frame different theories in common languages. That is my goal, and the only unifying theme I can detect among my often random-seeming interests. Of course, CSTheorists are not the only ones well prepared to take on the job of connectors. Jacob G. Scott (@CancerConnector on twitter, from whom I borrow 'connector') suggests that MD-trained scientists are also perfect connectors:

I completely agree with Jacob's emphasis on creativity and on seeing complex problems as a whole. Usually, I would be reluctant to accept the suggestion of connectors without formal mathematical training, but I am starting to see that such training is not essential for a universalist. My only experience with MD-trained scientists has been stimulating conversations with Gary An, a surgeon at the University of Chicago Medical Center and organizer of the Swarmfest2012 conference on complex adaptive systems. He brought a pragmatic view of computational modeling, and (more importantly) of the purpose of models, that I would never have found on my own. For me, computational models had been an exercise in formalism and a tool for building intuition on questions I could not tackle analytically. Gary stressed the importance of models as a means of communication, as a bridge between disciplines. He showed me that modelers are connectors.

As most scientists become more and more specialized, I think it is essential to have generalists and connectors to keep science unified. We cannot hope for a modern Poincaré, but we can aspire to theorists who specialize in drawing connections between fields and driving a cross-fertilization of tools. For me, following Turing's footsteps down the intuitive road of theoretical computer science and the algorithmic lens is the most satisfying path, but it is not the only one. Jacob shows that translating between distant disciplines like math/physics and biology/medicine, and engaging their researchers, can drive progress. Gary shows that pragmatism, and viewing modeling as a means of communication, are equally important. In some way, they (and many like them) act as 21st-century Poincarés, bringing the intuition of mathematics and computer modeling to bear on the engineering of modern medicine.

Introduction to evolving cooperation

Since 2009, I've had a yearly routine of guest lecturing for Tom's Cognitive Science course. I've structured the class by assigning videos to watch before the lecture so that I could build on them. Last year, I started posting the videos ahead of time on the blog: my 2009 TEDxMcGill talk, Robert Wright's evolution of compassion, and Howard Rheingold's new power of collaboration. However, instead of just presenting a link with very little commentary, this time I decided to write a transcript of my talk, seeded with references and links for the curious. The text is not an exact recreation of the words, but a pretty close fit that is meant to serve as a gentle introduction to the evolution of cooperation.

Earlier today, we heard about the social evolution of language and to a certain extent we heard about the emergence and evolution of zero. We even heard about our current economic affairs and such. I am going to talk about all of these things and, in particular, continue the evolutionary theme and talk about the evolution of cooperation in society and elsewhere.

We've all come across ideas of the greater good, altruism, cooperation, or the sacrifice of an individual for the good of others. In biology, we have an analogous concept: the willingness of certain individuals to give up some of their reproductive potential to increase the reproductive potential of others. In the social sciences, this paradoxical concept is grappled with by philosophers, sociologists, and political scientists; in the biological context, it is obviously an important question for biologists.

Now, the question becomes: how and why does this cooperation emerge? First, we will look at this from the biological point of view, connect it to the social sciences, and then to everything else.

Currently, biology is shaped by Darwin, Wallace, and their theory of evolution by natural selection; it is the unifying theme of the modern field. An interesting feature of this framework is that it is explicitly competitive: organisms compete against other organisms for their reproduction. So our question becomes: how does cooperation emerge in such a competitive environment?

We know this cooperation does emerge because it is essential for all the complexity we see. It is essential for single cells to come together into multicellular organisms, for the emergence of ant colonies, and even for human society. We want to study this and try to answer these questions. But how do you create a competitive environment in a mathematical framework? We borrow from game theory the idea of the Prisoner's dilemma, or, as I prefer to call it, the Knitter's dilemma. This is one of many possible models of a competitive environment, and the most used in the literature.

In the Knitter's dilemma there are two players. One of them is Alice. Alice produces yarn, but she doesn't have any needles, and she wants to knit a sweater. In the society she lives in, knitting sweaters is frowned upon, so she can't ask for needles publicly. Bob, on the other hand, produces needles but not yarn. He also wants to knit a sweater. So they decide: "okay, let's go out into the woods late at night, bring briefcases with our respective goods, and trade".

Alice has a dilemma: should she include yarn in her briefcase (indicated by the green briefcase in the figure below), or should she not (indicated by the red)? If Bob includes needles (first column) and Alice includes yarn, then she gets the benefit b of going home and knitting a sweater, but pays a small cost c for giving away some of her yarn. Alternatively, if Bob brings needles but she's tricky and doesn't bring her yarn, then she gets all the benefit of going home and making a sweater without paying even the marginal cost of giving away some of her yarn. If Bob brings an empty briefcase (second column) and Alice brings yarn as she said she would, then Alice pays a small cost in giving some of her yarn away without the benefit of being able to make a sweater. Alternatively, if she also brings an empty briefcase, then they have just met in the middle of the night, traded empty briefcases, and everybody goes back with no payoff.

Knitter's dilemma

It seems that no matter what Bob does, it is better for Alice to bring an empty briefcase (what we call defection) than to cooperate by bringing a full one. This sets up the basic idea of a competitive environment. The rational strategy, or the Nash equilibrium, of this game is for both individuals to defect and bring empty briefcases. However, from outside the game we can see that if they both do what they said they would and cooperate, then they are both better off. That is captured by the Pareto optimum in green.
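With benefit b > c > 0, the briefcase payoffs form a standard Prisoner's dilemma. A quick sketch (the values b = 3 and c = 1 are arbitrary) confirming that defection dominates while mutual cooperation is the Pareto optimum:

```python
def knitter_payoff(alice_brings_yarn, bob_brings_needles, b=3.0, c=1.0):
    """Alice's payoff in the Knitter's dilemma (benefit b > cost c > 0)."""
    payoff = 0.0
    if bob_brings_needles:
        payoff += b   # Alice can go home and knit a sweater
    if alice_brings_yarn:
        payoff -= c   # Alice gives away some of her yarn
    return payoff

# No matter what Bob does, Alice earns more by defecting...
assert knitter_payoff(False, True) > knitter_payoff(True, True)
assert knitter_payoff(False, False) > knitter_payoff(True, False)
# ...yet mutual cooperation beats the all-defect Nash equilibrium.
assert knitter_payoff(True, True) > knitter_payoff(False, False)
```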

Of course, as mentioned earlier by Andy, we cannot always expect people to be rational and make all these decisions based on reasoning. Evolutionary game theory instead models Alice and Bob as simple agents that have a fixed strategy which is passed down to their offspring. This is shown below by green circles for players that cooperate and red circles for ones that don't. In the standard model, we pair agents off randomly and they play the game: a green paired with a green is two cooperators, and both go home and make a sweater; two reds both go home empty-handed. After the interaction, we disseminate the agents through the population and let them reproduce according to how the game affected their reproductive potential: a higher chance to reproduce for those that received a large benefit, and a lower chance for those that only paid costs. We cycle this for a while, and what we observe is more and more red emerging; all the green cooperation starts to die away. This captures the basic intuition that a competitive environment breeds defection.
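The cycle described above can be written down as a toy deterministic replicator update (a simplification of the agent-based model in the talk; the background fitness and payoff values are my arbitrary choices):

```python
def replicator(p0=0.5, b=3.0, c=1.0, base=2.0, generations=50):
    """Deterministic replicator dynamics for the inviscid (well-mixed) game.

    p is the fraction of cooperators. A cooperator meets a cooperator with
    probability p, so its expected payoff is b*p - c; a defector's is b*p.
    `base` is a background fitness that keeps all fitnesses positive.
    """
    p = p0
    history = [p]
    for _ in range(generations):
        f_coop = base + b * p - c
        f_defect = base + b * p
        mean = p * f_coop + (1 - p) * f_defect
        p = p * f_coop / mean  # discrete-time replicator update
        history.append(p)
    return history

trajectory = replicator()
print(round(trajectory[-1], 4))  # 0.0 -- cooperation has essentially vanished
```

Since defectors always save the cost c, the cooperator fraction shrinks every generation, matching the "more and more red" outcome of the simulations.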

Of course, you and I can think of some ways to overcome this dilemma. Evolutionary game theorists have been there and thought of them too (Nowak, 2006). Consider three mechanisms for avoiding it. The first is Hamilton's (1964) kin selection: Bob is actually your uncle, so you're willing to work with him and will bring the yarn as you said you would. Alternatively, you've encountered Bob many times before and he has always included needles in his briefcase, so you are much more willing to work with him. This is Trivers' (1971) direct reciprocity, and you'll include your yarn. Finally, indirect reciprocity (Nowak & Sigmund, 1998): you've heard that Bob is an honest man who always brings needles as he says he will, so you are much more likely to cooperate with him.
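Direct reciprocity is easy to illustrate with the classic tit-for-tat strategy in a repeated Knitter's dilemma. A minimal sketch (payoff values and round count are arbitrary):

```python
def repeated_game(strategy_a, strategy_b, rounds=10, b=3.0, c=1.0):
    """Repeated Knitter's dilemma; each strategy sees the opponent's history."""
    history_a, history_b = [], []
    score_a = score_b = 0.0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # True = cooperate (bring a full briefcase)
        move_b = strategy_b(history_a)
        score_a += (b if move_b else 0.0) - (c if move_a else 0.0)
        score_b += (b if move_a else 0.0) - (c if move_b else 0.0)
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

tit_for_tat = lambda opp: True if not opp else opp[-1]  # start nice, then copy
always_defect = lambda opp: False

print(repeated_game(tit_for_tat, tit_for_tat))    # (20.0, 20.0): sustained cooperation
print(repeated_game(tit_for_tat, always_defect))  # (-1.0, 3.0): exploited only once
```

Two reciprocators sustain cooperation for all ten rounds, while an unconditional defector gains from tit-for-tat only on the first encounter.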

All these things seem pretty simple to us, but for an amoeba floating around in some soup (and microbes do play games; Lenski & Velicer, 2001) it's not quite as obvious that any of them are possible. Recognizing kin, remembering past interactions, or social constructs like reputation become very difficult. Hence, I look at more primitive mechanisms such as spatial/network reciprocity, or viscosity.

Earlier, Paul mentioned that if we have a turbulent environment it becomes very hard for us to live. Hence the idea that we introduce some structure into our environment. We populate all our agents inside a small grid where they can interact with their neighbors and reproduce into neighboring squares.

Alternatively, we can borrow an idea from the selfish-gene approach to evolution called the green-beard effect, introduced by Hamilton (1964) and popularized in Dawkins' The Selfish Gene. This is a gene that produces three phenotypic effects: (1) an arbitrary marker, which we call the beard (or in our case, circles and squares); (2) the ability to recognize this marker in others (not their strategy, just the marker); and (3) the ability to change your strategy depending on the marker you observe. You can cooperate or defect with other circles, and if you meet a square then you can also choose to cooperate or defect. This gives four possible strategies, drawn in the figure below. In human culture, cooperating with those that are like you (i.e. other circles) and defecting against squares is the idea of ethnocentrism. Here we bring back the social context a little bit by looking at this as a simple model of human evolution, too.
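The four strategies can be written down directly. A minimal sketch, using the conventional names from the ethnocentrism literature (e.g. Hammond & Axelrod, 2006):

```python
# The four strategies of the tag-based model: an agent's action depends only
# on whether the partner carries the same arbitrary marker (circle/square).
STRATEGIES = {
    "humanitarian": lambda same_tag: True,          # cooperate with everyone
    "ethnocentric": lambda same_tag: same_tag,      # cooperate only with own tag
    "traitorous":   lambda same_tag: not same_tag,  # cooperate only with other tags
    "selfish":      lambda same_tag: False,         # defect against everyone
}

def action(strategy, my_tag, partner_tag):
    """True means cooperate, False means defect."""
    return STRATEGIES[strategy](my_tag == partner_tag)

assert action("ethnocentric", "circle", "circle") is True
assert action("ethnocentric", "circle", "square") is False
```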

We can combine the two models by placing little circles and squares of different colors inside a grid and seeing how the population evolves with time. What we observe is that cooperation does emerge, but sadly it is an ethnocentric sort of cooperation. We can see this in the graph below, where the y-axis is the proportion of cooperative interactions: the higher up you are in the graph, the more cooperation is happening. In blue we have agents that can distinguish between circles and squares living inside a spatial lattice. In green we see a model with spatial structure but no cognitive ability to adjust behavior based on tags. In yellow is a model with tags but no spatial structure, and in red a model with neither. In these restricted models cooperation does not consistently emerge, although in the tags-without-space model (yellow) there is an occasional bifurcation toward cooperation, highlighted by the black circle and arrow.

Annotated reproduction of figure from Kaznatcheev & Shultz 2011

Proportion of cooperation versus evolutionary cycle for four different conditions. In blue is the standard H&A model; green preserves local child placement but eliminates tags; yellow has tags but no local child placement; red is both inviscid and tag-less. The lines are from averaging 30 simulations for each condition, and thickness represents standard error. Figure appeared in Kaznatcheev & Shultz (2011).

This gives us a suggestion of how evolution could have shaped the way we are today, and in particular the common trend of ethnocentrism in humans. The model doesn't propose ways to overcome ethnocentrism, but it does at least create cooperation among the scientists who use it, as seen in the number of different fields (represented in one of my favorite xkcd comics, below) that use these sorts of models.

Sociologists and political scientists use these models for peace building and conflict resolution (e.g. Hammond & Axelrod, 2006); in that setting, cooperation would be working towards peace, and defection could be sending a mortar round into the neighboring village. Psychologists look at games like the Prisoner's dilemma (or the Knitter's dilemma, in my case) and ask: why do humans tend to cooperate in certain settings, and can we find an evolutionary backing for that? In our running example, they do so by looking at ethnocentrism (e.g. Shultz, Hartshorn, & Kaznatcheev, 2009). Biologists look at how the first molecules came together to form life, or how single cells started to form multicellular organisms; these models even appear in cancer research (e.g. Axelrod, Axelrod, & Pienta, 2006) and in the spread of infectious diseases such as the swine flu (e.g. Read & Keeling, 2003). Chemists and physicists use them as models of self-organizing behavior and toy models of non-linear dynamics (e.g. Szabo & Fath, 2007). And it comes back to computer scientists and mathematicians, who use these models for studying network structure and distributed computing. That all these fields can be unified by the mathematical idea underlying evolution may seem strange, but it is possible because of the simple nature of evolution: evolution can occur in any system where information is copied in a noisy environment. Thus, all these fields can cooperate in working on the emergence and evolution of cooperation. Hopefully, starting with the scientists working together on these questions, we can get people around the world to cooperate as well.

References

Axelrod, R., Axelrod, D. E., & Pienta, K. J. (2006). Evolution of cooperation among tumor cells. Proceedings of the National Academy of Sciences, 103(36), 13474-13479.

Hamilton, W. D. (1964). The Genetical Evolution of Social Behavior. Journal of Theoretical Biology 7 (1): 1–16.

Hammond, R. A., & Axelrod, R. (2006). The evolution of ethnocentrism. Journal of Conflict Resolution, 50(6), 926-936.

Kaznatcheev, A., & Shultz, T.R. (2011). Ethnocentrism Maintains Cooperation, but Keeping One’s Children Close Fuels It. Proceedings of the 33rd Annual Conference of the Cognitive Science Society, 3174-3179

Lenski, R. E., & Velicer, G. J. (2001). Games microbes play. Selection, 1(1), 89-96.

Nowak, M. A. (2006). Five rules for the evolution of cooperation. Science, 314(5805), 1560-1563. PMID: 17158317

Nowak, M. A., & Sigmund, K. (1998). Evolution of indirect reciprocity by image scoring. Nature, 393(6685), 573-577.

Read, J. M., & Keeling, M. J. (2003). Disease evolution on networks: the role of contact structure. Proceedings of the Royal Society of London. Series B: Biological Sciences, 270(1516), 699-708.

Shultz, T. R., Hartshorn, M., & Kaznatcheev, A. (2009). Why is ethnocentrism more common than humanitarianism? In Proceedings of the 31st Annual Conference of the Cognitive Science Society (pp. 2100-2105).

Szabo, G., & Fath, G. (2007). Evolutionary games on graphs. Physics Reports, 446(4-6), 97-216.

Trivers, R. L. (1971). The evolution of reciprocal altruism. The Quarterly Review of Biology, 35-57.

Howard Rheingold on collaboration at TED

Howard Rheingold sows some of the seeds of evolutionary game theoretic thinking at TED.

Rheingold talks about the prisoner’s dilemma, ultimatum game, and tragedy of the commons (or public good), and how they can be modified to facilitate collaboration. Do you think evolutionary game theory can give us serious insights on human cooperation? Or is it just too simple of an approximation?

The evolution of compassion by Robert Wright at TED

An enjoyable video from Robert Wright about the evolution of compassion:

How would you model the evolution of compassion? How would your model differ from standard models of evolution of cooperation? Does a model of compassion necessarily need the agents to have model minds/emotions to feel compassion or can we address it purely operationally like cooperation?

Evolving past Bruce Bueno de Mesquita’s predictions at TED

Originally, today's post was going to be about "The evolution of compassion" by Robert Wright, but a September 3rd Economist article caught my attention. So we will save compassion for another week and instead quickly talk about predicting human behavior. The Economist discusses several academics and firms that specialize in using game theory to predict negotiation behavior and to quantify how it can be influenced. The article included a companion video highlighting Bruce Bueno de Mesquita, but I decided to include an older TED talk, "Bruce Bueno de Mesquita predicts Iran's future", instead:

I like the discussion of game theoretic predictions in the first part of the video. I want to concentrate on that part and side-step the specific application to Iranian politics at the end of the video.

Bruce Bueno de Mesquita clearly comes from a political science background, and unfortunately concentrates on very old game theory. However, we know from EGT that many of its classical assumptions are unjustified. In particular, Bueno de Mesquita says there are only two exceptions to rationality: 2-year-olds and schizophrenics. Accepting this means ignoring classical results such as those of Shafir & Tversky [ST92, TS92] and basically the whole field of neuroeconomics.

The speaker also tries to build a case for modeling by scaring us with factorials and ascribing magical powers to computers. He gives the example of being able to keep track of all possible interactions of 5 people in your head, but not of 10. However, as we know from basic complexity theory, problems whose difficulty grows factorially are not feasible for computers either. In particular, if Bueno de Mesquita's simple argument held, then for 20 people all the computing power on Earth would not be enough to run his simulations. Thus, the real reason for computational modeling (or game theory software, as the Economist article calls it) is not simply considering all interactions. You still need great ideas and beautiful models to cut the set of possible interactions down to one that can be tractably analyzed.
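The factorial growth at issue is easy to make concrete (using orderings as a crude proxy for "all possible interactions"; the one-interaction-per-nanosecond rate is my arbitrary assumption):

```python
import math

# Number of orderings of n actors: one crude proxy for "all possible interactions".
print(math.factorial(5))   # 120: feasible to reason through by hand
print(math.factorial(10))  # 3628800: hopeless for a human, trivial for a machine
print(math.factorial(20))  # 2432902008176640000: at one interaction per
                           # nanosecond, checking these takes roughly 77 years
```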

Of course, the way we actually overcome this ridiculous explosion in complexity is by using problem-specific knowledge to constrain the possible influences and interactions. My favorite graphic of the talk is the influence graph of the president of the United States; not because it is a new idea, but because understanding the function and design of such networks is central to modern EGT. A classic example is the work on selection amplifiers [LHN05], which showed the weaknesses of hierarchical structures such as the president's influence network for promoting good ideas.

Bueno de Mesquita's predictive accuracy is impressive (though the 90% figure he cites is misleading; note that simply predicting the opposite of the expert opinion would yield similar results), but his methods are outdated. If we want to take game-theoretic prediction to the next step, we must consider realistic bounds on the rationality of agents, reasonably simple feedback and update rules, and computationally tractable equilibrium concepts. All of these are more likely to come from work on questions like the evolution of cooperation than from think-tanks with bigger and bigger 'game theory software'.

I tried to keep my comments brief, so that you can enjoy your weekend and the video. Please leave your thoughts and analysis of the talk and article in the comments. Do you think evolutionary game theory can improve the predictive power of these classic models? Why or why not?

References

[LHN05] Lieberman, E., Hauert, C., and Nowak, M.A. (2005). Evolutionary dynamics on graphs. Nature, 433, 312-316.
[ST92] Shafir, E., & Tversky, A. (1992). Thinking through uncertainty: Nonconsequential reasoning and choice. Cognitive Psychology, 24, 449-474.
[TS92] Tversky, A., & Shafir, E. (1992). The disjunction effect in choice under uncertainty. Psychological Science, 3, 305-309.

Evolving cooperation at TEDxMcGill 2009

For me, one of the highlights of working on EGT has been the opportunity to present it to the general public. As a shameless plug and a way to start off video Saturdays, I decided to post a link to my TEDxMcGill talk on evolving cooperation. This was from the first TEDxMcGill in 2009:

I think this was the first time I used the Knitter's dilemma as an explanation of the Prisoner's dilemma, which has since become my favorite way of introducing the game. If you want a more technical overview of the graph you see on the second-to-last slide, it is discussed in Ref. [KS11]. The comic at the end of the slides is xkcd's "Purity".

More great TEDxMcGill talks are available here and I recommend checking all of them out. Check back next Saturday for another EGT-related video!

References

[KS11] A. Kaznatcheev and T.R. Shultz (2011). "Ethnocentrism maintains cooperation, but keeping one's children close fuels it." In Proceedings of the 33rd Annual Conference of the Cognitive Science Society, 3174-3179. [pdf]
