Evolution explains the fundamental constants of physics

While speaking at TEDxMcGill 2009, Jan Florjanczyk — friend, quantum information researcher, and former schoolmate of mine — provided one of the clearest characterizations of theoretical physics that I’ve had the pleasure of hearing:

Theoretical physics is about tweaking the knobs and dials and assumptions of the laws that govern the universe and then interpolating those laws back to examine how they affect our daily lives, or how they affect the universe that we observe, or even if they are consistent with each other.

I believe that this definition extends beyond physics to all theorists. We are passionate about playing with the stories that define the unobservable characters of our theoretical narratives and watching how our mental creations get along with each other and affect our observable world. With such a general definition of a theorist, it is not surprising that we often see such thinkers cross disciplinary lines. The most willing to wander outside their field are theoretical physicists; sometimes they have been extremely influential interdisciplinarians, and at other times they have suffered from bad cases of interdisciplinitis.


The physicists’ excursions have been so frequent that it almost seems like a hierarchy of ideas has developed — with physics and mathematics “on top”. Since I tend to think of myself as a mathematician (or theoretical computer scientist, but nobody puts us in comics), this view often tempts me, but deep down I realize that the flow of ideas is always bi-directional and that no serious field can be dominant over another. To help slow my descent into elitism, it is always important to have this realization reinforced. Thus, I was extremely excited when Jeremy Fox of Dynamic Ecology drew my attention to a recent paper by theoretical zoologist Andy Gardner (in collaboration with physicist J.P. Conlon) on how to use the Price equation of natural selection to model the evolution and adaptation of the entire universe.

Since you will need to know a little bit about the physics of black holes to proceed, I recommend watching Jan’s aforementioned talk. Pay special attention to the three types of black holes he defines, especially the Hubble sphere:

As you probably noticed, our universe isn’t boiling: the knobs and dials of the 30 or so parameters of the Standard Model of particle physics are exquisitely well-tuned (Tegmark et al., 2006). These values seem arbitrary, and yet even small modifications would produce a universe incapable of producing or sustaining the complexity we observe around us. Physicists’ default explanation of this serendipity is the weak anthropic principle: the only way we would be around to observe the universe and ask “why are the parameters so well tuned?” is if that universe was tuned to allow life. However, this argument is fundamentally unsettling because it lacks any mechanism.

Smolin (1992) addressed this discomfort by suggesting that the fundamental constants of nature were fine-tuned by a process of cosmological natural selection. The idea extends our view of the possible to a multiverse (not to be confused with Deutsch’s idea) inhabited by individual universes that differ in their fundamental constants and give birth to offspring universes via the formation of black holes. Universes that are better tuned to produce black holes sire more offspring (i.e. have a higher fitness) and thus become more common in the multiverse.

Although Smolin (2004) worked to formalize this evolutionary process, he could not achieve the ecological validity of Gardner & Conlon (2013). Since I suspect the authors’ paper is a bit tongue-in-cheek, I won’t go into the details of their mathematical model and will instead provide some broad strokes. They consider deterministically developing universes (with a stochastic treatment in the appendix), and a 1-to-1 mapping between the black holes in one generation of universes and the universes of the next generation. Since — as Jan stressed — we can never go inside black holes to measure their parameters, the authors allow for any degree of heritability between parent and offspring universes. At the same time, they consider an optimal-control problem whose objective function is to maximize the number of black holes. They then compare the Price dynamics of their evolutionary model to the optimal solution of the control problem and show a close correspondence. This correspondence implies that successive generations of universes will seem increasingly designed for the purpose of forming black holes (without the need for a designer, of course).
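To make the Price dynamics less abstract, here is a minimal sketch of the Price equation itself, applied to a toy population of “universes” whose fitness is their black-hole count. The numbers are illustrative assumptions of mine, not values from Gardner & Conlon’s paper:

```python
# Price-equation sketch: delta_z_bar = Cov(w, z)/w_bar + E[w * (z' - z)]/w_bar,
# where z is some tuned constant and w is fitness (number of black holes,
# i.e. offspring universes). Toy numbers below are hypothetical.

def price_equation(w, z, z_offspring):
    """Return the (selection, transmission) terms of the Price equation."""
    n = len(w)
    w_bar = sum(w) / n
    z_bar = sum(z) / n
    # selection term: covariance between fitness and trait
    cov_wz = sum((wi - w_bar) * (zi - z_bar) for wi, zi in zip(w, z)) / n
    # transmission term: fitness-weighted change from parent to offspring
    trans = sum(wi * (zpi - zi) for wi, zi, zpi in zip(w, z, z_offspring)) / n
    return cov_wz / w_bar, trans / w_bar

# three universes: black-hole counts w, trait values z, mean offspring traits
w = [1, 2, 3]
z = [0.1, 0.2, 0.3]
z_offspring = [0.1, 0.2, 0.3]   # perfect heritability: no transmission bias

sel, trans = price_equation(w, z, z_offspring)
print(sel + trans)   # total change in the mean trait over one generation
```

With perfect heritability the transmission term vanishes and the whole change in the mean trait comes from selection: universes with more black holes drag the multiverse average toward their parameter values.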

You might object: “I’m not a black hole, why is this relevant?” Well, it turns out that universes designed for producing black holes are also ones capable of sustaining the complexity needed for intelligent observers to emerge (Smolin, 2004). So, although you are not a black hole, the reason you can get excited about studying them is that you are an accidental side-effect of their evolution.

References

Gardner, A., & Conlon, J. (2013). Cosmological natural selection and the purpose of the universe. Complexity. DOI: 10.1002/cplx.21446

Smolin, L. (1992). Did the universe evolve?. Classical and Quantum Gravity, 9(1), 173.

Smolin, L. (2004). Cosmological natural selection as the explanation for the complexity of the universe. Physica A: Statistical Mechanics and its Applications, 340(4), 705-713.

Tegmark, M., Aguirre, A., Rees, M. J., & Wilczek, F. (2006). Dimensionless constants, cosmology, and other dark matters. Physical Review D, 73(2), 023505.


Introduction to evolving cooperation

Since 2009, I’ve had a yearly routine of guest lecturing for Tom’s Cognitive Science course. I’ve structured the class by assigning videos to watch before the lecture so that I could build on them. Last year, I started posting the videos ahead of time on the blog: my 2009 TEDxMcGill talk, Robert Wright’s evolution of compassion, and Howard Rheingold’s new power of collaboration. However, instead of just presenting a link with very little commentary, this time I decided to write a transcript of my talk, seeded with references and links for the curious. The text is not an exact recreation of the words, but a close fit meant to serve as a gentle introduction to the evolution of cooperation.

Earlier today, we heard about the social evolution of language and to a certain extent we heard about the emergence and evolution of zero. We even heard about our current economic affairs and such. I am going to talk about all of these things and, in particular, continue the evolutionary theme and talk about the evolution of cooperation in society and elsewhere.

We’ve all come across ideas of the greater good, altruism, cooperation, or the sacrifice of an individual for the good of others. In biology, we have an analogous concept: the willingness of certain individuals to give up some of their reproductive potential to increase the reproductive potential of others. Philosophers, sociologists, and political scientists grapple with this paradoxical concept in the social sciences; in the biological context, it is obviously an important question for biologists.

Now, the question becomes: how and why does this cooperation emerge? First, we are going to look at this from the biological point of view, connect it to the social sciences, and then to everything else.

Biology today is shaped by Darwin, Wallace, and their theory of evolution by natural selection; it is the unifying theme of modern biology. The interesting feature of this framework is that it is explicitly competitive: organisms compete against other organisms for their reproduction. Our question becomes: how does cooperation emerge in such a competitive environment?

We know this cooperation does emerge because it is essential for all the complexity we see: for single cells to come together into multi-cellular organisms, for the emergence of ant colonies, and even for human society. We want to study this and try to answer these questions. But how do you create a competitive environment in a mathematical framework? We borrow from game theory the idea of the Prisoner’s dilemma (or, as I prefer to call it, the Knitter’s dilemma). This is one of many possible models of a competitive environment, and the most used in the literature.

In the Knitter’s dilemma there are two players. One of them is Alice. Alice produces yarn, but she doesn’t have any needles, and she wants to knit a sweater. In the society she lives in, knitting sweaters is frowned upon, so she can’t ask for needles publicly. Bob, on the other hand, produces needles but not yarn. He also wants to knit a sweater. So they decide: “okay, let’s go out into the woods late at night, bring briefcases with our respective goods, and trade”.

Alice has a dilemma: should she include yarn in her briefcase (indicated by the green briefcase in the figure below), or should she not (signified by the red)? If Bob includes needles (first column) and Alice includes yarn, then she gets the benefit b of going home and knitting a sweater, but pays a small cost c for giving away some of her yarn. Alternatively, if Bob brings needles but she’s tricky and doesn’t bring her yarn, then she gets all the benefit of going home and making a sweater without paying even the marginal cost of giving away some of her yarn. If Bob brings an empty briefcase (second column) and Alice brings yarn as she said she would, then Alice pays a small cost in giving some of her yarn away without the benefit of being able to make a sweater. Alternatively, if she also brings an empty briefcase, then they just met in the middle of the night, traded empty briefcases, and everybody goes back with no payoff.

Knitter's dilemma

It seems that no matter what Bob does, it is better for Alice to bring an empty briefcase (what we call defection) than to cooperate by bringing a full briefcase. This sets up the basic idea of a competitive environment. The rational strategy, or Nash equilibrium, for this game is for both individuals to defect and bring empty briefcases. However, from outside the game we can see that if they both do what they said they would and cooperate, then they are both better off. That is captured by the Pareto optimum in green.
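The payoff structure described above can be checked mechanically. A minimal sketch (the particular values b = 3, c = 1 are my illustrative assumptions; only b > c > 0 matters):

```python
# Payoffs for the Knitter's dilemma with benefit b > cost c > 0.
b, c = 3.0, 1.0

# payoff[my_move][their_move], with 'C' = full briefcase, 'D' = empty one
payoff = {
    'C': {'C': b - c, 'D': -c},
    'D': {'C': b,     'D': 0.0},
}

# Defection strictly dominates: Alice does better whatever Bob does.
assert payoff['D']['C'] > payoff['C']['C']
assert payoff['D']['D'] > payoff['C']['D']

# Yet mutual cooperation Pareto-dominates the Nash equilibrium (D, D).
assert payoff['C']['C'] > payoff['D']['D']
```

The same two comparisons hold for any b > c > 0, which is what makes mutual defection the unique Nash equilibrium despite mutual cooperation being better for both.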

Of course, as mentioned earlier by Andy, we cannot always expect people to be rational and make all these decisions based on reasoning. Evolutionary game theory instead models Alice and Bob as simple agents that have a trait passed down to their offspring. This is shown below by green circles for players that cooperate and red circles for ones that don’t. In the standard model, we pair them off randomly and they play the game. So a green and a green is two cooperators: they both went home and made a sweater. Two reds both went home empty-handed. After the interaction, we disseminate them through the population and let them reproduce according to how the game affected their reproductive potential: a higher chance to reproduce for those that received a large benefit, and a lower chance for those that only paid costs. We cycle this for a while, and what we observe is more and more red emerging; all the green cooperation starts to go away. This captures the basic intuition that a competitive environment breeds defection.
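This cycle of play-then-reproduce can be sketched as a mean-field replicator update. The parameter values and the baseline fitness w0 are my own assumptions for illustration, not numbers from the talk:

```python
# Mean-field replicator sketch of the inviscid Prisoner's/Knitter's dilemma:
# the fraction x of cooperators reproduces in proportion to its fitness.
b, c, w0 = 3.0, 1.0, 2.0   # benefit, cost, baseline fitness (assumed values)

def step(x):
    """One generation: fitness-proportional reproduction under random pairing."""
    f_coop = w0 + x * b - c       # a cooperator meets a cooperator w.p. x
    f_defect = w0 + x * b         # a defector gets the benefit, pays no cost
    mean_f = x * f_coop + (1 - x) * f_defect
    return x * f_coop / mean_f

x = 0.9                           # start with 90% cooperators
history = [x]
for _ in range(100):
    x = step(x)
    history.append(x)
# x has collapsed toward 0: defection takes over the well-mixed population
```

Because f_defect exceeds f_coop for every x, the cooperator fraction shrinks every generation, matching the simulation picture of green giving way to red.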

Of course, you and I can think of some ways to overcome this dilemma. Evolutionary game theorists have also been there and thought of it (Nowak, 2006). They identified several mechanisms for overcoming it. The first is Hamilton’s (1964) kin selection: Bob is actually your uncle, and you’re willing to work with him, so you’ll bring the yarn as you said you would. Alternatively, you’ve encountered Bob many times before and he has always included needles in his briefcase, so you are much more willing to work with him; this is Trivers’ (1971) direct reciprocity, and you’ll include your yarn. Finally, indirect reciprocity (Nowak & Sigmund, 1998): you’ve heard that Bob is an honest man who always brings needles as he says he will, so you are much more likely to cooperate with him.

All these things seem pretty simple to us, but if you’re an amoeba floating around in some soup (and microbes do play games; Lenski & Velicer, 2001), then it’s not quite as obvious that you can do any of these things. Recognizing kin, remembering past interactions, or social constructs like reputation become very difficult. Hence, I look at more primitive mechanisms such as spatial/network reciprocity, or viscosity.

Earlier, Paul mentioned that if we have a turbulent environment, it becomes very hard for us to live. Hence the idea of introducing some structure into our environment: we place all our agents on a small grid where they can interact with their neighbors and reproduce into neighboring squares.

Alternatively, we can borrow an idea from the selfish-gene approach to evolution called the green-beard effect, introduced by Hamilton (1964) and popularized in Dawkins’ Selfish Gene. This is a gene that produces three phenotypic effects: (1) it produces an arbitrary marker, which we call the beard (or, in our case, circles and squares); (2) it allows you to recognize this trait in others (not their strategy, just the trait/beard); and (3) it allows you to change your strategy depending on what trait/beard you observe. As before, you can cooperate or defect with other circles, and if you meet a square you can also choose to cooperate or defect. You have four possible strategies, which are drawn in the figure below. In human culture, cooperating with those that are like you (i.e. other circles) and defecting against squares is the idea of ethnocentrism. Here we bring back the social context a little bit by looking at this as a simple model of human evolution, too.
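The four strategies can be written down as (in-group action, out-group action) pairs; the names below follow the ethnocentrism literature (Hammond & Axelrod, 2006), while the tag labels 'circle'/'square' are just this talk's markers:

```python
# The four green-beard strategies as (in-group, out-group) action pairs.
C, D = 'cooperate', 'defect'

strategies = {
    'humanitarian': (C, C),   # cooperate with everyone
    'ethnocentric': (C, D),   # cooperate with same-tag agents only
    'traitorous':   (D, C),   # cooperate with other-tag agents only
    'selfish':      (D, D),   # defect against everyone
}

def action(strategy, my_tag, their_tag):
    """What a strategy does on meeting an agent bearing a given tag."""
    in_group, out_group = strategies[strategy]
    return in_group if my_tag == their_tag else out_group

print(action('ethnocentric', 'circle', 'circle'))  # cooperate
print(action('ethnocentric', 'circle', 'square'))  # defect
```

Note that only the tag is observable: an ethnocentric circle cooperates with a selfish circle just as readily as with another ethnocentric one, which is what makes the strategy exploitable.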

We can combine the two models by placing little circles and squares of different colors on a grid and seeing how the population evolves with time. We do see cooperation emerge, but sadly it is an ethnocentric sort of cooperation. We can see this in the graph below, where the y-axis is the proportion of cooperative interactions: the higher up you are in the graph, the more cooperation is happening. In blue is a model with agents that can distinguish between circles and squares living on a spatial lattice. In green is a model with spatial structure, but no cognitive ability to adjust based on tags. In yellow is a model with tags but no spatial structure, and in red one with neither. In these restricted models cooperation does not consistently emerge, although in the tags-without-space model (yellow) there is an occasional bifurcation toward cooperation, highlighted by the black circle and arrow.

Annotated reproduction of figure from Kaznatcheev & Shultz 2011

Proportion of cooperation versus evolutionary cycle for four different conditions. In blue is the standard H&A model; green preserves local child placement but eliminates tags; yellow has tags but no local child placement; red is both inviscid and tag-less. The lines are from averaging 30 simulations for each condition, and thickness represents standard error. Figure appeared in Kaznatcheev & Shultz (2011).

This suggests how evolution could have shaped the way we are today, and in particular the common trend of ethnocentrism in humans. The model doesn’t propose ways to overcome ethnocentrism, but it does at least create cooperation among the scientists who use it: witness the number of different fields (represented in one of my favorite xkcd comics, below) that use these sorts of models.

Sociologists and political scientists use these models for peace building and conflict resolution (e.g. Hammond & Axelrod, 2006); in this case, cooperation would be working towards peace, and defection could be sending a mortar round into the neighboring village. Psychologists observe that humans tend to cooperate in certain settings and ask why, seeking an evolutionary backing; in our running example, by looking at ethnocentrism (e.g. Shultz, Hartshorn, & Kaznatcheev, 2009). Biologists look at how the first molecules came together to form life, or how single cells started to form multi-cellular organisms; the same models appear in cancer research (e.g. Axelrod, Axelrod, & Pienta, 2006) and in the spread of infectious diseases such as swine flu (e.g. Read & Keeling, 2003). Even chemists and physicists use this as a model of self-organizing behavior and a toy model of non-linear dynamics (e.g. Szabo & Fath, 2007). And, of course, it comes back to computer scientists and mathematicians, who use it for studying network structure and distributed computing. That all these fields can be unified by the mathematical idea underlying evolution may seem strange, but it is possible because of the simple nature of evolution: evolution can occur in any system where information is copied in a noisy environment. Thus, all these fields can cooperate in working on finding answers to the emergence and evolution of cooperation. Hopefully, starting with the scientists working together on these questions, we can get people around the world to cooperate as well.

References

Axelrod, R., Axelrod, D. E., & Pienta, K. J. (2006). Evolution of cooperation among tumor cells. Proceedings of the National Academy of Sciences, 103(36), 13474-13479.

Hamilton, W. D. (1964). The Genetical Evolution of Social Behavior. Journal of Theoretical Biology 7 (1): 1–16.

Hammond, R. A., & Axelrod, R. (2006). The evolution of ethnocentrism. Journal of Conflict Resolution, 50(6), 926-936.

Kaznatcheev, A., & Shultz, T.R. (2011). Ethnocentrism Maintains Cooperation, but Keeping One’s Children Close Fuels It. Proceedings of the 33rd Annual Conference of the Cognitive Science Society, 3174-3179

Lenski, R. E., & Velicer, G. J. (2001). Games microbes play. Selection, 1(1), 89-96.

Nowak, M. A. (2006). Five rules for the evolution of cooperation. Science, 314(5805), 1560-1563. PMID: 17158317

Nowak, M. A., & Sigmund, K. (1998). Evolution of indirect reciprocity by image scoring. Nature, 393(6685), 573-577.

Read, J. M., & Keeling, M. J. (2003). Disease evolution on networks: the role of contact structure. Proceedings of the Royal Society of London. Series B: Biological Sciences, 270(1516), 699-708.

Shultz, T. R., Hartshorn, M., & Kaznatcheev, A. (2009). Why is ethnocentrism more common than humanitarianism. In Proceedings of the 31st annual conference of the cognitive science society (pp. 2100-2105).

Szabo, G., & Fath, G. (2007). Evolutionary games on graphs. Physics Reports, 446 (4-6), 97-216

Trivers, R. L. (1971). The evolution of reciprocal altruism. Quarterly Review of Biology, 46(1), 35-57.

Evolving cooperation at TEDxMcGill 2009

For me, one of the highlights of working on EGT has been the opportunity to present it to the general public. As a shameless plug and a way to start off video Saturdays, I decided to post a link to my TEDxMcGill talk on evolving cooperation. This was from the first TEDxMcGill in 2009:

I think this was the first time I used the Knitter’s dilemma as an explanation for the Prisoner’s dilemma, which has since become my favorite way of introducing the game. If you want a more technical overview of the graph you see on the second-to-last slide, it is discussed in ref. [KS11]. If you want the comic at the end of the slides, it is xkcd’s “Purity”.

More great TEDxMcGill talks are available here and I recommend checking all of them out. Check back next Saturday for another EGT-related video!

References

[KS11] A. Kaznatcheev and T.R. Shultz [2011] “Ethnocentrism maintains cooperation, but keeping one’s children close fuels it.” In Proceedings of the 33rd annual conference of the cognitive science society. [pdf]