EGT Reading Group 56 – 60

Since my last update in February, the evolutionary game theory reading group has passed another milestone with 5 more meetings over the last 4 months. We looked at a broad range of topics: from life histories in cancer to the effects of heterogeneity and biodiversity, and from the definitions of fitness to analyzing digital pathology. Part of this variety came from papers suggested by group members. The paper for EGT 57 was suggested by Jill Gallaher, EGT 58 by Robert Vander Velde, and the second paper for EGT 60 came from a tip by Jacob Scott. We haven’t yet returned to our goal of regular weekly meetings, but these five meetings took less than half the time of the previous five.

Read more of this post

EGT Reading Group 51 – 55 and a photo

The evolutionary game theory reading group — originally part of the raison d’être for this blog — has continued at a crawling pace. Far from the weekly groups of its early days in 2010, we’ve only had 5 meetings since my last update on March 26th, 2015 — almost 11 months ago. Surprisingly, this is a doubling in pace, with the 46 to 50 milestone having taken 22 months. To celebrate, I wanted to update you on what we’ve read and discussed:
Read more of this post

EGT Reading Group 46 – 50 and a photo

Part of the original intent for this blog was to accompany the evolutionary game theory reading group that I started running at McGill in 2010. The blog has taken off, but the reading group has waned. However, since I still have some hope to revive a regular reading group, I have continued to call occasional journal discussion meetings that I organize as the EGT reading group. These meetings are very sparse and highly irregular, not the weekly groups that they were in 2010. For example, since my last update on May 28th, 2013, around 22 months have passed with the group meeting only 5 times. Still, these 5 meetings bring us to a milestone and hence an update on the papers we’ve read:
Read more of this post

A detailed update on readership for the first 200 posts

It is time — this is the 201st article on TheEGG — to get an update on readership since our 151st post and lament on why academics should blog. I apologize for this navel-gazing post, and it is probably of no interest to you unless you are really excited about blog statistics. I am writing this post largely for future reference and to celebrate this arbitrary milestone.

The statistics in this article are largely superficial proxies — what does a view even mean? — and are notable only because of how easy they are to track. These proxies should never be used to seriously judge academics, but I do think they can serve as a useful self-tracking tool. Making your blog’s statistics publicly available also gives other bloggers a point of comparison for what sort of readership and posting habits are typical. In keeping with this rough and lighthearted comparison, according to Jeromy Anglim’s order-of-magnitude rules of thumb, in the year since the last update the blog has been popular in terms of RSS subscribers and relatively popular in terms of annual page views.

As before, I’ll start with the public self-metrics of the viewership graph for the last 6 and a half months:

Columns are views per week at TheEGG blog since the end of August, 2014. The vertical lines separate months, and the black line is average views per day for each month. The scale for weeks is on the left; it is different from the scale for daily averages, which are labeled at each height.

If you’d like to know more, dear reader, then keep reading. Otherwise, I will see you on the next post!
Read more of this post

Why academics should blog and an update on readership

It’s that time again: TheEGG has passed a milestone — 150 posts under our belt! — and so I feel obliged to reflect on blogging and update the curious on the readership statistics.

About a month ago, Nicholas Kristof bemoaned the lack of public intellectuals in the New York Times. Some people responded with defenses of the ‘busy academic’, while others agreed but shifted the conversation from the more traditional media Kristof was focused on to blogs. As a fellow blogger, I can’t help but support this shift, but I also can’t help but notice the conflation of two very different notions: the public intellectual and the public educator.
Read more of this post

Stats 101: an update on readership

Sorry, I couldn’t resist the title. This is the hundred and first post on TheEGG blog and I wanted to use the opportunity to update those curious about viewership stats. This is also a way for me to record milestones for the blog and proselytize people to blogging. Read on only if you want to learn about the behind the scenes of this blog.
Read more of this post

Evolve ethnocentrism in your spare time

Running an agent-based simulation really isn’t that complex. While there’s no shortage of ready-made software packages for ABM (like Repast and NetLogo), all you really need is a good, high-level programming language and a code editor.

As you may have noticed from other blog posts, we have spent quite a bit of time studying agent-based models of ethnocentric evolution. To coincide with the publication of our paper (Hartshorn, Kaznatcheev & Shultz, 2013) on the evolution of ethnocentrism in the Journal of Artificial Societies and Social Simulation (JASSS), we thought it would be fun to provide a hands-on tutorial so you can replicate the model yourself. There’s a lot to cover here, so we won’t get into the scientific description of the model itself, but you can read a good synopsis in my executive summary, or Artem’s general overview.

This post assumes no programming background, just a computer, patience, and some curiosity. That being said, you will be compiling a small Java program and modifying its source code, so if words like “compile,” “source code,” and “Java” strike terror in your heart, consider yourself forewarned. It’s actually not that scary. In Estonia they’re teaching kids to program in first grade, and you’re smarter than a first grader…right?!
Read more of this post

EGT Reading Group 41 – 45 and a photo

In recent months, TheEGG blog has morphed into a medium for me to share cool articles and quick (and sometimes overly snarky) reviews. However, I still remember its original purpose to accompany the EGT Reading Group that I launched at McGill University in 2010. Next week, we will have our 46th meeting, and so I am taking a short break from reviewing the 2nd workshop on Natural Algorithms and the Sciences to give you a quick recap of what we’ve read since the last update:
Read more of this post

Social learning dilemma

Last week, my father sent me a link to the 100 top-ranked specialties in the sciences and social sciences. The Web of Knowledge report considered 10 broad areas[1] of natural and social science, and for each one listed 10 research fronts that they consider as the key fields to watch in 2013 and are “hot areas that may not otherwise be readily identified”. A subtle hint from my dad that I should refocus my research efforts? Strange advice to get from a parent, especially since you would usually expect classic words of wisdom like: “if all your friends jumped off a bridge, would you jump too?”


And it says a lot about you that when your friends jump off a bridge en masse, your first thought is apparently 'my friends are all foolish and I won't be like them' and not 'are my friends okay?'.

So, which advice should I follow? Should I innovate and focus on my own fields of interest, or should I imitate and follow the trends? Conveniently, the field best equipped to answer this question, i.e. “social learning strategies and decision making”, was sixth of the top ten research fronts for “Economics, Psychology, and Other Social Sciences”[2].

For the individual, there are two sides to social learning. On the one hand, social learning is tempting because it allows agents to avoid the effort and risk of innovation. On the other hand, social learning can be error-prone and lead individuals to acquire inappropriate and outdated information if the environment is constantly changing. For the group, social learning is great for preserving and spreading effective behavior. However, if a group has only social learners then, in a changing environment, it will not be able to innovate new behavior and average fitness will decrease as the fixed set of available behaviors in the population becomes outdated. Since I always want to hit every nail with the evolutionary game theory hammer, this seems like a public goods game. The public good is effective behaviors, defection is frequent imitation, and cooperation is frequent innovation.

Although we can trace the study of evolution of cooperation to Peter Kropotkin, the modern treatment — especially via agent-based modeling — was driven by the innovative thoughts of Robert Axelrod. Axelrod & Hamilton (1981) ran a computer tournament where other researchers submitted strategies for playing the iterated prisoners’ dilemma. The clarity of their presentation, and the surprising effectiveness of an extremely simple tit-for-tat strategy motivated much of the current work on cooperation. True to their subject matter, Rendell et al. (2010) imitated Axelrod and ran their own computer tournament of social learning strategies, offering 10,000 euros for the best submission. By cosmic coincidence, the prize went to students of cooperation: Daniel Cownden and Tim Lillicrap, two graduate students at Queen’s University, the former a student of mathematician and notable inclusive-fitness theorist Peter Taylor.

A restless multi-armed bandit served as the learning environment. The agent could select which of 100 arms to pull in order to receive a payoff drawn independently (for each arm) from an exponential distribution. It was made “restless” by changing the payoff after each pull with probability p_C. A dynamic environment was chosen because copying outdated information is believed to be a central weakness of social learning, and because Papadimitriou & Tsitsiklis (1999) showed that solving this bandit (finding an optimal policy) is PSPACE-complete[3], or in layman’s terms: very intractable.
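To make the setup concrete, here is a minimal sketch of such a restless bandit in Python. The class name, the specific change rule (every arm redrawn independently with probability p_C after each pull), and the mean payoff are my assumptions for illustration; the tournament’s exact mechanics differed in detail.

```python
import random

class RestlessBandit:
    """Sketch of the tournament environment: 100 arms with independent
    exponentially distributed payoffs. After every pull, each arm's
    payoff is redrawn with probability p_c ("restless"), so remembered
    or copied information can silently go stale."""

    def __init__(self, n_arms=100, p_c=0.05, mean_payoff=10.0, seed=0):
        self.rng = random.Random(seed)
        self.p_c = p_c
        self.mean = mean_payoff
        self.payoffs = [self.rng.expovariate(1.0 / mean_payoff)
                        for _ in range(n_arms)]

    def pull(self, arm):
        reward = self.payoffs[arm]
        # Each arm may independently change after any pull.
        for i in range(len(self.payoffs)):
            if self.rng.random() < self.p_c:
                self.payoffs[i] = self.rng.expovariate(1.0 / self.mean)
        return reward
```

The intractability result makes more sense in this light: an optimal policy must weigh the payoff of a known arm against the information value of re-checking arms whose payoffs may have drifted.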

Participants submitted specifications for learning strategies that could perform one of three actions at each time step:

  • Innovate — the basic form of asocial learning, the move returns accurate information about the payoff of a randomly selected behavior that is not already known by the agent.
  • Observe — the basic form of social learning, the observe move returns noisy information about the behavior and payoff being demonstrated by a randomly selected individual. This could return nothing if no other agent played an exploit move this round, or if the behavior was identical to one the focal agent already knows. If some agent is selected for observation then, unlike the perfect information of innovate, noise is added: with probability p_\text{copyActWrong} a randomly chosen behavior is reported instead of the one performed by the selected agent, and the payoff received is reported with Gaussian noise with variance \sigma_\text{copyPayoffError}.
  • Exploit — the only way to acquire payoffs by using one of the behaviors that the agent has previously added to its repertoire with innovate and observe moves. Since no payoff is given during innovate and observe, they carry an inherent opportunity cost of not exploiting existing behavior.
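The three moves above can be sketched as methods on a hypothetical agent that keeps a repertoire of believed payoffs. The environment is reduced here to a plain list of arm payoffs, and the parameter names follow the post (p_\text{copyActWrong}, \sigma_\text{copyPayoffError}); the actual tournament mechanics were richer.

```python
import random

rng = random.Random(1)

class Agent:
    """Sketch of an agent's repertoire and the three tournament moves."""

    def __init__(self):
        self.repertoire = {}      # arm index -> payoff the agent believes
        self.last_exploit = None  # (arm, payoff) visible to observers

    def innovate(self, payoffs):
        # Accurate information about a randomly chosen unknown behavior.
        unknown = [a for a in range(len(payoffs)) if a not in self.repertoire]
        if unknown:
            arm = rng.choice(unknown)
            self.repertoire[arm] = payoffs[arm]

    def observe(self, other, n_arms, p_copy_act_wrong=0.1, sigma=1.0):
        # Noisy information about another agent's last exploited behavior.
        if other.last_exploit is None:
            return  # nothing to observe this round
        arm, payoff = other.last_exploit
        if rng.random() < p_copy_act_wrong:
            arm = rng.randrange(n_arms)  # wrong behavior copied by mistake
        self.repertoire[arm] = payoff + rng.gauss(0.0, sigma)

    def exploit(self, payoffs):
        # The only payoff-earning move: use the best-known behavior.
        arm = max(self.repertoire, key=self.repertoire.get)
        self.last_exploit = (arm, payoffs[arm])
        return payoffs[arm]
```

Note how the opportunity cost falls out of the structure: any round spent in innovate or observe is a round with no call to exploit, and hence no payoff.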

The payoffs were used to drive replicator dynamics via a death-birth process. The fitness of an agent was its total accumulated payoff divided by the number of rounds it had been alive. At each round, every agent in the population had a 1/50 probability of expiring. The resulting empty spots were filled by offspring of the remaining agents, with the probability of being selected for reproduction proportional to agent fitness. Offspring inherited their parent’s learning strategy, unless a mutation occurred, in which case the offspring adopted a learning strategy selected at random from those considered in the simulation.
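The selection scheme can be sketched as a single update step. The dictionary fields and the omission of mutation are my simplifications for illustration; the fitness definition and the 1/50 death rate follow the description above.

```python
import random

rng = random.Random(2)

def death_birth_step(population, p_death=1/50):
    """One round of the death-birth process: each agent dies with
    probability p_death, and empty spots are refilled by offspring of
    survivors chosen with probability proportional to
    fitness = total accumulated payoff / rounds alive.
    Mutation is omitted from this sketch."""
    survivors = [a for a in population if rng.random() >= p_death]
    n_dead = len(population) - len(survivors)
    fitnesses = [a["payoff"] / max(a["age"], 1) for a in survivors]
    # Fitness-proportional choice of parents for the empty spots.
    parents = rng.choices(survivors, weights=fitnesses, k=n_dead)
    children = [{"strategy": p["strategy"], "payoff": 0.0, "age": 0}
                for p in parents]
    return survivors + children
```

Because selection is on relative fitness within the current population, this step already hints at the result below: a strategy only needs to match its neighbors’ payoffs, not maximize absolute payoff.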

A total of 104 learning strategies were received for the tournament. Most were from academics, but three were from high school students (with one placing in the top 10). A pairwise round-robin tournament was held to test the probability of each strategy invading any other strategy (i.e., a single individual with a new strategy being introduced into a homogeneous population of another strategy). This round-robin was used to select the 10 best strategies for advancement to the melee stage. During the round-robin, p_C, p_\text{copyActWrong}, and \sigma_\text{copyPayoffError} were kept fixed; the experimenters varied these parameters only during the melee stage, when all of the top-10 strategies were present.

Mean score of the 104 learning strategies depending on the proportion of learning actions (both INNOVATE and OBSERVE) in the left figure, and the proportion of OBSERVE actions in the right figure. These are figures 2A and 2C from Rendell et al. (2010).

Unsurprisingly, using lots of EXPLOIT moves is essential to good performance, since this is the only way to earn payoff. In other words: less learning and more doing. However, a certain minimal amount of learning is needed to get your doing off the ground, and within this learning there is a clear positive correlation between the amount of social learning and success in invading other strategies. The best strategies used the limited information given to them to estimate p_C, and used that estimate to better predict and quickly react to changes in the environment. However, they also relied completely on social learning, waiting for other agents to innovate new strategies or for p_\text{copyActWrong} to accidentally add a new behavior to their repertoire. Since evolution (unlike the classical assumptions of rationality) cares about relative and not absolute payoffs, it didn’t matter to these agents that they were not doing as well as they could be, as long as they were doing as well as (or better than) their opponents[4]. OBSERVE moves and a good estimate of environmental change allowed the agents to minimize their number of non-EXPLOIT moves, and since their exploits paid as well as their opponents’ (whom they were copying), they ended up with equal or better payoff (due to less learning and more exploiting).

Average individual fitness of the top 10 strategies when in a homogeneous environment. The best strategy from the multi-strategy competitions is on the left and the tenth best is on the right. Note that the strategies that are best when all 10 strategies are present are the worst when they are alone. This is figure 1D from Rendell et al. (2010).

My view of social learning as an antisocial strategy is strengthened by the strategy’s low fitness when in isolation. The figure to the left shows this result, with the data points further to the left corresponding to strategies that did better in the melee. Strategies 1, 2, and 4 are the pure social learners. The height of each data point shows how well a strategy performed when faced only against itself. The strategies that did best in the heterogeneous setting of the 10-strategy melee performed the worst when they were in a homogeneous population with only agents of the same type. This is in line with Rendell, Fogarty, & Laland’s (2010) observation that social learning can decrease the overall fitness of the population. Social learners fare even worse when they can’t make occasional random mistakes in copying behavior: without these errors, all innovation disappears from the population and average fitness plummets. Social learners are free-riding on the innovation of asocial agents.

I would be interested in pursuing this heuristic connection between learning and social dilemmas further. The interactions of learners with each other and the environment can be seen as an evolutionary game: can we calculate the explicit payoff matrix of this game in terms of environmental and strategy parameters? Does this game belong to the Prisoners’ dilemma or Hawk-Dove (or other) region of cooperate-defect games? The heuristic view of innovation as a public good and the lack of stable co-existence of imitators and innovators suggest that the dynamics are PD. However, Rendell, Fogarty, & Laland (2010) show that social learning can sometimes spread better on a grid structure; this is contrary to the effects of PD on grids but consistent with observations for HD (Hauert & Doebeli, 2004). Since the two studies use very different social learning strategies, it could be the case that, depending on parameters, we can achieve either PD or HD dynamics.

Regardless of which social dilemma is in play, we know that slight spatial structure enhances cooperation. This means that I expect that if — instead of inviscid interactions — I repeated Rendell et al. (2010) on a regular random graph then we would see more innovation. Similarly, if we introduced selection on the level of groups then groups with more innovators would fare better and spread the innovative strategy throughout the population.

So what does this mean for how I should take my father’s implicit advice? First: stop learning and start doing; I need to spend more time writing up results into papers instead of learning new things. Unfortunately for you, my dear reader, this could mean fewer blog posts on fun papers and more on my boring work! In terms of following research trends or innovating new themes, I think a more thorough analysis is needed. It would be interesting to extend my preliminary ramblings on citation network dynamics to incorporate this work on social learning. For now, I am happy to know that at least some of the things I’m interested in are — in Twitter speak — trending.

Notes and References

  1. Way too broad for my taste: one category was “Mathematics, Computer Science, and Engineering”; talk about a tease-and-trick. After reading the first two items I was excited to see a whole section dedicated to fields like theoretical computer science, only to have my dreams dashed by ‘Engineering’. Turns out that Thomson Reuters and I have very different ideas on what ‘Mathematics’ means and how it should be grouped.
  2. Note that my interests weren’t absent from the list, with “financial crisis, liquidity, and corporate governance” appearing tenth for “Economics, Psychology, and Other Social Sciences” and even being selected for a special, more in-depth highlight. Evolutionary thinking also appeared in tenth place for the poorly titled “Mathematics, Computer Science and Engineering” area as “Differential evolution algorithm and memetic computation”. It is nice to know that these topics are popular, although I am usually not a fan of the engineering approach to computational models of evolution since their goal is to solve problems using evolution, not answer questions about evolution.
  3. High-impact general science publications like Nature, Science, and their more recent offshoots (like the open-access Scientific Reports) are awful at presenting theoretical computer science. It is no different in this case: Papadimitriou and Tsitsiklis (1999) is a worst-case result that requires more freedom in the problem instances to encode the necessary structure for a reduction to known hard problems. Although their theorem is about restless bandits, the reduction needs a more general formulation in terms of arbitrary deterministic finite-dimensional Markov chains instead of the specific distributions used by Rendell et al. (2010). I am pretty sure that the optimal policy for the obvious generalization (i.e. n arms instead of 100, but generated in the same way) of the stochastic environment can be learned efficiently; there is just not enough structure there to encode a hard problem. Since I want to understand multi-armed bandits better anyway, I might find the optimal algorithm and write about it in a future post.
  4. This sort of “I just want to beat you” behavior, reminds me of the irrational defection towards the out-group that I observed in the harmony game for tag-based models (Kaznatcheev, 2010).

Axelrod, R., & Hamilton, W. D. (1981). The evolution of cooperation. Science, 211(4489), 1390-1396.

Hauert, C., & Doebeli, M. (2004). Spatial structure often inhibits the evolution of cooperation in the snowdrift game. Nature, 428(6983), 643-646.

Kaznatcheev, A. (2010). Robustness of ethnocentrism to changes in inter-personal interactions. Complex Adaptive Systems – AAAI Fall Symposium. (pdf)

Papadimitriou, C. H., & Tsitsiklis, J. N. (1999). The complexity of optimal queuing network control. Mathematics of Operations Research, 24(2): 293-305.

Rendell L, Boyd R, Cownden D, Enquist M, Eriksson K, Feldman MW, Fogarty L, Ghirlanda S, Lillicrap T, & Laland KN (2010). Why copy others? Insights from the social learning strategies tournament. Science, 328 (5975), 208-213 PMID: 20378813

Rendell, L., Fogarty, L., & Laland, K. N. (2010). Rogers’ paradox recast and resolved: population structure and the evolution of social learning strategies. Evolution, 64(2): 534-548.

Will the droids take academic jobs?

As a researcher, one of the biggest challenges I face is keeping up with the scientific literature. This is further exacerbated by working in several disciplines, without a more senior advisor or formal training in most of them. The Evolutionary Game Theory Reading Group, and later this blog, started as an attempt to help me discover and keep up with the reading. Every researcher has a different approach: some use the traditional process of reading the table of contents and abstracts of selective journals; others rely on colleagues, students, and social media to keep them up to date. Fundamentally, these methods are surprisingly similar, and it is up to the individual to find what is best for them. I rely on blogs, G+, Google Scholar author alerts, extensive forward-citation searches, surveys, and most recently: Google Scholar updates.

[Screenshot of my Google Scholar updates suggestions]

The updates are a computer filtering system that uses my publications to gauge my interests and then suggests new papers as they enter Google’s database. Due to my limited publication history, the AI doesn’t have much to go on, and is rather hit or miss. Some papers, like the Requejo & Camacho reference in the screenshot above, have led me to find useful papers on ecological games and to hurry my own work on environmental austerity. Other papers, like Burgmuller’s, are completely irrelevant. However, this recommendation system will improve with time, as I publish more papers to inform it and as the algorithms it uses advance.

Part of that advancement comes from scientists optimizing their own lit-review process. Three days ago, Davis, Wiegers et al. (2013) published such an advancement in PLoS One. The authors are part of the team behind the Comparative Toxicogenomics Database that (emphasis mine):

promotes understanding about the effects of environmental chemicals on human health by integrating data from curated scientific literature to describe chemical interactions with genes and proteins, and associations between diseases and chemicals, and diseases and genes/proteins.

This curation requires their experts to go through thousands of articles in order to update the database. Unfortunately, not every article is relevant, and there are simply too many articles to curate everything. As such, the team needed a way to automatically sort which articles are most likely to be useful and thus curated. They developed a system that uses their database plus some hand-made rules to text-mine articles and assign each a document relevance score (DRS). The authors looked at a corpus of 14,904 articles, with 1,020 of them having been considered before and thus serving as a calibration set. To test their algorithm’s effectiveness, 3,583 articles were sampled at random from the remaining 13,884 and sent to biocurators for processing. The DRS correlated well with probability of curation, with an 85% curation rate for articles with high DRS and only 15% for low-DRS articles. This outperformed the PubMed ranking of articles, which resulted in a ~60% curation rate regardless of rank.
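To illustrate the general idea of a document relevance score, without claiming anything about the CTD team’s actual text-mining rules, here is a toy scorer that ranks abstracts by weighted occurrences of curated vocabulary terms. The example abstracts, terms, and weights are all made up.

```python
def relevance_score(abstract, term_weights):
    """Toy document relevance score: a weighted count of curated
    vocabulary terms appearing in an abstract. A stand-in for the far
    richer rule-based text mining used in the real system."""
    return sum(term_weights.get(word, 0)
               for word in abstract.lower().split())

# Hypothetical curated vocabulary with hand-assigned weights.
weights = {"cadmium": 3, "exposure": 2, "gene": 2, "liver": 1}

articles = [
    "Cadmium exposure alters gene expression in liver",
    "A survey of medieval pottery styles",
]

# Curators would then prioritize the highest-scoring articles.
ranked = sorted(articles, key=lambda a: relevance_score(a, weights),
                reverse=True)
```

Even this crude version captures the workflow: score everything automatically, then spend scarce curator time on the high-scoring end of the list.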

With the swell of scientific publishing, I think machines are well on their way to replacing selective journals, graduate students, and social media for finding relevant literature. Throw in that computers already make decent journalists; that you can go to SCIgen to make your own AI-authored paper that is ‘good’ enough to be accepted at an IEEE conference; and that your laptop can write mathematical proofs good enough to fool humans. Now you have the ingredients to remind academics that they are at risk, just like everybody else, of losing their jobs to computers. Still, it is tempting to take comfort from technological optimists like Andrew McAfee and believe that the droids will only reduce mundane and arduous tasks. It is nicer to believe that there will always be a place for human creativity and innovation:

For now, the AI systems are primarily improving my workflow and making research easier and more fun to do. But the future is difficult to predict, and I am naturally a pessimist. I like to say that I look at the world through algorithmic lenses. By that, I mean that I apply ideas from theoretical computer science to better understand natural or social phenomena. Maybe I should adopt a more literal meaning; at this rate, “looking at the world through algorithmic lenses” might simply mean that the one doing the looking will be a computer, not me.

Davis, A., Wiegers, T., Johnson, R., Lay, J., Lennon-Hopkins, K., Saraceni-Richards, C., Sciaky, D., Murphy, C., & Mattingly, C. (2013). Text mining effectively scores and ranks the literature for improving chemical-gene-disease curation at the Comparative Toxicogenomics Database. PLoS ONE, 8(4). DOI: 10.1371/journal.pone.0058201