Social learning dilemma

Last week, my father sent me a link to the 100 top-ranked specialties in the sciences and social sciences. The Web of Knowledge report considered 10 broad areas[1] of natural and social science, and for each one listed 10 research fronts that it considers the key fields to watch in 2013: “hot areas that may not otherwise be readily identified”. A subtle hint from my dad that I should refocus my research efforts? Strange advice to get from a parent, especially since you would usually expect classic words of wisdom like: “if all your friends jumped off a bridge, would you jump too?”


And it says a lot about you that when your friends jump off a bridge en masse, your first thought is apparently 'my friends are all foolish and I won't be like them' and not 'are my friends okay?'.

So, which advice should I follow? Should I innovate and focus on my own fields of interest, or should I imitate and follow the trends? Conveniently, the field best equipped to answer this question, i.e. “social learning strategies and decision making”, was sixth of the top ten research fronts for “Economics, Psychology, and Other Social Sciences”[2].

For the individual, there are two sides to social learning. On the one hand, social learning is tempting because it allows agents to avoid the effort and risk of innovation. On the other hand, social learning can be error-prone and lead individuals to acquire inappropriate and outdated information if the environment is constantly changing. For the group, social learning is great for preserving and spreading effective behavior. However, if a group has only social learners, then in a changing environment it will not be able to innovate new behavior and average fitness will decrease as the fixed set of available behaviors in the population becomes outdated. Since I always want to hit every nail with the evolutionary game theory hammer, this seems like a public goods game. The public good is effective behaviors, defection is frequent imitation, and cooperation is frequent innovation.

Although we can trace the study of evolution of cooperation to Peter Kropotkin, the modern treatment — especially via agent-based modeling — was driven by the innovative thoughts of Robert Axelrod. Axelrod & Hamilton (1981) ran a computer tournament where other researchers submitted strategies for playing the iterated prisoners’ dilemma. The clarity of their presentation, and the surprising effectiveness of an extremely simple tit-for-tat strategy motivated much of the current work on cooperation. True to their subject matter, Rendell et al. (2010) imitated Axelrod and ran their own computer tournament of social learning strategies, offering 10,000 euros for the best submission. By cosmic coincidence, the prize went to students of cooperation: Daniel Cownden and Tim Lillicrap, two graduate students at Queen’s University, the former a student of mathematician and notable inclusive-fitness theorist Peter Taylor.

A restless multi-armed bandit served as the learning environment. The agent could select which of 100 arms to pull in order to receive a payoff drawn independently (for each arm) from an exponential distribution. It was made “restless” by changing the payoff after each pull with probability p_C. A dynamic environment was chosen because copying outdated information is believed to be a central weakness of social learning, and because Papadimitriou & Tsitsiklis (1999) showed that solving this bandit (finding an optimal policy) is PSPACE-complete[3], or in layman’s terms: very intractable.
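For the curious, a toy version of this environment might look like the following. This is my own sketch, not the tournament’s actual code; the parameter names (n_arms, p_change, mean_payoff) and the exact way payoffs are resampled are my assumptions.

```python
import random

class RestlessBandit:
    """A toy restless bandit: exponential payoffs that each change
    with probability p_change after every pull (my own sketch)."""

    def __init__(self, n_arms=100, p_change=0.01, mean_payoff=1.0):
        self.p_change = p_change
        self.mean_payoff = mean_payoff
        # Independent exponential payoff for each arm.
        self.payoffs = [random.expovariate(1 / mean_payoff) for _ in range(n_arms)]

    def pull(self, arm):
        """Return the current payoff of `arm`, then let every arm's
        payoff be resampled with probability p_change."""
        reward = self.payoffs[arm]
        for i in range(len(self.payoffs)):
            if random.random() < self.p_change:
                self.payoffs[i] = random.expovariate(1 / self.mean_payoff)
        return reward
```

Even this toy version makes the agent’s problem clear: any payoff you learned in the past may have silently changed since you learned it.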

Participants submitted specifications for learning strategies that could perform one of three actions at each time step:

  • Innovate — the basic form of asocial learning, the move returns accurate information about the payoff of a randomly selected behavior that is not already known by the agent.
  • Observe — the basic form of social learning, the observe move returns noisy information about the behavior and payoff being demonstrated by a randomly selected individual. This could return nothing if no other agent played an exploit move this round, or if the behavior was identical to one the focal agent already knows. If some agent is selected for observation then, unlike the perfect information of innovate, noise is added: with probability p_\text{copyActWrong} a randomly chosen behavior is reported instead of the one performed by the selected agent, and the payoff received is reported with Gaussian noise with variance \sigma_\text{copyPayoffError}.
  • Exploit — the only way to acquire payoffs: the agent uses one of the behaviors it has previously added to its repertoire with innovate and observe moves. Since no payoff is given during innovate and observe, those moves carry an inherent opportunity cost of not exploiting existing behavior.
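The three moves above can be sketched in a few lines. Everything here is my own stand-in scaffolding (the Env and Agent classes, the demonstration format, the default noise parameters), not the tournament’s specification.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Env:
    payoffs: list  # current payoff of each arm

    @property
    def n_arms(self):
        return len(self.payoffs)

@dataclass
class Agent:
    repertoire: dict = field(default_factory=dict)  # arm -> believed payoff
    total_payoff: float = 0.0

def innovate(agent, env):
    """Asocial learning: exact payoff of a random behavior not yet known."""
    unknown = [a for a in range(env.n_arms) if a not in agent.repertoire]
    if unknown:
        arm = random.choice(unknown)
        agent.repertoire[arm] = env.payoffs[arm]  # perfect information

def observe(agent, env, demonstrations, p_copy_act_wrong=0.05, payoff_noise_sd=1.0):
    """Social learning: noisily copy a random demonstrator's EXPLOIT this
    round. demonstrations is a list of (arm, payoff) pairs; may yield nothing."""
    if not demonstrations:
        return
    arm, payoff = random.choice(demonstrations)
    if random.random() < p_copy_act_wrong:
        arm = random.randrange(env.n_arms)  # wrong behavior copied
    if arm not in agent.repertoire:
        agent.repertoire[arm] = payoff + random.gauss(0, payoff_noise_sd)

def exploit(agent, env, arm):
    """The only payoff-earning move: play a behavior from the repertoire."""
    agent.total_payoff += env.payoffs[arm]
    return env.payoffs[arm]
```

Note the asymmetry the post describes: innovate returns perfect information, observe is doubly noisy (wrong act, noisy payoff), and only exploit pays.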

The payoffs were used to drive replicator dynamics via a death-birth process. The fitness of an agent was given by their total accumulated payoff divided by the number of rounds they had been alive. At each round, every agent in the population had a 1/50 probability of expiring. The resulting empty spots were filled by offspring of the remaining agents, with the probability of being selected for reproduction proportional to agent fitness. Offspring inherited their parent’s learning strategy, unless a mutation occurred, in which case the offspring would adopt a learning strategy selected at random from those considered in the simulation.
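One round of this death-birth update can be sketched as follows. This is a minimal illustration of the process described above, with my own data representation and a made-up default mutation rate (the paper’s mutation mechanics may differ in detail).

```python
import random

def death_birth_step(population, p_death=1/50, p_mutation=0.02, all_strategies=None):
    """One death-birth round. population is a list of dicts with keys
    'strategy', 'total_payoff', 'rounds' (my own representation)."""
    # Each agent independently expires with probability p_death.
    survivors = [a for a in population if random.random() >= p_death]
    n_dead = len(population) - len(survivors)
    if not survivors:
        return []
    # Fitness = mean payoff per round alive.
    fitnesses = [a['total_payoff'] / max(a['rounds'], 1) for a in survivors]
    offspring = []
    for _ in range(n_dead):
        # Parent chosen with probability proportional to fitness.
        parent = random.choices(survivors, weights=fitnesses)[0]
        strategy = parent['strategy']
        if all_strategies and random.random() < p_mutation:
            strategy = random.choice(all_strategies)  # mutate to a random strategy
        offspring.append({'strategy': strategy, 'total_payoff': 0.0, 'rounds': 0})
    return survivors + offspring
```

Since selection weights are mean payoff per round, a strategy that wastes rounds on learning moves pays for them directly in reproductive success.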

A total of 104 learning strategies were received for the tournament. Most were from academics, but three were from high school students (with one placing in the top 10). A pairwise tournament was held to test the probability of a strategy invading any other strategy (i.e., if a single individual with a new strategy is introduced into a homogeneous population of another strategy). This round-robin tournament was used to select the 10 best strategies for advancement to the melee stage. During the round-robin, p_C, p_\text{copyActWrong}, and \sigma_\text{copyPayoffError} were kept fixed; only during the melee stage, with all of the top-10 strategies present, did the experimenters vary these parameters.
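The invasion test has a simple skeleton: seed one mutant into a resident population, run until one strategy remains, and count fixations. The sketch below is my own; `simulate_generation` is a stub standing in for the full bandit-plus-death-birth machinery.

```python
import random

def invasion_probability(resident, mutant, simulate_generation,
                         trials=100, pop_size=100):
    """Estimate how often a single mutant fixes in a resident population.
    simulate_generation: callable taking and returning a list of strategies
    (a stub for the full tournament dynamics)."""
    fixations = 0
    for _ in range(trials):
        pop = [resident] * (pop_size - 1) + [mutant]
        while len(set(pop)) > 1:  # run until the population is homogeneous
            pop = simulate_generation(pop)
        fixations += pop[0] == mutant
    return fixations / trials
```

As a sanity check, under neutral drift (resampling the population uniformly each generation) the estimate should hover near the theoretical 1/pop_size.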

Mean score of the 104 learning strategies depending on the proportion of learning actions (both INNOVATE and OBSERVE) in the left figure, and the proportion of OBSERVE actions in the right figure. These are figures 2A and 2C from Rendell et al. (2010).

Unsurprisingly, using lots of EXPLOIT moves is essential to good performance, since this is the only way to earn payoff. In other words: less learning and more doing. However, a certain minimal amount of learning is needed to get your doing off the ground, and within this learning there is a clear positive correlation between the amount of social learning and success in invading other strategies. The best strategies used the limited information given to them to estimate p_C and used that estimate to better predict and quickly react to changes in the environment. However, they also relied completely on social learning, waiting for other agents to innovate new behaviors or for p_\text{copyActWrong} to accidentally give a new behavior for their repertoire. Since evolution (unlike the classical assumptions of rationality) cares about relative and not absolute payoffs, it didn’t matter to these agents that they were not doing as well as they could be, as long as they were doing as well as (or better than) their opponents[4]. OBSERVE moves and a good estimate of environmental change allowed the agents to minimize their number of non-EXPLOIT moves, and since their exploits paid as well as their opponents’ (whom they were copying), they ended up with equal or better payoff (due to less learning and more exploiting).
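How might a strategy estimate p_C from its limited information? One toy approach (my own illustration, not any actual tournament entry) is to track how often a re-observed behavior’s payoff has changed since it was last seen:

```python
def estimate_p_change(reobservations):
    """Toy estimator of the environmental change rate: the fraction of
    re-observations of a known behavior whose payoff differs from the
    remembered value. reobservations: list of (old_payoff, new_payoff)."""
    if not reobservations:
        return 0.0
    changes = sum(1 for old, new in reobservations if old != new)
    return changes / len(reobservations)
```

With an estimate in hand, a strategy can decide how quickly remembered payoffs go stale and therefore how often it is worth spending a round on a learning move rather than an EXPLOIT.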

Average individual fitness of the top 10 strategies when in a homogeneous environment. The best strategy from the multi-strategy competitions is on the left and the tenth best is on the right. Note that the best strategies when all 10 strategies are present are the worst when they are alone. This is figure 1D from Rendell et al. (2010).

My view of social learning as an antisocial strategy is strengthened by the strategy’s low fitness when in isolation. The figure above shows this result, with the data points further to the left corresponding to strategies that did better in the melee. Strategies 1, 2, and 4 are the pure social learners. The height of the data points shows how well a strategy performed when faced only against itself. The strategies that did best in the heterogeneous setting of the 10-strategy melee performed the worst when they were in homogeneous populations with only agents of the same type. This is in line with Rendell, Fogarty, & Laland’s (2010) observation that social learning can decrease the overall fitness of the population. Social learners fare even worse when they can’t make occasional random mistakes in copying behavior; without these errors, all innovation disappears from the population and average fitness plummets. Social learners are free-riding on the innovation of asocial agents.

I would be interested in pursuing this heuristic connection between learning and social dilemmas further. The interactions of learners with each other and the environment can be seen as an evolutionary game: can we calculate the explicit payoff matrix of this game in terms of environmental and strategy parameters? Does this game belong to the Prisoners’ dilemma or Hawk-Dove (or other) region of cooperate-defect games? The heuristic view of innovation as a public good and the lack of stable co-existence of imitators and innovators suggests that the dynamics are PD. However, Rendell, Fogarty, & Laland (2010) show that social learning can sometimes spread better on a grid structure; this is contrary to the effects of PD on grids, but consistent with observations for HD (Hauert & Doebeli, 2004). Since the two studies use very different social learning strategies, it could be the case that depending on parameters, we can achieve either PD or HD dynamics.
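If we could compute such a payoff matrix, classifying the resulting game is mechanical. With the standard labels R (mutual cooperation), S (sucker’s payoff), T (temptation), and P (mutual defection), PD is the ordering T > R > P > S and Hawk-Dove is T > R > S > P:

```python
def classify_game(R, S, T, P):
    """Classify a symmetric two-player cooperate/defect game by the
    standard payoff orderings. R: both cooperate, S: cooperate against a
    defector, T: defect against a cooperator, P: both defect."""
    if T > R > P > S:
        return "Prisoner's dilemma"   # defection dominates; mutual defection stable
    if T > R > S > P:
        return "Hawk-Dove"            # best reply is to do the opposite of partner
    if R > T and S > P:
        return "Harmony"              # cooperation dominates
    return "other"
```

So one way to phrase my question: as environmental and strategy parameters vary, does the induced (R, S, T, P) cross from the PD region into the HD region?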

Regardless of which social dilemma is in play, we know that slight spatial structure enhances cooperation. This means that I expect that if — instead of inviscid interactions — I repeated Rendell et al. (2010) on a regular random graph then we would see more innovation. Similarly, if we introduced selection on the level of groups then groups with more innovators would fare better and spread the innovative strategy throughout the population.

So what does this mean for how I should take my father’s implicit advice? First: stop learning and start doing; I need to spend more time writing up results into papers instead of learning new things. Unfortunately for you, my dear reader, this could mean fewer blog posts on fun papers and more on my boring work! In terms of following research trends, or innovating new themes, I think a more thorough analysis is needed. It would be interesting to extend my preliminary ramblings on citation network dynamics to incorporate this work on social learning. For now, I am happy to know that at least some of the things I’m interested in are — in Twitter speak — trending.

Notes and References

  1. Way too broad for my taste; one category was “Mathematics, Computer Science, and Engineering” — talk about a tease-and-trick. After reading the first two items I was excited to see a whole section dedicated to fields like theoretical computer science, only to have my dreams dashed by ‘Engineering’. Turns out that Thomson Reuters and I have very different ideas on what ‘Mathematics’ means and how it should be grouped.
  2. Note that my interests weren’t absent from the list, with “financial crisis, liquidity, and corporate governance” appearing tenth for “Economics, Psychology, and Other Social Sciences” and even being selected for a special, more in-depth highlight. Evolutionary thinking also appeared in tenth place for the poorly titled “Mathematics, Computer Science and Engineering” area as “Differential evolution algorithm and memetic computation”. It is nice to know that these topics are popular, although I am usually not a fan of the engineering approach to computational models of evolution, since their goal is to solve problems using evolution, not to answer questions about evolution.
  3. High-impact general science publications like Nature, Science, and their more recent offshoots (like the open-access Scientific Reports) are awful at presenting theoretical computer science. It is no different in this case: Papadimitriou and Tsitsiklis (1999) is a worst-case result that requires more freedom in the problem instances to encode the necessary structure for a reduction to known hard problems. Although their theorem is about restless bandits, the reduction needs a more general formulation in terms of arbitrary deterministic finite-dimensional Markov chains instead of the specific distributions used by Rendell et al. (2010). I am pretty sure that the optimal policy for the obvious generalization (i.e. n arms instead of 100, but generated in the same way) of the stochastic environment can be learned efficiently; there is just not enough structure there to encode a hard problem. Since I want to understand multi-armed bandits better anyway, I might find the optimal algorithm and write about it in a future post.
  4. This sort of “I just want to beat you” behavior reminds me of the irrational defection towards the out-group that I observed in the harmony game for tag-based models (Kaznatcheev, 2010).

Axelrod, R., & Hamilton, W. D. (1981). The evolution of cooperation. Science, 211(4489), 1390-1396.

Hauert, C., & Doebeli, M. (2004). Spatial structure often inhibits the evolution of cooperation in the snowdrift game. Nature, 428(6983), 643-646.

Kaznatcheev, A. (2010). Robustness of ethnocentrism to changes in inter-personal interactions. Complex Adaptive Systems – AAAI Fall Symposium. (pdf)

Papadimitriou, C. H., & Tsitsiklis, J. N. (1999). The complexity of optimal queuing network control. Mathematics of Operations Research, 24(2): 293-305.

Rendell, L., Boyd, R., Cownden, D., Enquist, M., Eriksson, K., Feldman, M. W., Fogarty, L., Ghirlanda, S., Lillicrap, T., & Laland, K. N. (2010). Why copy others? Insights from the social learning strategies tournament. Science, 328(5975), 208-213.

Rendell, L., Fogarty, L., & Laland, K. N. (2010). Rogers’ paradox recast and resolved: population structure and the evolution of social learning strategies. Evolution, 64(2): 534-548.

About Artem Kaznatcheev
From the ivory tower of the School of Computer Science and Department of Psychology at McGill University, I marvel at the world through algorithmic lenses. My specific interests are in quantum computing, evolutionary game theory, modern evolutionary synthesis, and theoretical cognitive science. Previously I was at the Institute for Quantum Computing and Department of Combinatorics & Optimization at the University of Waterloo and a visitor to the Centre for Quantum Technologies at the National University of Singapore.

12 Responses to Social learning dilemma

  1. neuroecology says:

    Assuming my dad is like your dad, the advice is more along the lines of, “Hey if you do this maybe you can get a job when you graduate!” (thanks dad)

    On the one hand, social learning is why you go to a ‘good’ school for your field: you want to be around others doing similar work, so you can discuss ideas etc.

    In the game that you describe above, is the behavior a vector? I.e., if you have two behaviors 10011 and 10100, are the two behaviors ‘similar’ in some way or are they totally distinct? If they’re distinct, I wonder how the results would change if social and personal information are additive.

    • Lack of having people to discuss with can be a real bummer. I really need to start interacting with more evolutionary game theory folks in person, but I don’t know anybody that is really focused on that here. Social learning is better represented, it seems.

      In the work I summarize, the behavior is just the number of the bandit arm you know how to pull. All behaviors are equally similar/distinct. Their work specifically excludes social structures and models. That is why I suggest adding a basic interaction network, so that you are not copying from individuals at random in the population, but from your neighbours in the network. I suspect that would promote more innovation.

