Social learning dilemma
May 5, 2013
Last week, my father sent me a link to the 100 top-ranked specialties in the sciences and social sciences. The Web of Knowledge report considered 10 broad areas of natural and social science, and for each one listed 10 research fronts that they consider the key fields to watch in 2013: “hot areas that may not otherwise be readily identified”. A subtle hint from my dad that I should refocus my research efforts? Strange advice to get from a parent, especially since you would usually expect classic words of wisdom like: “if all your friends jumped off a bridge, would you jump too?”
So, which advice should I follow? Should I innovate and focus on my own fields of interest, or should I imitate and follow the trends? Conveniently, the field best equipped to answer this question, i.e. “social learning strategies and decision making”, was sixth of the top ten research fronts for “Economics, Psychology, and Other Social Sciences”.
For the individual, there are two sides to social learning. On the one hand, social learning is tempting because it allows agents to avoid the effort and risk of innovation. On the other hand, social learning can be error-prone and lead individuals to acquire inappropriate and outdated information if the environment is constantly changing. For the group, social learning is great for preserving and spreading effective behavior. However, if a group has only social learners then in a changing environment it will not be able to innovate new behavior and average fitness will decrease as the fixed set of available behaviors in the population becomes outdated. Since I always want to hit every nail with the evolutionary game theory hammer, this seems like a public goods game. The public good is effective behaviors, defection is frequent imitation, and cooperation is frequent innovation.
Although we can trace the study of evolution of cooperation to Peter Kropotkin, the modern treatment — especially via agent-based modeling — was driven by the innovative thoughts of Robert Axelrod. Axelrod & Hamilton (1981) ran a computer tournament where other researchers submitted strategies for playing the iterated prisoners’ dilemma. The clarity of their presentation, and the surprising effectiveness of an extremely simple tit-for-tat strategy motivated much of the current work on cooperation. True to their subject matter, Rendell et al. (2010) imitated Axelrod and ran their own computer tournament of social learning strategies, offering 10,000 euros for the best submission. By cosmic coincidence, the prize went to students of cooperation: Daniel Cownden and Tim Lillicrap, two graduate students at Queen’s University, the former a student of mathematician and notable inclusive-fitness theorist Peter Taylor.
A restless multi-armed bandit served as the learning environment. The agent could select which of 100 arms to pull in order to receive a payoff drawn independently (for each arm) from an exponential distribution. The bandit was made “restless” by independently redrawing each arm’s payoff after every round with some small fixed probability. A dynamic environment was chosen because copying outdated information is believed to be a central weakness of social learning, and because Papadimitriou & Tsitsiklis (1999) showed that solving this kind of bandit (finding an optimal policy) is PSPACE-complete, or in layman’s terms: very intractable.
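To make the setup concrete, here is a minimal sketch of such a restless bandit in Python. The change probability and mean payoff are illustrative placeholders of my own choosing, not the tournament’s actual parameter values:

```python
import random

class RestlessBandit:
    """A multi-armed bandit whose payoffs drift over time.

    Each arm's payoff is drawn independently from an exponential
    distribution; after every pull, each arm's payoff is redrawn
    with probability p_change (the "restlessness").
    """
    def __init__(self, n_arms=100, mean_payoff=10.0, p_change=0.05):
        self.mean = mean_payoff
        self.p_change = p_change
        self.payoffs = [random.expovariate(1.0 / mean_payoff)
                        for _ in range(n_arms)]

    def pull(self, arm):
        reward = self.payoffs[arm]
        # Restlessness: each arm may independently get a fresh payoff.
        for i in range(len(self.payoffs)):
            if random.random() < self.p_change:
                self.payoffs[i] = random.expovariate(1.0 / self.mean)
        return reward
```

Tracking the optimal arm in this environment is hard precisely because yesterday’s best arm may have silently changed underneath you.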
Participants submitted specifications for learning strategies that could perform one of three actions at each time step:
- Innovate — the basic form of asocial learning, the move returns accurate information about the payoff of a randomly selected behavior that is not already known by the agent.
- Observe — the basic form of social learning, the observe move returns noisy information about the behavior and payoff being demonstrated by a randomly selected individual. This could return nothing if no other agent played an exploit move this round, or if the behavior was identical to one the focal agent already knows. If some agent is selected for observation then, unlike the perfect information of innovate, noise is added: with some small probability a randomly chosen behavior is reported instead of the one performed by the selected agent, and the payoff is reported with added Gaussian noise of fixed variance.
- Exploit — the only way to acquire payoffs: the agent uses one of the behaviors it has previously added to its repertoire with innovate and observe moves. Since no payoff is given during innovate and observe, those moves carry an inherent opportunity cost of not exploiting existing behavior.
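The three moves above can be sketched as a single dispatch function. Here the repertoire is a dict from behaviors (arm indices) to believed payoffs, and the noise parameters are illustrative placeholders rather than the tournament’s values:

```python
import random

def take_turn(repertoire, move, payoffs, demonstrations,
              p_copy_error=0.1, noise_sd=1.0):
    """One round for one agent; a sketch of the three tournament moves.

    repertoire: dict mapping known behaviors (arm indices) -> believed payoff.
    payoffs: current true payoff of each arm.
    demonstrations: (behavior, payoff) pairs exploited by others this round.
    Returns the payoff earned (0 for the two learning moves).
    """
    n_arms = len(payoffs)
    if move == "INNOVATE":
        # Accurate information about a randomly chosen unknown behavior.
        unknown = [a for a in range(n_arms) if a not in repertoire]
        if unknown:
            arm = random.choice(unknown)
            repertoire[arm] = payoffs[arm]
        return 0.0
    if move == "OBSERVE":
        # Noisy copy of a random demonstrator; may yield nothing.
        if demonstrations:
            behavior, payoff = random.choice(demonstrations)
            if random.random() < p_copy_error:  # copying error
                behavior = random.randrange(n_arms)
            repertoire[behavior] = payoff + random.gauss(0.0, noise_sd)
        return 0.0
    # EXPLOIT: the only move that earns payoff; play the best-looking behavior.
    best = max(repertoire, key=repertoire.get)
    return payoffs[best]
```

Note the opportunity cost is built in: both learning moves return zero payoff, and only EXPLOIT earns anything.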
The payoffs were used to drive replicator dynamics via a death-birth process. The fitness of an agent was its total accumulated payoff divided by the number of rounds it had been alive. At each round, every agent in the population had a 1/50 probability of expiring. The resulting empty spots were filled by offspring of the remaining agents, with the probability of being selected for reproduction proportional to agent fitness. Offspring inherited their parent’s learning strategy, unless a mutation occurred, in which case the offspring would adopt a learning strategy selected at random from those considered in the simulation.
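A minimal sketch of one generation of this death-birth process; the mutation rate and strategy labels are assumptions for illustration, since the tournament’s exact values aren’t given above:

```python
import random

def death_birth_step(population, p_death=1.0 / 50, p_mutation=0.01,
                     strategies=("innovator", "imitator")):
    """One generation: death with probability 1/50, then fitness-
    proportional reproduction to refill the empty spots.

    population: list of dicts with keys 'strategy', 'payoff', 'age'.
    Fitness = lifetime payoff / rounds alive.
    """
    survivors = [a for a in population if random.random() > p_death]
    n_dead = len(population) - len(survivors)
    if not survivors:
        return survivors
    fitnesses = [a["payoff"] / max(a["age"], 1) for a in survivors]
    if sum(fitnesses) > 0:
        # Parents chosen with probability proportional to fitness.
        parents = random.choices(survivors, weights=fitnesses, k=n_dead)
    else:
        parents = random.choices(survivors, k=n_dead)
    offspring = []
    for p in parents:
        strategy = p["strategy"]
        if random.random() < p_mutation:
            # Mutation: adopt a random strategy from those in the simulation.
            strategy = random.choice(strategies)
        offspring.append({"strategy": strategy, "payoff": 0.0, "age": 0})
    return survivors + offspring
```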
A total of 104 learning strategies were received for the tournament. Most were from academics, but three were from high school students (with one placing in the top 10). A pairwise round-robin tournament was held to test the probability of a strategy invading any other strategy (i.e., whether a single individual with a new strategy could spread when introduced into a homogeneous population of another strategy). This round-robin was used to select the 10 best strategies for advancement to the melee stage. During the round-robin the environmental parameters were kept fixed; only during the melee stage, with all of the top-10 strategies present, did the experimenters vary these parameters.

Unsurprisingly, using lots of EXPLOIT moves is essential to good performance, since this is the only way to earn payoff. In other words: less learning and more doing. However, a certain minimal amount of learning is needed to get your doing off the ground, and within this learning there is a clear positive correlation between the amount of social learning and success in invading other strategies. The best strategies used the limited information given to them to estimate the rate of environmental change and used that estimate to better predict and quickly react to changes in the environment. However, they also relied completely on social learning, waiting for other agents to innovate new behaviors or for copying errors to accidentally add a new behavior to their repertoire. Since evolution (unlike the classical assumptions of rationality) cares about relative and not absolute payoffs, it didn’t matter to these agents that they were not doing as well as they could be, as long as they were doing as well as (or better than) their opponents. OBSERVE moves and a good estimate of environmental change allowed the agents to minimize their number of non-EXPLOIT moves, and since their exploits paid as well as their opponents’ (who they were copying) they ended up having equal or better payoff (due to less learning and more exploiting).
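As an illustration of how a strategy might estimate the rate of environmental change from its limited information, here is a simple estimator of my own devising (not an actual tournament entry): whenever the same behavior is sampled twice, the fraction of pairs whose payoff differs, corrected for the gap between sightings, estimates the per-round change probability.

```python
def estimate_change_rate(sightings):
    """Estimate the per-round probability that an arm's payoff is redrawn.

    sightings: list of (round, behavior, payoff) triples from the agent's
    innovate/observe history. If a payoff changed over a gap of k rounds,
    then Pr(changed) = 1 - (1 - p)^k, so we invert the observed change
    fraction at the mean gap. Returns None if no behavior was seen twice.
    """
    last = {}
    changed, total_gap, pairs = 0, 0, 0
    for rnd, behavior, payoff in sightings:
        if behavior in last:
            prev_rnd, prev_payoff = last[behavior]
            pairs += 1
            total_gap += rnd - prev_rnd
            if payoff != prev_payoff:
                changed += 1
        last[behavior] = (rnd, payoff)
    if pairs == 0 or total_gap == 0:
        return None
    frac = changed / pairs
    mean_gap = total_gap / pairs
    return 1.0 - (1.0 - frac) ** (1.0 / mean_gap)
```

In the tournament this sort of estimate has to be squeezed out of very few samples, since every round spent re-sampling a known behavior is a round not spent exploiting.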
My view of social learning as an antisocial strategy is strengthened by the strategies’ low fitness when in isolation. The figure to the left shows this result, with data points further to the left corresponding to strategies that did better in the melee. Strategies 1, 2, and 4 are the pure social learners. The height of a data point shows how well a strategy performed when faced only against itself. The strategies that did best in the heterogeneous setting of the 10-strategy melee performed the worst when they were in a homogeneous population with only agents of the same type. This is in line with Rendell, Fogarty, & Laland’s (2010) observation that social learning can decrease the overall fitness of the population. Social learners fare even worse when they can’t make occasional random mistakes in copying behavior: without these errors, all innovation disappears from the population and average fitness plummets. Social learners are free-riding on the innovation of asocial agents.
I would be interested in pursuing this heuristic connection between learning and social dilemmas further. The interactions of learners with each other and the environment can be seen as an evolutionary game: can we calculate the explicit payoff matrix of this game in terms of environmental and strategy parameters? Does this game belong to the Prisoners’ dilemma or Hawk-Dove (or other) region of cooperate-defect games? The heuristic view of innovation as a public good and the lack of stable co-existence of imitators and innovators suggest that the dynamics are PD. However, Rendell, Fogarty, & Laland (2010) show that social learning can sometimes spread better on a grid structure; this is contrary to the effect of grids on PD, but consistent with observations for HD (Hauert & Doebeli, 2004). Since the two studies use very different social learning strategies, it could be the case that, depending on parameters, we can achieve either PD or HD dynamics.
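For reference, the orderings that separate these regions of cooperate-defect games can be written down directly, using the standard labels R (mutual cooperation), S (sucker’s payoff), T (temptation), and P (mutual defection):

```python
def classify_game(R, S, T, P):
    """Classify a symmetric 2x2 cooperate/defect game by payoff ordering."""
    if T > R > P > S:
        return "Prisoner's dilemma"   # defection dominates, mutual C is better
    if T > R > S > P:
        return "Hawk-Dove"            # best response is to do the opposite
    if R > T and P > S:
        return "Stag Hunt"            # coordination with a risky optimum
    if R > T and S > P:
        return "Harmony"              # cooperation dominates
    return "other"
```

Computing the effective R, S, T, P for imitators and innovators from the environmental and strategy parameters would settle which region the social learning game falls into.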
Regardless of which social dilemma is in play, we know that slight spatial structure enhances cooperation. Thus, I expect that if — instead of inviscid interactions — I repeated Rendell et al. (2010) on a regular random graph, then we would see more innovation. Similarly, if we introduced selection at the level of groups, then groups with more innovators would fare better and spread the innovative strategy throughout the population.
So what does this mean for how I should take my father’s implicit advice? First: stop learning and start doing; I need to spend more time writing up results into papers instead of learning new things. Unfortunately for you, my dear reader, this could mean fewer blog posts on fun papers and more on my boring work! In terms of following research trends, or innovating new themes, I think a more thorough analysis is needed. It would be interesting to extend my preliminary ramblings on citation network dynamics to incorporate this work on social learning. For now, I am happy to know that at least some of the things I’m interested in are — in Twitter speak — trending.
Notes and References
- Way too broad for my taste: one category was “Mathematics, Computer Science, and Engineering”; talk about a tease-and-trick. After reading the first two items I was excited to see a whole section dedicated to fields like theoretical computer science, only to have my dreams dashed by ‘Engineering’. Turns out that Thomson Reuters and I have very different ideas on what ‘Mathematics’ means and how it should be grouped.
- Note that my interests weren’t absent from the list, with “financial crisis, liquidity, and corporate governance” appearing tenth for “Economics, Psychology, and Other Social Sciences” and even selected for a special, more in-depth highlight. Evolutionary thinking also appeared in tenth place for the poorly titled “Mathematics, Computer Science and Engineering” area as “Differential evolution algorithm and memetic computation”. It is nice to know that these topics are popular, although I am usually not a fan of the engineering approach to computational models of evolution since their goal is to solve problems using evolution, not answer questions about evolution.
- High-impact general science publications like Nature, Science, and their more recent offshoots (like the open-access Scientific Reports) are awful at presenting theoretical computer science. It is no different in this case: Papadimitriou and Tsitsiklis (1999) is a worst-case result that requires more freedom in the problem instances to encode the necessary structure for a reduction to known hard problems. Although their theorem is about restless bandits, the reduction needs a more general formulation in terms of arbitrary deterministic finite-dimensional Markov chains instead of the specific distributions used by Rendell et al. (2010). I am pretty sure that the optimal policy for the obvious generalization (i.e. an arbitrary number of arms instead of 100, but generated in the same way) of the stochastic environment can be learned efficiently; there is just not enough structure there to encode a hard problem. Since I want to understand multi-armed bandits better anyways, I might find the optimal algorithm and write about it in a future post.
- This sort of “I just want to beat you” behavior reminds me of the irrational defection towards the out-group that I observed in the harmony game for tag-based models (Kaznatcheev, 2010).
Axelrod, R., & Hamilton, W. D. (1981). The evolution of cooperation. Science, 211(4489), 1390-1396.
Hauert, C., & Doebeli, M. (2004). Spatial structure often inhibits the evolution of cooperation in the snowdrift game. Nature, 428(6983), 643-646.
Kaznatcheev, A. (2010). Robustness of ethnocentrism to changes in inter-personal interactions. Complex Adaptive Systems – AAAI Fall Symposium. (pdf)
Papadimitriou, C. H., & Tsitsiklis, J. N. (1999). The complexity of optimal queuing network control. Mathematics of Operations Research, 24(2): 293-305.
Rendell, L., Boyd, R., Cownden, D., Enquist, M., Eriksson, K., Feldman, M. W., Fogarty, L., Ghirlanda, S., Lillicrap, T., & Laland, K. N. (2010). Why copy others? Insights from the social learning strategies tournament. Science, 328(5975), 208-213. PMID: 20378813
Rendell, L., Fogarty, L., & Laland, K. N. (2010). Rogers’ paradox recast and resolved: population structure and the evolution of social learning strategies. Evolution, 64(2): 534-548.