Big data, prediction, and scientism in the social sciences
April 13, 2014
Much of my undergrad was spent studying physics, and although I still think that a physics background is great for a theorist in any field, there are some downsides. For example, I used to make jokes like: “soft isn’t the opposite of hard sciences, easy is.” Thankfully, over the years I have slowly started to grow out of these condescending views. Of course, apart from amusing anecdotes, my past bigotry would be of little importance if it wasn’t shared by a surprising number of grown physicists. For example, Sabine Hossenfelder — an assistant professor of physics in Frankfurt — writes in a recent post:
It isn’t so surprising that social scientists themselves are unhappy because the boat of inadequate skills is sinking in the data sea and physics envy won’t keep it afloat. More interesting than the paddling social scientists is the public opposition to the idea that the behavior of social systems can be modeled, understood, and predicted.
As a blogger I understand that we can sometimes be overly bold and confrontational. Since blogging is an informal medium, I have no fundamental problem with such strong statements or even straw men if they are part of a productive discussion or critique. If there is no useful discussion, I would normally just make a small comment or ignore the post completely, but this time I decided to focus on Hossenfelder’s post because it highlights a common symptom of interdisciplinitis: an outsider thinking that they are addressing people’s critique — usually by restating an obvious and irrelevant argument — while completely missing the point. Also, her comments serve as a nice bow to tie together some thoughts that I’ve been wanting to write about recently.
In Hossenfelder’s case, the point she is missing — and inadvertently illustrating — is the danger of scientism in the social sciences. The lesser danger is belittling the methods used by social scientists. It is not uncommon for physicists or mathematicians to bring in heavy-duty mathematical tools and argue that they should be listened to because their tools are fancy; relevance by intimidation. Of course, sometimes these tools can prove useful and alter the landscape of the fields they are introduced into, but most of the time they either disappear, or form a methodological ghetto. At times these ghettos grow into subfields of their own that develop nearly independently of the discipline they wanted to affect, like econophysics and network science.
A great illustration of this is the citation pattern in the small-world networks literature (figure from Freeman, 2004). Papers by sociologists are represented with white dots, those by physicists in black, and others are in grey. It is easy to see that there are two distinct clusters that barely communicate with each other. I am not sure how such segregation is productive: unless the physicists transition completely to being sociologists, they are not actually moving the study of society forward, because their work remains unknown or unimportant to practicing sociologists. This is why I believe that if you are entering a new field then you should do so as a connector: try your best to make any new tools you bring as simple and well justified as possible, and make sure you understand exactly what problems the field wants to answer instead of imagining your own.
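To make the observation of “two distinct clusters that barely communicate” concrete, here is a minimal sketch — on an invented toy graph, not Freeman’s actual data — of how you could quantify this kind of segregation: label each paper with its authors’ discipline, count how many citations cross the physics/sociology divide, and compute the attribute assortativity with networkx.

```python
# A toy citation graph (hypothetical papers and links, not Freeman's data):
# nodes are papers, the 'field' attribute is the authors' discipline.
import networkx as nx

G = nx.DiGraph()
papers = {
    "phys1": "physics", "phys2": "physics", "phys3": "physics",
    "soc1": "sociology", "soc2": "sociology", "soc3": "sociology",
}
G.add_nodes_from((p, {"field": f}) for p, f in papers.items())

# Mostly within-field citations, with a single bridge between the clusters.
G.add_edges_from([
    ("phys2", "phys1"), ("phys3", "phys1"), ("phys3", "phys2"),
    ("soc2", "soc1"), ("soc3", "soc1"), ("soc3", "soc2"),
    ("phys3", "soc1"),  # the rare cross-field citation
])

# Fraction of citations that cross the disciplinary divide.
cross = sum(1 for u, v in G.edges if G.nodes[u]["field"] != G.nodes[v]["field"])
print(f"cross-field citations: {cross}/{G.number_of_edges()}")

# Assortativity by discipline: close to 1 means papers cite almost only their own field.
print("field assortativity:", nx.attribute_assortativity_coefficient(G, "field"))
```

On the real citation data the same two numbers would make the segregation in the figure precise: a tiny cross-field fraction and an assortativity near one.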
Methodological intimidation concerns only the dynamics between fellow scientists and is thus of minor importance. The real issue is with using the undue authority of science to drive social change. If you don’t think science has authority in society then just look at peddlers of homeopathic medicine, who appeal to the authority of ‘scientifically tested’ to sell their products. This sort of scientism is what people fear and critique when they cry ‘social engineering!’ at the end of physicists’ posts on social issues. This is also the point that Hossenfelder misses completely when she writes (emphasis mine):
the only way we can solve the problems that mankind faces today — the global problems in highly connected and multi-layered political, social, economic and ecological networks — is to better understand and learn how to improve the systems that govern our lives.
This is the real point of people who are opposed to physicists’ naive views, and the point most frequently missed by physicists climbing into the social sciences. As Cathy O’Neil writes in the context of economics: “actual scientists are skeptical, even of their own work, and don’t pretend to have error bars small enough to make high-impact policy decisions based on their fragile results.” Instead of following this motto, being ‘scientific’ is often used as a show of power in our society, and if you climb into a poorly understood system and start “making it better”, you are likely to make it awful. Especially if you have no respect for human autonomy or dignity (regardless of your views on free will). If you want to be scientific then you should focus on understanding the system you study, not changing it. Kepler, Galileo, and Newton didn’t aim to improve the paths of the wandering stars, just to understand them. Only after a level of understanding was achieved did a derived engineering develop.
For something as important as social policy, you should first have a reasonable understanding of the system you are trying to affect before you turn to scientism to support your pet policies. In the process of expanding your understanding, however, you should not expect more information to make political opinion converge. In fact, “the more information partisans get, the deeper their disagreements become” (for details see Kahan et al., 2013). Yet among liberals the joke that “truth has a liberal bias” persists, along with the prevalent assumption that more data, especially the trendy (and overhyped) ‘big data’, will create ‘good’ social policy. This, of course, ignores the actual bias present in data from how we collect it, process it, and choose to act on its predictions.
Even in non-partisan settings where ‘improvement’ is unambiguous, such as Google Flu Trends (Ginsberg et al., 2009), it is easy to see the limits of prediction from big data. Google Flu Trends started with great success but within a few years was much less effective than the more traditional predictions by the Centers for Disease Control and Prevention that it was meant to surpass. Lazer et al. (2014) identified two primary problems: big data hubris and algorithmic dynamics. The first is a belief “that big data are a substitute for, rather than a supplement to, traditional data collection and analysis” (Lazer et al. 2014, pg. 1203). For me, this is one of the main differences between big data for the social sciences and data in more traditional observational sciences like astronomy. For astronomy, the type of data collected and the sort of questions asked are shaped by the scientists themselves. In the social sciences, however, the questions are often imposed from the outside (either from folk theories or from policy demands) and the big data is collected with other questions (usually related to the interests of a specific company) in mind. This makes the interpretation, analysis, and reproduction fundamentally different between the fields. Only in exceptional cases are sound experimental/measurement design and machine learning combined.
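As a rough illustration of the ‘supplement, not substitute’ point, here is a minimal sketch with entirely simulated numbers (nothing here is the real GFT or CDC data): a noisy search-query proxy on its own versus the same proxy combined with a lagged traditional surveillance signal.

```python
# Simulated comparison: query-volume proxy alone vs proxy + lagged traditional data.
# All series and magnitudes are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
weeks = 200
true_flu = 50 + 30 * np.sin(np.arange(weeks) * 2 * np.pi / 52)  # seasonal incidence

# Search-query proxy: tracks flu, but with noise and occasional media-driven spikes.
queries = true_flu + 10 * rng.normal(size=weeks) + 25 * (rng.random(weeks) < 0.05)
# Traditional surveillance: more accurate, but reported with a two-week lag.
surveillance = np.roll(true_flu + 3 * rng.normal(size=weeks), 2)

train, test = slice(0, 150), slice(150, weeks)

def heldout_mae(columns):
    """Fit ordinary least squares on the training weeks, report error on held-out weeks."""
    Xtr = np.column_stack([np.ones(150)] + [c[train] for c in columns])
    Xte = np.column_stack([np.ones(weeks - 150)] + [c[test] for c in columns])
    beta, *_ = np.linalg.lstsq(Xtr, true_flu[train], rcond=None)
    return np.abs(Xte @ beta - true_flu[test]).mean()

print("queries only, held-out MAE:          ", round(heldout_mae([queries]), 2))
print("queries + surveillance, held-out MAE:", round(heldout_mae([queries, surveillance]), 2))
```

In this toy setup the combined regression should achieve a lower held-out error than the proxy alone, since the lagged surveillance signal is less contaminated by spurious query spikes — a small-scale version of treating big data as a supplement rather than a substitute.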
Algorithmic dynamics further exacerbate the problem for big data: the data collection is often outside of the researchers’ hands, controlled by an opaque entity with its own interests in the processes generating the data. In the case of Google Flu Trends, for instance, countless improvements to search results (such as suggested searches) made by the search team made it very difficult for the prediction team to generate accurate predictions, because their measurement apparatus (the volume of specific search queries) was constantly changing without their knowledge (although both teams are in the same company, they are not in close contact). This makes the ‘raw data’ impossible to regenerate, so replication or reanalysis by other teams, and the critical dialogue that would result, is unimaginable. Compare this to astronomy, where countless scholars could pore over Ptolemy’s data and provide their own analysis, interpretation, and critiques, with the resulting dialogue developing the science.
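To see how a shifting measurement apparatus undermines a fixed model, here is a minimal sketch of algorithmic dynamics, again with invented numbers rather than the real GFT pipeline: a simple model is calibrated while query volume tracks flu, and then a hypothetical interface change inflates flu-related queries without the prediction team knowing.

```python
# Simulated 'algorithmic dynamics': the search engine changes the data-generating
# process mid-stream, and a model calibrated on the old regime quietly drifts.
import numpy as np

rng = np.random.default_rng(1)
weeks = 120
true_flu = 40 + 25 * np.sin(np.arange(weeks) * 2 * np.pi / 52)

# Query volume tracks flu until week 60; after that a hypothetical UI change
# (e.g. suggested searches) multiplies flu-related queries by 1.5.
multiplier = np.where(np.arange(weeks) < 60, 1.0, 1.5)
queries = multiplier * true_flu + 5 * rng.normal(size=weeks)

# Calibrate a simple linear model on the pre-change period only.
X = np.column_stack([np.ones(60), queries[:60]])
beta, *_ = np.linalg.lstsq(X, true_flu[:60], rcond=None)
pred = beta[0] + beta[1] * queries

print(f"mean absolute error before the change: {np.abs(pred[:60] - true_flu[:60]).mean():.1f}")
print(f"mean absolute error after the change:  {np.abs(pred[60:] - true_flu[60:]).mean():.1f}")
```

The model is never ‘wrong’ about the old regime; it simply keeps being applied to a measurement apparatus that no longer exists, which is roughly what happened to the flu predictions.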
In the above example of flu-outbreak prediction there is relatively little interest in gaming the system or profiteering. If you really want to see the authority of science or math abused to misguide and make a profit, then finance is ready to please. I’ve already highlighted the danger of hiding lies in complex derivatives, and I’m not alone; there is even a forthcoming book on Weapons of Math Destruction. One of the simplest such weapons is the pseudo-mathematical misuse of backtesting among financial advisors (Bailey et al., 2014). Here, the advisor presents some mathematical model, sometimes simple and sometimes complex, for predicting when it is best to buy or sell a given asset (or some other investment decision). He assures you of its soundness by showing you the great Sharpe ratio it achieves on historic stock data and a statistical test guaranteeing its significance. What he neglects to tell you is the thousands (or, with modern computers, many more) of other candidate models he considered, and the lack of any out-of-sample testing. The result is a model that overfits its training set and immediately fails on new data. However, even when such predictions consistently and reliably fail, the community of advisors is able to explain away the issue with more appeals to science, like the efficient-markets hypothesis: “the market has found the hidden effect and arbitraged away the profits”. In reality, the original model hadn’t detected any actual regularity, but the advisor was able to hide this behind a veil of scientism.
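The backtest-overfitting trick is easy to reproduce in a few lines. The sketch below — a simulation in the spirit of, but not identical to, Bailey et al.’s construction — searches thousands of random long/short rules on pure-noise returns, keeps the one with the best in-sample Sharpe ratio, and then checks how that winning rule performs on a held-out year.

```python
# Backtest overfitting on pure noise: the best of many random strategies looks
# great in-sample and collapses out of sample. All numbers are simulated.
import numpy as np

rng = np.random.default_rng(42)
days_in, days_out = 252, 252                         # one year in, one year out
returns = rng.normal(0, 0.01, days_in + days_out)    # a market with no real signal

def sharpe(daily_returns):
    """Annualized Sharpe ratio of a daily return series."""
    return np.sqrt(252) * daily_returns.mean() / daily_returns.std()

best_sharpe_in, best_signals = -np.inf, None
for _ in range(2000):                                 # thousands of candidate models
    signals = rng.choice([-1, 1], size=days_in + days_out)   # a random long/short rule
    s = sharpe(signals[:days_in] * returns[:days_in])
    if s > best_sharpe_in:
        best_sharpe_in, best_signals = s, signals

out_sharpe = sharpe(best_signals[days_in:] * returns[days_in:])
print(f"best in-sample Sharpe over 2000 candidates: {best_sharpe_in:.2f}")
print(f"the same rule out of sample:                {out_sharpe:.2f}")
```

The maximum over thousands of noise-fit rules is almost guaranteed to look impressive in-sample, while the same rule should do no better than chance on the held-out year; that gap is exactly what the advisor neglects to mention.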
I am not trying to argue that we should avoid increasing our understanding of social systems, or even that we should avoid using our understanding to guide policy. What I am trying to convince you of is that the only way to do this is through critical and well-intentioned discourse. Belittling the social sciences, or trying to argue from the authority of past success in the ‘hard’ sciences, is not conducive to such a discussion. Instead, we should embrace a plurality of methods, be modest about our past accomplishments, and be mindful that just because our tools worked in one domain doesn’t mean they are likely to work in another. At least not without significant work and give-and-take. Further, this discussion cannot be confined to a community of experts in some methodological ghetto, but should do its best to embrace as many voices as possible, hopefully including those of the public that the policy is trying to affect.
Bailey, D. H., Borwein, J. M., de Prado, M. L., & Zhu, Q. (2014). Pseudo mathematics and financial charlatanism: the effects of backtest overfitting on out-of-sample performance. Notices of the AMS, 61(5): 458-471.
Freeman, L. C. (2004). The development of social network analysis: A study in the sociology of science. Vancouver: Empirical Press.
Ginsberg, J., Mohebbi, M. H., Patel, R. S., Brammer, L., Smolinski, M. S., & Brilliant, L. (2009). Detecting influenza epidemics using search engine query data. Nature, 457(7232): 1012-1014.
Kahan, D. M., Peters, E., Dawson, E. C., & Slovic, P. (2013). Motivated Numeracy and Enlightened Self-Government. Yale Law School, The Cultural Cognition Project, Working Paper, (116).
Lazer, D., Kennedy, R., King, G., & Vespignani, A. (2014). The parable of Google Flu: traps in big data analysis. Science, 343(6176): 1203-1205.