Models, modesty, and moral methodology
April 27, 2014
In high school, I had the privilege to be part of a program that focused on the humanities and social sciences, critical thinking, and building research skills. The program’s crown was a semester of grade eleven (early 2005) dedicated to independent research on a project of our own design. For my project, I pored over papers and books at the University of Saskatchewan library, trying to come up with a semi-coherent thesis on post-Cold War religious violence. Maybe this is why my first publications in college were on ethnocentrism? It’s a hard question to answer, but I doubt that the connection was that direct. As I was preparing to head to McGill, I had ambitions to study political science and physics, but I was quickly disenchanted with the idea and ended up focusing on theoretical computer science, physics, and math. When I returned to the social sciences in late 2008, it was with the arrogance typical of a physicist first entering a new field.
In the years since — along with continued modeling — I have tried to become more conscious of the types and limitations of models and their role in knowledge building and rhetoric. In particular, you might have noticed a recent trend of posts on the social sciences and the various dangers of Scientism. These are part of an ongoing discussion with Adam Elkus and my reading of the Dart-Throwing Chimp. Recently, Jay Ulfelder shared a fun quip on why skeptics make bad pundits:
First Rule of Punditry: I know everything; nothing is complicated.
First Rule of Skepticism: I know nothing; everything is complicated.
This gets at an important issue common to many public-facing sciences, such as climate science, the social sciences, and medicine. Academics are often encouraged to be skeptical, both of their own work and that of others, and to be precise about the scope of their predictions, although self-skepticism and precision are sometimes eroded by the pressure to publish ‘high-impact’ results. I would argue that without factions, divisions, and debate, science would find progress — whatever that means — much more difficult. Academic rhetoric, however, is often incompatible with political rhetoric, since — as Jay Ulfelder points out — the latter relies much more on certainty, conviction, and the force with which you deliver your message. What should a policy-oriented academic do?
I am probably not the best person to ask, since I usually try to sidestep the issue by cloaking my discussions in a language that is not easily accessible to non-experts. Further, my lack of any inherent authority means that my inaccessible language is never (mis-)translated for the public in a way that I might disagree with. This is not the case for high-impact science.
Even if an author does not intend to engage directly with the public, other academics might assume that science journalists will force an engagement. So if you work in an area of potential public interest, even if you don’t plan to engage the public yourself, it is still important to keep them in mind just to get by in the academic hustle. For example, Brian McGill recently shared part of a negative review one of his ecology colleagues received from Nature:
I can appreciate counter-intuitive findings that are contrary to common assumptions. However, because of the large policy implications of this paper and its interpretation, I feel that this paper has to be held to a high standard of demonstrating results beyond a reasonable doubt … Unfortunately, while the authors are careful to state [the limitations of their results] … clearly media reporting on these results are going to skim right over [these caveats and conditions] … I do not think [the media's simplistic] conclusion would be justified, and I think it is important not to pave the way for that conclusion to be reached by the public.
Of course, it is easy to write off this review as “those damn high-impact pulp rags, they just care about the headlines”, but for most of us, this is a disingenuous dismissal. For example, no matter how much I ridicule the high-impact journals, I am hard pressed to imagine a setting where I would turn down an opportunity to publish there. To stress the importance to non-academics: economists would give up more than half a thumb to publish in their flagship journal (Attema et al., 2014). In other words, we cannot dismiss the precedent that review behavior at top journals sets for interaction with the public.
As much as we might object that — capital S — Science is objective and amoral and it is only those pesky human scientists that aren’t, there is no use denying that the authority of science and the institutions that support it can be used to gain undue power. This is often hard for scientists to see — I have my own trouble with it — because science ‘done right’ often happens to endorse our opinions. Yet we are quick to dismiss junk science as “activists using the trappings of science to influence public opinion and policy”. But if science is not political, then why does disguising one’s work as science give it more social influence?
Let’s go back to the Nature review and examine why the hypothetical science journalist omitting all the important caveats of a result might upset us. I believe that the issue here is one of moral agency. When a journalist presents some ‘scientific conclusion’, they are not taking any moral responsibility for that conclusion or the effects of our belief in it; instead, they are outsourcing that responsibility to objective and amoral Science, or sometimes to the authority of the authors, who would disagree with the simplified conclusion (and can thus disclaim responsibility). It seems to me that in political discourse moral agency is important, and by appealing to science we rid ourselves of any negative moral consequences for our actions. This seems strange to me.
I feel that if people want to engage in both science and public policy, then they should try to keep those two parts of their activities separate. In particular, they should not use one as a trump card in the other; they should engage each area on its own terms. In the context of the social sciences, this means that quantitative forecasters should be less skeptical of their own models if they want to have influence on policy. But my agreement comes with one major caveat that I think Jay Ulfelder ignored — moral responsibility. What I object to is saying “the math says blah” or “scientifically, blah should happen” or “according to statistics, it’s blah”. I object because the moral subject of those sentences is not a person that we can blame, but math, science, and statistics. In this subtle shift of moral subject, I am pretending that there is some objective truth to which I am outsourcing my responsibility — and no social models are at a level where you can ethically do this. However, using my model to generate a prediction and then saying “I predict blah” is alright. In the second case, if my prediction is wrong and leads to something bad, I can’t use my model as a scapegoat, since I took responsibility for the prediction by endorsing it in the face of known and unknown uncertainties.
In other words, punditry with models is fine, as long as it is still people, and not the models themselves, that are the pundits. It is fine for me to pretend that I have access to “objective truth”, but it is not fine to pretend that I am simply the messenger of a model that has access to “objective truth”, taking the credit when I am right but yelling “don’t shoot the messenger” when I am mistaken. If I change the moral subject and say “my model predicts”, then I need to be modest about the limits of my model and take the time to educate the person I am addressing on all the caveats and limitations, so that the final decision maker can make an informed decision and take moral responsibility for what is now their prediction.
It is important to note that my opinion is predicated on my own experiences, largely with CAS modelers using their work as rhetoric and explaining away mistaken predictions with “all models are approximations”, “garbage in, garbage out”, “human society is a complex system, so the butterfly effect amplifies even the little uncertainties in our data”, etc. Of course, people like Adam Elkus have more experience than me with pundits and can thus offer very strong counter-arguments that I recommend consulting (see the discussion here). Finally, my opinion assumes that being wrong can actually have consequences for pundits, which is definitely a mistaken assumption at times.
Attema, A. E., Brouwer, W. B., & Van Exel, J. (2014). Your right arm for a publication in AER? Economic Inquiry, 52(1): 495-502.