Models, modesty, and moral methodology

In high school, I had the privilege to be part of a program that focused on the humanities and social sciences, critical thinking, and building research skills. The program’s capstone was a semester of grade eleven (early 2005) dedicated to independent research on a project of our own design. For my project, I pored over papers and books at the University of Saskatchewan library, trying to come up with a semi-coherent thesis on post-Cold War religious violence. Maybe this is why my first publications in college were on ethnocentrism? It’s a hard question to answer, but I doubt that the connection was that direct. As I was preparing to head to McGill, I had ambitions of studying political science and physics, but I was quickly disenchanted with the idea and ended up focusing on theoretical computer science, physics, and math. When I returned to the social sciences in late 2008, it was with the arrogance typical of a physicist first entering a new field.

In the years since — along with continued modeling — I have tried to become more conscious of the types and limitations of models and of their role in knowledge building and rhetoric. In particular, you might have noticed a recent trend of posts on the social sciences and the various dangers of Scientism. These are part of an ongoing discussion with Adam Elkus and of my reading of the Dart-Throwing Chimp blog. Recently, Jay Ulfelder shared a fun quip there on why skeptics make bad pundits:

First Rule of Punditry: I know everything; nothing is complicated.

First Rule of Skepticism: I know nothing; everything is complicated.

This gets at an important issue common to many public-facing sciences, such as climate science, the social sciences, and medicine. Academics are often encouraged to be skeptical, both of their own work and that of others, and to be precise in the scope of their predictions, although this self-skepticism and precision is sometimes eroded by the need to publish ‘high-impact’ results. I would argue that without factions, divisions, and debate, science would find progress — whatever that means — much more difficult. Academic rhetoric, however, is often incompatible with political rhetoric, since — as Jay Ulfelder points out — the latter relies much more on certainty, conviction, and the force with which you deliver your message. What should a policy-oriented academic do?

I am probably not the best person to ask, since I usually try to sidestep the issue by cloaking my discussions in language that is not easily accessible to non-experts. Further, my lack of any inherent authority means that my inaccessible language is never (mis-)translated for the public in a way that I might disagree with. This is not the case for high-impact science.

Even if an author does not intend to engage directly with the public, other academics might assume that science journalists will force an engagement. So if you work in an area of potential public interest, even if you don’t plan to engage the public yourself, it is still important to keep them in mind just to get by in the academic hustle. For example, Brian McGill recently shared part of a negative review that one of his ecology colleagues received from Nature:

I can appreciate counter-intuitive findings that are contrary to common assumptions. However, because of the large policy implications of this paper and its interpretation, I feel that this paper has to be held to a high standard of demonstrating results beyond a reasonable doubt … Unfortunately, while the authors are careful to state [the limitations of their results] … clearly media reporting on these results are going to skim right over [these caveats and conditions] … I do not think [the media's simplistic] conclusion would be justified, and I think it is important not to pave the way for that conclusion to be reached by the public.

Of course, it is easy to write off this review as “those damn high-impact pulp rags, they just care about the headlines”, but for most of us that is a disingenuous dismissal. No matter how much I ridicule the high-impact journals, I am hard pressed to imagine a setting where I would turn down an opportunity to publish in them. To stress their importance to non-academics: economists report that they would give up more than half a thumb to publish in their flagship journal (Attema et al., 2014). In other words, we cannot dismiss the precedent that review behavior at top journals sets for interaction with the public.

As much as we might object that — capital S — Science is objective and amoral, and that it is only those pesky human scientists who aren’t, there is no use denying that the authority of science and the institutions that support it can be used to gain undue power. This is often hard for scientists to see — I have my own trouble with it — because much of the time science ‘done right’ happens to endorse our opinions, even as we readily dismiss junk science as “activists using the trappings of science to influence public opinion and policy”. But if science is not political, then why does disguising one’s work as science give it more social influence?

Let’s go back to the Nature review and examine why the hypothetical science journalist omitting all the important caveats of a result might upset us. I believe that the issue here is one of moral agency. When journalists present some ‘scientific conclusion’, they are not taking any moral responsibility for that conclusion or for the effects of our belief in it; instead, they are outsourcing that responsibility to objective and amoral Science, or sometimes to the authority of the authors — who would disagree with the simplified conclusion and can thus disclaim responsibility. It seems to me that moral agency matters in political discourse, and yet by an appeal to science we have managed to rid ourselves of any negative moral consequences for our actions. This seems strange to me.

I feel that if people want to engage in both science and public policy, then they should try to keep those two parts of their activities separate. In particular, they should not use one as a trump card in the other field; they should engage in each area on its own terms. In the context of the social sciences, this means that quantitative forecasters should, as Jay Ulfelder suggests, be less skeptical of their own models if they want to have influence on policy. But my agreement comes with one major caveat that I think he ignored: moral responsibility. What I object to is saying “the math says blah” or “scientifically, blah should happen” or “according to statistics, it’s blah“. I object because the moral subject of those sentences is not a person that we can blame, but math, science, and statistics. In this subtle shift of moral subject, I am pretending that there is some objective truth to which I am outsourcing my responsibility — and no social model is at a level where you can ethically do this. However, using my model to generate a prediction and then saying “I predict blah” is alright. In the second case, if my prediction is wrong and leads to something bad, I can’t use my model as a scapegoat, since I took enough responsibility for the prediction to endorse it in the face of known and unknown uncertainties.

In other words, punditry with models is fine, as long as it is still the people and not the models themselves that are the pundits. It is fine for me to pretend that I have access to “objective truth”, but it is not fine to pretend that I am simply the messenger of a model that has access to “objective truth”, taking the credit when I am right but yelling “don’t shoot the messenger” when I am mistaken. If I change the moral subject and say “my model predicts”, then I need to be modest about the limits of my model and take the time to educate the person I am addressing on all the caveats and limitations, so that the final decision maker can make an informed decision and take moral responsibility for what is now their prediction.
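To make that last point a little more concrete, here is a minimal sketch of what “making the caveats travel with the prediction” could look like in practice. Everything in it is hypothetical: the Forecast class, its fields, and the numbers are my own illustration, not anyone’s actual forecasting pipeline. The one design choice that matters is that present() refuses to emit a headline number unless a named person has endorsed the forecast, and it always prints the caveats alongside the estimate:

```python
from dataclasses import dataclass, field

@dataclass
class Forecast:
    """A prediction that cannot be quoted apart from its limitations.

    Purely illustrative: the fields and behavior are assumptions for
    this sketch, not any real forecasting tool's interface.
    """
    claim: str                # e.g. "elevated risk of X next year"
    point_estimate: float     # the headline number
    interval: tuple           # rough uncertainty range (low, high)
    caveats: list = field(default_factory=list)
    endorsed_by: str = ""     # the human taking moral responsibility

    def present(self) -> str:
        # The model is never the moral subject: a person must sign off.
        if not self.endorsed_by:
            raise ValueError("no named person has endorsed this forecast; "
                             "'the model says' is not an acceptable author")
        lines = [
            f"{self.endorsed_by} predicts: {self.claim}",
            f"  point estimate {self.point_estimate:.2f}, "
            f"plausible range {self.interval[0]:.2f}-{self.interval[1]:.2f}",
        ]
        # Caveats are printed unconditionally, never skimmed over.
        lines += [f"  caveat: {c}" for c in self.caveats]
        return "\n".join(lines)


# Usage: the forecaster, not the model, is the subject of the sentence.
forecast = Forecast(
    claim="elevated risk of political instability in country X next year",
    point_estimate=0.30,
    interval=(0.15, 0.50),
    caveats=[
        "trained on sparse and noisy event data",
        "assumes past dynamics persist into the forecast window",
    ],
    endorsed_by="A. Forecaster",
)
print(forecast.present())
```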

It is important to note that my opinion is predicated on my own experiences, largely with complex adaptive systems (CAS) modelers using their work as rhetoric and explaining away mistaken predictions with “all models are approximations”, “garbage in, garbage out”, “human society is a complex system, so the butterfly effect amplifies even the little uncertainties in our data”, and so on. Of course, people like Adam Elkus have more experience than me with pundits and can thus offer very strong counter-arguments that I recommend consulting (see the discussion here). Finally, my opinion assumes that being wrong can actually have consequences for pundits, which is definitely a mistaken assumption at times.

References

Attema, A. E., Brouwer, W. B., & Van Exel, J. (2014). Your right arm for a publication in AER? Economic Inquiry, 52(1): 495-502.


About Artem Kaznatcheev
From the ivory tower of the School of Computer Science and Department of Psychology at McGill University, I marvel at the world through algorithmic lenses. My specific interests are in quantum computing, evolutionary game theory, modern evolutionary synthesis, and theoretical cognitive science. Previously I was at the Institute for Quantum Computing and Department of Combinatorics & Optimization at the University of Waterloo and a visitor to the Centre for Quantum Technologies at the National University of Singapore.

7 Responses to Models, modesty, and moral methodology

  1. Thanks for continuing this conversation, which I’ve found challenging in a good way.

    One point of clarification about my post on how circumspect quantitative forecasters should be: I was *not* suggesting that modelers should assert their results as truth or that we should ignore the moral dimensions of our conversations with policy and activist audiences.

    On the first point, I tried—and, apparently, failed—to say clearly that modelers should be transparent about the limitations of their work; they just shouldn’t start the presentation there, because leading with the caveats will, I expect, cause many listeners to tune out and favor information provided more confidently by others.

    On the second point, my rationale for doing so comes from my attempt to consider the wider moral context. If these audiences are making planning decisions with moral consequences, and the information they are getting now is often unreliable, then we arguably have a moral obligation to try to give them information that is marginally less unreliable. If we believe we have information that satisfies that criterion, then there’s a moral case to be made for emphasizing persuasion rather than circumspection in its presentation. If we’re not confident that our information satisfies that criterion, then we should not present it confidently.

    And just to be clear: by “present confidently,” I simply mean follow rather than lead with the caveats. I do *not* mean give the models agency and pretend that they exist in some realm of objectivity outside our mental construction. They are our mental constructs, and we should never pretend otherwise.

    • Adam Elkus says:

      I’ve written a lot at various outlets about the problem of political scientists confusing the reality of their science (e.g. explaining the world) with how it can aid politics (e.g. as a means of manipulating the world to serve some desired end). I came out pretty strongly in saying something similar to what you do here — science != politics, and we shouldn’t confuse the two. This was during a time when many political scientists thought that their discipline was becoming irrelevant and that the key to achieving relevance was to answer the demands of policy and journalism. I thought this was, to say the least, a very, very wrong attitude to take.

      That said, I wonder if what Jay is bringing up here raises a different question. It’s difficult for us to get past the basic problem I noted here: as much as people like me, you, and Jay can say to our hearts’ content that science != politics and that we must be moral and transparent about our work, there are very strong incentives not to be. I briefly wrote about those incentives (in a rather flamboyant style befitting op-ed writing: http://warontherocks.com/2013/09/hot-models-hard-questions/), but I don’t think my suggestions at the end can really help with the problem beyond making the consumer more knowledgeable.

      What we really need at this point is not necessarily more blogs about advocascience and models — which is already well-trodden ground — but something that offers guidance for the researcher/modeler who exists in the space between science and pure policy. There are a lot of people doing applied analysis in public policy who are neither pure researchers nor purely partisan hacks: people, for example, like the Cold War RAND analysts. I haven’t seen a lot of writing that really speaks to their perspectives — the moral dilemmas they face, the problems they try to solve, and the constraints they deal with from within institutions that are very different from the typical environments that scientists work in.

      • Adam Elkus says:

        The closest thing I can think of that already exists is Emanuel Derman’s “Modeler’s Manifesto.”

  2. Pingback: Models, modesty, and moral methodology. « Economics Info

  3. Pingback: Weapons of math destruction and the ethics of Big Data | Theory, Evolution, and Games Group

  4. Pingback: Limits of prediction: stochasticity, chaos, and computation | Theory, Evolution, and Games Group

  5. Pingback: Models and metaphors we live by | Theory, Evolution, and Games Group
