Danger of motivatiogenesis in interdisciplinary work

Randall Munroe has a nice old xkcd on citogenesis: the way factoids get created from bad checking of sources. You can see the comic at right. But let me summarize the process without direct reference to Wikipedia:

1. Somebody makes up a factoid and writes it somewhere without citation.
2. Another person then uses the factoid in passing in a more authoritative work, maybe citing the source from step 1, maybe not.
3. Further work inherits the citation from 2, without verifying its source, further enhancing the legitimacy of the factoid.
4. The cycle repeats.

Soon, everybody knows this factoid and yet there is no ground truth to back it up. I’m sure we can all think of some popular examples. Social media certainly seems to make this sort of loop easier.

We see this occasionally in science, too. Back in 2012, Daniel Lemire provided a nice example of this in algorithms research. But a science factoid usually gets debunked eventually by new experiments or proofs, mostly because it can be professionally rewarding to show that a commonly assumed factoid is actually false.

But there is a similar effect in science that seems to me even more common, and much harder to correct: motivatiogenesis.

Motivatiogenesis can be especially easy to fall into with interdisciplinary work, especially if we don’t challenge ourselves to produce work that advances both (and not just one) of the fields we’re bridging.

Let me first be a bit more precise about what I mean by motivatiogenesis.

I think that Kyler J. Brown first introduced me to this many years ago when we were both still at McGill. He described to me an all-too-common misbehavior in neuroscience: a bad researcher justifies his work to a biologist with “this is the way computer scientists address this question, and it is of interest purely from theory” and, when justifying the same work to a computer scientist, with “this model is biologically reasonable, and of interest from science”. The biologist and the computer scientist don’t know the other’s field well enough to see through this bluff, and the researcher manages to squeeze out a poor publication.

Contrast this with what a good researcher would do. She would justify her work to a biologist purely on biological grounds. And she would justify it to a computer scientist through its contribution to computer science. In other words, a good interdisciplinary scientist would contribute to both fields, and not use the border as a crutch.

But the case of the bad scientist gets worse!

Once the cycle starts and a group gets a few such papers out, the autocatalytic effect sets in: future work can justify itself by saying “we use a standard model in the field”, even though the ‘standard model’ never had a justification of its own. Eventually the subfield can start generating and answering its own field-endogenous questions that are fundamentally unhinged from reality.

But unlike a factoid, a false motivation is harder to burst. Especially if a subfield or cottage industry develops around the method. I think that this might be closely related to Jeremy Fox’s notion of Zombie Ideas.

Sometimes you are able to see this motivatiogenesis cycle starting, or already well developed, yet there is little you can do: bursting it requires publishing an unimpressive negative result or a critique of motivations. This is much less rewarding than upending a commonly believed fact. And much harder than continuing to work inside the bubble.

I feel like a lot of potential bubbles (whether they be hot topics or temperate ongoing themes) started via motivatiogenesis: by chance at first, and then continuing for sociological reasons. The only way to address the bubble is by having people who are knowledgeable on both sides of the topic point out why a certain model or approach has neither a strong empirical nor a strong theoretical justification. It can help to have skeptical but engaged colleagues from different fields.

But this is hard to do, and so a motivation bubble will often grow and grow.

Sometimes, new authors don’t even realize they’ve fallen into a trap. If they’ve been trained within the bubble, it might be impossible to find the appropriate distance for questioning. When reflecting on my own work, I sometimes fear that parts of evolutionary game theory might end up like this.

But even with bubbles, there can be hope. A field is not static, and there are ways to ground it and use the tools developed even if they were developed in an ungrounded way. This is why I am such a big advocate of operationalization of theoretically well-understood simple models.

Do you know of any interdisciplinary motivation bubbles in your own fields, dear reader? Have you been part of a motivatiogenesis before?

I feel like I’ve definitely played a small part in this cycle of motivatiogenesis myself.

For one of my papers in undergrad, I briefly studied pronoun acquisition in children (Kaznatcheev, 2010). It was a computational study, and I used Fahlman & Lebiere’s (1990) cascade-correlation neural nets. I used them because they were the ‘standard approach’ in my lab and I had code on hand for running them. However, these neural nets are not the best performers from an engineering perspective, and they also seem to have no real empirical justification from neurobiology. But I used them because I simply didn’t have the time or energy to consider building (or finding) a better model. Or the wisdom and perspective to question my model choice. So I used the ‘standard model’.

To make matters worse, the paper has subsequently been cited in engineering work as motivation: “people use CC NNs in science for this class of problems (Kaznatcheev, 2010), we should continue to work on them”.

Thankfully, the overall impact of this old paper of mine has been minimal.

I don’t know how to avoid these bubbles forming. But I really liked John Regehr’s call for epitaphs for bubbles. Maybe we can learn something useful once they’ve popped?

What do you think, dear reader? Is this a legitimate concern or an unnecessary worry?

References

Fahlman, S. E., & Lebiere, C. (1990). The cascade-correlation learning architecture. In Advances in neural information processing systems (pp. 524-532).

Kaznatcheev, A. (2010). A connectionist study on the interplay of nouns and pronouns in personal pronoun acquisition. Cognitive Computation, 2(4), 280-284.


About Artem Kaznatcheev
From the Department of Computer Science at Oxford University and Department of Translational Hematology & Oncology Research at Cleveland Clinic, I marvel at the world through algorithmic lenses. My mind is drawn to evolutionary dynamics, theoretical computer science, mathematical oncology, computational learning theory, and philosophy of science. Previously I was at the Department of Integrated Mathematical Oncology at Moffitt Cancer Center, and the School of Computer Science and Department of Psychology at McGill University. In a past life, I worried about quantum queries at the Institute for Quantum Computing and Department of Combinatorics & Optimization at University of Waterloo and as a visitor to the Centre for Quantum Technologies at National University of Singapore. Meander with me on Google+ and Twitter.

7 Responses to Danger of motivatiogenesis in interdisciplinary work

  1. Philip Gerlee says:

    I think you make a valid point, and it applies to several subfields of mathematical biology. My pet example (as you might know) is the application of evolutionary game theory in the form of the replicator equation in cancer modelling. There are too many papers with made up parameter values that neither advance theory nor cancer biology. Your paper on measuring games is an excellent example of someone trying to break the motivatiogenesis cycle!

  2. Pingback: Friday links: zombie ideas in the humanities and social sciences, fantasy birding, Rhesus pieces, and more | Dynamic Ecology

  3. Pingback: The Master Class Is Just Not That Into You – Tropics of Meta

  4. Rob Noble says:

    I agree with Philip: definitely a valid and important point. I would only add that bad established ideas persist not only due to myopia, but also because senior scientists write these ideas into major grant proposals and build big research programmes around them. Even when junior researchers or collaborators can see the concept is dodgy, it’s very hard for them to swim against the tide.

    • Good point, Rob.

      I am generally of the (pessimistic?) mindset that most things in science persist primarily due to power structures, and develop along paths of least resistance: nature or falseness can provide some resistance, but (lack of) funding, fashion, or prestige is often the bigger source of resistance.

      It might be an interesting empirical question to see whether fields with big groups and more oppressive hierarchies form more/bigger/longer motivatiogenesis bubbles compared to flatter fields without big labs. I don’t have a strong intuition either way.

  5. For posterity: I think I offered a rather vague definition of motivatiogenesis in this post. Jeremy Fox provided a better definition of motivatiogenesis than I did on Dynamic Ecology’s Friday links:

    [motivatiogenesis is] providing different motivations or rationales for your work to different audiences–each of which would be seen through by the other audience.

    This seems to get to the core of what I was trying to say, but in a single sentence.
