Separating theory from nonsense via communication norms, not Truth

Earlier this week on Twitter, Brian Skinner wrote an interesting thread on how to distinguish good theory from crackpottery. He started with a trait that both theorists and crackpots share: we have an “irrational self-confidence” — a belief that just by thinking we “can arrive at previously-unrealized truths about the world”. From this starting point, the two diverge in their use of evidence. A crackpot relies primarily on positive evidence: he thinks hard about a problem, arrives at a theory that feels right, and then publicizes the result.

A theorist, on the other hand, incorporates negative evidence: she thinks hard about a problem, arrives at a theory that feels right, and then proceeds to try to disprove that theory. She reads the existing literature, looks at the competing theories, and takes time to understand them and compare them against her own. If any disagree with hers, then she figures out why those theories are wrong. She pushes her theory to the extremes, looks at its limiting cases, and checks them for agreement with existing knowledge. Only after her theory comes out unscathed from all these challenges does she publicize it.

For Skinner, this second step is the definition of scholarship. In practice, coming up with a correct theory is mostly a painful process of discarding many of your own wrong attempts. A good theorist is thorough, methodical, and skeptical of their own ideas.

The terminology of crackpottery vs scholarship is probably overly harsh, as Skinner acknowledges. And in practice, somebody might be a good theorist in one domain but a crackpot elsewhere. As Malkym Lesdrae points out, there are many accomplished academics who are also crackpot theorists: “Most often it’s about things outside their field of specialty”. Thus, this ideal self-skepticism might be domain specific.

It is also a destructive ideal.

In other words, I disagreed with Skinner on the best way to separate good theory from nonsense. Mostly on the framing. Skinner crystalized our disagreement in a tweet: whereas he views self-skepticism as an obligation to the Truth, I view a similar sort of self-reflective behavior as a social obligation. I am committed to this latter view because I want to make sense of things like heuristic models, where truth is secondary to other modelling concerns. Where truth is not the most useful yardstick for checking the usefulness of a model. Where you hear Box’s slogan: “all models are wrong, but some are useful.”

Given the brief summary of Skinner’s view above — and please, Brian, correct me in the comments if I misrepresented your position — I want to use the rest of this post to sketch what I mean by self-reflective behavior as a social obligation.

But first, let me start on where we agree: on the self-reflective behavior that a good theorist should display. A good theorist should read widely in the existing literature. She should acknowledge prior work, and honestly and fairly compare her own work to it. She should write in good faith, with the urge to learn and improve from the exchange. And she should try to anticipate and address — potentially by restarting her theorizing process — obvious critiques.

Now where we disagree.

The reason that a good theorist does the above is not that it improves the truth quality of her theory. She does it because it is a respectful way to communicate.

She reads and acknowledges prior work because this is what we would do in any conversation. Consider the following faux pas. I am in a group chat with my friends, and we are deciding on what to eat. So far, everybody has agreed that we don’t want pizza. After all, Alice is allergic. Suddenly, I suggest: “hey guys, let’s get pizza”. Now replace the group chat by the academic literature. And pizza by my latest theory. I have to read and understand the prior work not because that will get me closer to the Truth — although if one does view science as progressing towards the Truth then this will be a nice side-effect — but because it is disrespectful to the community to which I am trying to introduce my theory.

Now, it doesn’t always have to be a prior theory that I am repeating or contradicting. Let’s go back to the above food selection analogy. Suppose that we all know already that Alice is allergic to pizza. She doesn’t have to explicitly remind us of this. As good friends, we should be able to infer for ourselves that our group can’t agree to pizza because at least one of us is allergic to it. Similarly with a theory: there doesn’t have to be an explicit prior theory that I am breaking with; I also have to test my new theory against the known facts.

Of course, we can’t always demand consistency with old theories. But if we do contradict serious prior theories, we should explain why they were wrong or how to resolve the tension. In the pizza example, I might say: “Hey guys, let’s go to Paul’s Pizza. I know that Alice said she was allergic to pizza, but I discussed it with her and she is actually allergic to gluten; and Paul makes gluten-free pizza.” Similarly, a good theorist will explicitly compare her theory to prior work and reconcile or at least acknowledge any disagreements.

Unfortunately, the scientific literature is much larger than a group chat about food. I can’t possibly read all of it. But I think there is a communication norm that addresses this. Before I started this blog, a quote that Radu Grigore attributed to Dijkstra really struck me: “if you can spend 20 minutes to save each of your readers 1 minute, then it is polite to do so if you expect at least 20 readers.” If I expect twenty people to seriously engage with my theory, and that engagement might take a couple of days each, then I should be willing to spend a couple of months testing my theory against the literature, known facts, and obvious objections.
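To make Dijkstra’s arithmetic concrete, here is a minimal sketch; the function name and the conversion of “a couple of days” into working minutes are my own illustrative assumptions, not measurements:

```python
# Dijkstra's maxim: spending author effort T is polite when
# T <= (number of readers) * (time saved per reader).

def polite_effort_minutes(n_readers: int, minutes_saved_per_reader: float) -> float:
    """Upper bound on author effort justified by total reader time saved."""
    return n_readers * minutes_saved_per_reader

# Dijkstra's original numbers: 20 minutes of author time is justified
# by saving 1 minute each for at least 20 readers.
assert polite_effort_minutes(20, 1) >= 20

# The post's scaling: 20 serious readers, each engaging for ~2 working
# days (assume 2 * 8 * 60 = 960 minutes of engagement per reader).
budget = polite_effort_minutes(20, 2 * 8 * 60)
print(budget / (8 * 60), "working days of vetting justified")  # 40 days, roughly two months
```

The point of the sketch is only that the budget scales linearly in both audience size and engagement depth, which is what turns twenty minutes into a couple of months.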

However, I don’t need to test my theory against everything. Science is a conversation. If I make an earnest, well-thought-through contribution to that conversation with my theory, then it can be useful even if it’s wrong. If I spend a few months or more working on a model and then somebody else finds a flaw, then it is useful for that flaw to be made known publicly: others can then save themselves those months of time. If, however, I put too high a standard of correctness on myself, then I will discard my theory in private and thus won’t save later colleagues any time (if they encounter a similar theory). Of course, finding the balance of how much thought is enough is a bit of an art.

This was where I was particularly worried about Skinner’s emphasis that we go to the ends of the earth to disprove our own theory before publicizing it. I’ve seen lots of junior colleagues spend far too long making sure they know everything in a given field before saying anything at all. This is made even worse by impostor syndrome. To compensate for this, we should encourage our junior colleagues to be more forward with their ideas. In fact, due to the Matthew effect, a junior researcher’s work is much less likely to be read, and so they have less to fear from Dijkstra’s maxim. Unfortunately, in practice, we see the opposite: as people become more and more senior, they often release more half-baked ideas that are read by more and more colleagues.

We should publicize theories as just ideas that we as authors have thought hard enough about to conclude that others might benefit from thinking about them, too. We need to communicate these ideas as clearly as we can. We need to be aware of how large our audience is and who they are. Once all these aspects are weighed, as with any public speech act, due diligence is in order — but it is a sliding scale.

I hope that you, dear reader, find this view of communication norms as primary interesting. And I hope that I didn’t violate Dijkstra’s maxim in my hasty writing of this post. In fact, given that I have some records on my writing time and the viewership of posts, I could run some statistics on how often I make such violations. Maybe in the future.
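Such a check could be sketched as follows; the record fields, the numbers, and the operationalization of a “violation” (a post that cost more author time than the total reader time it saved) are all hypothetical:

```python
# A rough, hypothetical check of Dijkstra's maxim against blog records:
# a post "violates" the maxim if writing it cost more minutes than the
# total minutes it plausibly saved its readers.

def violates_maxim(writing_minutes: float, n_readers: int,
                   minutes_saved_per_reader: float) -> bool:
    """True if author effort exceeded total reader time saved."""
    return writing_minutes > n_readers * minutes_saved_per_reader

# Hypothetical records: (writing minutes, readers, minutes saved per reader).
posts = [(120, 300, 1), (600, 40, 5), (90, 20, 2)]
violations = sum(violates_maxim(w, n, s) for (w, n, s) in posts)
print(f"{violations} of {len(posts)} posts violate the maxim")
```

The hard part, of course, is not the arithmetic but estimating how much time a post actually saves each reader.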


About Artem Kaznatcheev
From the Department of Computer Science at Oxford University and Department of Translational Hematology & Oncology Research at Cleveland Clinic, I marvel at the world through algorithmic lenses. My mind is drawn to evolutionary dynamics, theoretical computer science, mathematical oncology, computational learning theory, and philosophy of science. Previously I was at the Department of Integrated Mathematical Oncology at Moffitt Cancer Center, and the School of Computer Science and Department of Psychology at McGill University. In a past life, I worried about quantum queries at the Institute for Quantum Computing and Department of Combinatorics & Optimization at University of Waterloo and as a visitor to the Centre for Quantum Technologies at National University of Singapore. Meander with me on Google+ and Twitter.

2 Responses to Separating theory from nonsense via communication norms, not Truth

  1. David Pierce says:

    About accomplished persons who become crackpots outside their specialty: how much of this has to do with thinking that what is outside your specialty really *is* your specialty or at least uses the same methods? Examples I suggest include Facebook’s treating problems in human relations as being soluble by general algorithms; also, treating philosophy as mathematics or natural science.

    • Good point David! I certainly agree that over-extending the domain of application of your own specialty can crack a lot of pots. This is why it’s so important to be especially critical of our own methodologies. I think that is the only way to develop humility about our tools.
