Sherlock Holmes and the Case of the Missing Replication

In 1959 at the University of Cambridge, C.P. Snow delivered his (now infamous) Rede lecture on the Two Cultures of science and the humanities (or, in his derogatory term for the latter, ‘literary intellectuals’). Although Snow was both a writer and a scientist, his lecture was largely anti-humanities. It is unclear if the divide between science and the humanities has narrowed since Snow’s time, but his influence remains even in the ‘third culture’ that is trying to bridge the gap. This year, the divide was evident in Steven Pinker’s patronizing olive branch to the humanities. He assured them that “science is not your enemy”, but did not highlight the commonalities of the two fields, only suggesting how the humanities can be more like science, without any serious discussion of what science can learn from the humanities. This strikes me as unproductive: if we want unity, then we should look for lessons in both directions.

A good starting point is to look at an ongoing crisis in science — the replicability crisis in psychology — and see if the humanities can offer some insight. Unfortunately, my knowledge of literature is limited to popular culture, and so I will turn to Sherlock Holmes. Holmes is not new to psychology: psychologists have evaluated the fictional character’s IQ (Radford, 1999), tested the rigour of his methods (Snyder, 2004), used him as a quintessential model of expertise (Didierjean & Gobet, 2008), and even suggested that neuroscientists are just like Holmes in their search for clues about the brain’s inner workings (Kempster, 2006). It is not surprising to see so much cross-talk between psychology and literature, because in many ways we can think of both as ways to explore and share insights into human nature. Of course, I am far from unique in suggesting this; there is at least one popular blog — Maria Konnikova’s Literally Psyched — that draws insights from literature and tests those insights with psychology. Konnikova can’t resist Sherlock either, having recently published a self-help book offering you the secrets to a Holmesian mind. However, my goal isn’t to learn the secrets of good science from the fictional character’s deductive prowess.

Evolution as a risk-averse investor

I don’t know about you, but most of my money is in my savings account and not in more volatile assets like property, bonds, or stocks. This is a consequence of either laziness to explore my options or — the more comforting alternative — extreme risk-aversion. Although it would be nice to have a few thousand dollars more to my name, it would be devastating to have a few thousand dollars less. As such, if I were offered a lottery with a 50% chance of losing $990 and a 50% chance of winning $1000, then I would probably choose not to play, even though there is an expected gain of $5; I am risk-averse, and the extra variance of the bet versus the certainty of maintaining my current holdings is not worth $5 to me. Most investors are similarly risk-averse, although the trade-off they accept between expected profit and variance differs between agents.
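One classical way to formalize this preference — the one Bernoulli himself proposed, as we will see below — is to compare expected utility of total wealth rather than expected money. A minimal sketch, where the starting wealth of $2000 is an arbitrary assumption for illustration:

```python
import math

wealth = 2000.0  # arbitrary starting wealth, for illustration only
win, loss = 1000.0, 990.0

# Expected monetary value of the gamble: positive, so a risk-neutral
# agent would play.
expected_value = 0.5 * win - 0.5 * loss

# Expected log-utility of wealth: the log is concave, so the possible
# loss hurts more than the equal-probability gain helps.
utility_play = 0.5 * math.log(wealth + win) + 0.5 * math.log(wealth - loss)
utility_pass = math.log(wealth)

print(expected_value)               # 5.0
print(utility_play < utility_pass)  # True: the log-utility agent declines
```

The gamble has positive expected value, yet the log-utility agent refuses it — exactly the behaviour described above.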

Daniel Bernoulli (8 February 1700 – 17 March 1782) was one of the mathematicians of the famous Bernoulli family of Basel, Switzerland, and a contemporary and friend of Euler and Goldbach. He is probably most famous for Bernoulli’s principle in hydrodynamics, which his hyper-competitive father Johann published in a book that he pre-dated by ten years to try to claim credit. One of Daniel’s most productive periods was spent working alongside Euler and Goldbach in the golden days (1724–1732) of the St. Petersburg Academy. It was in Russia that he developed his solution to the St. Petersburg paradox by introducing risk-aversion, and made his contributions to probability, finance, and — as we will see — evolution.
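The paradox itself is easy to state in code: flip a fair coin until the first heads, on flip k, and receive $2^k (the payout convention varies across sources; this is one common choice). The expected payout diverges, yet Bernoulli’s log-utility assigns the gamble a modest finite value. A sketch:

```python
import math

n = 60  # truncate the infinite sums after n flips

# Expected payout: each term contributes 2^-k * 2^k = 1, so the
# truncated sum is n and the full series diverges.
expected_payout = sum(2.0**-k * 2.0**k for k in range(1, n + 1))

# Expected log-utility converges: sum 2^-k * ln(2^k) -> 2 ln 2.
expected_log_utility = sum(2.0**-k * math.log(2.0**k) for k in range(1, n + 1))

# Certainty equivalent: the sure payment with the same utility.
certainty_equivalent = math.exp(expected_log_utility)

print(expected_payout)       # 60.0 -- grows without bound as n grows
print(certainty_equivalent)  # ~4.0 -- a log-utility agent values the game at $4
```

So a lottery whose expected value is infinite is worth only about $4 to a risk-averse log-utility agent — Bernoulli’s resolution of the paradox.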

Three goals for computational models

The idea of computing machines was born out of the attempt to develop an algorithmic theory of thought — to learn if we could always decide the validity of sentences in axiomatic systems — but some of the first physical computing machines were built to calculate physics. In particular, they were tools of war, used to predict ballistic trajectories and the effects of not-yet-constructed hydrogen bombs. Wartime scientists had enough confidence in these computational models — Fermi’s bet notwithstanding — that they were willing to trust the computations’ conclusion that the Trinity test would not incinerate the atmosphere. Now computational modeling is so common that we hear model predictions of the state of our (unincinerated) atmosphere every morning on the local weather report. Much progress has been made in modeling, yet although I will heed the anchor’s advice to pack an umbrella, I can’t say that I trust most computational models in domains outside of physics and chemistry. In fact, my trust in computational models has only gone down with exposure. Fortunately, modeling can have many goals, and I can think of models as tools for (at least) three things: (1) predicting future outcomes of an external reality; (2) clarifying and formalizing (otherwise verbal) theories; or (3) communication and rhetoric.

Simplifying models of stem-cell dynamics in chronic myeloid leukemia

If I had to identify one allergy, then my main allergy would be bloated models. Although I am happy to play with complicated insilications, if we are looking at heuristics where the exact physical basis of the model is not established, then I prefer to build the simplest possible model that is capable of producing the sort of results we need. In particular, I am often skeptical of agent-based models: they are simple to build, but it is also deceptively easy to have the results depend on an arbitrary, data-independent modeling decision — the curse of computing. Therefore, as soon as I saw the agent-based models for the effect of imatinib on stem cells in chronic myeloid leukemia (Roeder et al., 2002; 2006; Horn et al., 2013 — the basic model is pictured above), I was overcome with the urge to replace them with a simpler system of differential equations.
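As a sketch of what “simplest possible” can look like, here is a toy two-compartment system — quiescent and active stem cells, with a drug-induced death rate on the active compartment — integrated by Euler steps. This is only an illustration of the modeling style, not the Roeder model or my replacement for it, and every rate constant is made up:

```python
def step(Q, A, dt, a=0.1, b=0.05, p=0.2, d=0.3):
    """One Euler step: quiescent cells Q activate at rate a, active
    cells A deactivate at rate b, proliferate at rate p, and die at
    drug-induced rate d. All rates are hypothetical."""
    dQ = -a * Q + b * A
    dA = a * Q - b * A + (p - d) * A
    return Q + dt * dQ, A + dt * dA

Q, A = 100.0, 100.0
for _ in range(1000):  # integrate to t = 10 with dt = 0.01
    Q, A = step(Q, A, 0.01)

print(A < 100.0)  # True: with d > p the drug shrinks the active compartment
```

Two equations and four rate constants already capture the qualitative behaviour — the active compartment shrinks when drug-induced death outpaces proliferation — without any of the hidden, arbitrary decisions an agent-based implementation invites.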

Lower bounds by negative adversary method

Are some questions harder than others?

Last week I quantified the hardness of answering a question with a quantum computer as the quantum query complexity. I promised that this model would allow us to develop techniques for proving lower bounds. In fact, in this model there are two popular tools: the polynomial method and the (negative) adversary method. In this week’s post, I’d like to highlight the latter.

Quantum query complexity

You probably noticed a few things about TheEGG: a recent decrease in blog post frequency, and an overall focus on the algorithmic lens — especially its view of biology. You might also be surprised by the lack of discussion of quantum information processing: the most successful ongoing application of the algorithmic lens. I actually first became passionate about cstheory as a lens on science when I was studying quantum computing. In undergrad, I played around with representation theory and other fun math to prove things about a tool in quantum information theory known as unitary t-designs. At the start of grad school, I became more algorithmic by focusing on quantum query complexity. To kill two birds with one stone, I thought I would introduce you to query complexity and, in doing so, restore the more regular posting schedule you’ve become accustomed to. Of course, the easiest way to do this is to recycle my old writing from the now stale cstheory StackExchange blog.
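To make the query model concrete before getting to the quantum version: the algorithm sees the input only through an oracle, and the cost is the number of bits read, not the running time. A classical sketch for the OR function (the function and variable names here are my own, for illustration):

```python
def make_oracle(bits):
    """Wrap the input so every read is counted: oracle calls are the
    only way the algorithm can access the input, and the only cost."""
    count = [0]
    def oracle(i):
        count[0] += 1
        return bits[i]
    return oracle, count

def compute_or(oracle, n):
    """Scan left to right, stopping at the first 1."""
    for i in range(n):
        if oracle(i):
            return True
    return False

n = 8
oracle, count = make_oracle([0] * n)
result = compute_or(oracle, n)

print(result)    # False
print(count[0])  # 8 -- on the all-zeros input every bit must be read
```

Deterministically, no algorithm can beat n queries for OR in the worst case, while Grover’s quantum algorithm needs only O(√n) — separations of exactly this kind are what query complexity is built to measure.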