Limits of prediction: stochasticity, chaos, and computation

Some of my favorite conversations are about prediction and its limits. For some, this is a purely practical topic, but for me it is a deeply philosophical discussion. Understanding the limits of prediction can inform the philosophies of science and mind, and even questions of free will. As such, I wanted to share with you a World Science Festival video that THEREALDLB recently posted on /r/math. It is a selected five-minute clip called “What Can’t We Predict With Math?” from a longer one-and-a-half-hour discussion called “Your Life By The Numbers: ‘Go Figure’” between Steven Strogatz, Seth Lloyd, Andrew Lo, and James Fowler. My post can be read without watching the panel discussion or even the clip, but watching the clip does make my writing slightly less incoherent.

I want to give you a summary of the clip that focuses on some specific points, bring in some of the discussion from elsewhere in the panel, and add some of my own commentary. My intention is to be relevant to metamodeling and the philosophy of science, but I will touch on the philosophy of mind and free will in the last two paragraphs. This is not meant as a comprehensive overview of the limits of prediction, just some points to get you as excited as I am about this conversation.

Read more of this post

Models, modesty, and moral methodology

In high school, I had the privilege to be part of a program that focused on the humanities and social sciences, critical thinking, and building research skills. The crown of the program was a semester of grade eleven (early 2005) dedicated to working on independent research for a project of our own design. For my project, I pored over papers and books at the University of Saskatchewan library, trying to come up with a semi-coherent thesis on post-Cold War religious violence. Maybe this is why my first publications in college were on ethnocentrism? It’s a hard question to answer, but I doubt that the connection was that direct. As I was preparing to head to McGill, I had ambitions of studying political science and physics, but I was quickly disenchanted with the idea and ended up focusing on theoretical computer science, physics, and math. When I returned to the social sciences in late 2008, it was with the arrogance typical of a physicist first entering a new field.

In the years since — along with continued modeling — I have tried to become more conscious of the types and limitations of models and of their role in knowledge building and rhetoric. In particular, you might have noticed a recent trend of posts on the social sciences and the various dangers of scientism. These are part of an ongoing discussion with Adam Elkus and my reading of the Dart-Throwing Chimp. Recently, Jay Ulfelder shared a fun quip on why skeptics make bad pundits:

First Rule of Punditry: I know everything; nothing is complicated.

First Rule of Skepticism: I know nothing; everything is complicated.

This gets at an important issue common to many public-facing sciences, such as climate science, the social sciences, and medicine. Academics are often encouraged to be skeptical, both of their own work and that of others, and to be precise about the scope of their predictions, although this self-skepticism and precision is sometimes eroded away by the need to publish ‘high-impact’ results. I would argue that without factions, divisions, and debate, science would find progress — whatever that means — much more difficult. Academic rhetoric, however, is often incompatible with political rhetoric, since — as Jay Ulfelder points out — the latter relies much more on certainty, conviction, and the force with which you deliver your message. What should a policy-oriented academic do?
Read more of this post

Cross-validation in finance, psychology, and political science

A large chunk of machine learning (although not all of it) is concerned with predictive modeling, usually in the form of designing an algorithm that takes in some data set and returns an algorithm (or sometimes, a description of an algorithm) for making predictions based on future data. In terminology more friendly to the philosophy of science, we may say that we are defining a rule of induction that will tell us how to turn past observations into a hypothesis for making future predictions. Of course, Hume tells us that if we are completely skeptical then there is no justification for induction — in machine learning we usually know this as a no-free-lunch theorem. However, we still use induction all the time, usually with some confidence, because we assume that the world has regularities that we can extract. Unfortunately, this just shifts the problem, since there are countless possible regularities and we have to identify ‘the right one’.

Thankfully, this restatement of the problem is more approachable if we assume that our data set did not conspire against us. That being said, every data set, no matter how ‘typical’, has some idiosyncrasies, and if we tune in to these instead of the ‘true’ regularity then we say we are over-fitting. Being aware of and circumventing over-fitting is usually one of the first lessons of an introductory machine learning course. The general technique we learn is cross-validation or out-of-sample validation. One round of cross-validation consists of randomly partitioning the data into a training set and a validating set, then running our induction algorithm on the training set to generate a hypothesis, which we test on the validating set. A ‘good’ machine learning algorithm (or rule for induction) is one where the performance in-sample (on the training set) is about the same as out-of-sample (on the validating set), and both performances are better than chance. The technique is so foundational that the only reliable way to earn zero on a machine learning assignment is by not cross-validating your predictive models. The technique is so ubiquitous in machine learning and statistics that the Stack Exchange site dedicated to statistics is named Cross Validated. The technique is so…

You get the point.
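To make the recipe concrete, here is a minimal sketch of one round of cross-validation in Python. The `fit` and `predict` arguments are stand-ins for whatever induction algorithm you care about, and the 80/20 split and accuracy metric are arbitrary choices of mine rather than anything canonical.

```python
import numpy as np

def cross_validation_round(X, y, fit, predict, holdout_fraction=0.2, rng=None):
    """One round of cross-validation: random split, induce on the training set,
    test the resulting hypothesis on the held-out validating set."""
    rng = np.random.default_rng(rng)
    shuffled = rng.permutation(len(y))
    n_holdout = int(holdout_fraction * len(y))
    validate, train = shuffled[:n_holdout], shuffled[n_holdout:]

    hypothesis = fit(X[train], y[train])       # run the induction algorithm in-sample
    in_sample = np.mean(predict(hypothesis, X[train]) == y[train])
    out_of_sample = np.mean(predict(hypothesis, X[validate]) == y[validate])
    return in_sample, out_of_sample

# A 'good' rule of induction: in_sample and out_of_sample are about the same,
# and both beat chance. A large gap between them is the signature of over-fitting.
```

Averaging the gap between the two scores over many random splits gives a quick estimate of how badly a given rule of induction over-fits.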

If you are a regular reader, you can probably induce from past posts that my point is not to write an introductory lecture on cross-validation. Instead, I wanted to highlight some cases in science and society when cross-validation isn’t used, when it needn’t be used, and maybe even when it shouldn’t be used.
Read more of this post

Big data, prediction, and scientism in the social sciences

Much of my undergrad was spent studying physics, and although I still think that a physics background is great for a theorist in any field, there are some downsides. For example, I used to make jokes like: “soft isn’t the opposite of hard sciences, easy is.” Thankfully, over the years I have started to slowly grow out of these condescending views. Of course, apart from amusing anecdotes, my past bigotry would be of little importance if it weren’t shared by a surprising number of grown physicists. For example, Sabine Hossenfelder — an assistant professor of physics in Frankfurt — writes in a recent post:

It isn’t so surprising that social scientists themselves are unhappy because the boat of inadequate skills is sinking in the data sea and physics envy won’t keep it afloat. More interesting than the paddling social scientists is the public opposition to the idea that the behavior of social systems can be modeled, understood, and predicted.

As a blogger, I understand that we can sometimes be overly bold and confrontational. Since blogging is an informal medium, I have no fundamental problem with such strong statements, or even straw men, if they are part of a productive discussion or critique. If there were no useful discussion to be had, I would normally just make a small comment or ignore the post completely, but this time I decided to focus on Hossenfelder’s post because it highlights a common symptom of interdisciplinitis: an outsider thinking that they are addressing people’s critiques — usually by restating an obvious and irrelevant argument — while completely missing the point. Also, her comments serve as a nice bow to tie together some thoughts that I’ve been wanting to write about recently.
Read more of this post

Predicting the risk of relapse after stopping imatinib in chronic myeloid leukemia

To escape the Montreal cold, I am visiting the Sunshine State this week. I’m in Tampa for Moffitt’s 3rd annual integrated mathematical oncology workshop. The goal of the workshop is to lock clinicians, biologists, and mathematicians in the same room for a week to develop and implement mathematical models focused on personalizing treatment for a range of different cancers. The event is structured as a competition between four teams of ten to twelve people, each focused on a specific cancer type. I am on Javier Pinilla-Ibarz, Kendra Sweet, and David Basanta’s team, working on chronic myeloid leukemia. We have a nice mix of three clinicians, one theoretical biologist, one machine learning scientist, and five mathematical modelers from different backgrounds. The first day was focused on getting the modelers up to speed on the relevant biology and defining a question to tackle over the next three days.
Read more of this post

Are all models wrong?

George E. P. Box is famous for the quote: “all models are wrong, but some are useful” (Box, 1979). It is a statement that many modelers swear by, often for the wrong reasons — usually because they want to preserve their pet models beyond the point of usefulness. It is also a statement that some popular conceptions of science have taken as foundational, an unfortunate choice given that the statement — like most unqualified universal statements — is blatantly false. Even when the statement is properly contextualized, it is often true for trivial reasons. I think a lot of the confusion around Box’s quote comes from the misconception that there is only one type of modeling, or that all mathematical modelers aspire to the same ends. However, there are (at least) three different types of mathematical models.

In my experience, most models outside of physics are heuristic models. These models are designed as caricatures of reality, built to be wrong while emphasizing or communicating some interesting point. Nobody intends them to become better and better approximations of reality; they are meant as a toolbox of ideas. Although people sometimes fall for their favorite heuristic models and start to talk about them as if they reflect reality, I think this is usually just short-lived egomania. As such, pointing out that these models are wrong is an obvious statement: nobody intended them to be not wrong. Usually, when somebody calls such a model “wrong” they actually mean “it does not properly highlight the point it intended to” or “the point it is highlighting is not of interest to reality”. So if somebody says that your heuristic model is wrong, they usually mean that it’s not useful, and Box’s defense is of no help.
Read more of this post

Micro-vs-macro evolution is a purely methodological distinction

On the internet, the terms macroevolution and microevolution (especially together) are used primarily in creationist rhetoric. As such, it is usually best to avoid them, especially when talking to non-scientists. The main mistake creationists perpetuate when thinking about micro-vs-macro evolution is believing that the two are somehow different and distinct physical processes. This is simply not the case: they are both just evolution. The scientific distinction between the terms comes not from the physical world around us, but from how we choose to talk about it. When a biologist says “microevolution” or “macroevolution”, they are actually signaling what kind of questions they are interested in asking, or what sort of tools they plan on using.
Read more of this post

Mathematical models of running cockroaches and scale-invariance in cells

I often think of myself as an applied mathematician — I even spent a year of grad school in a math department (although it was “Combinatorics and Optimization”, not “Applied Math”) — but when the giant systems of ODEs or PDEs come a-knocking, I run and hide. I confine myself to abstract or heuristic models, and for the questions I tend to ask these are the models people often find interesting. These models are built to be as simple as possible, and are often used either to prove a general statement that will hold for any more detailed model (if the model is an abstraction) or to serve as an intuition pump (if it is a heuristic). If there are more than a handful of coupled equations, or if a simple symmetry (or Mathematica) doesn’t solve them, then I call it quits or simplify.

However, there is a third type of model — an insilication. These mathematical or computational models are so realistic that their parameters can be set directly by experimental observations (not merely optimized based on model output), and the outputs they generate can be directly tested against experiment or used to generate quantitative predictions. These are the domain of mathematical engineers and applied mathematicians, and some — usually experimentalists, but sometimes even computer scientists — consider them to be the only real scientific models. As a prototypical example of an insilication, think of the folks at NASA numerically solving the gravitational model of our solar system to figure out how to aim the next mission to Mars. These models often have dozens or hundreds (or sometimes more!) of coupled equations, where every part is known to perform to an extreme level of accuracy.
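For a toy flavor of what an insilication looks like (two bodies instead of NASA’s full mission-planning model), here is a sketch that numerically integrates Newtonian gravity with scipy. It is my own illustrative example, not code from any model discussed here; the point is that every parameter is a measured physical constant, and the output (the body returning to its starting point after one year) can be checked directly against observation.

```python
import numpy as np
from scipy.integrate import solve_ivp

G = 6.674e-11        # gravitational constant (m^3 kg^-1 s^-2), measured
M_SUN = 1.989e30     # mass of the Sun (kg), measured
AU = 1.496e11        # astronomical unit (m), measured

def two_body(t, state):
    """Newtonian acceleration of a small body orbiting the Sun."""
    x, y, vx, vy = state
    r3 = (x**2 + y**2) ** 1.5
    return [vx, vy, -G * M_SUN * x / r3, -G * M_SUN * y / r3]

# Earth-like initial conditions: 1 AU from the Sun, moving at ~29.8 km/s.
state0 = [AU, 0.0, 0.0, 29.8e3]
year = 365.25 * 24 * 3600
sol = solve_ivp(two_body, (0, year), state0, rtol=1e-9)

# After one year the body should be back near its starting point.
print(sol.y[0, -1] / AU, sol.y[1, -1] / AU)
```

The real thing couples many more bodies and correction terms, which is how such models end up with dozens or hundreds of equations, but the logic is the same.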
Read more of this post

Computer science on prediction and the edge of chaos

With the development of statistical mechanics, physicists became the first agent-based modelers. Since the scientists of the 19th century didn’t have supercomputers, they couldn’t succumb to the curse of computing and had to come up with analytic treatments of their “agent-based models”. These analytic treatments were often not rigorous, and only a heuristic correspondence was established between the dynamics of macro-variables and the underlying microdynamical implementation. Right before lunch on the second day of the Natural Algorithms and the Sciences workshop, Joel Lebowitz sketched how — for some models — mathematical physicists still continue their quest to rigorously show that the macrodynamics faithfully reproduce the aggregate behavior of the microstates. In this way, they continue to ask the question: “when can we trust our analytic theory, and when do we have to simulate the agents?”
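As a cartoon of that question (my own toy example, not one of the models Lebowitz discussed), here is a sketch comparing an agent-based micro-simulation of a well-mixed SIS contact process with its mean-field macro-equation. When the two trajectories diverge is exactly when the analytic theory can no longer be trusted and the agents have to be simulated.

```python
import numpy as np

def sis_agents(N=1000, beta=1.5, gamma=1.0, i0=0.05, dt=0.01, T=20, rng=None):
    """Micro: simulate every agent of a well-mixed SIS epidemic."""
    rng = np.random.default_rng(rng)
    infected = rng.random(N) < i0
    trajectory = []
    for _ in range(int(T / dt)):
        frac = infected.mean()
        trajectory.append(frac)
        infect = (~infected) & (rng.random(N) < beta * frac * dt)
        recover = infected & (rng.random(N) < gamma * dt)
        infected = (infected | infect) & ~recover
    return np.array(trajectory)

def sis_meanfield(beta=1.5, gamma=1.0, i0=0.05, dt=0.01, T=20):
    """Macro: Euler steps of the mean-field equation di/dt = beta*i*(1-i) - gamma*i."""
    i, trajectory = i0, []
    for _ in range(int(T / dt)):
        trajectory.append(i)
        i += (beta * i * (1 - i) - gamma * i) * dt
    return np.array(trajectory)

# For large N the two curves agree; for small N (or near the critical point
# beta ~ gamma) fluctuations matter and only the agent simulation can be trusted.
```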
Read more of this post

Programming playground: A whole-cell computational model

Three days ago, Jonathan R. Karr, Jayodita C. Sanghvi and coauthors in Markus W. Covert’s lab published a whole-cell computational model of the life cycle of the human pathogen Mycoplasma genitalium. This is the first model of its kind: they track all biological processes such as DNA replication, RNA transcription and regulation, protein synthesis, metabolism and cell division at the molecular level. To achieve this, the authors integrate 28 different sub-models of the known cellular processes.

Figure 1A from Karr, Sanghvi et al. (2012): a diagram of the 28 sub-models, colored by category: RNA (green), protein (blue), metabolic (orange), and DNA (red). The modules are connected by arrows representing common metabolites (orange), RNA (green), proteins (blue), and DNA (red).

The key technical accomplishment was integrating the 28 modules into a single model. Each module is based on existing models, but different modules are expressed in different paradigms: ODE, Boolean, probabilistic, and constraint-based. For me, this is the most impressive aspect of the work. Usually, when I look at biology (or psychology), I see a mishmash of models, each expressed in its own language and seemingly incompatible with the others. The authors overcame this by assuming that the modules are independent on short timescales (under 1 second). This allows the software to keep track of 16 global cell variables, which are fed as inputs to the sub-models; each sub-model is run to simulate one second, its results are used to update the global variables, and the loop repeats. The whole software is available online, and the authors used the data gathered to produce a video of a single cell’s life cycle.
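In rough Python-flavored pseudocode, the integration scheme looks something like the sketch below. The names and the module interface are hypothetical stand-ins of mine, not the identifiers used in Karr, Sanghvi et al.’s published code, and the merge step glosses over how conflicting updates are actually reconciled.

```python
def simulate_cell(submodels, cell_state, seconds):
    """Run independently-expressed sub-models against a shared global state.

    `submodels` is a list of objects, each wrapping its own formalism
    (ODE, Boolean, probabilistic, constraint-based) behind a common
    step(state) -> updates interface; `cell_state` is a dict holding
    the global cell variables. Both are hypothetical stand-ins.
    """
    for _ in range(seconds):               # 1-second timesteps
        updates = []
        for module in submodels:
            # By the short-timescale independence assumption, each module
            # reads the shared state but does not see the others' output
            # within the same second.
            updates.append(module.step(cell_state))
        for update in updates:
            cell_state.update(update)      # merge results back into the globals
        if cell_state.get("divided"):      # stop at cell division
            break
    return cell_state
```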

The authors show that the model has a high level of agreement with existing data. They also use its predictions to run several novel real-biology experiments, and even partially overturn (or complete) a previous experimental observation based on hints from their model. In particular, they show that disruption of the lpdA gene — which Glass et al. (2006) suggested was non-essential — has a severe (but noncritical) impact on cell growth. I wish I could comment more on the validity of the model as judged by experiments, but molecular biology is magic to me.

The simulation results that were most exciting for me were those looking at the effects of single-gene disruptions on phenotype. The bacterium Mycoplasma genitalium is a human urogenital parasite whose genome contains 525 genes (Fraser et al., 1995). It is not an easy model organism to work with, but it has the smallest known genome that can constitute a cell. Part of the team on this project is from the J. Craig Venter Institute and has extensive experience with the organism due to their effort to create the first self-replicating synthetic life by implanting artificial DNA into Mycoplasma genitalium. I would not be surprised if this model plays a vital part in the institute’s engineering.

Karr, Sanghvi et al. (2012) ran simulations of each of the 525 possible single-gene disruption strains. They found that 284 genes were essential to sustain growth and division and 117 were non-essential — a 79% agreement with the experimental results of Glass et al. (2006). Of particular interest for me was that in some cases it took more than one generation for specific proteins to fall to lethal levels. As far as I understand, this is because when a single cell divides, the daughters both get a copy of the mother’s DNA and have their initial levels of proteins and RNA set to within statistical fluctuations of their mother’s. Due to my complete lack of basic biological background, this seemed to me like an interesting example of Lamarckian evolution. In particular, it raises questions on how best to combine single-cell learning and evolution. From a naive Bayesian model of learning, it would seem that this would allow cells to pass on their priors — a biological-evolution counterpart to Beppu & Griffiths’ (2009) cultural ratchet.
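As a back-of-the-envelope illustration of that multi-generation effect (my own toy, not part of the whole-cell model): once a gene is disrupted, its protein is no longer produced and is only diluted by growth and partitioned between daughters, so it can take several divisions before it drops below a lethal threshold.

```python
import numpy as np

def generations_until_lethal(initial_protein=1.0, lethal_level=0.1,
                             decay_per_generation=0.5, noise=0.05, rng=None):
    """Toy model: after disruption, the protein is only diluted, so daughters
    start with roughly half (plus fluctuations) of their mother's level."""
    rng = np.random.default_rng(rng)
    level, generation = initial_protein, 0
    while level > lethal_level:
        generation += 1
        level *= decay_per_generation * (1 + noise * rng.standard_normal())
    return generation

# With these made-up numbers the lineage typically survives about four divisions
# before the protein falls below the lethal threshold.
print(generations_until_lethal(rng=1))
```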

The detail of the whole-cell model is impressive. I hope that the software becomes a tool for theorists without access to a wet-lab to play around with cells. The approach is the antithesis of the simple and completely unrealistic models I am accustomed to building. For me, it raises many thoughts on how to better think about the distinction between genotype and phenotype, a distinction that is almost always ignored in evolutionary game theory. For now the whole-cell model is computationally too expensive for me to build evolutionary dynamics from it, but maybe parts of the code can be simplified or ignored, or maybe we could use more coarse-grained models. Either way, I am excited for my new playground!

References

Beppu, A., & Griffiths, T. (2009). Iterated learning and the cultural ratchet. Proceedings of the 31st Annual Conference of the Cognitive Science Society, 2089-2094.

Fraser, C.M., Gocayne, J.D., White, O., Adams, M.D., Clayton, R.A., Fleischmann, R.D., Bult, C.J., Kerlavage, A.R., Sutton, G., Kelley, J.M., et al. (1995). The minimal gene complement of Mycoplasma genitalium. Science 270, 397–403.

Glass, J.I., Assad-Garcia, N., Alperovich, N., Yooseph, S., Lewis, M.R., Maruf, M., Hutchison, C.A., Smith, H.O., & Venter, J.C. (2006). Essential genes of a minimal bacterium. Proc. Natl. Acad. Sci. USA 103, 425–430.

Karr, J.R., Sanghvi, J.C., Macklin, D.N., Gutschow, M.V., Jacobs, J.M., Bolival, B., Assad-Garcia, N., Glass, J.I., & Covert, M.W. (2012). A whole-cell computational model predicts phenotype from genotype. Cell, 150, 389-401. DOI: 10.1016/j.cell.2012.05.044