Three goals for computational models

The idea of computing machines was born from the effort to develop an algorithmic theory of thought — to learn if we could always decide the validity of sentences in axiomatic systems — but some of the first physical computing machines were built to calculate physics. In particular, they were tools of war, used to predict ballistic trajectories and the effects of not-yet-constructed hydrogen bombs. Wartime scientists had enough confidence in these computational models — Fermi's bet notwithstanding — that they were willing to trust the computations' conclusion that the Trinity test would not incinerate the atmosphere. Now computational modeling is so common that we hear model predictions of the state of our (unincinerated) atmosphere every morning on the local weather report. Much progress has been made in modeling, yet although I will heed the anchor's advice to pack an umbrella, I can't say that I trust most computational models in domains outside of physics and chemistry. In fact, my trust in computational models has only gone down with exposure. Fortunately, modeling can have many goals, and I think of models as tools for (at least) three things: (1) predicting future outcomes of an external reality; (2) clarifying and formalizing (more verbal) theories; and (3) communication and rhetoric.

Forecasting an external reality

I've never believed that computational models are particularly adept at making predictions.[1] There is the obvious caveat of insilications in physics (and chemistry), like models of fluid dynamics, weather, or space flight. However, these very accurate models actually stem from mathematical theories in which we have confidence because of how much easier it is to find extreme cases of analytic (versus computational) models. Even in cases like weather, where we know the model's accuracy degrades with the prediction horizon, we understand why this degradation happens and can quantify how quickly the accuracy decreases. These sorts of computational insilications are really more in the realm of numerical solutions to analytic models. At times, we have no simple analytic narrative, and these numeric solutions become essential for engineering applications — a prime example: why do airplanes fly? Hint: it's probably not the simplified variant of Bernoulli's principle that you typically hear — but the grounding in analytic theories makes the correspondence between model and experiment workable. In particular, the same model that makes predictions also ladens the observations, which allows for clear dialogue between theory and experiment.
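
To make concrete what "numerical solution to an analytic model" means here, consider a minimal sketch (my own illustration, not from the post; the drag constant, launch parameters, and step size are invented) in which the theory, Newton's second law with quadratic air drag, is fixed and the computer only integrates it forward in time:

```python
import math

g = 9.81            # m/s^2, gravitational acceleration
drag_coeff = 0.002  # 1/m, hypothetical lumped drag constant

def ballistic_range(speed, angle_deg, dt=0.001):
    """Euler-integrate dx/dt = v, dv/dt = (0, -g) - k*|v|*v until the projectile lands."""
    vx = speed * math.cos(math.radians(angle_deg))
    vy = speed * math.sin(math.radians(angle_deg))
    x = y = 0.0
    while y >= 0.0:
        v = math.hypot(vx, vy)
        ax, ay = -drag_coeff * v * vx, -g - drag_coeff * v * vy
        x, y = x + vx * dt, y + vy * dt
        vx, vy = vx + ax * dt, vy + ay * dt
    return x  # horizontal distance travelled when the projectile returns to the ground

print(f"predicted range: {ballistic_range(100.0, 45.0):.1f} m")
```

The model's content lives entirely in the analytic theory; the computation only trades exactness for tractability, which is why its errors (step size, truncation) are relatively easy to understand and bound.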

In settings like biology and medicine — or, even more ambitiously, the social sciences — there is no underlying analytic theory. Although we might call some parameters of the model by the same name as some things we measure experimentally, the theory we use to inform our measurement is not related to the theory used to generate our model. These sorts of models are heuristics.[2] This means that when a heuristic model is wrong, we don't really know why it is wrong, how to quantify how wrong it will be (apart from trial and error), or how to fix it. Further, even when heuristic models do predict accurate outcomes, we have no reason to believe that the hidden mechanism of the model is reflective of reality. Usually the model has so many free parameters, often in the form of researcher degrees of freedom, that it could have been an accidental fit. This is a frequent concern in ecological modeling, where we have to worry about overfitting.
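
As a toy illustration of the "accidental fit" worry (my own sketch, not from the post; the linear "reality" and the polynomial "heuristic" are arbitrary stand-ins), a model with many free parameters can match the data it was tuned on very closely without its internals telling us anything about the real mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "reality": a simple linear law plus measurement noise.
def measure(x):
    return 2.0 * x + rng.normal(0.0, 1.0, size=x.shape)

x_train, x_test = np.linspace(0, 1, 20), np.linspace(0, 1, 200)
y_train, y_test = measure(x_train), measure(x_test)

def mse(coeffs, x, y):
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

# A flexible "heuristic" with many free parameters versus a simple one.
# The flexible fit always looks better on the data it was tuned to; whether
# that accuracy survives on fresh data is a separate question entirely.
for degree in (1, 10):
    coeffs = np.polyfit(x_train, y_train, deg=degree)
    print(f"degree {degree:>2}: train MSE {mse(coeffs, x_train, y_train):.2f}, "
          f"test MSE {mse(coeffs, x_test, y_test):.2f}")
```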

Clarifying and formalizing theories

I first started modeling in the social sciences, and I believed strongly in the clarity goal of modeling. Verbal theories I would read in psychology were sometimes not self-consistent, and I would think: "why don't you implement this? You would see that your own hypothesized mechanism is not consistent with even ideal results". Back when building computational models was difficult (mostly due to slow computers and lack of access), people had to think very carefully about their models (often running them by hand) and thus built them to Einstein's motto of "as simple as possible, but no simpler". With these early models, most of the low-hanging fruit was picked off and clarified.

Now, computational models are easy to build,[3] more subtle effects are being chased, and computers are powerful enough that we can blindly waste clock cycles. The abundance of free computer resources means that models are often not carefully constructed. Programming as a basic modern skill means people embark on modeling without training in its subtleties. The overdetermined and subtle effects make it easy to convince yourself that some part of your "theory" is causing your final observable, while in reality it is an artifact of a programming decision you made and didn't even describe carefully in your paper because you thought it was unimportant. The most common examples of such "implementation" details mattering in evolutionary models are things like synchronous versus asynchronous updating, the specifics of update rules, and edge effects from simulation boundaries or grid resolution in spatial models. This deceptiveness is the curse of computing.
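
To see how one of these seemingly innocuous choices can determine the outcome, here is a minimal sketch (my own toy example, not from the post): the same local rule, each site copying the majority of its two neighbours on a ring, behaves qualitatively differently under synchronous and asynchronous updating.

```python
import random

def majority_of_neighbours(state, i):
    """New value for site i: copy its two ring neighbours if they agree, else keep."""
    left, right = state[i - 1], state[(i + 1) % len(state)]
    return left if left == right else state[i]

def step_synchronous(state):
    """Every site reads the old configuration, then all update together."""
    return [majority_of_neighbours(state, i) for i in range(len(state))]

def step_asynchronous(state, rng):
    """Sites update one at a time, in random order, seeing earlier updates."""
    state = state[:]
    for i in rng.sample(range(len(state)), len(state)):
        state[i] = majority_of_neighbours(state, i)
    return state

rng = random.Random(0)
checkerboard = [i % 2 for i in range(20)]  # alternating 0, 1, 0, 1, ...

sync, asyn = checkerboard, checkerboard
for _ in range(10):
    sync = step_synchronous(sync)
    asyn = step_asynchronous(asyn, rng)

print("synchronous: ", sync)  # still a checkerboard: the rule flips it every step, forever
print("asynchronous:", asyn)  # frozen into domains: the global oscillation disappears
```

Neither behaviour is more "correct"; the point is that a result attributed to the theory may in fact hinge on which of these equally plausible implementations was chosen.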


Turning a theory into a computational model is not the only way to gain clarity. The place where I think models still excel is in serving as virtual worlds in which to test other tools. Here, computational models serve as a benchmark for other models. This is a technique often practiced in statistics and machine learning, where data is generated from a known distribution to see if the learning algorithm or inference technique can recover that distribution. But it can also be applied elsewhere. For example, one of the ways we plan to test our mathematical model for personalized treatment of chronic myeloid leukemia is to use the best known (and much more complicated) computational models as if they were real patients, and then use our mathematical model to recommend treatment that can be implemented in the computational models and compared to the counterfactual of it not being implemented (you can copy the state of a program, but unfortunately not that of a physical patient). In a similar vein, the whole-cell computational model can be used by theoretical biologists without access to a wet-lab to perform sanity checks on their personal theories. The important part, though, is to remember that you don't want to build models of these models, but just use them as sanity checks for tools that can hopefully then be compared to real experimental data.
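
A minimal sketch of that benchmarking pattern (my own illustration; the Gaussian ground truth and the maximum-likelihood estimator are stand-ins for whatever virtual world and inference technique are actually of interest): generate data from a distribution whose parameters we know exactly, then check whether the fitting procedure recovers them.

```python
import random
import statistics

# The "virtual world": a ground truth whose parameters we know exactly.
TRUE_MEAN, TRUE_SD = 3.0, 1.5

def generate_synthetic_data(n, rng):
    """Sample from the known distribution, standing in for real measurements."""
    return [rng.gauss(TRUE_MEAN, TRUE_SD) for _ in range(n)]

def fit_gaussian(data):
    """The inference technique under test: maximum-likelihood estimates."""
    mean = statistics.fmean(data)
    sd = statistics.pstdev(data)  # ML estimate of sigma divides by n, not n - 1
    return mean, sd

rng = random.Random(42)
for n in (10, 100, 10_000):
    est_mean, est_sd = fit_gaussian(generate_synthetic_data(n, rng))
    print(f"n = {n:>6}: mean error {abs(est_mean - TRUE_MEAN):.3f}, "
          f"sd error {abs(est_sd - TRUE_SD):.3f}")
```

Because the ground truth is known by construction, any failure to recover it is unambiguously a failure of the tool, which is exactly the kind of clean verdict real data rarely offers.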

Communication and rhetoric

Models as an aid to dialogue is part of the broader view of theorists as connectors. The communicative purpose of models didn't occur to me until the summer of 2012, after a discussion with Gary An at SwarmFest, but since then my cynicism has led me to believe only in the rhetoric part. The difficulty with analytic models is that they are difficult! You need a lot more background to understand what the parts of an analytic model mean, compared to the parts of a computational model. This is good for doing science, since the person building an analytic model usually has to think much more carefully than one building a computational model. This usually results in the mathematical modeler having a better understanding of the assumptions she is making. However, it is very bad for communicating science, since for most people equations mean nothing. An agent-based model, by contrast, is a much friendlier narrative, so it is easier for a modeler to talk to a domain expert using an agent-based model than an analytic one. This makes it easier for the modeler to extract domain expertise that they don't otherwise have. Unfortunately, this also requires great honesty on the part of the modeler, since it is easy to sneak in artifacts that seem inconsequential to the domain expert but actually determine the result. My ideal modeling interaction with domain experts would involve building a computational model for communicating with them, but then using that model as a sanity check or benchmark (in the sense of the last paragraph of the previous section) for a cleaner analytic model. The hope is that this back and forth from domain expert to computational benchmark to mathematical model can keep the whole enterprise honest.

Unfortunately, in most cases such honesty seems to be lost. From the little experience I've had talking to agent-based modelers for governments and other big bureaucracies, computational models are often built just to prop up a preconceived idea, not to challenge it and discover truth. The modeler's boss can then use the model for rhetoric, giving their beliefs and opinions the veneer of scientific certainty.[4] Since most people have not been exposed to the gritty life of a modeler, they don't realize that an ostensibly reasonable sounding model can be built to support almost any conclusion. Thus, people start to think that their beliefs are knowledge and their opinions are fact — a very dangerous prospect for public policy.

Notes

This entry started as a comment on David Basanta’s post about Jon Turney’s article at Aeon.

  1. For those that equate science to prediction this might be the end of the story, but for me accurate prediction is an engineering concern. In my mind, the goal of science is a certain kind of narrative (unfortunately, this makes the demarcation of pseudoscience difficult and tends to bias history in favor of theorists), so accurate prediction is a convenient side effect but not a mandatory feature. Of course, the scientific narrative isn't a willy-nilly Gladwellian one, but is constrained by evidence. I think this stems from my fundamentally postpositivist view of science, a view that is easy to mistake for a strawman like postmodernism. Sadly, this superficial resemblance to postmodernism means that postpositivism is not sufficiently well studied, and remedying this is something science can learn from the humanities.
  2. I don't want to make the universal statement that all models in biology are heuristic; there are always exceptions like the model cockroach. Coincidentally, all examples of non-heuristic biological models that I know are not truly computational. They are usually built up from physics and differential equations, and the only computational part is the numerical solution of systems of ODEs that are too large or too complicated to solve analytically. Much like the insilications in physics and chemistry, I think these models derive their clarity and reliability from close interaction with underlying analytic theories rather than from computational encoding of intuitions.
  3. In this day and age, every scientist should have basic programming training, and in the hard sciences it is often assumed, especially among experimentalists. In the soft and social sciences it is still a novel skill to many, but tools like NetLogo are available to make the introduction easier. Surprisingly, in theoretical computer science — a field often confused for programming by outsiders — it is a skill that sometimes needs justification.
  4. This critique is not restricted to computational models. In fact, the people most frequently criticized this way are neoclassical economists, who typically rely on analytic mathematical models. However, I think the problem is worse for computational models, because to the public mathematical models are fundamentally opaque, so non-expert readers realize that they don't actually understand the assumptions that are built in. Computational models, however, can seem deceptively transparent. They are often described in terms of basic step-by-step interactions and local rules, and so seem understandable. However, just like real economies, these local rules don't interact in as simple and linear a fashion as most people imagine, and the emergent results often depend (as mentioned before) on subtleties that seem irrelevant to people not intimately acquainted with modeling.

About Artem Kaznatcheev
From the Department of Computer Science at Oxford University and Department of Translational Hematology & Oncology Research at Cleveland Clinic, I marvel at the world through algorithmic lenses. My mind is drawn to evolutionary dynamics, theoretical computer science, mathematical oncology, computational learning theory, and philosophy of science. Previously I was at the Department of Integrated Mathematical Oncology at Moffitt Cancer Center, and the School of Computer Science and Department of Psychology at McGill University. In a past life, I worried about quantum queries at the Institute for Quantum Computing and Department of Combinatorics & Optimization at University of Waterloo and as a visitor to the Centre for Quantum Technologies at National University of Singapore. Meander with me on Google+ and Twitter.

12 Responses to Three goals for computational models

  1. Terrific T says:

    NetLogo seems really interesting – I will need to spend some time playing with it.

    “Since most people have not been exposed to the gritty life of a modeler, they don't realize that an ostensibly reasonable sounding model can be built to support almost any conclusion.”

    Well said…

  2. Excellent and well-thought-out post, Artem. Not only do I agree with the passage Terrific T selected, that you can always build a model to prove your point, but I will add that the more complicated the model, the easier that task becomes.
    I think my work (as well as Jacob Scott's) falls more under the _clarifying and formalising theories_ banner, but I also appreciate that we can leave some of our concerns about simplicity aside and produce a more sophisticated computational model as a proxy for the reality we want to capture with a simpler mathematical model (as we did in Tampa a month ago).

    Given that I work with biologists and not with policy makers, I am less concerned about designing models for rhetoric. I think having models that allow modelers and experimentalists to arrive at some agreement about the reality to be studied is an important but unrecognised aspect of computational modelling. The difference is that I always thought it would be better to start with something very simple and then build complexity as needed, while what you suggest is the opposite. Starting with something that captures the phenomenology and using it as a proxy for reality could be an interesting approach, which I hope we will be using with CML!

    • Thank you! I am not sure if working away from policy makers necessarily excuses us from thinking carefully about how our models are used as rhetoric. A lot of people have pet theories in science, and will try to give them the veneer of credibility by citing computational models. In the case of mathematical medicine I think we have to be extra careful because people’s lives can be at stake.

      I agree with you that starting complex and working down is a bit counter-intuitive. I think if I were working for the first time on something not often studied, I would also follow the start-simple-and-work-up approach. Except it only really looks like that for an artificial reason. In reality, in most cases, you have some (often relatively complicated and ill-specified) mental model in place (I guess I am being inspired by Jacob's thoughts here), and as you "build from the bottom" you are actually using that mental model as a sanity check for your simple model.

      In the case of CML, however, I lack a personal understanding of the vast body of knowledge that exists. Hence, I advocate for using the ‘gold standard’ or the ‘already used’ computational model in place of the mental model I would typically have for sanity checks. So maybe it’s not all that different.

  3. I think one could add a fourth benefit — computational modeling can serve as a source of ideas. For example, one of the main lessons I learned from my own modeling work (in neuroscience) is the difficulty of the challenges that positive feedback creates for the brain, especially when combined with synaptic modification. That insight shapes my understanding of every brain system that I look at. I have also benefited greatly from the idea of a “bump attractor”, which I first learned about from Kohonen’s work.

  4. Pingback: Cataloging a year of blogging: the algorithmic world | Theory, Evolution, and Games Group

  5. Pingback: The Benefits of Being Unrealistic

  6. Pingback: On interdisciplinary research | CancerEvo

  7. Pingback: Models, modesty, and moral methodology | Theory, Evolution, and Games Group

  8. Pingback: Heuristic models as inspiration-for and falsifiers-of abstractions | Theory, Evolution, and Games Group

  9. Pingback: Methods and morals for mathematical modeling | Theory, Evolution, and Games Group

  10. Pingback: Mathtimidation by analytic solution vs curse of computing by simulation | Theory, Evolution, and Games Group
