Mathtimidation by analytic solution vs curse of computing by simulation

Recently, I was chatting with Patrick Ellsworth about the merits of simulation vs analytic solutions in evolutionary game theory. As you might expect from my old posts on the curse of computing, and my enjoyment of classifying games into dynamic regimes, I started with my typical argument against simulations. However, as I searched for a positive argument for analytic solutions of games, I realized that I didn’t have a good one. Instead, I arrived at another negative argument — this time against analytic solutions of heuristic models.

Hopefully this curmudgeoning comes as no surprise by now.

But it did leave me in a rather confused state.

Given that TheEGG is meant as a place to share such confusions, I want to use this post to set the stage for the simulation vs analytic debate in EGT and then rehearse my arguments. I hope that, dear reader, you will then help resolve the confusion.

First, for context, I’ll share my own journey from simulations to analytic approaches. You can see a visual sketch of it above. Second, I’ll present an argument against simulations — at least as I framed that argument around the time I arrived at Moffitt. Third, I’ll present the new argument against analytic approaches. At the end — as is often the case — there will be no resolution.

My journey from simulation to analytic solutions

A decade ago, when I first started using evolutionary game theory, I relied on simulations. At first, this was for completely practical reasons. I encountered EGT (or what I would later learn is EGT) in Tom Shultz’s computational psychology course (PSYC315 at McGill), and so I made computational models. You can see this in Shultz et al. (2009) — where my contribution to the paper came directly from ‘world saturation’ observations of simulations I made in Tom’s class; a few years later, I converted some of my other observations on probabilistic strategies from the class into easy early posts for TheEGG — and in Kaznatcheev (2010a) — where I assigned a ‘cognitive cost’ to some of the minimal complexity of tag-based models. But I was coming from a purer math background, so I wanted to make more general statements than a few simulation runs allowed. So I moved from presenting the average results of 20 or 30 simulations at a few parameter settings to doing broad parameter sweeps. This pushed me towards game space: a representation where I could see how my simulations behaved under a wide range of parameter values. By Kaznatcheev (2010b), you can see me sweeping over broad ranges of parameters.

As I learned more, I started to rationalize my simulation process to myself by saying that even though the models I was looking at were simple, they were still spatial models that were not analytically tractable. And space, stochasticity, and discreteness are important even in minimal models (for an example, see Shnerb et al., 2000). But as I read the literature more, I realized that people weren’t presenting simulations because they had already solved the ‘easy and tractable’ analytic cases. They were using simulations just because it was easy to code and modify them. More importantly, in a lot of cases there was no reason to make an explicitly spatial model. Or, if a spatial model was needed, no reason to prefer one type of spatial model over another. This was especially relevant when the results that people claimed followed from a game actually followed from a specific but arbitrary choice of space instead.

This left me disenchanted with simulations and pushed me towards the minimal models that I could solve analytically. This focus on the analytic was important in helping me move to mathematical oncology and get a job at the Moffitt Cancer Center. In particular, I got David Basanta’s attention — in part — by showing how to analytically solve the INV-AG-GLY game that Basanta et al. (2008b) had instead studied through simulation of broad parameter sweeps. Given that I was still attached to my background with spatial models, this eventually resulted in Kaznatcheev et al. (2015) — where we spatialized David’s old INV-AG (or Go vs. Grow) game in an analytically solvable way (see: Basanta et al., 2008a).

Simulation and the curse of computing

By the time I arrived at Moffitt, I was firmly in the ‘do it analytically’ camp, and always pushed for minimal models. My lingering question became: why would you even do simulations for simple evolutionary games?

A possible objection to the purely analytic focus is Patrick’s: “the main reason people prefer simulations is because it’s easier to present in a biology paper rather than explaining the regular payoff matrix and everything”. This is an important point on the rhetorical utility of computational models.

But a lot of papers that do present games present the whole payoff matrix with symbolic parameters. For an example, see Basanta et al. (2008a,b). Thus, if we’re simulating games, we already have to go through the hard effort of justifying the payoff matrix and our parameterization of it. However, with simulation, we then go on to plug in a range of particular values — sometimes a very small range, sometimes a good wide range — and show simulation results. So doing an analytic treatment does not require more explanation at the start.

A purely analytic treatment might require a couple of sentences on the idea of a dynamic regime. But that is usually on topic. More importantly, it seems more honest, since people don’t usually draw conclusions from the particular numbers that a simulation at particular parameters outputs, but rather from its qualitative features. An example might be: “the cancer took over the tissue if drug was low, but if it was high then the healthy tissue won”. These qualitative features are just dynamic regimes.

A simulation involves an extra step of inference that an analytic treatment does not require: we observe a number of qualitatively similar results over a parameter range and generalize to a qualitative conclusion. In an analytic treatment, we directly prove that a specific parameter range produces a specific kind of qualitative dynamic.
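
To see what such a proof buys, recall the textbook two-strategy case (standard EGT, and my generic notation rather than anything from the papers above). For a symmetric game with payoff matrix rows (a, b) and (c, d), the replicator dynamics of the proportion x of the first strategy reduce to

\[ \dot{x} = x(1 - x)\,\big[(a - c)\,x + (b - d)\,(1 - x)\big]. \]

The sign pattern of the bracketed gain term hands us every dynamic regime at once: if a > c and b > d then the first strategy dominates (x → 1); if a < c and b < d then it is dominated (x → 0); if a < c and b > d then there is a stable interior equilibrium at x* = (b − d)/((b − d) + (c − a)); and if a > c and b < d then that same x* is unstable and the dynamics are bistable. No parameter sweep required.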

The most concrete advantage of eliminating this extra step of inference is that sometimes the qualitative observations of a number of simulations don’t actually capture an underlying regularity of a dynamic regime. This can be especially dangerous when one is tempted to cut corners by not carefully exploring their simulation or the way simulation results are aggregated (as is often required for stochastic models). For example, in Kaznatcheev & Shultz (2011) I made the mistake of faulty inference from a simulation average to a dynamic regime. I saw an average proportion of cooperation trending towards zero and so innocently concluded — as would be consistent with theory in that particular case — that it was going to zero. In reality, there was high heterogeneity in simulation runs, which tended to bifurcate between going to all-cooperate or all-defect. I was able to notice this by 2012, when I eventually looked more closely at the individual simulations. But I should have done that right away. Trying to understand this unexpected behavior then taught me some useful analytic theory.
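
Here is a minimal sketch of how this failure mode looks in code (a toy bistable process of my own invention, not the ethnocentrism model itself):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_run(steps=300, x0=0.4, drift=0.1, noise=0.08):
    """Toy bistable process: drift away from the unstable midpoint
    at 0.5, plus noise, with absorbing boundaries at 0 and 1."""
    x = x0
    traj = np.empty(steps + 1)
    traj[0] = x
    for t in range(1, steps + 1):
        if 0.0 < x < 1.0:  # 0 and 1 are absorbing
            x += drift * (x - 0.5) + noise * rng.normal()
            x = min(max(x, 0.0), 1.0)
        traj[t] = x
    return traj

runs = np.vstack([noisy_run() for _ in range(30)])

# The average trends smoothly to an intermediate value, but almost no
# individual run is anywhere near it: each ends pinned at 0 or 1.
print("mean trajectory (start, middle, end):",
      np.round(runs.mean(axis=0)[[0, 150, 300]], 2))
print("final states of the 30 runs:", np.round(runs[:, -1], 2))
```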

Another example I’ve seen with people starting to play with EGT for the first time — and part of what prompted my original discussion with Patrick — is when someone does a discrete-time simulation of replicator dynamics with a relatively large step size. In this case, one can get what looks like oscillations around a fixed point. But these oscillations are not due to some inherent property of the game being simulated. Rather, they are a feature of the discrete step size. This can, of course, be of interest if there is a genuine reason for discrete steps: like studying a seasonal population. But in most cases, the discrete steps were made for simplicity of writing simulation code. In these cases, concluding an oscillatory dynamic regime is a mistake.
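
The artifact is easy to reproduce. Below is a sketch with an illustrative gain function of my own choosing, g(x) = 2 − 3x, whose continuous replicator dynamics have a stable interior fixed point at x* = 2/3 and cannot oscillate:

```python
import numpy as np

def euler_replicator(h, steps=60, x0=0.2):
    """Euler-discretized replicator dynamics for a two-strategy game
    with gain function g(x) = 2 - 3x. The continuous dynamics have a
    stable interior fixed point at x* = 2/3 and never oscillate."""
    xs = [x0]
    x = x0
    for _ in range(steps):
        x = x + h * x * (1 - x) * (2 - 3 * x)
        x = min(max(x, 0.0), 1.0)  # guard: 0 and 1 are true fixed points
        xs.append(x)
    return np.array(xs)

small = euler_replicator(h=0.1)   # converges smoothly to x* = 2/3
large = euler_replicator(h=3.5)   # bounces around x* indefinitely

print("h=0.1, last values:", np.round(small[-4:], 3))
print("h=3.5, last values:", np.round(large[-4:], 3))
```

The same game and the same equations: the ‘oscillation’ at h = 3.5 is pure discretization.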

But these mistakes of inference are not that common, and they are not central to my concerns. In most cases, the inference from simulations to dynamic regime is correct. But why take the extra step? Here, my overall objection is not that simulation is misleading but that we end up stating our conclusion in broad symbolic and qualitative terms anyway. And if we will draw a conclusion in broad symbolic and qualitative terms, then why not use an analysis technique that is also symbolic and outputs qualitative statements (like dynamic regimes), instead of having a middle step of simulation? The step through simulation feels in some way like a non-sequitur. At least when the analytic alternative is right there.

So the cynic in me would agree with Patrick on “easier” but disagree on “to present”. Rather, I would argue that the simulation is “easier to do” or “easier to delegate” at the expense of an unnecessary step of inference. A step we should save our readers from, even if we first noticed the effect through simulation.

Analytic solutions as mathtimidation

The cynic’s cynic might say, though: doing an analytic treatment of dynamic regimes doesn’t actually add all that much beyond simulations. I often argue that simulations are preferred because they require marginally less background than analytic treatment. But that doesn’t mean analytic treatment of minimal models is hard to do. In fact, the whole point of minimal models is to pick ones that are straightforward to analyze. So from the perspective of just doing mathematics, few or no points are earned. Solving a well-picked model should not be more difficult than an exercise.

More important than the lack of ‘math points’ is that we don’t trust minimal models enough to make sense of sharp analytic boundaries. I don’t think anybody thinks their minimal mathematical model captures all the relevant aspects of a phenomenon — I certainly don’t with my models. Rather, such a model is usually meant to draw our attention to a specific salient feature of the phenomenon. This is what I call a heuristic, and what I focus on when I build analytically-solvable minimal models.

So, heuristic models in general are just cartoons or illustrations. Solving them analytically instead of just doing some simulations doesn’t make them less of an illustration. The analytic solution creates extra certainty where no extra certainty is needed. In itself, this isn’t a problem. But often this extra precision and the ‘analytic’ brand can be used to make the whole model seem more certain. This is especially transparent with decorative mathematical models that accompany experiments without contributing any extra insights. But it also happens in purely theoretical work. In this case, the analytic solution of dynamic regimes acts as a bit of mathtimidation. Since the biggest uncertainty comes from the model itself rather than from how it’s solved, a more rigorous solution process can be a way to use mathematics to intimidate the reader into overlooking the shaky standing of the model itself. This is a criticism I often hear against economic models, but I think it applies just as well in biology.

This leaves me at a bit of an impasse. On the one hand, proceeding through simulation when an analytic option is available feels like a non-sequitur. On the other hand, the analytic option rarely provides something that can’t be gathered from simulation alone, and it is all too tempting to use it to lend undue certainty to models.

It is tempting to side-step this impasse, and there are several ways around it. One direction is to embrace model pluralism: we can just pick and choose our methods based on what is most convenient in a given setting. And I certainly recommend model pluralism when it comes to heuristic models. Another direction is to turn from heuristics to abduction: we can invert simple analytically-solvable models as a way to directly measure abstract objects like games — as we do in Kaznatcheev et al. (2017) for measuring effective games in lung cancer. Here, inverse simulation can become a more difficult option than inverse analysis.
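
As a cartoon of what such an inversion can look like (synthetic data from a made-up linear gain function, not the actual fitting procedure of Kaznatcheev et al., 2017):

```python
import numpy as np

rng = np.random.default_rng(2)

# A made-up 'true' game, used only to generate synthetic data:
# replicator dynamics with linear gain g(x) = alpha*x + beta*(1 - x).
alpha_true, beta_true = -1.0, 2.0
h = 0.01  # time step of the synthetic time series

def replicator_series(x0=0.1, steps=500):
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        g = alpha_true * x + beta_true * (1 - x)
        xs.append(x + h * x * (1 - x) * g)
    return np.array(xs)

# Synthetic 'measurements' with a little observation noise
xs = np.clip(replicator_series() + 0.001 * rng.normal(size=501),
             1e-3, 1 - 1e-3)

# Inverse step: finite differences give g(x) ~ xdot / (x(1 - x)),
# and a linear regression then recovers the payoff parameters.
xdot = np.diff(xs) / h
xm = xs[:-1]
g_est = xdot / (xm * (1 - xm))
A = np.column_stack([xm, 1 - xm])
(alpha_hat, beta_hat), *_ = np.linalg.lstsq(A, g_est, rcond=None)
print(f"recovered alpha={alpha_hat:.2f}, beta={beta_hat:.2f} "
      f"(true: {alpha_true}, {beta_true})")
```

Inverting by simulation instead would mean searching over forward runs until one matches the data, which is typically more work than this single regression.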

But both of these approaches leave this knot of tension intact. And I don’t know how to resolve it. Or if it needs to be resolved. Hopefully you, dear reader, will have some advice.

References

Basanta, D., Hatzikirou, H., & Deutsch, A. (2008a). Studying the emergence of invasiveness in tumours using game theory. The European Physical Journal B, 63(3): 393-397.

Basanta, D., Simon, M., Hatzikirou, H., & Deutsch, A. (2008b). Evolutionary game theory elucidates the role of glycolysis in glioma progression and invasion. Cell Proliferation, 41(6): 980-987.

Kaznatcheev, A. (2010a). The cognitive cost of ethnocentrism. In S. Ohlsson & R. Catrambone (Eds.), Proceedings of the 32nd annual Conference of the Cognitive Science Society.

Kaznatcheev, A. (2010b). Robustness of ethnocentrism to changes in inter-personal interactions. Complex Adaptive Systems – AAAI Fall Symposium.

Kaznatcheev, A., Scott, J. G., & Basanta, D. (2015). Edge effects in game-theoretic dynamics of spatially structured tumours. Journal of The Royal Society Interface, 12(108): 20150154.

Kaznatcheev, A., Peacock, J., Basanta, D., Marusyk, A., & Scott, J. G. (2017). Fibroblasts and alectinib switch the evolutionary games that non-small cell lung cancer plays. bioRxiv, 179259.

Shnerb, N. M., Louzoun, Y., Bettelheim, E., & Solomon, S. (2000). The importance of being discrete: Life always wins on the surface. Proceedings of the National Academy of Sciences, 97(19): 10322-10324.

Shultz, T. R., Hartshorn, M., & Kaznatcheev, A. (2009). Why is ethnocentrism more common than humanitarianism? In N. Taatgen & H. van Rijn (Eds.), Proceedings of the 31st Annual Conference of the Cognitive Science Society.

About Artem Kaznatcheev
From the Department of Computer Science at Oxford University and Department of Translational Hematology & Oncology Research at Cleveland Clinic, I marvel at the world through algorithmic lenses. My mind is drawn to evolutionary dynamics, theoretical computer science, mathematical oncology, computational learning theory, and philosophy of science. Previously I was at the Department of Integrated Mathematical Oncology at Moffitt Cancer Center, and the School of Computer Science and Department of Psychology at McGill University. In a past life, I worried about quantum queries at the Institute for Quantum Computing and Department of Combinatorics & Optimization at University of Waterloo and as a visitor to the Centre for Quantum Technologies at National University of Singapore. Meander with me on Google+ and Twitter.

11 Responses to Mathtimidation by analytic solution vs curse of computing by simulation

  1. Rob Noble says:

    Nice post. I especially like the point about the “analytic brand” engendering false confidence. But I think that complicated, superficially “realistic” insilications carry the same risk. The solution is to use simple models and clearly explain their behaviour in plain English as well as in maths and charts.

    I would always try to apply both methods: first do a general analysis of a basic model, then run simulations to confirm predictions and explore more complicated model variants. As a reader/reviewer I’m reassured when I see a figure showing agreement between analytical predictions and simulation results; I’m disappointed when a study uses only one approach if it could have done both. This is the point I was trying to make in a recent tweet (https://twitter.com/robjohnnoble/status/1043054393522831360). In that case I suggested doing the numerical solution first, but of course the order isn’t important.

    • Thank you, Rob.

      I completely agree with you on the danger of superficially “realistic” insilications, regardless of whether they are analyzed by simulation or analytic solution. And the way around this is to use simple models.

      That is why I wanted my discussion in this post to focus specifically on simple models (i.e., mostly heuristics) to side-step worrying about insilications. I’ve written briefly about superficially “realistic” models in my old post on truthiness, but I certainly need to revisit this topic. Would you want to write about it for TheEGG or on your own blog?

      I especially like the Noble sixfold path to math bio. To repeat your template for the comment readers:

      1. Create model based on biology;

      2. Check output matches data;

      3. Be happy, write up;

      4. Analyse why model works;

      5. Based on 4, create simplest, most general useful model;

      6. Rewrite focussing on 4 & 5.

      The hardest, most valuable work begins at step 4.

      Although I would agree with Jorge Peña that we should be starting at 4. And you do make the good point that starting at 4 often comes with experience: in my own history of modeling, it is certainly the case that the more inexperienced Artem had a harder time with step 4. It is certainly a shame that so many researchers stop at step 3. Maybe “be happy” should be replaced by “be anxious about why a model you don’t understand works, but still write up”.

      But I do have some concerns with the language you used in your comment:

      I would always try to apply both methods: first do a general analysis of a basic model, then run simulations to confirm predictions and explore more complicated model variants. As a reader/reviewer I’m reassured when I see a figure showing agreement between analytical predictions and simulation results

      My particular concern is with ‘confirm’. For me, confirming the predictions of a model means finding agreement between the model and the domain that the model is about. Presumably, in the typical case of math bio, the model isn’t about predicting simulations but about predicting some biological system. And agreement between analytic and simulation results should not increase our confidence in the model’s agreement with the biological system. This can be especially dangerous when bad models get grandfathered in, and people start trying to agree with these bad models instead of looking at nature.

      So although it is always nice to see people do more work, I don’t think it is necessary. If you can make an analytic solution and explain it clearly: just focus on that. No need to show the intermediate simulation steps. But if you can only do extreme cases analytically (and those analytic models also happen to be ‘standard’ models) and have to simulate intermediate cases then certainly do both. Although we should think about how to interpret the agreement we see in these cases. I view it more as a sanity check than a confirmation.

      But maybe this depends on how one approaches the distinction between ‘knowledge of’ vs ‘knowledge via’ our models.

      • Rob Noble says:

        Thanks for your generous and expansive reply, Artem.

        This discussion and the responses to my quoted tweet have taught me it’s difficult to come up with advice for mathematical models in general. Each of us has in mind some particular examples that colour our opinions.

        My tweet was particularly inspired by non-linear systems of ODEs or recurrence equations. It’s very easy to create such a model from the bottom up, with seemingly reasonable flows and feedbacks, that appears to behave like the phenomenon of interest. It’s much harder – but much more useful – to characterise a general class of plausible models, and to analyse why these models work.

        As for the order of steps, I had in mind both my own experience and the example of Bob May, who achieved some of his most celebrated results by first playing with numerical solutions [e.g. his profile in APS News]. Of course he could have characterised the behaviour of the logistic map or generalised Lotka-Volterra equations by analysis alone, but he might not have realised there was something worth looking for if he hadn’t also run simulations.

        By “confirm” I just meant checking that simulations are reasonably consistent with analytical predictions, as a way to catch errors. Of course this doesn’t prove that the analysis is correct, and it certainly doesn’t mean that the predictions will prove true in reality.

    • Replying to your nested comment at a lower level of nesting:

      My tweet was particularly inspired by non-linear systems of ODEs or recurrence equations. It’s very easy to create such a model from the bottom up, with seemingly reasonable flows and feedbacks, that appears to behave like the phenomenon of interest. It’s much harder – but much more useful – to characterise a general class of plausible models, and to analyse why these models work.

      This is why I like games: they provide a baby-step version of this. It is very easy to come up with a particular payoff matrix (a particular form of a non-linear system of ODEs). But it is more useful to look at a whole range of payoff matrices with some structural features. And this extra step is not that much more difficult to do, but can already be insightful. Then it can be linked to measurement.

      Obviously, on a more poorly characterized system, repeating that same process is more difficult. And it might require developing new math (or applying existing math in creative ways) along the way.

      Marta might be asking about this on my new post on your path to mathbio. Maybe we should continue the discussion there? I feel like you’d be able to give clearer examples than me.

  2. Patrick Ellsworth says:

    Hey Artem, thank you for the interesting read, and thank you again for the guidance you’ve given me since I’ve started trying to learn about EG Theory.

    Reading about your accidental mistake with the ethnocentrism game, I thought it was interesting that you discovered the mistake while playing around with simulations, rather than doing a rigorous proof of your initial conclusion. That is something I hadn’t thought about when we had our discussion a few weeks ago — in most cases, it seems like simulations are a lot easier to adjust slightly and manipulate than analytical solutions. Maybe they have an advantage there, in that they promote re-examination and playing around in a way that analytical solutions don’t. In any case, I guess I’ll keep on trying to use both methods.

    • Hey Patrick, I am glad that I could help and thank you for the stimulating discussion.

      It is certainly the case that playing with simulations can help us later establish analytic results. When proving theorems, for example, I almost always have to first work through numerous special cases by hand before an actionable pattern reveals itself. And that is analogous to a certain ‘pen and paper simulation’.

      But I don’t think this is what happened for me in the case of the bifurcation in cooperation for inviscid ethnocentrism that I mentioned in this post. I wasn’t looking for a proof of some result, and it wasn’t parameter sweeps that revealed the behavior. Rather, it was a novice mistake in visualization that I made in my earlier work. I committed the classic error of data analysis: I looked only at the averages and not the individual runs. When I accidentally fixed this mistake, I could see the bifurcation, and this in hindsight explained the higher variance I saw in my original data. Something that I hadn’t even considered a problem.

      Once I saw this as a problem to be explained, simulations were not helpful in resolving that problem. Rather I quickly turned to analytic treatments. So I guess the simulations were useful in that they drew my attention to a new field-endogenous problem that I had not considered when I was looking at the field-exogenous problem of understanding ethnocentric cooperation. Maybe that is just science as usual.

      Of course, as Rob pointed out in these comments, knowing how to use both simulation and analytic methods is a great set of skills to have. Where I would disagree with Rob, though, is in what should come first. I still think we should study analytically solvable models that we can easily understand before we turn to simulation models that we cannot. We need that analytic background to realize what is surprising and unexpected in more complicated simulations. Going analytic-first also forces a certain discipline of minimal model building on us that simulations usually don’t enforce. I wish that I had embraced this more quickly when I started out.

      Maybe there is some sort of dialectic happening here, with analytics as thesis, simulations as antithesis, and a new kind of understanding as a synthesis that restarts the cycle.

      I’ll have to think more about this.

  3. jorgeapenas says:

    Hi Artem, nice post. You wanted me to comment on this, so here are my 2 cents:

    1. I agree with you that often simulations are abused as a cheap way of getting papers out. A sad example that you might know well (but there are many more out there) is simulation studies of games on graphs: slightly change the game, or the graph, or the update rule, etc., and get a new paper from your existing code (plus some tweaks).

    2. Analytical solutions (when available) are, well, more elegant. This does not serve only an aesthetic purpose. If we deem them more elegant, it is because they are concise: they pack information more efficiently than a figure showing simulation results. (Think of how many bytes you need to store a simple equation versus how many you need to store all the data coming out of individual-based models.)

    3. Often, analytical models and simulation models work with different limits. Assuming infinite population sizes and/or (extremely) weak selection would greatly help you solve many models in evolutionary biology, but those very same limits are simply impossible to reproduce in code. On the contrary, to get low execution times you would like to simulate small populations, strong selection, etc. This said, simulations serve as a nice sanity check: your analytical model might be exact only in those limits, but simulations can give you an idea of more or less where the approximations really begin to fail.

    4. Simulation opens the door to complicated features that can easily drive you away from your original question. Aiming for an analytical solution often forces you to go for models as simple as possible (but that still retain those key features that make them useful).

    5. I think that reproducibility is also an important point. Math is highly reproducible; I’d dare to say code is far less so. In other words, mistakes in mathematical results are often easier to spot than mistakes in a piece of code. I’ve had reviewers pointing out mistakes in formulas in some of my papers (and I’ve done the same for some papers I’ve reviewed), but so far I’ve had no reviewer spotting potential coding errors in my scripts.

    • Thanks for that Jorge! I agree with most of your points, but I wanted to give a bit of push-back to develop both of our positions more. Starting with (2):

      One of the reasons behind my critique of analytic solutions in this post is that they are in some sense overkill. A mathematical model, especially a heuristic one, is usually not meant to be used over its whole domain of parameters, etc. In practice, it is often used to illustrate some specific verbal or general conclusion. And in that way, it is no more compact than a simulation.

      What is important about representing models as equations rather than code (and this ties into your point (5)) is that we are better at manipulating equations and seeing their boundaries than we are at manipulating and seeing the boundaries of code. It is easier to see where some verbal or general conclusions (often expressing a kind of dynamic regime) will stop holding when we look at a set of equations than when we are looking at code.

      But this has limits. We can write down equations where we can’t analyze the dynamic regimes very well. And we can also write code where we can use automated theorem provers and verifiers to establish certain dynamic properties. But in practice this limit is not brushed up against. Especially not in math bio.

      This brings me to point (3):

      I feel like using simulations to check the robustness of assumptions of analytic models is a bit of a crutch. For a good analytic approximation, we should try to factor through the error terms and thus have an analytic estimate for the importance of our analysis assumptions. But unfortunately, in practice this is often not done (and sometimes can’t be done). Still, if we want to push for more analytic work, then we should also push for analytic work that describes its own failure regimes, instead of relying on simulation to estimate the magnitude of failure. This ties into my concerns about Rob’s comment on using agreement between simulations and analytics to ‘confirm’ something. This has to be done carefully.
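
      (To make the kind of check from your point (3) concrete, here is a sketch with illustrative payoffs and population sizes of my own choosing: a finite-population Moran process compared against the infinite-population replicator prediction of coexistence at x* = 2/3.)

      ```python
      import numpy as np

      rng = np.random.default_rng(1)

      # Illustrative Hawk-Dove-style payoffs (my choice): the infinite-
      # population replicator dynamics predict coexistence at x* = 2/3.
      a, b, c, d = 0.0, 2.0, 1.0, 0.0

      def moran_frequencies(N, steps=50000, w=1.0):
          """Frequency of strategy A under a Moran process: birth is
          fitness-proportional (exponential fitness, strength w),
          death is uniform. Stops early at fixation."""
          i = N // 2
          xs = []
          for _ in range(steps):
              if i == 0 or i == N:  # fixation: a purely finite-N outcome
                  break
              x = i / N
              fA = a * x + b * (1 - x)  # expected payoff of A in the mix
              fB = c * x + d * (1 - x)
              wA, wB = np.exp(w * fA), np.exp(w * fB)
              p_A_births = i * wA / (i * wA + (N - i) * wB)
              r = rng.random()
              if r < p_A_births * (N - i) / N:
                  i += 1  # an A reproduces and a B dies
              elif r < p_A_births * (N - i) / N + (1 - p_A_births) * i / N:
                  i -= 1  # a B reproduces and an A dies
              xs.append(i / N)
          return np.array(xs)

      for N in (10, 100, 1000):
          xs = moran_frequencies(N)
          print(f"N={N:4d}: time-averaged frequency {xs.mean():.2f}, "
                f"steps until fixation/stop: {xs.size}")
      # Infinite-population prediction: 2/3. The approximation degrades
      # visibly as N shrinks.
      ```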

      Finally, point (4):

      I largely agree with you, and this is one of the reasons I prefer analytic approaches: they push us harder towards simplifying models. However, as Patrick pointed out in the comments, by having simulations push us away from our original question, they might generate new, more interesting questions. This is, of course, possible with analytic models, but does it happen less often?

  4. Pingback: The Noble Eightfold Path to Mathematical Biology | Theory, Evolution, and Games Group

  5. Pingback: Cataloging a year of metamodeling blogging | Theory, Evolution, and Games Group

  6. Pingback: Abstracting evolutionary games in cancer | Theory, Evolution, and Games Group
