Hobbes on knowledge & computer simulations of evolution

Earlier this week, I was at the Second Joint Congress on Evolutionary Biology (Evol2018). It was overwhelming, but very educational.

Many of the talks were about very specific evolutionary mechanisms in very specific model organisms. This diversity of questions and approaches to answers reminded me of the importance of bouquets of heuristic models in biology. But what made this particularly overwhelming for me as a non-biologist was the lack of a unifying formal framework to make sense of what was happening. Without the encyclopedic knowledge of a good naturalist, I had a very difficult time linking topics to each other. I was experiencing the pluralistic nature of biology. This was stressed by a slide from Laura Nuño De La Rosa that contrasted the pluralism of biology with the theory reduction of physics.

That's right: to highlight the pluralism, there were great talks from philosophers of biology alongside all the experimental and theoretical biology at Evol2018.

As I've discussed before, I think that theoretical computer science can provide the unifying formal framework that biology needs. In particular, the cstheory approach to reductions offers a more robust notion of 'theory reduction' than the one found in physics, and it is a notion that a pluralistic discipline like evolutionary biology could benefit from. However, I still don't have a clear idea of how such a formal framework would look in practice. Hence, throughout Evol2018 I needed refuge from the overstimulation of organisms and mechanisms that were foreign to me.

One of the places I sought refuge was in talks on computational studies. There, I heard speakers emphasize several times that they weren't "just simulating evolution" but that their programs were evolution (or evolving) in a computer. Not only were they looking at evolution in a computer, but this model organism gave them an advantage over other systems because of its transparency: they could track every lineage, every offspring, every mutation, and every random event. Plus, computation is cheaper and easier than culturing E. coli, brewing yeast, or raising fruit flies. And just like those model organisms, computational models could test evolutionary hypotheses and generate new ones.
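To make that transparency claim concrete, here is a minimal sketch of what such full bookkeeping can look like. The details are invented for illustration (a bitstring genome, a toy fitness function that counts 1-bits, Wright-Fisher-style reproduction); it is not taken from any of the talks, only a picture of how every lineage, mutation, and random event can be made available for inspection.

```python
import random

# A minimal sketch of "evolution in a computer", under toy assumptions:
# genotypes are bitstrings, fitness counts the 1-bits, and reproduction
# is Wright-Fisher-style (non-overlapping generations, fitness-weighted
# parent sampling). The point is the transparency: every parent-offspring
# link, every mutation, and every random draw is recorded and replayable.

random.seed(42)  # fixing the seed makes every "random" event reproducible

GENOME_LENGTH = 8
POP_SIZE = 10
MUTATION_RATE = 0.05
GENERATIONS = 20

def fitness(genome):
    return sum(genome)  # toy fitness: number of 1-bits

population = [[0] * GENOME_LENGTH for _ in range(POP_SIZE)]
parents_log = []    # parents_log[t][i] = parent index of individual i at generation t
mutation_log = []   # (generation, individual, locus) for every mutation event

for t in range(GENERATIONS):
    weights = [fitness(g) + 1 for g in population]  # +1 avoids all-zero weights
    parent_indices = random.choices(range(POP_SIZE), weights=weights, k=POP_SIZE)
    parents_log.append(parent_indices)

    new_population = []
    for i, p in enumerate(parent_indices):
        child = population[p][:]
        for locus in range(GENOME_LENGTH):
            if random.random() < MUTATION_RATE:
                child[locus] ^= 1  # flip the bit
                mutation_log.append((t, i, locus))
        new_population.append(child)
    population = new_population

def lineage(individual):
    """Trace an individual in the final generation back to its t=0 ancestor."""
    path = [individual]
    for gen_parents in reversed(parents_log):
        path.append(gen_parents[path[-1]])
    return list(reversed(path))

print("mean fitness:", sum(map(fitness, population)) / POP_SIZE)
print("lineage of individual 0:", lineage(0))
print("total mutation events:", len(mutation_log))
```

Nothing here is hidden: rerunning with the same seed reproduces every event, and the logs expose the complete genealogy that a wet-lab experiment could only sample.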

This defensive emphasis surprised me. It suggested that these researchers had often been questioned about the usefulness of their simulations for the study of evolution.

In this post, I want to reflect on some reasons for such questioning.

Let’s rewind to a time before computers. To a time before Darwin’s evolution by natural selection. Just to stress that this debate could have been had (and to some extent, has been had) before either computers or evolution. Let’s rewind to the time of Thomas Hobbes.

When Hobbes was writing, clocks and watches were some of the best examples of technology, and the hottest idea was the new science of mechanistic physics. Except Hobbes wanted to write about politics; more than that, he wanted to write a science of politics. The problem was that by looking at the commonwealth, he saw the importance of its form and the relative unimportance of its matter. If he were a pure Aristotelian, this would be no issue, but he accepted the new science's elimination of form as an explanatory tool. For mechanistic physics, formal cause was not an acceptable mode of explanation.

This forced Hobbes to distinguish between two kinds of knowledge. First, there was knowledge of things that we have made ourselves; for him, the central examples were geometry and the state. Second, there was knowledge of things that we did not make, i.e., the domain of mechanistic physics. In the case of physics, we could not deconstruct the machine, because different mechanisms can produce the same effect. Thus, if we tried to reason from effects to causes, we could only arrive at reasonable conjectures and hypotheses. But for the state, we could know the causes because we had constructed it ourselves. With this move, Hobbes could avoid the problem of underdetermination.

This is also the move that a computational modeler employs. By explicitly specifying all the rules that the digital organism follows, she is making its world. She can then dismantle the machine and understand all of its parts and how they contribute to the effect of interest. Unlike Hobbes, she has the extra advantage of not having had the State built around her, and of being able to dismantle her simulation at will. Of course, in practice, just like Hobbes, most computer modelers don't fully understand the code they've written. Still, this powerful determination is the computational modeler's cake.
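Here is one concrete form the dismantling can take, as a self-contained sketch under the same invented assumptions as above: rerun an identically seeded world with individual mechanisms switched off, and see what each contributes to the outcome.

```python
import random

# A sketch of "dismantling the machine", under toy assumptions (bitstring
# genomes, fitness that counts 1-bits). Because every rule is explicit and
# every random draw is seeded, the modeler can rerun the world with one
# mechanism disabled and inspect how that part contributes to the effect.

def run(seed, select=True, mutate=True,
        pop_size=10, length=8, mu=0.05, generations=50):
    rng = random.Random(seed)
    pop = [[0] * length for _ in range(pop_size)]
    for _ in range(generations):
        if select:
            weights = [sum(g) + 1 for g in pop]  # fitness-proportional parents
        else:
            weights = [1] * pop_size  # pure drift: every parent equally likely
        parents = rng.choices(range(pop_size), weights=weights, k=pop_size)
        pop = [[b ^ 1 if mutate and rng.random() < mu else b
                for b in pop[p]] for p in parents]
    return sum(sum(g) for g in pop) / pop_size  # final mean fitness

print("full model:  ", run(seed=1))
print("drift only:  ", run(seed=1, select=False))  # selection removed
print("no mutation: ", run(seed=1, mutate=False))  # variation removed
```

With mutation switched off, mean fitness stays at zero: selection alone cannot create the variation it acts on. This kind of clean, replayable decomposition is exactly what the explicit rules buy.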

Unfortunately, the modeler wants to eat her cake, too. By appealing to multiple realizability, she can claim that evolution does not need to be realized in DNA but can also be realized in silico. In other words, that evolution is underdetermined. She will usually proceed further by saying that a big advantage of a computational model is that it can be run in conditions that aren't easily accessible to wet-lab experiments. In other words, she wants to assume a set of rules (a set that is underdetermined by our intuitions from real experiments) and then extrapolate their effects to carry out unreal experiments.

I think it is this tension between having your cake and eating it that causes the criticisms of "just a simulation". All the advantages of peering under the hood come from determination, but all the applicability to non-simulations comes from underdetermination. And since we don't usually inherently care about in silico organisms, we have to embrace the underdetermination for the sake of applicability. Once we do that, all the power of peering under the hood disappears, since the detailed mechanisms become merely conjectural. This is made worse by the curse of computing in big simulations, where the modeler doesn't actually understand all the details of the mechanism she implemented; for example, when the organisms are arbitrary programs in some simple specification language.

Some of this critique can be avoided by replacing in silico with in logico. And I think computational modelers often offer this defence, too. For this, let’s turn again to Hobbes.

After sidestepping the problem of underdetermination, Hobbes could imagine the State as a giant watch or, more generally, an automaton. But he did not see the gears of that watch as the humans that make up society. Instead, he compared artificial constructs: the wealth of the population to the automaton's strength, counselors to its memory, and reward and punishment to its nerves. In this way, he was implementing the State not through physical processes (which would make its study an extension of physical mechanics) but through conceptual, human-made processes.

We can make a similar move with simulations. We can recognize that the physical world is separate from our descriptions of it, and that evolution is our way of making sense of the order and diversity in the physical world. As such, evolution is a concept which we can implement with other concepts. A computer simulation is then just a physical model of those concepts, much like a clock was, for a long time, often used as a physical model of our astronomical hypotheses. This is the same sort of separation of theory and reality that I tried to draw with Post's variant of the Church-Turing thesis. And it provides a way to interpret evolutionary simulations as implementations of theory.

I think that modelers make the above argument when they point out that what matters is not the DNA/RNA/squishy-stuff of biology, but some set of logical, process-based rules that defines the applicability of evolution. However, when we make this argument, we have to be mindful of the underdetermination of our theory; in particular, our goal should be to improve how the theory is determined. In practice, I think this can only be done if we provide an opportunity to link directly to systems of interest. We want our processes to have operationalizations that apply both in our computational model and in other model organisms or natural organisms. For me, this can mean giving up some of the peeking under the hood in favor of an effective theory rather than a reductive one.

Of course, the above considerations are not limited to computer models. Model organisms in conditions designed for the purpose of a particular experiment are both conceptual and physical systems. And although computer models are also both conceptual and physical systems, these two aspects of them are usually easier to disentangle than for model organisms. This means that the above considerations could be repeated for experimental systems, but more care would be required.



Responses to Hobbes on knowledge & computer simulations of evolution

  1. Daniel Weissman says:

    Hi Artem, I had to miss the meeting so I’m not sure exactly what talks you’re writing about, but when I’ve seen people argue that what they’re doing is not “just a simulation”, they’re usually doing AVIDA or some similarly complicated system, and the accusation that they’re defending against is not “you shouldn’t use computational models” but “you should use simpler computational models”.

    • At least one of the talks was an Avida one, but I don’t remember if all the ones that used this phrase also used Avida. I’ve definitely heard the not “just a simulation” argument to defend overly complicated computational models. And I have much to say on that. Maybe it can be a future post. But I think in these particular cases they were not trying to head off criticism from model minimalists but trying to position themselves as relevant to biological questions in the same sort of way as traditional model organisms are relevant to biological questions.

      In the context of the post, I don’t think I was saying something that is meant to apply to all computational models. Because there are at least two ways we can relate knowledge and models: we can have “knowledge of” our model or “knowledge via” our model. Sometimes they intertwine. I was trying to suggest that Hobbes would think that “knowledge of” our computer models is of a different kind than “knowledge of” our biological model organisms. And connecting these two types of “knowledge of” to achieve “knowledge via” is not always straightforward. My suggestion is — unsurprisingly — to focus on effective theories and operationalization to bridge this. I haven’t seen this done much with Avida, but even with minimalist models, I seldom see it done.

