# Heuristic models as inspiration-for and falsifiers-of abstractions

July 14, 2018
by Artem Kaznatcheev

Last month, I blogged about abstraction and lamented that abstract models are lacking in biology. Here, I want to return to this.

What isn’t lacking in biology — and what I also work on — is simulation and heuristic models. These can seem abstract in the colloquial sense but are not very abstract for a computer scientist. They are usually more idealizations than abstractions. And even if all I care about is abstract models — which I can reasonably be accused of at times — then heuristic models should still be important to me. Heuristics help abstractions in two ways: portfolios of heuristic models can inspire abstractions, and single heuristic models can falsify abstractions.

In this post, I want to briefly discuss these two uses for heuristic models. In the process, I will try to make it a bit more clear as to what I mean by a heuristic model. I will do this with metaphors. So I’ll produce a heuristic model of heuristic models. And I’ll use spatial structure and the evolution of cooperation as a case study.

It might be good to start by reiterating Lakoff and Johnson: “[m]etaphor is one of our most important tools for trying to comprehend partially what cannot be comprehended totally”. This might sound familiar because active research is specifically about “trying to comprehend partially” what we cannot (yet) comprehend totally. And one of our most important tools for doing this is modelling. A heuristic model achieves this partial comprehension by simplifying through idealization. When we build a heuristic model, we act like a cartoonist sketching a caricature. We pick a feature that seems salient to us and try to produce a mathematical or computational description that feels like it captures (or defines) the essence of that feature.

In other words, a typical heuristic model is built on a number of assumptions that seem simple to the modeler. The modeler then checks that these assumptions capture the relevant feature by seeing if the model produces the desired results. This comparison is usually through qualitative agreement with some observations or common sense. The assumptions that ground a heuristic model are not guaranteed to be robust: small changes, additions, or subtractions to the assumptions might drastically change the results. More importantly, it is not even known — beyond the hunches of the modeler and their team — how these assumptions relate to the domain being modeled. In other words, the assumptions are often not empirically tested — at times, not even potentially testable — but that is not essential to the modelling process, since they are meant as an incomplete sketch and not as a deduction. And when heuristic models are studied through simulation, it isn't even clear if (or which of) the assumptions necessarily imply the observed model results. Often, not all of the assumptions are even explicit. This is where the art of a good modeler comes in: she has to pin down all the assumptions being made, and get a feeling for their reasonableness.

As such, given a heuristic model or simulation, the model usually doesn't capture a variety of possible physical implementations but corresponds to a particular (or small family of) computational implementation(s). Results from heuristic models don't stack in the way abstractions do and are not certain in the way that abstractions are. Of course, this doesn't mean that they aren't useful. Usually, a scientist has a whole portfolio of a wide range of different heuristic models, and when they all point toward a common result, she can abstract that result and believe in it without believing in the details of any particular model. This is why I still build a lot of heuristic models (and sometimes even simulations). And this is a view of heuristic models as illustrations: if you have a number of different cartoons of something then maybe you can imagine what it looks like ‘for real’.

In this way, heuristic models serve as inspiration for abstract theories. None of the models are meant to be taken as literally true but the theory they point to is meant to be taken as true. In most of my experience of biology, this transition from a portfolio of heuristic models to an abstract theory also takes us from the realm of math/computation to the realm of words.

For a case study, consider the effect of space on the evolution of cooperation. We have a huge literature of specific heuristic models — which I’ve often chronicled on TheEGG — where (1) “turning up” the parameter associated with space either (2) produces more cooperation, or (3) allows cooperation over a broader range of interactions. In the case of a specific heuristic model, each of (1), (2), and (3) is expressed mathematically or computationally in a very precise way. However, we then notice this similarity between simulations and produce an abstract verbal theory: “spatial structure promotes cooperation”. In this theory, the terms — most notably “spatial structure” and “promote” — are meant to be implementable by a large variety of models. But the verbal nature of the theory makes it inherently fuzzy as to what should or shouldn’t count as “spatial structure” or “promote”. However, we would not have been able to generate this theory — or at least we would have much less confidence in it — if we had not seen the portfolio of heuristic models that implement it.

Of course, the abstraction inspired by heuristic models does not have to always be verbal or vague. In fact, theoretical computer science is particularly well placed to produce less vague abstractions from a mess of heuristic models. In my eyes, this is the most important thing that theoretical computer science can offer biology. To yet again repeat Scott Aaronson (who attributes this sentiment to Greg Kuperberg and was writing in the context of the transition from cryptology to cryptography):

> [Theoretical computer science offers] what mathematics itself has sought to do for *everything* since Euclid! That is, when you see an unruly mess of insights, related to each other in some tangled way, systematize and organize it. Turn the tangle into a hierarchical tree (or dag). Isolate the minimal assumptions … on which each conclusion can be based, and spell out all the logical steps needed to get from here to there—even if the steps seem obvious or boring.

But extracting an abstraction from a mess of heuristics isn’t the only way heuristic models can be useful to abstractions. Even a single heuristic model can be useful. A single heuristic model is not in itself an abstraction. Sometimes, though, there are ways that mathematically identical models can be interpreted both as a heuristic and as an abstraction — as I do with replicator dynamics. But this is not so much a heuristic model being useful to abstraction as an abstraction being accidentally mathematically equivalent to a heuristic. More importantly, a single heuristic model can serve as an implementation that falsifies an abstraction. If somebody has an abstract theory that is meant to apply to a wide range of domains, then one can argue that a particular computational model is a ‘correct’ implementation of that abstract theory. If that model then leads to a result that contradicts the theory, then we know the abstraction was wrong. It either needs to refine its domain of application (to exclude that particular model, but hopefully with a reasoned argument) or check its conclusions. This role for heuristic models is similar to the role that Popper’s falsification assigns to experiments for empirical theories.

To return to our case study, we might have an abstract theory that says: “free-riders evolutionarily out-compete cooperators”. We can produce a particular heuristic model in which cooperators and defectors compete on a random k-regular graph. This seems to implement the theory. But in this model, cooperators can sometimes win. Thus, our original abstraction was wrong. We need to refine it: “in the absence of spatial structure, free-riders evolutionarily out-compete cooperators”.
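To make this concrete, here is a minimal sketch of such a heuristic model: a death-birth update process for the donation game, run on a k-regular ring lattice (a simple stand-in for the random k-regular graphs above). This is not any particular model from the literature; the function names, parameter values, and update rule are all illustrative choices.

```python
import random

def donation_payoffs(strategies, neighbors, b=5.0, c=1.0):
    """Payoffs in the donation game: each cooperator pays cost c per
    neighbor, and every node receives benefit b per cooperating neighbor."""
    payoff = [0.0] * len(strategies)
    for i, cooperates in enumerate(strategies):
        if cooperates:
            for j in neighbors[i]:
                payoff[i] -= c
                payoff[j] += b
    return payoff

def death_birth_step(strategies, neighbors, delta=0.01, b=5.0, c=1.0):
    """One death-birth update: a uniformly random node dies, and its
    neighbors compete to fill the empty slot in proportion to fitness."""
    payoff = donation_payoffs(strategies, neighbors, b, c)
    i = random.randrange(len(strategies))
    nbrs = neighbors[i]
    # Exponential-free linear fitness; delta is small so weights stay positive.
    fitness = [1.0 + delta * payoff[j] for j in nbrs]
    winner = random.choices(nbrs, weights=fitness, k=1)[0]
    strategies[i] = strategies[winner]

def cooperators_fixate(n=40, k=4, steps=4000, seed=0):
    """Start a single cooperator among defectors on a k-regular ring
    lattice; return True if cooperation fixates, False if it dies out,
    and None if the population is still mixed after `steps` updates."""
    random.seed(seed)
    # k-regular ring lattice: each node linked to k/2 neighbors per side.
    half = k // 2
    neighbors = [[(i + d) % n for d in range(1, half + 1)] +
                 [(i - d) % n for d in range(1, half + 1)]
                 for i in range(n)]
    strategies = [False] * n
    strategies[0] = True  # one cooperator invades a population of defectors
    for _ in range(steps):
        death_birth_step(strategies, neighbors)
        if all(strategies):
            return True
        if not any(strategies):
            return False
    return None
```

Running `cooperators_fixate` across many seeds shows the qualitative point: on this spatial structure, the lone cooperator is not doomed in every run, which is exactly the kind of result that forces a refinement of the verbal abstraction.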

This means that heuristic models can help us even if we only care about abstractions. And they can help us in two ways. Heuristic models can both inspire new abstractions and falsify existing ones. Finally, heuristic models can have all kinds of other value that is independent of how they interact with abstractions. But that is a blog post for another day.
