Mathtimidation by analytic solution vs curse of computing by simulation

Recently, I was chatting with Patrick Ellsworth about the merits of simulation vs analytic solutions in evolutionary game theory. As you might expect from my old posts on the curse of computing, and my enjoyment of classifying games into dynamic regimes, I started with my typical argument against simulations. However, as I searched for a positive argument for analytic solutions of games, I realized that I didn’t have a good one. Instead, I arrived at another negative argument — this time against analytic solutions of heuristic models.

Hopefully this curmudgeoning comes as no surprise by now.

But it did leave me in a rather confused state.

Given that TheEGG is meant as a place to share such confusions, I want to use this post to set the stage for the simulation vs analytic debate in EGT and then rehearse my arguments. I hope that, dear reader, you will then help resolve the confusion.

First, for context, I’ll share my own journey from simulations to analytic approaches. You can see a visual sketch of it above. Second, I’ll present an argument against simulations — at least as I framed that argument around the time I arrived at Moffitt. Third, I’ll present the new argument against analytic approaches. At the end — as is often the case — there will be no resolution.

Read more of this post

Methods and morals for mathematical modeling

About a year ago, Vincent Cannataro emailed me asking about any resources that I might have on the philosophy and etiquette of mathematical modeling and inference. As regular readers of TheEGG know, this topic fascinates me. But as I was writing a reply to Vincent, I realized that I don’t have a single post that could serve as an entry point to my musings on the topic. Instead, I ended up sending him an annotated list of eleven links and a couple of book recommendations. As I scrambled for a post for this week, I realized that such an analytic linkdex should exist on TheEGG. So, in case others have interests similar to Vincent and me, I thought that it might be good to put together in one place some of the resources about metamodeling and related philosophy available on this blog.

This is not an exhaustive list, but it might still be relatively exhausting to read.

I’ve expanded slightly past the original 11 links (to 14) to highlight some more recent posts. The free association of the posts is structured slightly, with three sections: (1) classifying mathematical models, (2) pros and cons of computational models, and (3) ethics of models.

Read more of this post

Separating theory from nonsense via communication norms, not Truth

Earlier this week on twitter, Brian Skinner wrote an interesting thread on how to distinguish good theory from crackpottery. He started with a trait that both theorists and crackpots share: we have an “irrational self-confidence” — a belief that just by thinking we “can arrive at previously-unrealized truths about the world”. From this starting point, the two diverge in their use of evidence. A crackpot relies primarily on positive evidence: he thinks hard about a problem, arrives at a theory that feels right, and then publicizes the result.

A theorist, on the other prong, incorporates negative evidence: she thinks hard about a problem, arrives at a theory that feels right, and then proceeds to try to disprove that theory. She reads the existing literature and looks at the competing theories, takes time to understand them and compare them against her own. If any disagree with hers, then she figures out why those theories are wrong. She pushes her theory to the extremes, looks at its limiting cases and checks them for agreement with existing knowledge. Only after her theory comes out unscathed from all these challenges does she publicize it.

For Skinner, this second prong is the definition of scholarship. In practice, coming up with a correct theory is mostly a painful process of discarding many of your own wrong attempts. A good theorist is thorough, methodical, and skeptical of their own ideas.

The terminology of crackpottery vs scholarship is probably overly harsh, as Skinner acknowledges. And in practice, somebody might be a good theorist in one domain but a crackpot elsewhere. As Malkym Lesdrae points out, there are many accomplished academics who are also crackpot theorists: “Most often it’s about things outside their field of specialty”. Thus, this ideal self-skepticism might be domain specific.

It is also a destructive ideal.

In other words, I disagreed with Skinner on the best way to separate good theory from nonsense. Mostly on the framing. Skinner crystallized our disagreement in a tweet: whereas he views self-skepticism as an obligation to the Truth, I view a similar sort of self-reflective behavior as a social obligation. I am committed to this latter view because I want to make sense of things like heuristic models, where truth is secondary to other modeling concerns. Where truth is not the most useful yardstick for checking the usefulness of a model. Where you hear Box’s slogan: “all models are wrong, but some are useful.”

Given the brief summary of Skinner’s view above — and please, Brian, correct me in the comments if I misrepresented your position — I want to use the rest of this post to sketch what I mean by self-reflective behavior as a social obligation.
Read more of this post

Hobbes on knowledge & computer simulations of evolution

Earlier this week, I was at the Second Joint Congress on Evolutionary Biology (Evol2018). It was overwhelming, but very educational.

Many of the talks were about very specific evolutionary mechanisms in very specific model organisms. This diversity of questions and approaches to answers reminded me of the importance of bouquets of heuristic models in biology. But what made this particularly overwhelming for me as a non-biologist was the lack of a unifying formal framework to make sense of what was happening. Without the encyclopedic knowledge of a good naturalist, I had a very difficult time linking topics to each other. I was experiencing the pluralistic nature of biology. This was stressed by Laura Nuño De La Rosa’s slide contrasting the pluralism of biology with the theory reduction of physics.

That’s right, to highlight the pluralism, there were great talks from philosophers of biology alongside all the experimental and theoretical biology at Evol2018.

As I’ve discussed before, I think that theoretical computer science can provide the unifying formal framework that biology needs. In particular, the cstheory approach to reductions is the more robust (compared to physics) notion of ‘theory reduction’ that a pluralistic discipline like evolutionary biology could benefit from. However, I still don’t have any idea of how such a formal framework would look in practice. Hence, throughout Evol2018 I needed refuge from the overwhelming overstimulation of organisms and mechanisms that were foreign to me.

One of the places I sought refuge was in talks on computational studies. There, I heard speakers emphasize several times that they weren’t “just simulating evolution” but that their programs were evolution (or evolving) in a computer. Not only were they looking at evolution in a computer, but this model organism gave them an advantage over other systems because of its transparency: they could track every lineage, every offspring, every mutation, and every random event. Plus, computation is cheaper and easier than culturing E. coli, brewing yeast, or raising fruit flies. And just like those model organisms, computational models could test evolutionary hypotheses and generate new ones.
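As a minimal sketch of the sort of transparency they meant — my own toy Moran-style birth-death process, not anything from the actual talks — every birth, replacement, mutation, and random draw can be logged in full:

```python
import random

random.seed(1)                 # even the randomness is reproducible

population = [0] * 10          # ten individuals, all of the ancestral type 0
fitness = {0: 1.0}             # fitness of every type ever seen
events = []                    # a complete record of every event

next_type = 1
for step in range(100):
    # choose a parent proportional to fitness and a random individual to replace
    weights = [fitness[t] for t in population]
    parent = random.choices(range(len(population)), weights=weights)[0]
    dead = random.randrange(len(population))
    child_type = population[parent]
    if random.random() < 0.02:  # occasional mutation creates a brand-new type
        child_type = next_type
        fitness[child_type] = fitness[population[parent]] + random.gauss(0, 0.05)
        next_type += 1
    events.append((step, parent, dead, child_type))  # nothing is hidden
    population[dead] = child_type

print(len(events), "events logged; every lineage can be reconstructed from them")
```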

This defensive emphasis surprised me. It suggested that these researchers have often been questioned on the usefulness of their simulations for the study of evolution.

In this post, I want to reflect on some reasons for such questioning.

Read more of this post

Heuristic models as inspiration-for and falsifiers-of abstractions

Last month, I blogged about abstraction and lamented that abstract models are lacking in biology. Here, I want to return to this.

What isn’t lacking in biology — and what I also work on — is simulation and heuristic models. These can seem abstract in the colloquial sense but are not very abstract for a computer scientist. They are usually more idealizations than abstractions. And even if all I care about is abstract models — which I can reasonably be accused of at times — then heuristic models should still be important to me. Heuristics help abstractions in two ways: portfolios of heuristic models can inspire abstractions, and single heuristic models can falsify abstractions.

In this post, I want to briefly discuss these two uses for heuristic models. In the process, I will try to make it a bit more clear as to what I mean by a heuristic model. I will do this with metaphors. So I’ll produce a heuristic model of heuristic models. And I’ll use spatial structure and the evolution of cooperation as a case study.

Read more of this post

Double-entry bookkeeping and Galileo: abstraction vs idealization

Two weeks ago, I wrote a post on how abstract is not the opposite of empirical. In that post, I distinguished between the colloquial meaning of abstract and the ‘true’ meaning used by computer scientists. For me, abstraction is defined by multiple realizability. An abstract object can have many implementations. The concrete objects that implement an abstraction might differ from each other in various — potentially drastic — ways but if the implementations are ‘correct’ then the ways in which they differ are irrelevant to the conclusions drawn from the abstraction.
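To make the comp sci sense concrete, here is a minimal sketch — my own illustration, not anything from the earlier post — of an abstraction with two implementations that differ drastically in their internals, yet support exactly the same conclusion (last-in-first-out order) at the level of the abstraction:

```python
from abc import ABC, abstractmethod

class Stack(ABC):
    """The abstraction: only push and pop are specified."""
    @abstractmethod
    def push(self, x): ...
    @abstractmethod
    def pop(self): ...

class ListStack(Stack):
    """One realization: backed by a Python list."""
    def __init__(self):
        self._items = []
    def push(self, x):
        self._items.append(x)
    def pop(self):
        return self._items.pop()

class LinkedStack(Stack):
    """A very different realization: backed by nested pairs."""
    def __init__(self):
        self._top = None
    def push(self, x):
        self._top = (x, self._top)
    def pop(self):
        x, self._top = self._top
        return x

# Any conclusion drawn from the Stack abstraction holds for both implementations:
for stack in (ListStack(), LinkedStack()):
    stack.push(1); stack.push(2)
    assert stack.pop() == 2  # last in, first out, regardless of the internals
```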

I contrasted this comp sci view with a colloquial sense that I attributed to David Basanta. I said this colloquial sense was just that an abstract model is ‘less detailed’.

In hindsight, I think this colloquial sense was a straw-man and doesn’t do justice to David’s view. It isn’t ignoring any detail that makes something colloquially abstract. Rather, it is ignoring ‘the right sort of’ detail in the ‘right sort of way’. It is about making an idealization meant to arrive at some essence of a (class of) object(s) or a process. And this idealization view of abstraction has a long pedigree.

In this post, I want to provide a semi-historical discussion of the difference between (comp sci) abstraction and idealization. I will focus on double-entry bookkeeping as a motivation. Now, this might not seem relevant to science, but for Galileo it was relevant. He expressed his views on (proto-)scientific abstraction by analogy to bookkeeping. And in expressing his view, he covered both abstraction and idealization. In the process, he introduced both good ideas and bad ones. They remain with us today.

Read more of this post

QBIOX: Distinguishing mathematical from verbal models in biology

There is a network at Oxford known as QBIOX that aims to connect researchers in the quantitative biosciences. They try to foster collaborations across the university and organize symposia where people from various departments can share their quantitative approaches to biology. Yesterday was my second or third time attending, and I wanted to share a brief overview of the three talks by Philip Maini, Edward Morrissey, and Heather Harrington. In the process, we’ll get to look at slime molds, colon crypts, neural crests, and glycolysis. And see modeling approaches ranging from ODEs to hybrid automata to STAN to algebraic systems biology. All of this will be in contrast to verbal theories.

Philip Maini started the evening off — and set the theme for my post — with a direct question as the title of his talk.

Does mathematics have anything to do with biology?

Read more of this post

Token vs type fitness and abstraction in evolutionary biology

There are only twenty-six letters in the English alphabet, and yet there are more than twenty-six letters in this sentence. How do we make sense of this?
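As a quick illustration of what I mean — a throwaway Python sketch of my own, not something from the post — count the letter tokens versus the letter types in that very sentence:

```python
# Letter *tokens* are occurrences; letter *types* are the distinct letters used.
sentence = ("There are only twenty-six letters in the English alphabet, "
            "and yet there are more than twenty-six letters in this sentence.")

tokens = [c for c in sentence.lower() if c.isalpha()]
types_ = set(tokens)

print(len(tokens))  # far more than twenty-six letter tokens
print(len(types_))  # at most twenty-six letter types
```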

Ever since I first started collaborating with David Basanta and Jacob Scott back in 2012/13, a certain tension about evolutionary games has been gnawing at me. A feeling that a couple of different concepts are being swept under the rug of a single name.[1] This feeling became stronger during my time at Moffitt, especially as I pushed for operationalizing evolutionary games. The measured games that I was imagining were simply not the same sort of thing as the games implemented in agent-based models. Finally, this past November, as we were actually measuring the games that cancer plays, a way to make the tension clear crystallized for me: the difference between reductive and effective games could be linked to two different conceptions of fitness.

This showed me a new door: philosophers of biology have already done extensive conceptual analysis of different versions of fitness. Unfortunately, due to various time pressures, I could only peek through the keyhole before rushing out my first draft on the two conceptions of evolutionary games. In particular, I didn’t connect directly to the philosophy literature and just named the underlying views of fitness after the names I’ve been giving to the games: reductive fitness and effective fitness.

Now, after a third of a year busy teaching and revising other work, I finally had a chance to open that door and read some of the philosophy literature. This has provided me with a better vocabulary and a clearer categorization of fitness concepts. Instead of defining reductive vs effective fitness, the distinction I was looking for is between token fitness and type fitness. And in this post, I want to discuss that distinction. I will synthesize some of the existing work in a way that is relevant to separating reductive vs effective games. In the process, I will highlight some missing points in the current debates. I suspect these points have been overlooked because most philosophers of biology focus on macroscopic organisms rather than the microscopic systems that motivated me.[2]

Say what you will of birds and ornithology, but I am finding reading philosophy of biology to be extremely useful for doing ‘actual’ biology. I hope that you will, too.

Read more of this post

Ontology of player & evolutionary game in reductive vs effective theory

In my views of game theory, I largely follow Ariel Rubinstein: game theory is a set of fables. A collection of heuristic models that helps us structure how we make sense of and communicate about the world. Evolutionary game theory was born of classic game theory through a series of analogies. These analogies are either generalizations or restrictions of the theory depending on whether you’re thinking about the stories or the mathematics. Given this heuristic genealogy of the field — and my enjoyment of heuristic models — I usually do not worry too much about what exactly certain ontic terms like strategy, player, or game really mean or refer to. I am usually happy to leave these terms ambiguous so that they can motivate different readers to have different interpretations and subsequently push for different models of different experiments. I think it is essential for heuristic theories to foster this diverse creativity. Anything goes.

However, not everyone agrees with Ariel Rubinstein and me; some people think that EGT isn’t “just” heuristics. In fact, more recently, I have also shifted some of my uses of EGT from heuristics to abductions. When this happens, it is no longer acceptable for researchers to be willy-nilly with fundamental objects of the theory: strategies, players, and games.

The biggest culprit is the player. In particular, a lot of confusion stems from saying that “cells are players”. In this post, I’d like to explore two of the possible positions on what constitutes players and evolutionary games.

Read more of this post

Multiplicative versus additive fitness and the limit of weak selection

Previously, I have discussed the importance of understanding how fitness is defined in a given model. So far, I’ve focused on how mathematically equivalent formulations can have different ontological commitments. In this post, I want to touch briefly on another concern: two different types of mathematical definitions of fitness. In particular, I will discuss additive fitness versus multiplicative fitness.[1] You often see the former in continuous time replicator dynamics and the latter in discrete time models.

In some ways, these versions are equivalent: there is a natural bijection between them through the exponential map or by taking the limit of infinitesimally small time-steps. A special case of more general Lie theory. But in practice, they are used differently in models. Implicitly changing which definition one uses throughout a model — without running back and forth through the isomorphism — can lead to silly mistakes. Thankfully, there is usually a quick fix for this in the limit of weak selection.
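To make the correspondence explicit — in my notation here, not necessarily the one used later in the post — let $f$ be the additive fitness driving continuous-time growth and $w$ the multiplicative fitness over a discrete time-step $\Delta t$. Then the bijection is

$$w = e^{f \Delta t} \quad \Longleftrightarrow \quad f = \frac{\ln w}{\Delta t},$$

and in the limit of weak selection, where $s = f \Delta t$ is small,

$$w = e^{s} \approx 1 + s,$$

so the additive and multiplicative bookkeeping agree to first order in $s$.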

I suspect that this post is common knowledge. However, I didn’t have a quick reference to give to Pranav Warman, so I am writing this.
Read more of this post