Introduction to Algorithmic Biology: Evolution as Algorithm

As Aaron Roth wrote on Twitter — and as I bet with my career: “Rigorously understanding evolution as a computational process will be one of the most important problems in theoretical biology in the next century. The basics of evolution are many students’ first exposure to ‘computational thinking’ — but we need to finish the thought!”

Last week, I tried to continue this thought for Oxford students at a joint meeting of the Computational Society and Biological Society. On May 22, I gave a talk on algorithmic biology. I want to use this post to share my (shortened) slides as a pdf file and give a brief overview of the talk.

Winding path in a hard semi-smooth landscape

If you didn’t get a chance to attend, maybe the title and abstract will get you reading further:

Algorithmic Biology: Evolution is an algorithm; let us analyze it like one.

Evolutionary biology and theoretical computer science are fundamentally interconnected. In the work of Charles Darwin and Alfred Russel Wallace, we can see the emergence of concepts that theoretical computer scientists would later hold as central to their discipline: ideas like asymptotic analysis, the role of algorithms in nature, distributed computation, and analogy from man-made to natural control processes. By recognizing evolution as an algorithm, we can continue to apply the mathematical tools of computer science to solve biological puzzles – to build an algorithmic biology.

One of these puzzles is open-ended evolution: why do populations continue to adapt instead of getting stuck at local fitness optima? Or alternatively: what constraint prevents evolution from finding a local fitness peak? Many solutions have been proposed to this puzzle, with most being proximal – i.e. depending on the details of the particular population structure. But computational complexity provides an ultimate constraint on evolution. I will discuss this constraint, and the positive aspects of the resultant perpetual maladaptive disequilibrium. In particular, I will explain how we can use this to understand both on-going long-term evolution experiments in bacteria; and the evolution of costly learning and cooperation in populations of complex organisms like humans.

Unsurprisingly, I’ve written about all these topics already on TheEGG, and so my overview of the talk will involve a lot of links back to previous posts. In this way, this post can serve as an analytic linkdex on algorithmic biology.
Read more of this post


British agricultural revolution gave us evolution by natural selection

This Wednesday, I gave a talk on algorithmic biology to the Oxford Computing Society. One of my goals was to show how seemingly technology oriented disciplines (such as computer science) can produce foundational theoretical, philosophical and scientific insights. So I started the talk with the relationship between domestication and natural selection. Something that I’ve briefly discussed on TheEGG in the past.

Today we might discuss artificial selection or domestication (or even evolutionary oncology) as applying the principles of natural selection to achieve human goals. This is only because we now take Darwin’s work as given. At the time that he was writing, however, Darwin actually had to make his argument in the other direction. Darwin’s argument proceeds from looking at the selection algorithms used by humans and then abstracting them to focus only on the algorithm and not the agent carrying out the algorithm. Having made this abstraction, he can implement the breeder by the distributed struggle for existence and thus get natural selection.

The inspiration is clearly from the technological to the theoretical. But there is a problem with my story.

Domestication of plants and animals is ancient. Old enough that we have cancers that arose in our domesticated helpers 11,000 years ago and persist to this day. Domestication in general — the fruit of the first agricultural revolution — can hardly qualify as a new technology in Darwin’s day. It would have been just as familiar to Aristotle, and yet he thought species were eternal.

Why wasn’t Aristotle or any other ancient philosopher inspired by the agriculture and animal husbandry of their day to arrive at the same theory as Darwin?

The ancients didn’t arrive at the same view because it wasn’t the domestication of the first agricultural revolution that inspired Darwin. It was something much more contemporary to him. Darwin was inspired by the British agricultural revolution of the 18th and early 19th century.

In this post, I want to sketch this connection between the technological development of the Georgian era and the theoretical breakthroughs in natural science in the subsequent Victorian era. As before, I’ll focus on evolution and algorithm.

Read more of this post

Local maxima and the fallacy of jumping to fixed-points

An economist and a computer scientist are walking through the University of Chicago campus discussing the efficient markets hypothesis. The computer scientist spots something on the pavement and exclaims: “Look at that $20 on the ground — seems we’ll be getting a free lunch today!”

The economist turns to her without looking down and replies: “Don’t be silly, that’s impossible. If there was a $20 bill there then it would have been picked up already.”

This is the fallacy of jumping to fixed-points.

In this post I want to discuss both the importance and power of local maxima, and the dangers of simply assuming that our system is at a local maximum.

So before we dismiss the economist’s remark with laughter, let’s look at a more convincing discussion of local maxima that falls prey to the same fallacy. I’ll pick on one of my favourite YouTubers, THUNK:

In his video, THUNK discusses a wide range of local maxima and contrasts them with the intended global maximum (or more desired local maxima). He first considers a Roomba vacuum cleaner that is trying to maximize the area that it cleans but gets stuck in the local maximum of his chair’s legs. And then he goes on to discuss similar cases in physics, chemistry, evolution, psychology, and culture.

It is a wonderful set of examples and a nice illustration of the power of fixed-points.

But given that I write so much about algorithmic biology, let’s focus on his discussion of evolution. THUNK describes evolution as follows:

Evolution is a sort of hill-climbing algorithm. One that has identified local maxima of survival and replication.

This is a common characterization of evolution. And it seems much less silly than the economist passing up $20. But it is still an example of the fallacy of jumping to fixed-points.

My goal in this post is to convince you that THUNK describing evolution and the economist passing up $20 are actually using the same kind of argument. Sometimes this is a very useful argument, but sometimes it is just a starting point that without further elaboration becomes a fallacy.
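To make the hill-climbing characterization concrete, here is a minimal sketch of a greedy climber on a toy one-dimensional landscape. The fitness function and starting points are my own illustrative assumptions (not from THUNK's video or the economist's anecdote):

```python
# A minimal hill climber on a toy landscape with two peaks:
# a local peak near x = 2 and a higher global peak near x = 8.
# Both the landscape and the starting points are illustrative assumptions.

def fitness(x):
    """Toy fitness: two triangular peaks of height 3 (at x=2) and 5 (at x=8)."""
    return max(0, 3 - abs(x - 2)) + max(0, 5 - abs(x - 8))

def hill_climb(x, step=1):
    """Greedily move to the fittest neighbour until none is fitter."""
    while True:
        best = max([x - step, x, x + step], key=fitness)
        if best == x:
            return x  # a fixed-point: no better neighbour exists
        x = best

print(hill_climb(0))  # starts in the lower basin -> stuck at x = 2
print(hill_climb(6))  # starts in the higher basin -> reaches x = 8
```

Started in the wrong basin of attraction, the climber halts at the lower peak. It has found a fixed-point, but jumping from "it halted" to "it found the best outcome" is exactly the fallacy at issue.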

Read more of this post

Quick introduction: the algorithmic lens

Computers are a ubiquitous tool in modern research. We use them for everything from running simulation experiments and controlling physical experiments to analyzing and visualizing data. For almost any field ‘X’ there is probably a subfield of ‘computational X’ that uses and refines these computational tools to further research in X. This is very important work and I think it should be an integral part of all modern research.

But this is not the algorithmic lens.

In this post, I will try to give a very brief description (or maybe just a set of pointers) for the algorithmic lens. And of what we should imagine when we see an ‘algorithmic X’ subfield of some field X.

Read more of this post

From perpetual motion machines to the Entscheidungsproblem

There seems to be a tendency to use the newest technology of the day as a metaphor for making sense of our hardest scientific questions. These metaphors are often vague and imprecise. They tend to overly simplify the scientific question and also misrepresent the technology. This isn’t useful.

But the pull of this metaphor also tends to transform the technical disciplines that analyze our newest tech into fundamental disciplines that analyze our universe. This was the case for many aspects of physics, and I think it is currently happening with aspects of theoretical computer science. This is very useful.

So, let’s go back in time to the birth of modern machines. To the water wheel and the steam engine.

I will briefly sketch how the science of steam engines developed and how it dealt with perpetual motion machines. From here, we can jump to the analytic engine and the modern computer. I’ll suggest that the development of computer science has followed a similar path — with the Entscheidungsproblem and its variants serving as our perpetual motion machine.

The science of steam engines successfully universalized itself into thermodynamics and statistical mechanics. These are seen as universal disciplines that are used to inform our understanding across the sciences. Similarly, I think that we need to universalize theoretical computer science and make its techniques more common throughout the sciences.

Read more of this post

Reductionism: to computer science from philosophy

A biologist and a mathematician walk together into their joint office to find the rubbish bin on top of the desk and on fire. The biologist rushes out, grabs a fire extinguisher, puts out the blaze, returns the bin to the floor and they both start their workday.

The next day, the same pair return to their office to find the rubbish bin in its correct place on the floor but again on fire. This time the mathematician springs to action. She takes the burning bin, puts it on the table, and starts her workday.

The biologist is confused.

Mathematician: “Don’t worry, I’ve reduced the problem to a previously solved case.”

What’s the moral of the story? Clearly, it’s that reductionism is “[o]ne of the most used and abused terms in the philosophical lexicon.” At least it is abused enough for this sentiment to make the opening line of Ruse’s (2005) entry in the Oxford Companion to Philosophy.

All of this was not apparent to me.

I underestimated the extent of disagreement about the meaning of reductionism among people who are saying serious things. A disagreement that goes deeper than the opening joke or the distinction between ontological, epistemological, methodological, and theoretical reductionism. Given how much I’ve written about the relationship between reductive and effective theories, it seems important for me to sort out how people read ‘reductive’.

Let me paint the difference that I want to discuss in the broadest stroke with reference to the mind-body problem. Both of the examples I use are purely illustrative and I do not aim to endorse either. There is one sense in which reductionism uses reduce in the same way as ‘reduce, reuse, and recycle’: i.e. reduce = use less, eliminate. It is in this way that behaviourism is a reductive account of the mind, since it (aspires to) eliminate the need to refer to hidden mental, rather than just behavioural, states. There is a second sense in which reductionism uses reducere, or literally from Latin: to bring back. It is in this way that the mind can be reduced to the brain; i.e. discussions of the mind can be brought back to discussions of the brain, and the mind can be taken as fully dependent on the brain. I’ll expand more on this sense throughout the post.

In practice, the two senses above are often conflated and intertwined. For example, instead of saying that the mind is fully dependent on the brain, people will often say that the mind is nothing but the brain, or nothing over and above the brain. When doing this, they’re doing at least two different things. First, they’re claiming to have eliminated something. And second, conflating reduce and reducere. This observation of conflation is similar to my claim that Galileo conflated idealization and abstraction in his book-keeping analogy.

And just like with my distinction between idealization and abstraction, to avoid confusion, the two senses of reductionism should be kept conceptually separate. As before, I’ll make this clear by looking at how theoretical computer science handles reductions. A study in algorithmic philosophy!

In my typical arrogance, I will rename the reduce-concept as eliminativism. And based on its agreement with theoretical computer science, I will keep the reducere-concept as reductionism.
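The reducere sense can be sketched with a classic example from theoretical computer science: deciding Independent Set by bringing it back to Vertex Cover. The brute-force decider and graph encoding below are my own illustrative choices, but the reduction itself is the textbook one:

```python
from itertools import combinations

def has_vertex_cover(vertices, edges, k):
    """Brute-force decider for Vertex Cover: the 'previously solved case'."""
    for cover in combinations(vertices, k):
        if all(u in cover or v in cover for u, v in edges):
            return True
    return False

def has_independent_set(vertices, edges, k):
    """Reducere, not reduce: the Independent Set question is not
    eliminated, it is brought back to Vertex Cover. S is an independent
    set of size k exactly when its complement is a vertex cover of
    size |V| - k."""
    return has_vertex_cover(vertices, edges, len(vertices) - k)

# A triangle: any single vertex is independent, but no pair is.
V = [1, 2, 3]
E = [(1, 2), (2, 3), (1, 3)]
print(has_independent_set(V, E, 1))  # True
print(has_independent_set(V, E, 2))  # False
```

Note that nothing about independent sets is eliminated or explained away here; the question is simply brought back to a solved one, like the mathematician's burning bin.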
Read more of this post

Plato and the working mathematician on Truth and discourse

Plato’s writing and philosophy are widely studied in colleges, and often turned to as founding texts of western philosophy. But if we went out looking for people that embraced the philosophy — if we went out looking for actual Platonists — then I think we would come up empty-handed. Or maybe not?

A tempting counter-example is the mathematician.

It certainly seems that to do mathematics, it helps to imagine the objects that you’re studying as inherently real but in a realm that is separate from your desk, chair and laptop. I am certainly susceptible to this thinking. Some mathematicians might even claim that they are mathematical platonists. But there are sometimes reasons to doubt the seriousness of this claim. As Reuben Hersh wrote in Some Proposals for Reviving the Philosophy of Mathematics:

the typical “working mathematician” is a Platonist on weekdays and a formalist on Sundays. That is, when he is doing mathematics, he is convinced that he is dealing with an objective reality whose properties he is attempting to determine. But then, when challenged to give a philosophical account of this reality, he finds it easiest to pretend that he does not believe in it after all.

What explains this discrepancy? Is mathematical platonism — or a general vague idealism about mathematical objects — compatible with the actual philosophy attributed to Plato? This is the gist of a question that Conifold asked on the Philosophy StackExchange almost 4 years ago.

In this post, I want to revisit and share my answer. This will let us contrast mathematical platonism with a standard reading of Plato’s thought. After, I’ll take some helpful lessons from postmodernism and consider an alternative reading of Plato. Hopefully this PoMo Plato can suggest some fun thoughts on the old debate on discovery vs invention in mathematics, and better flesh out my Kantian position on the Church-Turing thesis.

Read more of this post

Minimal models for explaining unbounded increase in fitness

On a prior version of my paper on computational complexity as an ultimate constraint, Hemachander Subramanian made a good comment and question:

Nice analysis Artem! If we think of the fitness as a function of genes, interactions between two genes, and interactions between three genes and so on, your analysis using epistasis takes into account only the interactions (second order and more). The presence or absence of the genes themselves (first order) can change the landscape itself, though. Evolution might be able to play the game of standing still as the landscape around it changes until a species is “stabilized” by finding itself in a peak. The question is would traversing these time-dependent landscapes for optima is still uncomputable?

And although I responded to his comment in the bioRxiv Disqus thread, it seems that comments are version locked and so you cannot see Hema’s comment anymore on the newest version. As such, I wanted to share my response on the blog and expand a bit on it.

Mostly this will be an incomplete argument for why biologists should care about worst-case analysis. I’ll have to expand on it more in the future.
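To see why rugged landscapes invite worst-case thinking, consider a sketch in the spirit of Kauffman's NK model. This is my own minimal construction, not the model from the paper: each site's fitness contribution depends on its own allele and those of K cyclic neighbours, so K controls the amount of epistasis.

```python
from itertools import product
import random

def count_local_peaks(N, K, seed=0):
    """Count local fitness peaks in a random NK-style landscape.

    Each of the N sites contributes a random fitness value that depends
    on its own allele and the alleles of its K cyclic neighbours.
    """
    rng = random.Random(seed)
    contributions = [
        {s: rng.random() for s in product((0, 1), repeat=K + 1)}
        for _ in range(N)
    ]

    def fitness(g):
        return sum(
            contributions[i][tuple(g[(i + j) % N] for j in range(K + 1))]
            for i in range(N)
        )

    peaks = 0
    for g in product((0, 1), repeat=N):  # enumerate all 2^N genotypes
        f = fitness(g)
        # one-bit mutational neighbours of g
        neighbours = (g[:i] + (1 - g[i],) + g[i + 1:] for i in range(N))
        if all(f >= fitness(n) for n in neighbours):
            peaks += 1
    return peaks

print(count_local_peaks(N=10, K=0))  # no epistasis: exactly one peak
print(count_local_peaks(N=10, K=3))  # epistasis: typically several peaks
```

With K = 0 every site can be optimized independently, so the landscape is smooth and has a single peak that any local search finds quickly. Once epistasis enters, the number of peaks grows and which one a population reaches depends on where it starts — the average case stops being representative.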

Read more of this post

Techne and Programming as Analytic Philosophy

This week, as I was assembling furniture — my closest approach to a traditional craft — I was listening to Peter Adamson interviewing his twin brother Glenn Adamson about craft and material intelligence. Given that this interview was on the History of Philosophy Without Any Gaps podcast, at some point, the brothers steered the conversation to Plato. In particular, to Plato’s high regard for craft or — in its Greek form — techne.

For Peter, Plato “treats techne, or craft, as a paradigm for knowledge. And a lot of the time in the Socratic dialogues, you get the impression that what Socrates is suggesting is that we need to find a craft or tekne for virtue or ethics — like living in the world — that is more or less like the tekne that say the carpenter has.” Through this, the Adamson twins proposed a view of craft and philosophy as two sides of the same coin.

Except, unlike the carpenter and her apprentice, Plato has Socrates trying to force his interlocutors to formulate their knowledge in propositional terms and not just live it. It is on this point that I differ from Peter Adamson.

The good person practices the craft of ethics: of shaping their own life and particular circumstances into the good life. Their wood is their own existence and their chair is the good life. The philosopher, however, aims to make the implicit (or semi-implicit) knowledge of the good person into explicit terms. To uncover and specify the underlying rules and regularities. And the modern philosopher applies these same principles to other domains, not just ethics. Thus, if I had to give an incomplete definition for this post: philosophy is the art of turning implicit knowledge into propositional form. Analytic philosophy aims for that propositional form to be formal.

But this is also what programmers do.

In this post, I want to convince you that it is fruitful to think of programming as analytic philosophy. In the process, we’ll have to discuss craft and the history of its decline. Of why people (wrongly) think that a professor is ‘better’ than a carpenter.
Read more of this post

Hobbes on knowledge & computer simulations of evolution

Earlier this week, I was at the Second Joint Congress on Evolutionary Biology (Evol2018). It was overwhelming, but very educational.

Many of the talks were about very specific evolutionary mechanisms in very specific model organisms. This diversity of questions and approaches to answers reminded me of the importance of bouquets of heuristic models in biology. But what made this particularly overwhelming for me as a non-biologist was the lack of a unifying formal framework to make sense of what was happening. Without the encyclopedic knowledge of a good naturalist, I had a very difficult time linking topics to each other. I was experiencing the pluralistic nature of biology. This was stressed by Laura Nuño De La Rosa‘s slide that contrasts the pluralism of biology with the theory reduction of physics:

That’s right: to highlight the pluralism, there were great talks from philosophers of biology alongside all the experimental and theoretical biology at Evol2018.

As I’ve discussed before, I think that theoretical computer science can provide the unifying formal framework that biology needs. In particular, the cstheory approach to reductions is the more robust (compared to physics) notion of ‘theory reduction’ that a pluralistic discipline like evolutionary biology could benefit from. However, I still don’t have any idea of how such a formal framework would look in practice. Hence, throughout Evol2018 I needed refuge from the overwhelming overstimulation of organisms and mechanisms that were foreign to me.

One of the places I sought refuge was in talks on computational studies. There, I heard speakers emphasize several times that they weren’t “just simulating evolution” but that their programs were evolution (or evolving) in a computer. Not only were they looking at evolution in a computer, but this model organism gave them an advantage over other systems because of its transparency: they could track every lineage, every offspring, every mutation, and every random event. Plus, computation is cheaper and easier than culturing E. coli, brewing yeast, or raising fruit flies. And just like those model organisms, computational models could test evolutionary hypotheses and generate new ones.

This defensive emphasis surprised me. It suggested that these researchers have often been questioned on the usefulness of their simulations for the study of evolution.

In this post, I want to reflect on some reasons for such questioning.

Read more of this post