Description before prediction: evolutionary games in oncology

As I discussed towards the end of an old post on cross-validation and prediction: we don’t always want to have prediction as our primary goal, or metric of success. In fact, I think that if a discipline has not found a vocabulary for its basic terms, a grammar for combining those terms, and a framework for collecting, interpreting, and/or translating experimental practice into those terms, then focusing on prediction can actually slow us down or push us in the wrong direction. To adapt Knuth: I suspect that premature optimization of predictive potential is the root of all evil.

We need to first have a good framework for describing and summarizing phenomena before we set out to build theories within that framework for predicting phenomena.

In this brief post, I want to ask if evolutionary games in oncology are ready for building predictive models, or if they still need to establish themselves as a good descriptive framework.
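To keep the question concrete, here is a minimal sketch of the kind of descriptive object at stake: replicator dynamics for a game between two cell types. This is my own illustration, not a model from any particular oncology study, and the payoff values are placeholder numbers.

```python
# Minimal sketch of an evolutionary game as a descriptive object:
# replicator dynamics for two cell types interacting according to a
# 2x2 payoff (fitness) matrix. All numbers are illustrative placeholders.
import numpy as np

A = np.array([[1.0, 0.5],   # fitness of type 0 against (type 0, type 1)
              [1.2, 0.8]])  # fitness of type 1 against (type 0, type 1)

def replicator_step(p, dt=0.01):
    """One Euler step of replicator dynamics for the proportion p of type 0."""
    x = np.array([p, 1 - p])
    fitness = A @ x                 # expected fitness of each type
    avg = x @ fitness               # population-average fitness
    dp = p * (fitness[0] - avg)     # replicator equation for type 0
    return p + dt * dp

p = 0.9
for _ in range(2000):
    p = replicator_step(p)
print(f"long-run proportion of type 0: {p:.3f}")
```

The point of such a model is first of all descriptive: the payoff matrix is a vocabulary for summarizing how cell types affect each other's fitness, before we ask whether the resulting trajectories predict anything.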

Read more of this post

Fighting about frequency and randomly generating fitness landscapes

A couple of months ago, I was in Cambridge for the Evolution Evolving conference. It was a lot of fun, and it was nice to catch up with some familiar faces and meet some new ones. My favourite talk was Karen Kovaka’s “Fighting about frequency”. It was an extremely well-delivered talk on the philosophy of science, and it engaged with a topic that has been very important to discussions of my own recent work, although in my case on a much smaller scale than the general phenomenon that Kovaka was concerned with.

Let me first set up my own teacup, before discussing the more general storm.

Recently, I’ve had a number of chances to present my work on computational complexity as an ultimate constraint on evolution. Some questions have come up again and again across these presentations. I want to address one of these persistent questions in this post.

How common are hard fitness landscapes?

This question has come up during review, presentations, and emails (most recently from Jianzhi Zhang’s reading group). I’ve spent some time addressing it in the paper. But it is not a question with a clear answer. So unsurprisingly, my comments have not been clear. Hence, I want to use this post to add some clarity.
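To make the question slightly more operational, here is a sketch of the kind of toy experiment that usually lurks behind it: generate fitness landscapes at random and see how long adaptive walks take on them before stalling at a local peak. The sketch below uses uncorrelated (house-of-cards-style) random fitness values on binary genotypes and fittest-mutant walks; it is only meant to illustrate the setup, not to reproduce the hardness measure from the paper.

```python
# Sketch: generate random fitness landscapes on {0,1}^n and measure how
# long fittest-mutant adaptive walks take before reaching a local peak.
# Uncorrelated (house-of-cards-style) fitness values; illustrative only.
import random

def random_landscape(n):
    """Assign an i.i.d. uniform fitness to every genotype of length n."""
    return {g: random.random() for g in range(2 ** n)}

def fittest_mutant_walk(f, n, start):
    """Follow the fittest single-bit mutant until a local peak; return #steps."""
    g, steps = start, 0
    while True:
        neighbours = [g ^ (1 << i) for i in range(n)]
        best = max(neighbours, key=f.get)
        if f[best] <= f[g]:
            return steps            # local peak reached
        g, steps = best, steps + 1

n, trials = 10, 200
walk_lengths = []
for _ in range(trials):
    f = random_landscape(n)
    walk_lengths.append(fittest_mutant_walk(f, n, random.randrange(2 ** n)))
print(f"mean walk length on n={n} random landscapes: "
      f"{sum(walk_lengths) / trials:.2f}")
```

Which random ensemble to sample from is, of course, itself contested, and that is part of why the question lacks a clear answer.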

Read more of this post

Hiding behind chaos and error in the double pendulum

If you want a visual intuition for just how unpredictable chaotic dynamics can be, then the go-to toy model is the double pendulum. There are lots of great simulations (and some physical implementations) of the double pendulum online. Recently, /u/abraxasknister posted such a simulation on the /r/physics subreddit and quickly attracted a lot of attention.

In their simulation, /u/abraxasknister has a fixed center (black dot) that the first mass (red dot) is attached to (by an invisible rigid massless bar). The second mass (blue dot) is then attached to the first mass (also by an invisible rigid massless bar). They then release these two masses from rest at some initial height and watch what happens.

The resulting dynamics are shown at right.

It is certainly unpredictable and complicated. Chaotic? Most importantly, it is obviously wrong.

But because the double pendulum is a famous chaotic system, some people did not want to acknowledge that there is an obvious mistake. They wanted to hide behind chaos: they claimed that for a complex system, we cannot possibly have intuitions about how the system should behave.

In this post, I want to discuss the error of hiding behind chaos, and how the distinction between microdynamics and global properties lets us catch /u/abraxasknister’s mistake.
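If you want to poke at this yourself, here is a minimal sketch of the kind of global check I have in mind, under the assumption that the relevant global property is conservation of total mechanical energy (no friction, rigid massless bars, released from rest): integrate the standard double-pendulum equations of motion and watch whether the total energy drifts. The masses, lengths, and initial angles below are illustrative.

```python
# Sketch: integrate a frictionless double pendulum and track total energy
# as a global sanity check on the simulation. Parameters are illustrative.
import numpy as np

m1 = m2 = 1.0   # point masses
l1 = l2 = 1.0   # rigid massless bar lengths
g = 9.81

def derivatives(state):
    """state = [theta1, omega1, theta2, omega2]; standard equations of motion."""
    t1, w1, t2, w2 = state
    d = t1 - t2
    den = 2 * m1 + m2 - m2 * np.cos(2 * d)
    a1 = (-g * (2 * m1 + m2) * np.sin(t1)
          - m2 * g * np.sin(t1 - 2 * t2)
          - 2 * np.sin(d) * m2 * (w2 ** 2 * l2 + w1 ** 2 * l1 * np.cos(d))) / (l1 * den)
    a2 = (2 * np.sin(d) * (w1 ** 2 * l1 * (m1 + m2)
          + g * (m1 + m2) * np.cos(t1)
          + w2 ** 2 * l2 * m2 * np.cos(d))) / (l2 * den)
    return np.array([w1, a1, w2, a2])

def energy(state):
    """Total mechanical energy: the global property that should stay constant."""
    t1, w1, t2, w2 = state
    ke = 0.5 * m1 * (l1 * w1) ** 2 + 0.5 * m2 * ((l1 * w1) ** 2 + (l2 * w2) ** 2
         + 2 * l1 * l2 * w1 * w2 * np.cos(t1 - t2))
    pe = -(m1 + m2) * g * l1 * np.cos(t1) - m2 * g * l2 * np.cos(t2)
    return ke + pe

def rk4_step(state, dt):
    """Classic fourth-order Runge-Kutta step."""
    k1 = derivatives(state)
    k2 = derivatives(state + dt / 2 * k1)
    k3 = derivatives(state + dt / 2 * k2)
    k4 = derivatives(state + dt * k3)
    return state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([2.0, 0.0, 1.0, 0.0])   # released from rest
e0, dt = energy(state), 0.001
for _ in range(20000):
    state = rk4_step(state, dt)
print(f"relative energy drift after {20000 * dt:.0f}s: "
      f"{abs(energy(state) - e0) / abs(e0):.2e}")
```

Whatever the microdynamics do, the energy of a frictionless double pendulum released from rest should stay (numerically close to) constant. That gives us a check on the simulation that does not require predicting any particular trajectory.
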
Read more of this post

Span programs as a linear-algebraic representation of functions

I feel like TheEGG has been a bit monotone in the sort of theoretical computer science that I’ve been writing about recently. In part, this has been due to time constraints and the pressure of the weekly posting schedule (it has now been over a year with a post every calendar week); and in part due to my mind being too fixated on algorithmic biology.

So for this week, I want to change things up a bit. I want to discuss some of the math behind a success of cstheory applied to nature: quantum computing. It’s been six years since I blogged about quantum query complexity and the negative adversary method for lower bounding it. And it has been close to eight years since I’ve worked on the topic.

But I did promise to write about span programs — a technique used to reason about query complexity. So in this post, I want to shift gears to quantum computing and discuss span programs. I doubt this is useful to thinking about evolution, but it never hurts to discuss a cool linear-algebraic representation of functions.
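Since span programs may be unfamiliar, here is a minimal sketch of the standard definition, restricted for simplicity to monotone span programs over the reals: a target vector together with vectors labelled by input bits, where the program accepts exactly when the target lies in the span of the vectors whose labels are satisfied. The AND/OR encodings below are the textbook examples; this is meant only to convey the flavour, not the quantum query applications.

```python
# Sketch of a span program: a target vector t and labelled vectors v_j.
# On input x, the program accepts iff t is in the span of the vectors whose
# labels (here: "bit i is 1") are satisfied. Over the reals for simplicity;
# span programs are usually defined over an arbitrary field.
import numpy as np

def accepts(target, labelled_vectors, x):
    """labelled_vectors: list of (i, v) meaning 'v is available when x[i] == 1'."""
    available = [v for i, v in labelled_vectors if x[i] == 1]
    if not available:
        return not np.any(target)   # only the zero vector is in the empty span
    A = np.column_stack(available)
    # target is in span(A) iff appending it does not increase the rank
    augmented = np.column_stack([A, target])
    return np.linalg.matrix_rank(augmented) == np.linalg.matrix_rank(A)

n = 3
# OR_n: target (1); each input bit contributes the vector (1).
or_program = (np.array([1.0]), [(i, np.array([1.0])) for i in range(n)])
# AND_n: target all-ones in R^n; bit i contributes the standard basis vector e_i.
and_program = (np.array([1.0] * n), [(i, np.eye(n)[i]) for i in range(n)])

for x in [(0, 0, 0), (1, 0, 1), (1, 1, 1)]:
    print(x, "OR:", accepts(*or_program, x), "AND:", accepts(*and_program, x))
```

For example, the AND program accepts only when every basis vector is available, i.e. only when every bit is 1; that is the sense in which the linear algebra ‘computes’ the Boolean function.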

I started writing this post for the CSTheory Community Blog. Unfortunately, that blog is largely defunct. So, after six years, I decided to post on TheEGG instead.

Please humour me, dear reader.

Read more of this post

Introduction to Algorithmic Biology: Evolution as Algorithm

As Aaron Roth wrote on Twitter — and as I bet with my career: “Rigorously understanding evolution as a computational process will be one of the most important problems in theoretical biology in the next century. The basics of evolution are many students’ first exposure to ‘computational thinking’ — but we need to finish the thought!”

Last week, I tried to continue this thought for Oxford students at a joint meeting of the Computational Society and Biological Society. On May 22, I gave a talk on algorithmic biology. I want to use this post to share my (shortened) slides as a pdf file and give a brief overview of the talk.

Figure: Winding path in a hard semi-smooth landscape.

If you didn’t get a chance to attend, maybe the title and abstract will get you reading further:

Algorithmic Biology: Evolution is an algorithm; let us analyze it like one.

Evolutionary biology and theoretical computer science are fundamentally interconnected. In the work of Charles Darwin and Alfred Russel Wallace, we can see the emergence of concepts that theoretical computer scientists would later hold as central to their discipline: ideas like asymptotic analysis, the role of algorithms in nature, distributed computation, and analogies from man-made to natural control processes. By recognizing evolution as an algorithm, we can continue to apply the mathematical tools of computer science to solve biological puzzles – to build an algorithmic biology.

One of these puzzles is open-ended evolution: why do populations continue to adapt instead of getting stuck at local fitness optima? Or alternatively: what constraint prevents evolution from finding a local fitness peak? Many solutions have been proposed to this puzzle, with most being proximal – i.e. depending on the details of the particular population structure. But computational complexity provides an ultimate constraint on evolution. I will discuss this constraint, and the positive aspects of the resultant perpetual maladaptive disequilibrium. In particular, I will explain how we can use this to understand both ongoing long-term evolution experiments in bacteria and the evolution of costly learning and cooperation in populations of complex organisms like humans.

Unsurprisingly, I’ve written about all these topics already on TheEGG, and so my overview of the talk will involve a lot of links back to previous posts. In this way, this post can serve as an analytic linkdex on algorithmic biology.
Read more of this post