The science and engineering of biological computation: from process to software to DNA-based neural networks

In the earlier days of TheEGG, I used to write extensively about the themes of some of the smaller conferences and workshops that I attended. One of the first such workshops I blogged about in detail was the 2nd workshop on Natural Algorithms and the Sciences in May 2013. That spawned an eight-post series that I closed with a vision for a path toward an algorithmic theory of biology. In the six years since, I’ve been following that path. But I have fallen out of the habit of writing summary posts about the workshops that I attend.

Since my recent trip to the Santa Fe Institute for the “What is biological computation?” workshop (11 – 13 September 2019) brought me full circle in thinking about algorithmic biology, I thought I’d rekindle the habit of post-workshop blogging. During this SFI workshop — unlike the 2013 workshop in Princeton — I was live tweeting. So if you prefer my completely raw, unedited impressions in tweet form then you can take a look at those threads for Wednesday (14 tweets), Thursday (15 tweets), and Friday (31 tweets). Last week, I wrote about the first day (Wednesday): Elements of biological computation & stochastic thermodynamics of life.

This week, I want to go through the shorter second day and the presentations by Luca Cardelli, Stephanie Forrest, and Lulu Qian.

As before, it is also important to note that this is the workshop through my eyes. So this retelling is subject to the limits of my understanding, notes, and recollection. And as I procrastinate more and more on writing up the story, that recollection becomes less and less accurate.

Read more of this post

Elements of biological computation & stochastic thermodynamics of life

This week, I was visiting the Santa Fe Institute for a workshop organized by Albert Kao, Jessica Flack, and David Wolpert on “What is biological computation?” (11 – 13 September 2019). It was an ambitious question and I don’t think that we were able to answer it in just three days of discussion, but I think that we all certainly learnt a lot.

At least, I know that I learned a lot of new things.

The workshop had around 34 attendees from across the world, but from the reaction on twitter it seems like many more would have been eager to attend. Hence, both to help synchronize the memory networks of all the participants and to share with those who couldn’t attend, I want to use this series of blog posts to jot down some of the topics that were discussed at the meeting.

During the conference, I was live tweeting. So if you prefer my completely raw, unedited impressions in tweet form then you can take a look at those threads for Wednesday (14 tweets), Thursday (15 tweets), and Friday (31 tweets). The workshop itself was organized around discussion, and the presentations were only seeds. Unfortunately, my live tweeting and this post are primarily limited to just the presentations. But I will follow up with some synthesis and reflection in the future.

Due to the vast amount discussed during the workshop, I will focus this post on just the first day. I’ll follow with posts on the other days later.

It is also important to note that this is the workshop through my eyes. And thus this retelling is subject to the limits of my understanding, notes, and recollection. In particular, I wasn’t able to follow the stochastic thermodynamics that dominated the afternoon of the first day. And although I do provide some retelling, I hope that I can convince one of the experts to provide a more careful blog post on the topic.

Read more of this post

Web of C-lief: conjectures vs. model assumptions vs. scientific beliefs

A sketch of the theoretical computer science Web of C-lief weaved by the non-contradiction spider.

In his 1951 paper on the “Two Dogmas of Empiricism”, W.V.O. Quine introduced the Web of Belief as a metaphor for his holistic epistemology of scientific knowledge. With this metaphor, Quine aimed to give an alternative to the reductive atomising epistemology of the logical empiricists. For Quine, no “fact” is an island and no experiment can be focused in to resolve just one hypothesis. Instead, each of our beliefs forms part of an interconnected web and when a new belief conflicts with an existing one then this is a signal for us to refine some belief. But this signal does not unambiguously single out a specific belief that we should refine. It only picks out a set of beliefs that are incompatible with our new one, or that, if refined, could bring our belief system back into coherence. We then use alternative mechanisms like simplicity or minimality (or some aesthetic consideration) to choose which belief to update. Usually, we are more willing to give up beliefs that are peripheral to the web — that are connected to or change fewer other beliefs — than the beliefs that are central to our web.

In this post, I want to play with Quine’s web of belief metaphor in the context of science. This will force us to restrict it to specific domains instead of the grand theory that Quine intended. From this, I can then adapt the metaphor from belief in science to c-liefs in mathematics. This will let me discuss how complexity class separation conjectures are structured in theoretical computer science and why this is fundamentally different from model assumptions in natural science.

So let’s start with a return to the relevant philosophy.

Read more of this post

Generating random power-law graphs

‘Power-law’ is one of the biggest buzzwords in complexology. Almost everything is a power-law. I’ve even used it to sell my own work. But most work that deals in power-laws tends to lack rigour. And just establishing that something is a power-law shouldn’t make us feel that it is more connected to something else that is a power-law. Cosma Shalizi — the great critic of sloppy thinking in complexology — has an insightful passage on power-laws:

[T]here turn out to be nine and sixty ways of constructing power laws, and every single one of them is right, in that it does indeed produce a power law. Power laws turn out to result from a kind of central limit theorem for multiplicative growth processes, an observation which apparently dates back to Herbert Simon, and which has been rediscovered by a number of physicists (for instance, Sornette). Reed and Hughes have established an even more deflating explanation (see below). Now, just because these simple mechanisms exist, doesn’t mean they explain any particular case, but it does mean that you can’t legitimately argue “My favorite mechanism produces a power law; there is a power law here; it is very unlikely there would be a power law if my mechanism were not at work; therefore, it is reasonable to believe my mechanism is at work here.” (Deborah Mayo would say that finding a power law does not constitute a severe test of your hypothesis.) You need to do “differential diagnosis”, by identifying other, non-power-law consequences of your mechanism, which other possible explanations don’t share. This, we hardly ever do.

The curse of this multiple-realizability comes up especially when power-laws intersect with the other great field of complexology: networks.

I used to be very interested in this intersection. I was especially excited about evolutionary games on networks. But I was worried about some of the arbitrary-seeming approaches in the literature to generating random power-law graphs. So before starting any projects with them, I took a look into my options. Unfortunately, I didn’t go further with the exploration.

Recently, Raoul Wadhwa has gone much more in-depth in his thinking about graphs and networks. So I thought I’d share some of my old notes on generating random power-law graphs in the hope that they might be useful to Raoul. These notes are half-baked and outdated, but maybe still fun.

Hopefully, you will find them entertaining, too, dear reader.
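To give a flavour of what those notes touch on, here is a quick Python sketch (my own illustration for this round-up, not a recipe lifted from the notes; the networkx functions and parameter values are just one arbitrary set of choices) of two standard ways to get a random graph with power-law degrees:

```python
# Two standard constructions of random graphs with power-law degree distributions.
# This is a hedged sketch for illustration, not the specific methods from my old notes.
import networkx as nx

n = 1000  # number of nodes; arbitrary choice for demonstration

# Mechanism 1: preferential attachment (Barabasi-Albert). The power law emerges
# from a growth process in which new nodes prefer to attach to high-degree nodes.
G_ba = nx.barabasi_albert_graph(n, m=2)

# Mechanism 2: configuration model with a prescribed power-law degree sequence.
# Here the power law is put in by hand and the wiring is otherwise random.
degrees = [max(1, round(d)) for d in nx.utils.powerlaw_sequence(n, exponent=2.5)]
if sum(degrees) % 2 == 1:       # the configuration model needs an even degree sum
    degrees[0] += 1
G_cm = nx.configuration_model(degrees)
G_cm = nx.Graph(G_cm)                            # collapse parallel edges
G_cm.remove_edges_from(nx.selfloop_edges(G_cm))  # drop self-loops

print(G_ba.number_of_nodes(), G_ba.number_of_edges())
print(G_cm.number_of_nodes(), G_cm.number_of_edges())
```

Juxtaposing the two is exactly Shalizi’s point: both constructions produce a power-law degree distribution, but through completely different mechanisms, so observing the power law alone cannot tell them apart.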

Read more of this post

Closing the gap between quantum and deterministic query complexity for easy to certify total functions

Recently, trying to keep to my weekly post schedule, I’ve been a bit strapped for inspiration. As such, I’ve posted a few times on a major topic from my past life: quantum query complexity. I’ve mostly tried to describe some techniques for (lower) bounding query complexity, like the negative adversary method and span programs. But I’ve never really shown how to use these methods to actually set up interesting bounds.

Since I am again short of a post, I thought that this week I’d share a simple proof of a bound that these techniques make possible. This is based on an old note I wrote on 19 April 2011.

One of the big conjectures in quantum query complexity — at least a half decade ago when I was worrying about this topic — is that quantum queries give you at most a quadratic speedup over deterministic queries for total functions. In symbols: D(f) = O(Q^2(f)). Since Grover’s algorithm can give us a quadratic quantum speed-up for arbitrary total functions, this conjecture basically says: you can’t do better than Grover.

In this post, I’ll prove a baby version of this conjecture.

Let’s call a Boolean total function easy to certify if one side of the function has certificates of constant length (that is, constant certificate complexity on that side). I’ll prove that for easy-to-certify total functions, D(f) = O(Q^2(f)).
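To preview where such a bound can come from, here is one standard chain of inequalities, writing C^1(f) for the 1-certificate complexity and bs(f) for the block sensitivity of f. This is a hedged sketch under the assumption that the 1-side certificates have size at most a constant c; the details may differ from the adversary-method argument in the post itself.

```latex
% Hedged sketch; assumes f is total and C^{1}(f) \le c for some constant c.
\begin{align*}
  D(f) &\le C^{1}(f)\,\mathrm{bs}(f)  && \text{repeatedly query a consistent 1-certificate} \\
       &\le c\,\mathrm{bs}(f)         && \text{easy-to-certify assumption} \\
       &= O\big(Q(f)^{2}\big)         && \text{since } Q(f) = \Omega\big(\sqrt{\mathrm{bs}(f)}\big)
\end{align*}
```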

This is not an important result, but I think it is a cute illustration of standard techniques. And so that it doesn’t get lost in my old pdf, I thought I’d finally convert it to a blog post. Think of this as a simple application of the adversary method.

Read more of this post

The gene-interaction networks of easy fitness landscapes

Since evolutionary fitness landscapes have been a recurrent theme on TheEGG, I want to return, yet again, to the question of finding local peaks in fitness landscapes. In particular, to the distinction between easy and hard fitness landscapes.

Roughly, in easy landscapes, we can find local peaks quickly and in hard ones, we cannot. But this is very vague. To be a little more precise, I have to borrow the notion of orders of growth from the asymptotic analysis standard in computer science. A family of landscapes indexed by a size n (usually corresponding to the number of genes in the landscape) is easy if a local fitness optimum can be found in the landscapes in time polynomial in n and hard otherwise. In the case of hard landscapes, we can’t guarantee that a local fitness peak will be found and thus can sometimes reason from a state of perpetual maladaptive disequilibrium.

In Kaznatcheev (2019), I introduced this distinction to biology. Since hard landscapes have more interesting properties that are more challenging to theoretical biologists’ intuitions, I focused more on them. This was read — perhaps rightly — as me advocating for the existence or ubiquity of hard landscapes, and as implying that if hard landscapes don’t occur in nature then my distinction is pointless. But I don’t think this is the most useful reading.

It certainly would be fun if hard landscapes were a feature of nature since they give us a new way to approach certain puzzles like the maintenance of cooperation, the evolution of costly learning, or open-ended evolution. But this is an empirical question. What isn’t a question is that hard landscapes are a feature of our mental and mathematical models of evolution. As such, all — or most, whatever that means — fitness landscapes being easy is still exciting for me. It means that the easy vs hard distinction can push us to refine our mental models such that if only easy landscapes occur in nature then our models should only be able to express easy landscapes.

In other words, using computational complexity to build upper-bounds arguments (that on certain classes of landscapes, local optima can be found efficiently) can be just as fun as lower-bounds arguments (that on certain classes of landscapes, evolution requires at least a super-polynomial effort to find any local fitness peak). However, apart from a brief mention of smooth landscapes, I did not stress the upper-bounds in Kaznatcheev (2019).
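As a toy illustration of an upper bound (my own minimal example, not the framework of the papers): on a fully additive landscape, where each gene contributes to fitness independently, any fitter-mutant adaptive walk reaches the unique peak in at most n steps, since each locus can be flipped beneficially at most once.

```python
# Minimal sketch: a fitter-mutant adaptive walk on an additive (smooth) landscape.
# This toy is my own illustration, not the constraint-based framework of
# Kaznatcheev, Cohen & Jeavons (2019). Each locus contributes independently,
# so each locus is flipped beneficially at most once and the walk ends in <= n steps.
import random

n = 20
weights = [random.uniform(0.1, 1.0) for _ in range(n)]  # fitness contribution of allele 1 at each locus

def fitness(genotype):
    return sum(w for w, g in zip(weights, genotype) if g == 1)

genotype = [random.randint(0, 1) for _ in range(n)]
steps = 0
while True:
    current = fitness(genotype)
    # all single-locus mutants that are strictly fitter than the current genotype
    better = [i for i in range(n)
              if fitness(genotype[:i] + [1 - genotype[i]] + genotype[i + 1:]) > current]
    if not better:
        break  # local (here also global) peak reached
    i = random.choice(better)  # take an arbitrary fitter single-gene mutant
    genotype[i] = 1 - genotype[i]
    steps += 1

print(f"reached the peak after {steps} steps (guaranteed at most {n} on an additive landscape)")
```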

Now, together with David Cohen and Peter Jeavons, I’ve taken this next step — at least in the cstheory context; we still need to write up the biology. So in this post, I want to talk briefly about a biological framing of Kaznatcheev, Cohen & Jeavons (2019) and the kind of fitness landscapes that are easy for evolution.

Read more of this post

Span programs as a linear-algebraic representation of functions

I feel like TheEGG has been a bit monotone in the sort of theoretical computer science that I’ve been writing about recently. In part, this has been due to time constraints and the pressure of the weekly posting schedule (it has now been over a year with a post every calendar week); and in part due to my mind being too fixated on algorithmic biology.

So for this week, I want to change things up a bit. I want to discuss some of the math behind a success of cstheory applied to nature: quantum computing. It’s been six years since I blogged about quantum query complexity and the negative adversary method for lower bounding it. And it has been close to eight years since I’ve worked on the topic.

But I did promise to write about span programs — a technique used to reason about query complexity. So in this post, I want to shift gears to quantum computing and discuss span programs. I doubt this is useful to thinking about evolution, but it never hurts to discuss a cool linear-algebraic representation of functions.
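As a tiny preview (my own toy example, not a construction from the post): a span program fixes a target vector and a set of input vectors labelled by literals, and it accepts an input exactly when the target lies in the span of the vectors whose labels are satisfied. The snippet below checks this condition for a two-bit AND:

```python
# Toy span program for AND on two bits (an illustrative example of the definition,
# not a construction taken from the post). The target tau lies in the span of the
# available vectors iff both x_0 = 1 and x_1 = 1.
import numpy as np

tau = np.array([1.0, 1.0])         # target vector
vectors = {                        # input vectors, each labelled by a literal (index, value)
    (0, 1): np.array([1.0, 0.0]),  # available when x_0 == 1
    (1, 1): np.array([0.0, 1.0]),  # available when x_1 == 1
}

def accepts(x):
    available = [v for (i, b), v in vectors.items() if x[i] == b]
    if not available:
        return False
    A = np.column_stack(available)
    # tau is in the column span of A iff appending tau does not increase the rank
    return np.linalg.matrix_rank(np.column_stack([A, tau])) == np.linalg.matrix_rank(A)

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, accepts(x))  # True only for (1, 1): the span program computes AND
```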

I started writing this post for the CSTheory Community Blog. Unfortunately, that blog is largely defunct. So, after six years, I decided to post on TheEGG instead.

Please humour me, dear reader.

Read more of this post

Quick introduction: the algorithmic lens

Computers are a ubiquitous tool in modern research. We use them for everything from running simulation experiments and controlling physical experiments to analyzing and visualizing data. For almost any field ‘X’ there is probably a subfield of ‘computational X’ that uses and refines these computational tools to further research in X. This is very important work and I think it should be an integral part of all modern research.

But this is not the algorithmic lens.

In this post, I will try to give a very brief description (or maybe just a set of pointers) of the algorithmic lens, and of what we should imagine when we see an ‘algorithmic X’ subfield of some field X.

Read more of this post

From perpetual motion machines to the Entscheidungsproblem

There seems to be a tendency to use the newest technology of the day as a metaphor for making sense of our hardest scientific questions. These metaphors are often vague and imprecise. They tend to overly simplify the scientific question and also misrepresent the technology. This isn’t useful.

But the pull of this metaphor also tends to transform the technical disciplines that analyze our newest tech into fundamental disciplines that analyze our universe. This was the case for many aspects of physics, and I think it is currently happening with aspects of theoretical computer science. This is very useful.

So, let’s go back in time to the birth of modern machines. To the water wheel and the steam engine.

I will briefly sketch how the science of steam engines developed and how it dealt with perpetual motion machines. From here, we can jump to the analytic engine and the modern computer. I’ll suggest that the development of computer science has followed a similar path — with the Entscheidungsproblem and its variants serving as our perpetual motion machine.

The science of steam engines successfully universalized itself into thermodynamics and statistical mechanics. These are seen as universal disciplines that are used to inform our understanding across the sciences. Similarly, I think that we need to universalize theoretical computer science and make its techniques more common throughout the sciences.

Read more of this post

Quick introduction: Problems and algorithms

For this week, I want to try a new type of post: a quick introduction to a standard topic that might not be familiar to all readers and that could be useful later on. The goal is to write a shorter post than usual and provide a launching point for future, more detailed discussions of a topic. Let’s see if I can stick to 500 words in the future — although this post is 933 words, so maybe not this time.

For our first topic, let’s turn to theoretical computer science.

There are many ways to subdivide theoretical computer science, but one of my favorite divisions is into the two battling factions of computational complexity and algorithm design. To sketch a caricature: the former focus on computational problems and lower bounds, and the latter focus on algorithms and upper bounds. The latter have counterparts throughout science, but I think the former are much less frequently encountered outside theoretical computer science. I want to sketch the division between these two fields. In the future, I’ll explain how it can be useful for reasoning about evolutionary biology.
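Before those definitions, a toy illustration of the split (my own example, not one from the post): the problem of sorting is just an input-output specification, while insertion sort is one particular algorithm solving it and thereby witnessing an O(n^2) upper bound; a lower bound, like the Ω(n log n) bound for comparison sorting, is instead a statement about every possible algorithm for the problem.

```python
# Toy illustration (my own, not from the post) of the problem/algorithm distinction.
from collections import Counter

def solves_sorting(xs, ys):
    """The sorting *problem* as a specification: is ys a valid output for input xs?"""
    return Counter(xs) == Counter(ys) and all(a <= b for a, b in zip(ys, ys[1:]))

def insertion_sort(xs):
    """One particular *algorithm* for the problem, witnessing an O(n^2) upper bound."""
    ys = []
    for x in xs:
        i = len(ys)
        while i > 0 and ys[i - 1] > x:  # scan back to find x's place
            i -= 1
        ys.insert(i, x)
    return ys

instance = [3, 1, 4, 1, 5, 9, 2, 6]
print(solves_sorting(instance, insertion_sort(instance)))  # True
```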

So let’s start with some definitions, or at least intuitions.

Read more of this post