Pairing tools and problems: a lesson from the methods of mathematics and the Entscheidungsproblem

Three weeks ago it was my lot to present at the weekly integrated mathematical oncology department meeting. Given the informal setting, I decided to grab one gimmick and run with it. I titled my talk: ‘2’. It was an overview of two recent projects that I’ve been working on: double public goods for acid-mediated tumour invasion, and edge effects in game theoretic dynamics of solid tumours. For the former, I considered two approximations: the limit as the number n of interaction partners is large, and the limit as n = 1 — so there are two interacting parties. But the numerology didn’t stop there: my real goal was to highlight a duality between tools or techniques and the problems we apply them to or the domains we use them in. As is popular at the IMO, the talk was live-tweeted with many unflattering photos and this great paraphrase (or was it a quote?) by David Basanta from my presentation’s opening.
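Before getting back to the talk: the gap between those two approximations is easy to play with numerically. Below is a minimal Python sketch of a generic linear public goods interaction (a toy stand-in, not the actual double goods model from the project) showing how the expected good received by a focal cell behaves at n = 1 versus large n, where it approaches the mean-field value b*p:

```python
from math import comb

def expected_good(n, p, b=1.0):
    """Expected public good reaching a focal free-rider whose n partners
    each produce independently with probability p; every producer adds
    b/(n + 1) to each member of the group (a toy linear good)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) * b * k / (n + 1)
               for k in range(n + 1))

p = 0.6
print(expected_good(1, p))    # n = 1: two interacting parties, gives b*p/2
print(expected_good(100, p))  # large n: approaches the mean-field value b*p
```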

Since I was rather sleep deprived from preparing my slides, I am not sure what I said exactly but I meant to say something like the following:

I don’t subscribe to the perspective that we should pick the best tool for the job. Instead, I try to pick the best tuple of job and tool given my personal tastes, competences, and intuitions. In doing so, I aim to push the tool slightly beyond its prior borders — usually with an incremental technical improvement — while also exploring a variant perspective — but hopefully still grounded in the local language — on some domain of interest. The job and tool march hand in hand.

In this post, I want to unpack this principle and follow it a little deeper into the philosophy of science. In the process, I will touch on the differences between endogenous and exogenous questions. I will draw some examples from my own work, but will rely primarily on methodological inspiration from pure math and the early days of theoretical computer science.

Read more of this post

Five motivations for theoretical computer science

There are some situations, perhaps lucky ones, where it is felt that an activity needs no external motivation or justification.  For the rest, it can be helpful to think of what the task at hand can be useful for. This of course doesn’t answer the larger question of what is worth doing, since it just distributes the burden somewhere else, but establishing these connections seems like a natural part of an answer to the larger question.

Along those lines, the following are five intellectual areas where theoretical computer science concepts and their development can be useful — therefore, a curiosity about these areas can provide some motivation for learning about those cstheory concepts or developing them. They are arranged from those likely most obvious to most people to the least so: technology, mathematics, science, society, and philosophy. This post could also serve as an homage to delayed gratification (perhaps with some procrastination mixed in), having finally been written up more than three years after first discussing it with Artem.

Read more of this post

Cataloging a year of blogging: the philosophical turn

Passion and motivation are strange and confusing facets of being. Many things about them feel paradoxical. For example, I really enjoy writing, categorizing, and — obviously, if you’ve read many of the introductory paragraphs on TheEGG — blabbing on far too long about myself. So you’d expect that I would have been extremely motivated to write up this index of posts from the last year. Yet I procrastinated — although in a mildly structured way — on it for most of last week, and beat myself up all weekend trying to force words into this textbox. A rather unpleasant experience, although it did let me catch up on some Batman cartoons from my childhood. Since you’re reading this now, I’ve succeeded and received my hit of satisfaction, but the high variance in my motivation to write baffles me.

More fundamentally, there is the paradox of agency. It feels like my motivations and passions are aspects of my character, deeply personal and defining. Yet, it is naive to assume that they are determined by my ego; if I take a step back, I can see how my friends, colleagues, and even complete strangers push and pull the passions and motivations that push and pull me. For example, I feel like TheEGG largely reflects my deep-seated personal interests, but my thoughts do not come from me alone; they are shaped by my social milieu — or, more dangerously, by Pavlov’s buzzer of my stats page, each view and comment and +1 conditioning my tastes. Is the heavy presence of philosophical content because I am interested in philosophy, or am I interested in philosophy because that is what people want to read? That is the tension that bothers me, but it is clear that my more philosophical posts are much more popular than the practical ones. If we measure in terms of views then in 2014 new cancer-related posts accounted for only 4.7% of the traffic (with 15 posts), the more abstract cstheory perspective on evolution accounted for 6.6% (with 5 posts), while the posts I discuss below accounted for 57.4% (the missing chunk of unity went to 2014 views of posts from 2012 and 2013). Maybe this is part of the reason why there were 24 philosophical posts, compared to the 20 practical posts I highlighted in the first part of this catalog.

Of course, this example is a little artificial, since although readership statistics are a fun distraction, they are not particularly relevant, just easy to quantify. Seeing the influences of the ideas I read is much more difficult, although I think these exercises in categorization can help uncover them. In this post, I review the more philosophical posts from last year, breaking them down less autobiographically and more thematically: interfaces and useful delusions; philosophy of the Church-Turing thesis; limits of science and dangers of mathematics; and personal reflections on philosophy and science. Let me know if you can find some coherent set of influences.

Read more of this post

Cataloging a year of blogging: cancer and biology

Welcome to 11111011111.

Another year has come to an end, and it is time to embrace tradition and reflect on the past twelve months. In fact, I will try to do one better and start a new tradition: cataloging a year of blogging.

Last year, I split up the 83 content-heavy posts of 2013 into nine categories in three themes: established applications of evolutionary game theory (ethnocentrism and the public good; and mathematical oncology), expanding from behavior to society and mind (representations and rationality for replicators; feedback between finance & economics and ecology & evolution; and learning, intelligence, and the social brain), and envisioning the algorithmic world (proof, automata, and physics; natural algorithms and biology; fitness landscapes and evolutionary equilibria; and metamodeling and the (algorithmic) philosophy of science). In 2014 there was a sharp decrease in the number of posts, with only 44 articles of new content (and the 3 posts cataloging 2013, so 47 total) — this was due to a nearly 4 month blogging silence in the middle of the year — but a quarter increase in readership, with 151,493 views compared to 2013’s 119,935 views. This time, I will need only two posts to survey the past year: this post for the practical and the next for the philosophical.

For me, the year was distributed between three cities: the usual suspects of Montreal and New York, and, as of October, Tampa, Florida, where I moved to work with David Basanta and Jacob Scott in the Integrated Mathematical Oncology department of the H. Lee Moffitt Cancer Center and Research Institute. A winter without snow is strange, but wearing shorts in December makes up for it; plus the sunsets over the Gulf of Mexico are absolutely beautiful. Unsurprisingly, this move has meant that the practical aspects of my focus have shifted almost completely to biology; cancer, in particular.

This post is about the biology and oncology articles that made up about half of last year’s content. Given the autobiographical turn of this post, it will be (loosely) structured around three workshops that I attended in 2014, and the online conversations and collaborations that TheEGG hosted.
Read more of this post

Transcendental idealism and Post’s variant of the Church-Turing thesis

One of the exciting things in reading philosophy, its history in particular, is experiencing the tension between different schools of thought. This excitement turns to beauty if a clear synthesis emerges to reconcile the conflicting ideas. In the middle to late 18th century, as the Age of Enlightenment was giving way to the Romantic era, the tension was between rationalism and empiricism and the synthesis came from Immanuel Kant. His thought went on to influence or directly shape much of modern philosophy, and if you browse the table of contents of philosophical journals today then you will regularly encounter hermeneutic titles like “Kant on <semi-obscure modern topic>”. In this regard, my post is in keeping with modern practice because it could have very well been titled “Kant on computability”.

As stressed before, I think that it is productive to look at important concepts from multiple philosophical perspectives. The exercise can provide us with an increased insight into both the school of thought that is our eyes, and the concept that we behold. In this case, the concept is the Church-Turing thesis, which states that anything that is computable is computable by a Turing machine. The perspective will be that of (a kind of) cognitivism — thought consists of algorithmic manipulation of mental states. This perspective can often be read directly into Turing, although Copeland & Shagrir (2013) better described him as a pragmatic noncognitivist. Hence, I prefer to attribute this view to Emil Post. Also, it would be simply too much of a mouthful to call it the Post-Turing variant of the Church-Turing thesis.
Read more of this post

Falsifiability and Gandy’s variant of the Church-Turing thesis

In 1936, two years after Karl Popper published the first German version of The Logic of Scientific Discovery and introduced falsifiability, Alonzo Church, Alan Turing, and Emil Post each published independent papers on the Entscheidungsproblem, introducing the lambda calculus, Turing machines, and Post-Turing machines as mathematical models of computation. The years after saw many more models, all of which were shown to be equivalent to each other in what they could compute. This was summarized in the Church-Turing thesis: anything that is computable is computable by a Turing machine. An almost universally accepted, but also incredibly vague, statement. Of course, such an important thesis has developed many variants, and exploring or contrasting their formulations can be a very insightful way to understand and contrast different philosophies.

I believe that the original and most foundational version of the thesis is what I called Kleene’s purely mathematical formulation. Delving into this variant allowed us to explore the philosophy of mathematics; Platonism; and the purpose, power, and limitations of proof. However, because of the popularity of physicalism and the authority of science, I doubt that Kleene’s is the most popular variant. Instead, when people think of the Church-Turing thesis, they often think of what is computable in the world around them. I like to associate this variant with Turing’s long-time friend and student — Robin Gandy. I want to explore Gandy’s physical variant of the Church-Turing thesis to better understand the philosophy of science, theory-based conceptions, and the limits of falsifiability. In particular, I want to address what seems to me like the common misconception that the Church-Turing thesis is falsifiable.
Read more of this post

Algorithmic Darwinism

The workshop on computational theories of evolution started off on Monday, March 17th with Leslie Valiant — one of the organizers — introducing his model of evolvability (Valiant, 2009). The name was originally meant to capture what type of complexity can be achieved through evolution. Unfortunately — especially at this workshop — evolvability already had a different, more popular meaning in biology: mechanisms that make an organism or species ‘better’ at evolving, in the sense of higher mutation rates, de novo genes, recombination through sex, etc. As such, we need a better name, and I am happy to take on the renaming task.
Read more of this post

Evolution is a special kind of (machine) learning

Theoretical computer science has a long history of peering through the algorithmic lens at the brain, mind, and learning. In fact, I would argue that the field was born from the epistemological question of what our minds can learn of mathematical truth through formal proofs. The perspective became more scientific with McCulloch & Pitts’ (1943) introduction of finite state machines as models of neural networks and Turing’s B-type neural networks paving the way for our modern treatment of artificial intelligence and machine learning. The connections to biology, unfortunately, are less pronounced. Turing ventured into the field with his important work on morphogenesis, and I believe that he could have contributed to the study of evolution but did not get the chance. This work was followed up with the use of computers in biology, and with heuristic ideas from evolution entering computer science in the form of genetic algorithms. However, these areas remained non-mathematical, with very few provable statements or non-heuristic reasoning. The task of making strong connections between theoretical computer science and evolutionary biology has been left to our generation.

Although the militia of cstheorists reflecting on biology is small, Leslie Valiant is their standard-bearer for the steady march of theoretical computer science into both learning and evolution. Due in part to his efforts, artificial intelligence and machine learning are such well-developed fields that their theory branch has its own name and conferences: computational learning theory (CoLT). Much of CoLT rests on Valiant’s (1984) introduction of probably-approximately correct (PAC) learning which — in spite of its name — is one of the most formal and careful ways to understand learnability. The importance of this model cannot be overstated, and it resulted in Valiant receiving (among many other distinctions) the 2010 Turing award (i.e. the Nobel prize of computer science). Most importantly, his attention was not confined to pure cstheory: he took his algorithmic insights into biology, specifically computational neuroscience (see Valiant (1994; 2006) for examples), to understand human thought and learning.

Like any good thinker reflecting on biology, Valiant understands the importance of Dobzhansky’s observation that “nothing in biology makes sense except in the light of evolution”. Even for the algorithmic lens it helps to have this illumination. Any understanding of learning mechanisms like the brain is incomplete without an examination of the evolutionary dynamics that shaped these organs. In the mid-2000s, Valiant embarked on the quest of formalizing some of the insights cstheory can offer evolution, culminating in his PAC-based model of evolvability (Valiant, 2009). Although this paper is one of the most frequently cited on TheEGG, I’ve waited until today to give it a dedicated post.
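Since Valiant (2009) will get its dedicated post below, here is only a taste of the model’s flavour: my own toy sketch in Python, with nearly all of the real definitions simplified away. Hypotheses are single literals, performance is the empirical correlation with a hidden ideal function, and a selection step keeps a beneficial mutant whenever one clears the tolerance.

```python
import random

random.seed(1)
n = 10              # number of Boolean attributes, each valued in {-1, +1}
target = 3          # hidden ideal function: f(x) = x[target], a single literal

def ideal(x):
    return x[target]

def performance(i, samples):
    # empirical Perf ~ E[x_i * f(x)] over the sample (uniform distribution)
    return sum(x[i] * ideal(x) for x in samples) / len(samples)

def one_generation(i, samples, tolerance=0.25):
    # selection: move to a random beneficial mutant (here, another literal)
    # if one clears the tolerance; otherwise keep the current hypothesis
    base = performance(i, samples)
    beneficial = [j for j in range(n) if j != i
                  and performance(j, samples) > base + tolerance]
    return random.choice(beneficial) if beneficial else i

samples = [tuple(random.choice((-1, 1)) for _ in range(n)) for _ in range(500)]
hypothesis = 0
for _ in range(5):
    hypothesis = one_generation(hypothesis, samples)
print(hypothesis)   # settles on the hidden literal: 3
```

The actual model also handles neutral mutations, richer representation classes, and careful sample-size and tolerance bounds; this is only the hill-climbing skeleton.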
Read more of this post

Lower bounds by negative adversary method

Are some questions harder than others?

Last week I quantified the hardness of answering a question with a quantum computer as the quantum query complexity. I promised that this model would allow us to develop techniques for proving lower bounds. In fact, in this model there are two popular tools: the polynomial method and the (negative) adversary method. In this week’s post, I’d like to highlight the latter.
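For readers who want the shape of the bound up front (stated from memory here; the careful treatment is in the post), an adversary matrix \(\Gamma\) is indexed by pairs of inputs, with \(\Gamma[x, y] = 0\) whenever \(f(x) = f(y)\), and \(\Delta_i\) marks the pairs of inputs that differ on the i-th bit:

\[
\mathrm{ADV}^{\pm}(f) = \max_{\Gamma \neq 0} \frac{\|\Gamma\|}{\max_i \|\Gamma \circ \Delta_i\|}, \qquad Q_{\epsilon}(f) \geq \frac{1 - 2\sqrt{\epsilon(1 - \epsilon)}}{2}\, \mathrm{ADV}^{\pm}(f),
\]

where \(\circ\) is the entrywise product and \(\|\cdot\|\) is the spectral norm. The ‘negative’ in the name refers to allowing \(\Gamma\) to take negative entries; restricting to nonnegative \(\Gamma\) recovers the original adversary bound.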
Read more of this post

Quantum query complexity

You probably noticed a few things about TheEGG: a recent decrease in blog post frequency and an overall focus on the algorithmic lens — especially its view of biology. You might also be surprised by the lack of discussion of quantum information processing: the most successful ongoing application of the algorithmic lens. I actually first became passionate about cstheory as a lens on science when I was studying quantum computing. In undergrad, I played around with representation theory and other fun math to prove things about a tool in quantum information theory known as unitary t-designs. At the start of grad school, I became more algorithmic by focusing on quantum query complexity. To kill two birds with one stone, I thought I would introduce you to query complexity and, in doing so, restore the more regular posting schedule you’ve been accustomed to. Of course, the easiest way to do this is to recycle my old writing from the now stale cstheory StackExchange blog.
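As a preview of the model (the standard setup, sketched here in brief rather than the full definitions from the recycled post): the input \(x \in \{0, 1\}^n\) is accessible only through the oracle unitary \(O_x\), and a T-query algorithm interleaves fixed unitaries with oracle calls,

\[
O_x|i, b\rangle = |i, b \oplus x_i\rangle, \qquad |\psi_x\rangle = U_T O_x U_{T-1} \cdots U_1 O_x U_0|0\rangle,
\]

with the bounded-error query complexity \(Q_{\epsilon}(f)\) being the least T such that measuring \(|\psi_x\rangle\) yields \(f(x)\) with probability at least \(1 - \epsilon\) on every input x.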
Read more of this post