Abusing numbers and the importance of type checking

What would you say if I told you that I could count to infinity on my hands? Infinity is large, and I have a typical number of fingers. Surely, I must be joking. Well, let me guide you through my process. Since you can’t see me right now, you will have to imagine my hands. When I hold out the thumb on my left hand, that’s one, and when I hold up the thumb and the index finger, that’s two. Actually, we should be more rigorous: since you are imagining my fingers, it isn’t one and two but i and 2i. This is why they call them imaginary numbers.

Let’s continue the process of extending my (imaginary) fingers from the leftmost digits towards the right. When I hold out my whole left hand and the pinky, ring, and middle fingers on my right hand, I have reached 8i.

But this doesn’t look like what I promised. For the final step, we need to remember the geometric interpretation of complex numbers. Multiplying by i is the same thing as rotating counter-clockwise by 90 degrees in the plane. So, let’s rotate our number by 90 degrees and arrive at $\infty$.
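To see exactly where the types get abused, compare the glyph rotation to what multiplying by i actually does to the number 8i:

```latex
i \cdot 8i = 8i^2 = -8 \neq \infty
```

The rotation in the joke acts on the numeral ‘8’ as a typographic symbol, turning it into ‘∞’, while the arithmetic acts on the number 8i; the trick works by silently swapping one type for the other.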

I just counted to infinity on my hands.

Of course, I can’t stop at a joke. I need to overanalyze it. There is something for scientists to learn from the error that makes this joke work. Disregarding the types of objects and jumping between two different — and usually incompatible — interpretations of the same symbol is something that scientists, both modelers and experimentalists, have to worry about.

If you want an actually funny joke of this type then I recommend the image of a ‘rigorous proof’ above that was tweeted by Moshe Vardi. My written version was inspired by a variant on this theme mentioned on Reddit by jagr2808.

I will focus this post on the use of types from my experience with stoichiometry in physics. Units in physics allow us to perform sanity checks after long derivations, imagine idealized experiments, and can even suggest refinements of theory. These are all features that evolutionary game theory, and mathematical biology more broadly, could benefit from. And something to keep in mind as clinicians, biologists, and modelers join forces this week during the 5th annual IMO Workshop at the Moffitt Cancer Center.
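As a minimal sketch of the kind of sanity check units provide, here is a toy dimension tracker I made up for illustration — not any particular units library:

```python
from dataclasses import dataclass

# A toy dimension tracker (illustrative only, not a real units library).
# Dimensions are exponent tuples over (mass, length, time).
@dataclass(frozen=True)
class Quantity:
    value: float
    dim: tuple

    def __mul__(self, other):
        return Quantity(self.value * other.value,
                        tuple(s + o for s, o in zip(self.dim, other.dim)))

    def __add__(self, other):
        # The 'type check': adding incompatible dimensions is an error.
        if self.dim != other.dim:
            raise TypeError(f"dimension mismatch: {self.dim} vs {other.dim}")
        return Quantity(self.value + other.value, self.dim)

def scalar(x):
    return Quantity(x, (0, 0, 0))

mass = Quantity(2.0, (1, 0, 0))       # 2 kg
velocity = Quantity(3.0, (0, 1, -1))  # 3 m/s

kinetic = scalar(0.5) * mass * velocity * velocity
assert kinetic.dim == (1, 2, -2)      # kg m^2 / s^2: the dimensions of energy

try:
    mass + velocity                   # a units error, caught before it spreads
except TypeError as e:
    print("caught:", e)
```

A long derivation that accidentally adds a mass to a velocity fails loudly, in the same way a type checker rejects an ill-typed program.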

Pairing tools and problems: a lesson from the methods of mathematics and the Entscheidungsproblem

Three weeks ago it was my lot to present at the weekly integrated mathematical oncology department meeting. Given the informal setting, I decided to grab one gimmick and run with it. I titled my talk: ‘2’. It was an overview of two recent projects that I’ve been working on: double public goods for acid-mediated tumour invasion, and edge effects in game theoretic dynamics of solid tumours. For the former, I considered two approximations: the limit as the number n of interaction partners is large and the limit as n = 1 — so there are two interacting parties. But the numerology didn’t stop there; my real goal was to highlight a duality between tools or techniques and the problems we apply them to or domains we use them in. As is popular at the IMO, the talk was live-tweeted with many unflattering photos and this great paraphrase (or was it a quote?) by David Basanta from my presentation’s opening:

Since I was rather sleep deprived from preparing my slides, I am not sure what I said exactly but I meant to say something like the following:

I don’t subscribe to the perspective that we should pick the best tool for the job. Instead, I try to pick the best tuple of job and tool given my personal tastes, competences, and intuitions. In doing so, I aim to push the tool slightly beyond its prior borders — usually with an incremental technical improvement — while also exploring a variant perspective — but hopefully still grounded in the local language — on some domain of interest. The job and tool march hand in hand.

In this post, I want to unpack this principle and follow it a little deeper into the philosophy of science. In the process, I will touch on the differences between endogenous and exogenous questions. I will draw some examples from my own work, but will rely primarily on methodological inspiration from pure math and the early days of theoretical computer science.

Five motivations for theoretical computer science

There are some situations, perhaps lucky ones, where it is felt that an activity needs no external motivation or justification.  For the rest, it can be helpful to think of what the task at hand can be useful for. This of course doesn’t answer the larger question of what is worth doing, since it just distributes the burden somewhere else, but establishing these connections seems like a natural part of an answer to the larger question.

Along those lines, the following are five intellectual areas for whose study theoretical computer science concepts, and their development, can be useful; a curiosity about these areas can therefore provide some motivation for learning about those cstheory concepts or developing them. They are arranged from those likely most obvious to most people to the least so: technology, mathematics, science, society, and philosophy. This post could also serve as an homage to delayed gratification (perhaps with some procrastination mixed in), having finally been written up more than three years after it was first discussed with Artem.

Operationalizing replicator dynamics and partitioning fitness functions

As you know, dear regular reader, I have a rather uneasy relationship with reductionism, especially when doing mathematical modeling in biology. In mathematical oncology, for example, it seems that there is a hope that through our models we can bring a more rigorous mechanistic understanding of cancer, but at the same time there is the joke that given almost any microscopic mechanism there is an experimental paper in the oncology literature supporting it and another to contradict it. With such a tenuous and shaky web of beliefs justifying (or just hinting towards) our nearly arbitrary microdynamical assumptions, it seems unreasonable to ground our models in reductionist stories. At such a time of ontological crisis, I have an instinct to turn — much like many physicists did during a similar crisis at the start of the 20th century in their discipline — to operationalism. Let us build a convincing mathematical theory of cancer in the Petri dish with as few appeals to things we can’t reliably measure as possible, and then see where to go from there. To give another analogy to physics in the late 1800s, let us work towards a thermodynamics of cancer and worry about its many possible statistical mechanics later.

This is especially important in applications of evolutionary game theory where assumptions abound. These assumptions aren’t just about modeling details like the treatments of space and stochasticity or approximations to them, but about whether there is even a game taking place or what would constitute a game-like interaction. However, to work toward an operationalist theory of games, we need experiments that beg for EGT explanations. There is a recent history of this sort of experiment in viruses and microbes (Lenski & Velicer, 2001; Crespi, 2001; Velicer, 2003; West et al., 2007; Ribeck & Lenski, 2014), slime molds (Strassmann & Queller, 2011) and yeast (Gore et al., 2009; Sanchez & Gore, 2013), but the start of these experiments in oncology by Archetti et al. (2015) is current events[1]. In the weeks since that paper, I’ve had a very useful reading group and fruitful discussions with Robert Vander Velde and Julian Xue about the experimental aspects of this work. This Monday, I spent most of the afternoon discussing similar experiments with Robert Noble who is visiting Moffitt from Montpellier this week.
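For concreteness, here is what the replicator dynamics of the title look like for a two-strategy game; the payoff entries and step size below are illustrative choices of mine, not values from Archetti et al. (2015) or any other cited experiment:

```python
# Replicator dynamics for a two-strategy matrix game: the share x of strategy A
# changes according to how A's fitness compares to the population average.
# Payoff entries and step size are illustrative, not from any cited experiment.
a, b = 3.0, 0.0  # payoff to A when meeting A, when meeting B
c, d = 5.0, 1.0  # payoff to B when meeting A, when meeting B

def step(x, dt=0.01):
    f_a = a * x + b * (1 - x)        # fitness of strategy A
    f_b = c * x + d * (1 - x)        # fitness of strategy B
    phi = x * f_a + (1 - x) * f_b    # population average fitness
    return x + dt * x * (f_a - phi)  # Euler step of dx/dt = x (f_a - phi)

x = 0.9
for _ in range(10_000):
    x = step(x)
# with these payoffs strategy B dominates, so x tends to 0
```

Even this cartoon makes the operationalist worry concrete: the observable is the trajectory of x, while the payoff entries are exactly the sort of microdynamical assumptions that are hard to measure directly.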

In this post, I want to unlock some of this discussion from the confines of private emails and coffee chats. In particular, I will share my theorist’s cartoon understanding of the experiments in Archetti et al. (2015), how they can help us build an operationalist approach to EGT, and why they are not (yet) sufficient to demonstrate the authors’ central claim that neuroendocrine pancreatic cancer dynamics involve a public good.

Truthiness of irrelevant detail in explanations from neuroscience to mathematical models

Truthiness is the truth that comes from the gut, not books. Truthiness is preferring propositions that one wishes to be true over those known to be true. Truthiness is a wonderful commentary on the state of politics and media by a fictional character determined to be the best at feeling the news at us. Truthiness is a one word summary of emotivism.

Truthiness is a lot of things, but all of them feel far from the hard objective truths of science.

Right?

Maybe that holds for an ideal, non-existent, non-human Platonic capital-S Science; but science as practiced — if not all conceivable versions of it — is very much intertwined with politics and media. Both internal to the scientific community: how will I secure the next grant? who should I cite to please my reviewers? how will I sell this to get others reading? And external: how can we secure more funding for science? how can we better incorporate science into schools? how can we influence policy decisions? I do not want to suggest that this tangle is (all) bad, but just that it exists and is prevalent. Thus, critiques of politics and media are relevant to a scientific symposium in much the same way as they are relevant to a late-night comedy show.

I want to discuss an aspect of truthiness in science: making an explanation feel more scientific or more convincing through irrelevant detail. The two domains I will touch on are neuroscience and mathematical modeling. The first because in neuroscience I’ve been acquainted with the literature on irrelevant detail in explanations and because neuroscientific explanations have a profound effect on how we perceive mental health. The second because it is the sort of misrepresentation I most fear committing in my own work. I also think the second domain should matter more to the working scientist; while irrelevant neurological detail is mostly misleading to the neuroscience-naive general public, irrelevant mathematical detail can be misleading, I feel, to the mathematically-naive scientists — a non-negligible demographic.

What makes a discipline ‘mathematical’?

While walking to work on Friday, I was catching up on one of my favorite podcasts: The History of Philosophy without any Gaps. To celebrate the podcast’s 200th episode, Peter Adamson was interviewing Jill Kraye and John Marenbon on medieval philosophy. The podcast was largely concerned with where we should define the temporal boundaries of medieval philosophy, especially on the side that bleeds into the Renaissance. A non-trivial, although rather esoteric question — even compared to some of the obscure things I go into on this blog, and almost definitely off-topic for TheEGG — but it is not what motivated me to open today’s post with this anecdote. Instead, I was caught by Jill Kraye’s passing remark:

[T]he Merton school, which was a very technical mathematical school of natural philosophy in 14th century England; they applied mechanical ideas to medicine

I had never heard of the Merton school before — which a quick search revealed to be also known as the Oxford Calculators; named after Richard Swineshead’s Book of Calculations — but it seems that they introduced much more sophisticated mathematical reasoning into the secundum imaginationem — philosophical thought experiments or intuition pumps — that were in vogue among their contemporaries. They even beat Galileo to fundamental insights that we usually attribute to him, like the mean speed theorem. Unfortunately, I wasn’t able to find sources on the connection to medicine, although Peter Adamson and Jill Kraye have pointed me to a couple of books.
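For reference, the mean speed theorem that the Calculators stated — and that we usually credit to Galileo — says that a body uniformly accelerating from speed $v_0$ to speed $v_1$ over a time $t$ covers the same distance as one moving at the constant mean speed:

```latex
s = \frac{v_0 + v_1}{2}\, t
```

With $v_1 = v_0 + at$, this is equivalent to the familiar $s = v_0 t + \tfrac{1}{2} a t^2$.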

Do you have pointers, dear reader?

But this serendipitous encounter did prompt an interesting lunchtime discussion with Arturo Araujo, Jill Gallaher, and David Basanta. I asked them what they thought the earliest work in mathematical medicine was, but as my interlocutors offered suggestions, I kept moving the goalposts and the conversation quickly metamorphosed from history to philosophy. The question became: What makes a discipline ‘mathematical’?

A year in books: philosophy, psychology, and political economy

If you follow the Julian calendar — which I do when I need a two week extension on overdue work — then today is the first day of 2015.

Happy Old New Year!

This also means that this is my last day to be timely with yet another year-in-review post; although I guess I could also celebrate the Lunar New Year on February 19th. Last year, I made a resolution to read one not-directly-work-related book a month, and only satisfied it in an amortized analysis; I am repeating the resolution this year. Since I only needed two posts to catalog the practical and philosophical articles on TheEGG, I will try something new with this one: a list and mini-review of the books I read last year to meet my resolution. I hope that based on this, you can suggest some books for me to read in 2015; or maybe my comments will help you choose your next book to read. I know that articles and blogs I’ve stumbled across have helped guide my selection. If you want to support TheEGG directly and help me select the books that I will read this year then consider donating something from TheEGG wishlist.

Cataloging a year of blogging: the philosophical turn

Passion and motivation are strange and confusing facets of being. Many things about them feel paradoxical. For example, I really enjoy writing, categorizing, and — obviously, if you’ve read many of the introductory paragraphs on TheEGG — blabbing on far too long about myself. So you’d expect that I would have been extremely motivated to write up this index of posts from the last year. Yet I procrastinated — although in a mildly structured way — on it for most of last week, and beat myself up all weekend trying to force words into this textbox. A rather unpleasant experience, although it did let me catch up on some Batman cartoons from my childhood. Since you’re reading this now, I’ve succeeded and received my hit of satisfaction, but the high variance in my motivation to write baffles me.

More fundamentally, there is the paradox of agency. It feels like my motivations and passions are aspects of my character, deeply personal and defining. Yet, it is naive to assume that they are determined by my ego; if I take a step back, I can see how my friends, colleagues, and even complete strangers push and pull the passions and motivations that push and pull me. For example, I feel like TheEGG largely reflects my deep-seated personal interests, but my thoughts do not come from me alone, they are shaped by my social milieu — or more dangerously by Pavlov’s buzzer of my stats page, each view and comment and +1 conditioning my tastes. Is the heavy presence of philosophical content because I am interested in philosophy, or am I interested in philosophy because that is what people want to read? That is the tension that bothers me, but it is clear that my more philosophical posts are much more popular than the practical. If we measure in terms of views then in 2014 new cancer-related posts accounted for only 4.7% of the traffic (with 15 posts), the more abstract cstheory perspective on evolution accounted for 6.6% (with 5 posts), while the posts I discuss below accounted for 57.4% (the missing chunk of unity went to 2014 views of posts from 2012 and 2013). Maybe this is part of the reason why there were 24 philosophical posts, compared to the 20 practical posts I highlighted in the first part of this catalog.

Of course, this example is a little artificial, since although readership statistics are a fun distraction, they are not particularly relevant, just easy to quantify. Seeing the influence of the ideas I read is much more difficult, although I think these exercises in categorization can help uncover it. In this post, I review the more philosophical posts from last year, breaking them down less autobiographically and more thematically: interfaces and useful delusions; philosophy of the Church-Turing thesis; limits of science and dangers of mathematics; and personal reflections on philosophy and science. Let me know if you can find some coherent set of influences.

Realism and interfaces in philosophy of mind and metaphysics

In an earlier post, I discussed three theories of perception: naive realism, critical realism, and interfaces. To remind you of the terminology: naive realism is the stance that the world is exactly as we perceive it and critical realism is that perception resembles reality, but doesn’t capture all of it. Borrowing an image from Kevin Song: if naive realism is a perfect picture then critical realism is a blurry one. For a critical realist, our perception is — to move to another metaphor — a map of the territory that is reality; it distorts, omits details, adds some labels, and draws emphasis, but largely preserves the main structure. Interfaces, however, do not preserve structure. Borrowing now from Donald Hoffman: consider your computer desktop. What are the folders? They don’t reflect the complicated sequence of changes in magnetization in a thin film of ferromagnetic material inside a metal box called your hard-drive, not even at a coarse-grained level. Nor do they hint at the complicated information processing that changes those magnetic fields into the photons that leave your screen. But they do allow you to have a predictable and intelligible interaction with your computer, something that would be much more difficult with just a magnetized needle and a steady hand. The interface does not resemble reality; it just allows us to act. Although the comments section of the earlier post became rather philosophical, my original intention was to stay in the realm of the current scientific discourse on perception. The distinction between realism and interfaces, however, also has a rich philosophical history — not only in epistemology but also in metaphysics — that I want to highlight with a few examples in this post.