Heuristic models as inspiration-for and falsifiers-of abstractions

Last month, I blogged about abstraction and lamented that abstract models are lacking in biology. Here, I want to return to this.

What isn’t lacking in biology — and what I also work on — is simulation and heuristic models. These can seem abstract in the colloquial sense but are not very abstract for a computer scientist. They are usually more idealizations than abstractions. And even if all I care about is abstract models — which I can reasonably be accused of at times — then heuristic models should still be important to me. Heuristics help abstractions in two ways: portfolios of heuristic models can inspire abstractions, and single heuristic models can falsify abstractions.

In this post, I want to briefly discuss these two uses for heuristic models. In the process, I will try to make it a bit clearer what I mean by a heuristic model. I will do this with metaphors. So I'll produce a heuristic model of heuristic models. And I'll use spatial structure and the evolution of cooperation as a case study.



Algorithmic lens as Alan Turing’s wider impact

Today is Alan Turing’s birthday. He would have turned 106.

It has been too long since I last wrote about him on TheEGG. Today, I want to provide an overview of some of his most important work, based on my and others' answers to this old cstheory question. This will build slightly on a post I wrote two years ago for the Heidelberg Laureate Forum, and will share a lot of text with it.

Turing is far from obscure. Every computer scientist and programmer has heard his name. The Nobel prize of Computer Science is named after him. He has even joined the ranks of mathematicians portrayed in feature-length films, although in a film that misrepresents much of the history. But even outside of film, I feel that our perceptions and representations of Turing are shaped too heavily by the current boundaries and constraints of computer science, or at least by how computer science is popularly (mis)understood.

Also, it is just easier to film the building of a giant machine than the proving of theorems and the revolutionizing of how we think about the world.

As the great breadth of his work shows, Turing would not recognize the disciplinary boundaries that confine computer science to technology. Like Abel Molina, he would see many motivations for computer science, from Science and Technology to Mathematics and Philosophy to Society. Turing viewed the whole world through the algorithmic lens, with a breadth of ambition that is sometimes lacking in modern computer science.

In this post, I want to highlight some of the aspects of the world that Turing looked at.

Double-entry bookkeeping and Galileo: abstraction vs idealization

Two weeks ago, I wrote a post on how abstract is not the opposite of empirical. In that post, I distinguished between the colloquial meaning of abstract and the ‘true’ meaning used by computer scientists. For me, abstraction is defined by multiple realizability. An abstract object can have many implementations. The concrete objects that implement an abstraction might differ from each other in various — potentially drastic — ways but if the implementations are ‘correct’ then the ways in which they differ are irrelevant to the conclusions drawn from the abstraction.
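For a computer scientist, multiple realizability is the everyday business of interfaces: many implementations, one set of conclusions. A minimal Python sketch of the idea (the interface and the two implementations are my illustrative choices, not taken from the post):

```python
from abc import ABC, abstractmethod

class Multiplier(ABC):
    """Abstract object: anything that multiplies two non-negative ints."""
    @abstractmethod
    def mul(self, a: int, b: int) -> int: ...

class GradeSchool(Multiplier):
    def mul(self, a, b):
        return a * b  # delegate to built-in arithmetic

class RepeatedAddition(Multiplier):
    def mul(self, a, b):
        total = 0
        for _ in range(b):  # a drastically different realization
            total += a
        return total

def conclusion(m: Multiplier) -> bool:
    # A conclusion drawn only from the abstraction: multiplication by
    # zero annihilates, and the operation is commutative.
    return m.mul(7, 0) == 0 and m.mul(3, 5) == m.mul(5, 3)

# The implementations differ, but the conclusion holds for both.
assert conclusion(GradeSchool()) and conclusion(RepeatedAddition())
```

GradeSchool and RepeatedAddition work in very different ways, but any conclusion drawn from the Multiplier abstraction alone is true of both; the ways they differ are irrelevant to it.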

I contrasted this comp sci view with a colloquial sense that I attributed to David Basanta. I said this colloquial sense was just that an abstract model is ‘less detailed’.

In hindsight, I think this colloquial sense was a straw-man and doesn’t do justice to David’s view. It isn’t ignoring any detail that makes something colloquially abstract. Rather, it is ignoring ‘the right sort of’ detail in the ‘right sort of way’. It is about making an idealization meant to arrive at some essence of a (class of) object(s) or a process. And this idealization view of abstraction has a long pedigree.

In this post, I want to provide a semi-historical discussion of the difference between (comp sci) abstraction and idealization. I will focus on double-entry bookkeeping as a motivation. Now, this might not seem relevant to science, but it was for Galileo. He expressed his views on (proto-)scientific abstraction by analogy to bookkeeping. And in expressing his view, he covered both abstraction and idealization. In the process, he introduced both good ideas and bad ones. They remain with us today.


Abstract is not the opposite of empirical: case of the game assay

Last week, Jacob Scott was at a meeting to celebrate the establishment of the Center for Evolutionary Therapy at Moffitt, and he presented our work on measuring the effective games that non-small cell lung cancer plays (see this preprint for the latest draft). From the audience, David Basanta summarized it in a tweet as “trying to make our game theory models less abstract”. But I actually saw our work as doing the opposite (and so quickly disagreed).

However, I could understand the way David was using ‘abstract’. I think I’ve often used it in this colloquial sense as well. And in that sense it is often the opposite of empirical, which is seen as colloquially ‘concrete’. Given my arrogance, I — of course — assume that my current conception of ‘abstract’ is the correct one, and the colloquial sense is wrong. To test myself: in this post, I will attempt to define both what ‘abstract’ means and how it is used colloquially. As a case study, I will use the game assay that David and I disagreed about.

This is a particularly useful exercise for me because it lets me make better sense of how two very different-seeming aspects of my work — the theoretical versus the empirical — are both abstractions. It also lets me think about when simple models are abstract and when they’re ‘just’ toys.


Cataloging a year of blogging: complexity in evolution, general models, and philosophy

Last month, with just hours to spare in January, I shared a linkdex of the 14 cancer-related posts from TheEGG in 2016. Now, as February runs out, it's time to reflect on the 15 non-cancer-specific posts from last year. Although, as we'll see, some of them are still related to mathematical oncology. With a nice number like 15, I feel obliged to divide them into three categories of five articles each, which does make narrowing down the themes a bit of a stretch.

The three themes were: (1) complexity, supply driven evolution, and abiogenesis, (2) general models and their features, (3) algorithmic philosophy and the social good.

And yes, two months have passed and all I’ve posted to the blog are two 2016-in-review posts. Even those were rushed and misshapen. But I promise there is more and better coming; hopefully with a regular schedule.


Antoni Gaudi and learning algorithms from Nature

Happy holidays.

A few days ago, I was exploring Barcelona. This means that I saw a lot of architecture by Antoni Gaudi. His works have a very distinct style; their fluid lines, bright colours, myriad materials, and interface of design and function make for very naturesque buildings. They are unique and stand in sharp contrast to the other — often Gothic revival and Catalan Modernisme — architecture around them. The contrast is conscious; when starting out, Gaudi learned the patterns of the neo-Gothic architecture then in vogue and later commented on it:

Gothic art is imperfect, only half resolved; it is a style created by the compasses, a formulaic industrial repetition. Its stability depends on constant propping up by the buttresses: it is a defective body held up on crutches. … The proof that Gothic works are of deficient plasticity is that they produce their greatest emotional effect when they are mutilated, covered in ivy and lit by the moon.

His buildings, however, do not need to be overgrown by ivy, for Gaudi already incorporates nature in their design. I felt this connection most viscerally when touring the attic of Casa Mila. The building was commissioned as an apartment for local bourgeois to live comfortably on the ground floor off the rents they collected from the upper floors. And although some of the building is still inhabited by businesses and private residences, large parts of it have been converted into a museum. The most famous part among tourists is probably the uneven organic roof with its intricate smoke stacks, ventilation shafts, and archways for framing other prominent parts of Barcelona.

This uneven roof is supported by an attic that houses an exhibit on Gaudi’s method. Here, I could see Gaudi’s inspiration. On display was a snake’s skeleton and around me were the uneven arches of the attic — the similarity was palpable (see below). The questions for me were: was Gaudi inspired by nature or did he learn from it? Is there even much of a difference between ‘inspired’ and ‘learned’? And can this inform thought on the correspondence between nature and algorithms more generally?

[Image: naturalarches (the uneven arches of the attic beside the snake's skeleton)]

I spend a lot of time writing about how we can use algorithmic thinking to understand aspects of biology. It is much less common for me to write about how we can use biology or nature to understand and inspire algorithms. In fact, I feel surprisingly strong skepticism towards the whole field of natural algorithms, even when I do write about it. I suspect that this stems from my belief that we cannot learn algorithms from nature. A belief that was shaken, but not overturned, when I saw the snake's skeleton in Gaudi's attic.

In this post, I will try to substantiate the statement that we cannot learn algorithms from nature. My hope is that someone, or maybe just the act of writing, will convince me otherwise. I'll sketch my own position on algorithms & nature, and strip the opposing we-learn-algorithms-from-nature position of some of its authority by pulling on a historic thread that traces this belief from Plato through Galileo to now. I'll close with a discussion of some practical consequences of this metaphysical disagreement and try to make sense of Gaudi's work from my perspective.


Computational kindness and the revelation principle

In EWD1300, Edsger W. Dijkstra wrote:

even if you have only 60 readers, it pays to spend an hour if by doing so you can save your average reader a minute.

He wrote this as the justification for the mathematical notations that he introduced and as an ode to the art of definition. But any writer should heed this aphorism.[1] Recently, I finished reading Algorithms to Live By by Brian Christian and Tom Griffiths.[2] In the conclusion of their book, they gave a unifying name to the sentiment that Dijkstra expresses above: computational kindness.

As computer scientists, we recognise that computation is costly. Processing time is a limited resource. Whenever we interact with others, we are sharing in a joint computational process, and we need to be mindful of when we are not carrying our part of the processing burden. Or worse yet, when we are needlessly increasing that burden and imposing it on our interlocutor. If you are computationally kind then you will be respectful of the cognitive problems that you force others to solve.

I think this is a great observation by Christian and Griffiths. In this post, I want to share with you some examples of how certain systems — at the level of the individual, small group, and society — are computationally kind. And how some are cruel. I will draw on examples from their book, and some of my own. They will include language, bus stops, and the revelation principle in algorithmic game theory.

What is the algorithmic lens?

If you are a regular reader then you are probably familiar with my constant mention of the algorithmic lens. I insist that we must apply it to everything: biology, cognitive science, ecology, economics, evolution, finance, philosophy, probability, and social science. If you're still reading this then you have incredible powers to resist clicking links. Also, you are probably mathematically inclined and appreciate the art of good definitions. As such, you must be incredibly irked that I have never attempted to explicitly define the algorithmic lens. You are not the only one, dear reader: there have been many times when I have been put on the spot and asked to define this 'algorithmic lens'. My typical response has been the blank stare; not the "oh my, why don't you already know this?" stare, but the "oh my, I don't actually know how to define this" stare. Like the artist, continental philosopher, or literary critic, I know how 'algorithmic lens' makes me feel and what it means to me, but I just can't provide a binding definition. Sometimes I even think it is best left underspecified, but I won't let that thought stop me from attempting a definition.

Randomness, necessity, and non-determinism

If we want to talk philosophy then it is necessary to mention Aristotle. Or is it just a likely beginning? For Aristotle, there were three types of events: certain, probable, and unknowable. Unfortunately for science, Aristotle considered the results of games of chance to be unknowable, and probability theory started — 18 centuries later — with the analysis of games of chance. This doomed much of science to an awkward fetishisation of probability, an undue love of certainty, and unreasonable quantification of the unknowable. A state of affairs that stems from our fear of admitting when we are ignorant, a strange condition given that many scientists would agree with Feynman’s assessment that one of the main features of science is acknowledging our ignorance:

Unfortunately, we throw away our ability to admit ignorance when we assign probabilities to everything. Especially in settings where there is no reason to postulate an underlying stochastic generating process, or a way to do reliable repeated measurements. "Foul!" you cry, "Bayesians talk about beliefs, and we can hold beliefs about single events. You are just taking the dated frequentist stance." Well, to avoid the nonsense of the frequentist vs. Bayesian debate, let me take literally the old adage "put your money where your mouth is" and use the fundamental theorem of asset pricing to define probability. I'll show an example of a market we can't price, and ask how theoretical computer science can resolve our problem with non-determinism.
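The pricing-defines-probability move can be made concrete with a standard one-period binary market (the numbers and the zero interest rate are my illustrative assumptions; the post's own example is in the full text):

```python
# One-period market with two states. A stock bought at price s0 pays
# either `up` or `down`; the interest rate is zero. Absence of arbitrage
# forces a unique 'risk-neutral' probability q for the up state, via
#   s0 = q*up + (1 - q)*down.

def implied_probability(s0: float, up: float, down: float) -> float:
    """Probability of the up state implied by an arbitrage-free price s0."""
    q = (s0 - down) / (up - down)
    if not 0 < q < 1:
        raise ValueError("price admits arbitrage: no consistent probability")
    return q

# A $10 stock that moves to $20 or $5 implies P(up) = 1/3.
print(implied_probability(10, 20, 5))
```

With three or more states and only one traded asset, the same no-arbitrage condition picks out a whole range of consistent probabilities rather than a single one: a market we can't price, and arguably the kind of uncertainty that non-determinism is meant to capture.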

What can theoretical computer science offer biology?

If there is anything definitive that can be said about biology then it is that biology is messy and complicated. The stereotypical image of a biology major is in the library memorizing volumes of loosely (at best) related facts only to have them overturned in the next semester. Concepts are related only vaguely, to the point that it looks like stamp-collecting to outsiders. The only unifying theme is evolution, and even that is presented with a smorgasbord of verbal and mathematical models, with many lacking predictive power or sometimes even solid empirical foundations. This might seem like the polar opposite of a theoretical computer scientist with her strict hierarchies and logical deductions. Even the complexity zoo seems perfectly tame compared to any biological taxonomy. However, since I’ve been promoting algorithmic theories of biology and even trying my hand at applying cstheory to models of evolution (Kaznatcheev, 2013), I must think that there is some possibility for a productive partnership. So, what can theoretical computer science offer biology? CStheory can provide the abstractions and technical tools to systematize and organize biology’s heuristic models.