Antoni Gaudi and learning algorithms from Nature

Happy holidays.

A few days ago, I was exploring Barcelona. This means that I saw a lot of architecture by Antoni Gaudi. His works have a very distinct style; their fluid lines, bright colours, myriad materials, and interface of design and function make for very naturesque buildings. They are unique and stand in sharp contrast to the other — often Gothic revival and Catalan Modernisme — architecture around them. The contrast is conscious; when starting out, Gaudi learned the patterns of the neo-Gothic architecture then in vogue and later commented on it:

Gothic art is imperfect, only half resolved; it is a style created by the compasses, a formulaic industrial repetition. Its stability depends on constant propping up by the buttresses: it is a defective body held up on crutches. … The proof that Gothic works are of deficient plasticity is that they produce their greatest emotional effect when they are mutilated, covered in ivy and lit by the moon.

His buildings, however, do not need to be overgrown by ivy, for Gaudi already incorporates nature in their design. I felt this connection most viscerally when touring the attic of Casa Mila. The building was commissioned as an apartment building for local bourgeois to live comfortably on the ground floor off the rents they collected from the upper floors. And although some of the building is still inhabited by businesses and private residences, large parts of it have been converted into a museum. The most famous part among tourists is probably the uneven organic roof with its intricate smokestacks, ventilation shafts, and archways for framing other prominent parts of Barcelona.

This uneven roof is supported by an attic that houses an exhibit on Gaudi’s method. Here, I could see Gaudi’s inspiration. On display was a snake’s skeleton and around me were the uneven arches of the attic — the similarity was palpable (see below). The questions for me were: was Gaudi inspired by nature or did he learn from it? Is there even much of a difference between ‘inspired’ and ‘learned’? And can this inform thought on the correspondence between nature and algorithms more generally?


I spend a lot of time writing about how we can use algorithmic thinking to understand aspects of biology. It is much less common for me to write about how we can use biology or nature to understand and inspire algorithms. In fact, I feel surprisingly strong skepticism towards the whole field of natural algorithms, even when I do write about it. I suspect that this stems from my belief that we cannot learn algorithms from nature. That belief was shaken, but not overturned, when I saw the snake’s skeleton in Gaudi’s attic. In this post, I will try to substantiate the statement that we cannot learn algorithms from nature. My hope is that someone, or maybe just the act of writing, will convince me otherwise. I’ll sketch my own position on algorithms & nature, and strip the opposing we-learn-algorithms-from-nature position of some of its authority by pulling on a historic thread that traces this belief from Plato through Galileo to now. I’ll close with a discussion of some practical consequences of this metaphysical disagreement and try to make sense of Gaudi’s work from my perspective.

Read more of this post

Computational kindness and the revelation principle

In EWD1300, Edsger W. Dijkstra wrote:

even if you have only 60 readers, it pays to spend an hour if by doing so you can save your average reader a minute.

He wrote this as the justification for the mathematical notations that he introduced and as an ode to the art of definition. But any writer should heed this aphorism.[1] Recently, I finished reading Algorithms to Live By by Brian Christian and Tom Griffiths.[2] In the conclusion of their book, they gave a unifying name to the sentiment that Dijkstra expresses above: computational kindness.

As computer scientists, we recognise that computation is costly. Processing time is a limited resource. Whenever we interact with others, we are sharing in a joint computational process, and we need to be mindful of when we are not carrying our part of the processing burden. Or worse yet, when we are needlessly increasing that burden and imposing it on our interlocutor. If you are computationally kind then you will be respectful of the cognitive problems that you force others to solve.

I think this is a great observation by Christian and Griffiths. In this post, I want to share with you some examples of how certain systems — at the level of the individual, small group, and society — are computationally kind. And how some are cruel. I will draw on examples from their book, and some of my own. They will include language, bus stops, and the revelation principle in algorithmic game theory.
Read more of this post

Misbeliefs, evolution and games: a positive case

A recurrent theme here on TheEGG is the limits and reliability of knowledge. These get explored from many directions: on epistemological grounds, from the philosophy of science angle, but also formally, through game theory and simulations. In this post, I will explore the topic of misbeliefs as adaptations. By misbeliefs, I mean ideas about reality that a given subject accepts as true, despite them being wrong, inaccurate, or otherwise mistaken. The notion that evolution might not systematically and exclusively support true beliefs isn’t new to TheEGG, and it has also been tackled by many other people, by means of different methodologies, including my own personal philosophising. The overarching question is whether misbeliefs can be systematically adaptive, a prospect that tickles my devious instincts: if it were the case, it would fly in the face of naïve rationalists, who frequently assume that evolution consistently favours the emergence of truthful ways to perceive the world.

Given our common interests, Artem and I have had plenty of long discussions over the past couple of years, mostly sparked by his work on Useful Delusions (see Kaznatcheev et al., 2014); for some more details on our exchanges, as well as a little background on myself, please see the notes[1]. A while ago, I found an article by McKay and Dennett (M&D) entitled “The evolution of misbelief” (2009)[2]. Artem offered me the chance to write a guest post on it, and I was very happy to accept.

What follows will mix philosophical, clinical and mathematical approaches, in the hope of producing a multidisciplinary synthesis.
Read more of this post

Pairing tools and problems: a lesson from the methods of mathematics and the Entscheidungsproblem

Three weeks ago it was my lot to present at the weekly Integrated Mathematical Oncology department meeting. Given the informal setting, I decided to grab one gimmick and run with it. I titled my talk: ‘2’. It was an overview of two recent projects that I’ve been working on: double public goods for acid-mediated tumour invasion, and edge effects in game theoretic dynamics of solid tumours. For the former, I considered two approximations: the limit as the number n of interaction partners is large, and the case n = 1 — so there are two interacting parties. But the numerology didn’t stop there: my real goal was to highlight a duality between tools or techniques and the problems we apply them to or domains we use them in. As is popular at the IMO, the talk was live-tweeted with many unflattering photos and this great paraphrase (or was it a quote?) by David Basanta from my presentation’s opening:

Since I was rather sleep deprived from preparing my slides, I am not sure what I said exactly but I meant to say something like the following:

I don’t subscribe to the perspective that we should pick the best tool for the job. Instead, I try to pick the best tuple of job and tool given my personal tastes, competences, and intuitions. In doing so, I aim to push the tool slightly beyond its prior borders — usually with an incremental technical improvement — while also exploring a variant perspective — but hopefully still grounded in the local language — on some domain of interest. The job and tool march hand in hand.

In this post, I want to unpack this principle and follow it a little deeper into the philosophy of science. In the process, I will touch on the differences between endogenous and exogenous questions. I will draw some examples from my own work, but will rely primarily on methodological inspiration from pure math and the early days of theoretical computer science.

Read more of this post

A year in books: philosophy, psychology, and political economy

If you follow the Julian calendar — which I do when I need a two week extension on overdue work — then today is the first day of 2015.

Happy Old New Year!

This also means that this is my last day to be timely with yet another year-in-review post; although I guess I could also celebrate the Lunar New Year on February 19th. Last year, I made a resolution to read one not-directly-work-related book a month, and only satisfied it in an amortized analysis; I am repeating the resolution this year. Since I only needed two posts to catalog the practical and philosophical articles on TheEGG, I will try something new with this one: a list and mini-review of the books I read last year to meet my resolution. I hope that based on this, you can suggest some books for me to read in 2015; or maybe my comments will help you choose your next book to read. I know that articles and blogs I’ve stumbled across have helped guide my selection. If you want to support TheEGG directly and help me select the books that I will read this year then consider donating something from TheEGG wishlist.

Read more of this post

Cataloging a year of blogging: the philosophical turn

Passion and motivation are strange and confusing facets of being. Many things about them feel paradoxical. For example, I really enjoy writing, categorizing, and — obviously, if you’ve read many of the introductory paragraphs on TheEGG — blabbing on far too long about myself. So you’d expect that I would have been extremely motivated to write up this index of posts from the last year. Yet I procrastinated — although in a mildly structured way — on it for most of last week, and beat myself up all weekend trying to force words into this textbox. A rather unpleasant experience, although it did let me catch up on some Batman cartoons from my childhood. Since you’re reading this now, I’ve succeeded and received my hit of satisfaction, but the high variance in my motivation to write baffles me.

More fundamentally, there is the paradox of agency. It feels like my motivations and passions are aspects of my character, deeply personal and defining. Yet, it is naive to assume that they are determined by my ego; if I take a step back, I can see how my friends, colleagues, and even complete strangers push and pull the passions and motivations that push and pull me. For example, I feel like TheEGG largely reflects my deep-seated personal interests, but my thoughts do not come from me alone, they are shaped by my social milieu — or more dangerously by Pavlov’s buzzer of my stats page, each view and comment and +1 conditioning my tastes. Is the heavy presence of philosophical content because I am interested in philosophy, or am I interested in philosophy because that is what people want to read? That is the tension that bothers me, but it is clear that my more philosophical posts are much more popular than the practical. If we measure in terms of views then in 2014 new cancer-related posts accounted for only 4.7% of the traffic (with 15 posts), the more abstract cstheory perspective on evolution accounted for 6.6% (with 5 posts), while the posts I discuss below accounted for 57.4% (the missing chunk of unity went to 2014 views of posts from 2012 and 2013). Maybe this is part of the reason why there were 24 philosophical posts, compared to the 20 practical posts I highlighted in the first part of this catalog.

Of course, this example is a little artificial: although readership statistics are a fun distraction, they are not particularly relevant, just easy to quantify. Seeing the influence of the ideas I read is much more difficult, although I think these exercises in categorization can help uncover it. In this post, I review the more philosophical posts from last year, breaking them down less autobiographically and more thematically: interfaces and useful delusions; philosophy of the Church-Turing thesis; limits of science and dangers of mathematics; and personal reflections on philosophy and science. Let me know if you can find some coherent set of influences.

Read more of this post

Realism and interfaces in philosophy of mind and metaphysics

In an earlier post, I discussed three theories of perception: naive realism, critical realism, and interfaces. To remind you of the terminology: naive realism is the stance that the world is exactly as we perceive it and critical realism is that perception resembles reality, but doesn’t capture all of it. Borrowing an image from Kevin Song: if naive realism is a perfect picture then critical realism is a blurry one. For a critical realist, our perception is — to move to another metaphor — a map of the territory that is reality; it distorts, omits details, adds some labels, and draws emphasis, but largely preserves the main structure. Interfaces, however, do not preserve structure. Borrowing now from Donald Hoffman: consider your computer desktop: what are the folders? They don’t reflect the complicated sequence of changes in magnetization in a thin film of ferromagnetic material inside a metal box called your hard-drive, not even at a coarse-grained level. Nor do they hint at the complicated information processing that changes those magnetic fields into the photons that leave your screen. But they do allow you to have a predictable and intelligible interaction with your computer, something that would be much more difficult with just a magnetized needle and a steady hand. The interface does not resemble reality; it just allows us to act. Although the comments section of the earlier post became rather philosophical, my original intention was to stay in the realm of the current scientific discourse on perception. The distinction between realism and interfaces, however, also has a rich philosophical history — not only in epistemology but also in metaphysics — that I want to highlight with a few examples in this post.
Read more of this post

From realism to interfaces and rationality in evolutionary games

As I was preparing some reading assignments, I realized that I don’t have a single resource available that covers the main ideas of the interface theory of perception, objective versus subjective rationality, and their relationship to evolutionary game theory. I wanted to correct this oversight and use it as an opportunity to comment on the philosophy of mind. In this post I will quickly introduce naive realism, critical realism, and the interface theory of perception and sketch how we can use evolutionary game theory to study them. The interface theory of perception will also give me an opportunity to touch on the difference between subjective and objective rationality. Unfortunately, I am trying to keep this entry short, so we will only skim the surface and I invite you to click links aggressively and follow the referenced papers if something catches your attention — this annotated list of links might be of particular interest for further exploration.
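To give a flavour of the kind of evolutionary game theory sketched in the full post, here is a minimal toy simulation of replicator dynamics pitting a veridical ‘truth’ perceiver against an ‘interface’ perceiver. The strategy names and payoff values are hypothetical, chosen only for illustration; they are not taken from any of the referenced papers.

```python
# Replicator dynamics for a two-strategy game: a 'truth' strategy that
# perceives the world veridically (and pays a cost for the extra detail)
# versus an 'interface' strategy tuned only to fitness-relevant payoffs.
# All payoff values are made up for this sketch.

def replicator_step(x, payoff, dt=0.01):
    """One Euler step for the frequency x of the 'truth' strategy."""
    f_truth = payoff[0][0] * x + payoff[0][1] * (1 - x)
    f_interface = payoff[1][0] * x + payoff[1][1] * (1 - x)
    f_mean = x * f_truth + (1 - x) * f_interface
    return x + dt * x * (f_truth - f_mean)

# Hypothetical payoffs: the interface strategy does slightly better
# against both opponents, since it skips the cost of veridical detail.
payoff = [[2.0, 1.0],   # truth vs (truth, interface)
          [2.5, 1.5]]   # interface vs (truth, interface)

x = 0.9  # start with truth-perceivers at 90% of the population
for _ in range(2000):
    x = replicator_step(x, payoff)

print(round(x, 3))  # a frequency near zero: truth-perceivers are driven out
```

Even starting from a large majority, the truth-perceivers are eliminated under these (assumed) payoffs, which is the shape of the argument that perception can be selected to track fitness rather than structure.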
Read more of this post

Personification and pseudoscience

If you study the philosophy of science — and sometimes even if you just study science — then at some point you might get the urge to figure out what you mean when you say ‘science’. Can you distinguish the scientific from the non-scientific or the pseudoscientific? If you can then how? Does science have a defining method? If it does, then does following the steps of that method guarantee science, or are some cases just rhetorical performances? If you cannot distinguish science and pseudoscience then why do some fields seem clearly scientific and others clearly non-scientific? If you believe that these questions have simple answers then I would wager that you have not thought carefully enough about them.

Karl Popper did think very carefully about these questions, and in the process introduced the problem of demarcation:

The problem of finding a criterion which would enable us to distinguish between the empirical sciences on the one hand, and mathematics and logic as well as ‘metaphysical’ systems on the other.

Popper believed that his falsification criterion solved (or was an important step toward solving) this problem. Unfortunately, due to Popper’s discussion of Freud and Marx as examples of the non-scientific, many now misread the demarcation problem as a quest to separate epistemologically justifiable science from epistemologically non-justifiable pseudoscience, with a moral judgement of Good associated with the former and Bad with the latter. Toward this goal, I don’t think falsifiability makes much headway. In this (mis)reading, falsifiability excludes too many reasonable perspectives like mathematics or even non-mathematical beliefs like Gandy’s variant of the Church-Turing thesis, while including much of the in-principle-testable pseudoscience. Hence — on this version of the demarcation problem — I would side with Feyerabend and argue that a clear separation between science and pseudoscience is impossible.

However, this does not mean that I don’t find certain traditions of thought to be pseudoscientific. In fact, I think there is a lot to be learned from thinking about features of pseudoscience. A particular question that struck me as interesting was: What makes people easily subscribe to pseudoscientific theories? Why are some kinds of pseudoscience so much easier or more tempting to believe than science? I think that answering these questions can teach us something not only about culture and the human mind, but also about how to do good science. Here, I will repost (with some expansions) my answer to this question.
Read more of this post

Models and metaphors we live by

George Lakoff and Mark Johnson’s Metaphors we live by is a classic that has had a huge influence on parts of linguistics and cognitive science, and some influence — although less so, in my opinion — on philosophy. It is structured around the thought that “[m]etaphor is one of our most important tools for trying to comprehend partially what cannot be comprehended totally”.

The authors spend the first part of the book giving a very convincing argument that “even our deepest and most abiding concepts — time, events, causation, morality, and mind itself — are understood and reasoned about via multiple metaphors.” These conceptual metaphors structure our reality, and are fundamentally grounded in our sensory-motor experience. For them, metaphors are not just aspects of speech but windows into our mind and conceptual system:

Our ordinary conceptual system, in terms of which we both think and act, is fundamentally metaphorical in nature. … Our concepts structure what we perceive, how we get around the world, and how we relate to others. Our conceptual system thus plays a central role in defining our everyday realities. … Since communication is based on the same conceptual system that we use in thinking and acting, language is an important source of evidence for what that system is like.

I found the book incredibly insightful, and in large agreement with many of my recent thoughts on the philosophies of mind and science. After taking a few flights to finish the book, I wanted to take a moment to provide a mini-review. The hope is to convince you to make the time for reading this short volume.
Read more of this post