A year in books: philosophy, psychology, and political economy

If you follow the Julian calendar — which I do when I need a two-week extension on overdue work — then today is the first day of 2015.

Happy Old New Year!

This also means that this is my last day to be timely with yet another year-in-review post; although I guess I could also celebrate the Lunar New Year on February 19th. Last year, I made a resolution to read one not-directly-work-related book a month, and only satisfied it in an amortized analysis; I am repeating the resolution this year. Since I only needed two posts to catalog the practical and philosophical articles on TheEGG, I will try something new with this one: a list and mini-review of the books I read last year to meet my resolution. I hope that, based on this, you can suggest some books for me to read in 2015; or maybe my comments will help you choose your next book. I know that articles and blogs I’ve stumbled across have helped guide my selection. If you want to support TheEGG directly and help me select the books that I will read this year, then consider donating something from TheEGG’s wishlist.

Read more of this post

Realism and interfaces in philosophy of mind and metaphysics

In an earlier post, I discussed three theories of perception: naive realism, critical realism, and interfaces. To remind you of the terminology: naive realism is the stance that the world is exactly as we perceive it, and critical realism is the stance that perception resembles reality but doesn’t capture all of it. Borrowing an image from Kevin Song: if naive realism is a perfect picture, then critical realism is a blurry one. For a critical realist, our perception is — to move to another metaphor — a map of the territory that is reality; it distorts, omits details, adds some labels, and draws emphasis, but largely preserves the main structure. Interfaces, however, do not preserve structure. Borrowing now from Donald Hoffman: consider your computer desktop. What are the folders? They don’t reflect the complicated sequence of changes in magnetization in a thin film of ferromagnetic material inside a metal box called your hard drive, not even at a coarse-grained level. Nor do they hint at the complicated information processing that changes those magnetic fields into the photons that leave your screen. But they do allow you to have a predictable and intelligible interaction with your computer, something that would be much more difficult with just a magnetized needle and a steady hand. The interface does not resemble reality; it just allows us to act.

Although the comments section of the earlier post became rather philosophical, my original intention was to stay in the realm of the current scientific discourse on perception. The distinction between realism and interfaces, however, also has a rich philosophical history — not only in epistemology but also in metaphysics — that I want to highlight with a few examples in this post.
Read more of this post

Misunderstanding falsifiability as a power philosophy of Scientism

I think that trying to find one slogan that captures all of science and nothing else is a fool’s errand. However, it is an appealing errand given our propensity to want to classify and delimit the things we care about. It is also an errand that often takes a central role in the philosophy of science.

Just like with almost any modern thought, if we try hard enough then we can trace the philosophy of science back to the Greeks and discuss the contrasting views of Plato and Aristotle. As fun as such historical excursions might be, it seems a little silly given that the term ‘scientist’ was not coined until 1833, and even under different names our current conception of scientists would not stretch much further back than the natural philosophers of the 17th century. Even the early empiricism of these philosophers, although essential as a backdrop and a foundational shift in view, is more of an overall metaphysical outlook than a dedicated philosophy of science.
Read more of this post

Approximating spatial structure with the Ohtsuki-Nowak transform

Can we describe reality? As a general philosophical question, I could spend all day discussing it and never arrive at a reasonable answer. However, if we restrict ourselves to the sort of models used in theoretical biology, especially to the heuristic models that dominate the field, then I think it is relatively reasonable to conclude that no, we cannot describe reality. We have to admit our current limits and rely on thinking of our errors through the dual notions of assumptions and approximations. I usually prefer the former and try to describe models in terms of the assumptions that, if met, would make them perfect (or at least good) descriptions. This view has seemed clearer and more elegant than vague talk of approximations. It is the language I used to describe the Ohtsuki-Nowak (2006) transform over a year ago. In the months since, however, I’ve started to realize that the assumptions view is actually incompatible with much of my philosophy of modeling. To contrast my previous exposition (and to help me write up some reviewer responses), I want to go through a justification of the ON transform as a first-order approximation of spatial structure.
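As a reminder of the object in question: for death-birth updating on a k-regular graph (with k ≥ 3), Ohtsuki & Nowak (2006) showed that the effect of spatial structure on a two-strategy game can be captured by perturbing the payoff matrix:

\[
\mathrm{ON}_k\!\begin{pmatrix} a & b \\ c & d \end{pmatrix}
= \begin{pmatrix} a & b + \Delta \\ c - \Delta & d \end{pmatrix},
\qquad \Delta = \frac{a + b - c - d}{k - 2}.
\]

Note that the perturbation Δ is of order 1/k and vanishes as k grows, so the transform reads naturally as a first-order correction in 1/k to the inviscid (well-mixed) replicator dynamics, which are recovered in the limit k → ∞.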
Read more of this post

Change, progress, and philosophy in science

“Philosophy of science is about as useful to scientists as ornithology is to birds” is a quote usually attributed to Feynman that embodies a sentiment that seems all too common among scientists. If I wish to be as free as a bird to go about my daily rituals of crafting science in the cage that I built for myself during my scientific apprenticeship, then I agree that philosophy is of little use to me. Much like a politician can hold office without a knowledge of history, a scientist can practice his craft without philosophy. However, like an ignorance of history, an ignorance of philosophy tends to make one myopic. For theorists, especially, such a restricted view of intellectual tradition can be very stifling and make scientific work seem like a trade instead of an art. So, to keep my work a joy instead of a chore, I tend to structure myself by reading philosophy and trying to understand where my scientific work fits in the history of thought. For this, Bertrand Russell is my author of choice.

I don’t read Russell because I agree with his philosophy, although much of what he says is agreeable. In fact, it is difficult to say what agreement with his philosophy would even mean, since his thoughts on many topics changed throughout his long 98-year life. I read his work because it has a spirit of honest inquiry, not a search for proof of some preconceived conclusion (although, like all humans, he was not always exempt from the flaw of dogmatism). I read his work because it is written with a beautiful and precise wit. Most importantly, I read his work because — unlike many philosophers — he wrote clearly enough that it is meaningful to disagree with him.
Read more of this post

Computational complexity of evolutionary equilibria

The first half of the 20th century is famous for revolutionary changes — paradigm shifts, as Kuhn would say — across the sciences and other systems of thought. Disheartened by the scars of the First World War, European thinkers sought refuge by shifting their worldviews away from those of their fathers. In fields like biology, this meant looking to your grandfathers for guidance instead. The founders of the modern synthesis reconciled the fading ideas of Wallace’s natural selection with Mendelian genetics. In the process, they unified many branches of biology that had been diverging at the dawn of the 20th century into a single paradigm that persists today. A return to evolution by natural selection illuminated their field and ended the eclipse of Darwinism. At the same time, mathematicians questioned Hilbert’s formalism and Russell’s logicism as valid foundations for their own field. As a result, they formalized mechanistic calculation and logical thought as computation and founded theoretical computer science to study its power and limitations. Even though some pioneers — like Alan Turing — kept an eye on biology, the two streams of thought did not converge. The coming of the Second World War pushed both fields away from each other and from deep foundational questions, entrenching them in applied and technological work.

For the rest of the 20th century, the fields remained largely independent. Computer science looked to biology only for the vague inspiration of heuristics in the form of evolutionary computing (Holland, 1975), and biologists looked at computer science as an engineering or technical field that could only provide them with applied tools like bioinformatics. Neither field saw in the other a partner for addressing deep theoretical questions. As I mentioned previously, my goal is to remedy this by applying ideas from the analysis of algorithms and computational complexity to fundamental questions in biology. Sometime in late October, I tweeted my joy at seeing evolution clearly through the algorithmic lens. On Friday the 23rd, after much delay, I turned this into an arXiv preprint ambitiously titled “Complexity of evolutionary equilibria in static fitness landscapes”. The paper combines the discussion of evolution from my previous few blog posts with three simple technical results. Although it was written for theoretical computer scientists, I tried to make it accessible to mathematical biologists as well; the hope is to serve as a launching point for discussion between the two disciplines.
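If you want a concrete picture of the central object, here is a toy sketch (not from the preprint; the random landscape, genome length, and fittest-mutant update rule below are illustrative choices): an evolutionary equilibrium in a static fitness landscape is a genotype with no fitter point-mutation neighbour, i.e. a local peak where an adaptive walk halts.

```python
import random

# Toy sketch: genotypes are bit-strings, fitnesses are i.i.d. random
# ("house of cards" landscape), and evolution is fittest-mutant hill
# climbing. An equilibrium is a genotype with no fitter one-bit mutant.

n = 12  # genome length (illustrative choice)
random.seed(1)
fitness_cache = {}

def fitness(genotype):
    # Assign each genotype a uniform random fitness, memoized so that
    # repeated queries of the same genotype agree.
    if genotype not in fitness_cache:
        fitness_cache[genotype] = random.random()
    return fitness_cache[genotype]

def neighbours(genotype):
    # All one-bit mutants (point mutations) of a genotype.
    return [genotype[:i] + ('1' if genotype[i] == '0' else '0') + genotype[i+1:]
            for i in range(len(genotype))]

def adaptive_walk(genotype):
    # Move to the fittest mutant until none improves: a local peak.
    # How long such walks can take in the worst case, and how hard local
    # peaks are to find, is the kind of question the preprint asks.
    steps = 0
    while True:
        best = max(neighbours(genotype), key=fitness)
        if fitness(best) <= fitness(genotype):
            return genotype, steps
        genotype, steps = best, steps + 1

peak, steps = adaptive_walk('0' * n)
print(f"reached local peak {peak} after {steps} steps")
```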
Read more of this post

Mathematical Turing test: Readable proofs from your computer

We have previously discussed the finicky task of defining intelligence, but surely being able to do math qualifies? Even if the importance of mathematics in science is questioned by people as notable as E.O. Wilson, surely nobody questions it as an intelligent activity? Mathematical reasoning is not necessary for intelligence, but surely it is sufficient?

Note that by mathematics, I don’t mean number crunching or carrying out a rote computation. I mean the bread and butter of what mathematicians do: proving theorems and solving general problems. As an example, consider the following theorem about metric spaces:

Let X be a complete metric space and let A be a closed subset of X. Then A is complete.

Can you prove this theorem? Would you call someone who can — intelligent? Take a moment to come up with a proof.
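If you want to check your attempt afterwards (spoiler ahead), the standard proof is only a few lines, using the usual sequence characterizations of completeness and closedness:

\begin{proof}
Let $(a_n)$ be a Cauchy sequence in $A$. Then $(a_n)$ is also a Cauchy sequence in $X$, and since $X$ is complete, $a_n \to x$ for some $x \in X$. Because $A$ is closed and each $a_n \in A$, the limit $x$ must lie in $A$. Hence every Cauchy sequence in $A$ converges to a limit in $A$, so $A$ is complete.
\end{proof}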
Read more of this post

Theorists as connectors: from Poincaré to mathematical medicine

Henri Poincaré (29 April 1854 – 17 July 1912) is often considered the last universalist among mathematicians. He excelled in all the parts of theoretical physics and of applied and pure mathematics that existed during his time. Since him, top mathematicians have become increasingly specialized, as have scientists. Poincaré was part pure mathematician, part engineer; he advocated the importance of intuition over formality in mathematics. This put him at odds with the likes of Frege, Hilbert, and Russell — men who are typically considered the grandfathers of theoretical computer science. As an aspiring CSTheorist, I think we are mistaken in tracing our intellectual roots to the surgical and sterile philosophies of logicism and formalism.

A computer scientist, at least one that embraces the algorithmic lens, is part scientist/engineer and part logician/mathematician. Although there is great technical merit in proving that a recently defined complexity class X is equal (or not) to a not-so-recently defined complexity class Y, my hope is that this is a means to a deeper understanding of something other than arbitrarily defined complexity classes. The mark of a great theorist is looking at a problem in science (or some other field) and figuring out how to frame it in such a way that the formal tools of mathematics at her disposal become applicable to the formulation. I think Scott Aaronson said it clearly (his emphasis):

A huge part of our job description as theoretical computer scientists is finding formal ways to model informally-specified notions! (What is it that Turing did when he defined Turing machines in the first place?) For that reason, I don’t think it’s possible to define “modeling questions” as outside the scope of TCS, without more-or-less eviscerating the subject.

As experimental science becomes more and more specialized, I believe it is increasingly important to have universal theorists, or connectors: people with the mission of finding connections between disparate fields and framing different theories in common languages. That is my goal, and the only unifying theme I can detect among my often random-seeming interests. Of course, CSTheorists are not the only ones well prepared to take on the job of connectors. Jacob G. Scott (@CancerConnector on Twitter, from whom I borrow ‘connector’) suggests that MD-trained scientists are also perfect as connectors:

I completely agree with Jacob’s emphasis on creativity and on seeing complex problems as a whole. Usually, I would be reluctant to accept the suggestion of connectors without formal mathematical training, but I am starting to see that it is not essential for a universalist. My only experience with MD-trained scientists was stimulating conversations with Gary An, a surgeon at the University of Chicago Medical Center and organizer of the Swarmfest2012 conference on complex adaptive systems. He brought a pragmatic view to computational modeling, and (more importantly) to the purpose of models, that I would never have found on my own. For me, computational models had been an exercise in formalism and a tool to build intuition on questions I could not tackle analytically. Gary stressed the importance of models as a means of communication, as a bridge between disciplines. He showed me that modelers are connectors.

As most scientists become more and more specialized, I think it is essential to have generalists and connectors to keep science unified. We cannot hope for a modern Poincaré, but we can aspire to theorists that specialize in drawing connections between fields and driving a cross-fertilization of tools. For me, following Turing’s footsteps on the intuitive road of theoretical computer science and the algorithmic lens is the most satisfying way, but it is not the only one. Jacob shows that translating between distant disciplines like math/physics and biology/medicine, and engaging their researchers, can drive progress. Gary shows that pragmatism and viewing modeling as a means of communication are equally important. In some way, they (and many like them) act as 21st century Poincarés by bringing the intuition of mathematics and computer modeling to bear on the engineering of modern medicine.