Limits of prediction: stochasticity, chaos, and computation

Some of my favorite conversations are about prediction and its limits. For some, this is a purely practical topic, but for me it is a deeply philosophical discussion. Understanding the limits of prediction can inform the philosophies of science and mind, and even questions of free will. As such, I wanted to share with you a World Science Festival video that THEREALDLB recently posted on /r/math. It is a selected five-minute clip called “What Can’t We Predict With Math?” from a longer one-and-a-half-hour discussion called “Your Life By The Numbers: ‘Go Figure'” among Steven Strogatz, Seth Lloyd, Andrew Lo, and James Fowler. My post can be read without watching the panel discussion or even the clip, but watching the clip does make my writing slightly less incoherent.

I want to give you a summary of the clip that focuses on some specific points, bring in some of the discussion from elsewhere in the panel, and add some of my own commentary. My intention is to be relevant to metamodeling and the philosophy of science, but I will touch on the philosophy of mind and free will in the last two paragraphs. This is not meant as a comprehensive overview of the limits of prediction, just some points to get you as excited as I am about this conversation.
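
To make the chaos part of the title concrete before we start, here is a minimal Python sketch (my own illustration, not anything from the panel) of why even a perfectly deterministic system can defeat prediction: two logistic-map trajectories that start 10^-10 apart become completely uncorrelated within a few dozen steps.

```python
# A minimal illustration (mine, not the panel's) of deterministic chaos as a
# limit on prediction: the logistic map x -> r*x*(1 - x) at r = 4 is fully
# deterministic, yet any error in the initial condition grows exponentially.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)  # perturb the tenth decimal place

for t in (0, 10, 20, 30, 40, 50):
    print(f"t = {t:2d}: |a - b| = {abs(a[t] - b[t]):.2e}")
# The gap roughly doubles each step (the Lyapunov exponent is ln 2 at r = 4),
# so about 35 steps are enough to erase ten decimal digits of knowledge.
```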

Read more of this post

Philosophy of Science and an analytic index for Feyerabend

Throughout my formal education, the history of science has been presented as a series of anecdotes and asides. The philosophy of science, encountered even less often, was passed down not as a rich debate and ongoing inquiry but as a set of rules that had best be followed. To paraphrase Gregory Radick, this presentation is mere propaganda; it is akin to learning the history of a nation from its travel brochures. Thankfully, my schooling did not completely derail my learning, and I’ve had an opportunity to make up for some of the lost time since.

One of the philosophers of science that I’ve enjoyed reading the most has been Paul Feyerabend. His provocative writing in Against Method and his advocacy of what others have called epistemological anarchism — the rejection of any rules of scientific methodology — have been influential to my conception of the role of theorists. Although I’ve been meaning to write down my thoughts on Feyerabend for a while now, I doubt that I will bring myself to do it anytime soon. In the meantime, dear reader, I will leave you with an analytic index consisting of links to the thoughts of others (interspersed with my typical self-links) that discuss Feyerabend, Galileo (his preferred historical case study), and consistency in science.
Read more of this post

Experimental and comparative oncology: zebrafish, dogs, elephants

One of the exciting things about mathematical oncology is that thinking about cancer often forces me to leave my comfortable armchair and look at some actual data. No matter how much I advocate for the merits of heuristic modeling, when it comes to cancer, data-agnostic models take a back seat to data-rich modeling. This close relationship between theory and experiment is of great importance to the health of a discipline, and the MBI Workshop on the Ecology and Evolution of Cancer highlights the health of mathematical oncology: mathematicians are sitting side-by-side with clinicians, biologists with computer scientists, and physicists next to ecologists. This means that the most novel talks for me have been the ones highlighting the great variety of experiments being done and how they inform theory. In this post I want to highlight some of these talks, with a particular emphasis on using the study of cancer in non-humans to inform human medicine.
Read more of this post

Colon cancer, mathematical time travel, and questioning the sequential mutation model

On Saturday, I arrived in Columbus, Ohio for the MBI Workshop on the Ecology and Evolution of Cancer. Today marked our second day. The meeting is an exciting combination of biology-minded mathematicians and computer scientists, and math-friendly biologists and clinicians. As is typical of workshops, the speakers of the first day had an agenda of setting the scope. In this case, the common theme was to question and refine the established model embodied by the hallmarks of cancer outlined by Hanahan & Weinberg (2000). For an accessible overview of these hallmarks, I recommend Buddhini Samarasinghe’s series of posts. I won’t provide a full overview of the standard model, but only focus on the aspects at issue for the workshop participants. For the first two speakers, the standard picture in question was the sequential mutation model. In the textbook model of cancer, a tumour acquires the hallmark mutations one at a time, with each subsequent mutation sweeping to fixation. Trevor Graham and Darryl Shibata presented their work on colon cancer, emphasizing tumour heterogeneity and suggesting that we might have to rewrite the sequential mutation page of our Cancer 101 textbooks to better discuss the punctuated model.
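
As a concrete picture of what the sequential model asserts, here is a toy Moran-process sketch in Python (my own illustration, not the speakers' models): each hallmark mutation confers a fitness advantage and, at low mutation rates, tends to sweep before the next one arises, so the simulated tumour stays nearly homogeneous between sweeps. The heterogeneity that Graham and Shibata report is exactly what this picture struggles to produce.

```python
import random

# Toy sketch (mine, not Graham's or Shibata's model) of the textbook
# sequential mutation picture. Cells carry k of the hallmark mutations,
# fitness is (1 + s)**k, and the mutation rate is low enough that each
# new mutation tends to sweep to fixation before the next one appears.

def moran_sequential(pop_size=200, hallmarks=3, s=0.2, mu=1e-3, seed=42):
    random.seed(seed)
    pop = [0] * pop_size  # everyone starts with zero hallmark mutations
    step = 0
    while min(pop) < hallmarks:  # run until the last mutation fixes
        step += 1
        # Moran step: fitness-weighted birth replaces a uniform random death
        parent = random.choices(pop, weights=[(1 + s) ** k for k in pop])[0]
        child = parent
        if parent < hallmarks and random.random() < mu:
            child += 1  # the offspring acquires the next hallmark mutation
        pop[random.randrange(pop_size)] = child
        if step % 10_000 == 0:
            print(f"step {step}: mean mutations {sum(pop) / pop_size:.2f}, "
                  f"distinct clones {len(set(pop))}")
    return step

moran_sequential()  # 'distinct clones' hovers near 1 or 2: little heterogeneity
```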
Read more of this post

Defining empathy, sympathy, and compassion

When discussing the evolution of cooperation, questions about empathy, sympathy, and compassion are never far from mind. In my computational work, I used to operationalize away these emotive concepts and replace them with a simple number like the proportion of cooperative interactions. This is all well and good if I want to confine myself to a behaviorist perspective, but my colleagues and I have been trying to move to a richer cognitive science viewpoint on cooperation. This has confronted me with the need to think seriously about empathy, sympathy, and compassion. In particular, Paul Bloom‘s article against empathy, and a Reddit discussion on the usefulness of empathy as a word, have reminded me that my understanding of the topic is not very clear or critical. As such, I was hoping to use this opportunity to write down definitions for these three concepts and, at the end of the post, sketch a brief idea of how to approach some of them with evolutionary modeling. My hope is that you, dear reader, will point out any confusion or disagreement that lingers.
Read more of this post

Transcendental idealism and Post’s variant of the Church-Turing thesis

One of the exciting things in reading philosophy, its history in particular, is experiencing the tension between different schools of thought. This excitement turns to beauty if a clear synthesis emerges to reconcile the conflicting ideas. In the middle to late 18th century, as the Age of Enlightenment was giving way to the Romantic era, the tension was between rationalism and empiricism, and the synthesis came from Immanuel Kant. His thought went on to influence or directly shape much of modern philosophy, and if you browse the table of contents of philosophical journals today, you will regularly encounter hermeneutic titles like “Kant on <semi-obscure modern topic>”. In this regard, my post is in keeping with modern practice, because it could very well have been titled “Kant on computability”.

As stressed before, I think that it is productive to look at important concepts from multiple philosophical perspectives. The exercise can provide us with increased insight into both the school of thought that is our eyes and the concept that we behold. In this case, the concept is the Church-Turing thesis, which states that anything that is computable is computable by a Turing machine. The perspective will be that of (a kind of) cognitivism — the view that thought consists of the algorithmic manipulation of mental states. This perspective can often be read directly into Turing, although Copeland & Shagrir (2013) better described him as a pragmatic noncognitivist. Hence, I prefer to attribute this view to Emil Post. Also, it would simply be too much of a mouthful to call it the Post-Turing variant of the Church-Turing thesis.
Read more of this post

Weapons of math destruction and the ethics of Big Data

I don’t know about you, dear reader, but during my formal education I was never taught ethics or social consciousness. I even remember sitting around with my engineering friends who had to take a class in ethics and laughing at the irrelevance and futility of it. To this day, I have a strained relationship with ethics as a branch of philosophy. However, despite this villainous background, I ended up spending a lot of time thinking about cooperation, empathy, and social justice. With time and experience, I started to climb out of the Dunning-Kruger hole and realize how little I understood about being a useful member of society.

One of the important lessons I’ve learnt is that models and algorithms are not neutral; they come with important ethical considerations that we as computer scientists, physicists, and mathematicians are often ill-equipped to see. For exploring the consequences of this in the context of the ever-present ‘big data’, Cathy O’Neil’s blog and alter ego mathbabe have been extremely important. This morning I had the opportunity to meet Cathy for coffee near her secret lair on the edge of Lower Manhattan. From this writing lair, she is working on her new book Weapons of Math Destruction and “arguing that mathematical modeling has become a pervasive and destructive force in society—in finance, education, medicine, politics, and the workplace—and showing how current models exacerbate inequality and endanger democracy and how we might rein them in”.

I can’t wait to read it!

In case you are impatient like me, I wanted to use this post to share a selection of Cathy’s articles along with my brief summaries for your browsing enjoyment:
Read more of this post

Falsifiability and Gandy’s variant of the Church-Turing thesis

In 1936, two years after Karl Popper published the first German version of The Logic of Scientific Discovery and introduced falsifiability, Alonzo Church, Alan Turing, and Emil Post each published independent papers on the Entscheidungsproblem, introducing the lambda calculus, Turing machines, and Post-Turing machines as mathematical models of computation. The years after saw many more models, all of which were shown to be equivalent to each other in what they could compute. This was summarized in the Church-Turing thesis: anything that is computable is computable by a Turing machine. It is an almost universally accepted, but also incredibly vague, statement. Of course, such an important thesis has developed many variants, and exploring or contrasting their formulations can be a very insightful way to understand and contrast different philosophies.
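
For readers who have never seen one of these models up close, here is a minimal Python sketch of a Turing machine as a finite transition table acting on an unbounded tape. The machine below is my own toy example rather than anything from the 1936 papers; it increments a binary number in place.

```python
# A minimal sketch of the object the thesis is about: a Turing machine as a
# finite table of (state, symbol) -> (new state, written symbol, head move)
# rules over an unbounded tape. This toy machine increments a binary number.

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=10_000):
    """Run the machine until it halts (or max_steps) and return the tape."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, tape[head], move = rules[(state, symbol)]
        head += move
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape.get(i, blank) for i in cells).strip(blank)

# Increment a binary number: walk to the rightmost bit, then carry leftward.
rules = {
    ("start", "0"): ("start", "0", +1),
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("carry", "_", -1),
    ("carry", "1"): ("carry", "0", -1),
    ("carry", "0"): ("halt", "1", 0),
    ("carry", "_"): ("halt", "1", 0),
}

print(run_turing_machine(rules, "1011"))  # 11 + 1 = 12, prints '1100'
```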

I believe that the original and most foundational version of the thesis is what I called Kleene’s purely mathematical formulation. Delving into this variant allowed us to explore the philosophy of mathematics; Platonism; and the purpose, power, and limitations of proof. However, given the popularity of physicalism and the authority of science, I doubt that Kleene’s is the most popular variant. Instead, when people think of the Church-Turing thesis, they often think of what is computable in the world around them. I like to associate this variant with Turing’s long-time friend and student — Robin Gandy. I want to explore Gandy’s physical variant of the Church-Turing thesis to better understand the philosophy of science, theory-based conceptions, and the limits of falsifiability. In particular, I want to address what seems to me like the common misconception that the Church-Turing thesis is falsifiable.
Read more of this post

A Theorist’s Apology

Almost four months have snuck by in silence, a drastic change from the weekly updates earlier in the year. However, dear reader, I have not abandoned TheEGG; I have just fallen off the metaphorical horse and it has taken some time to get back on my feet. While I was in the mud, I thought about what it is that I do and how to label it. I decided the best label is “theorist”: not a critical theorist, nor a theoretical cognitive scientist, nor a theoretical biologist, not even a theoretical computer scientist. Just a theorist. No domain necessary.

The problem with a non-standard label is that it requires justification, hence this post. I want to use the next two thousand words to return to writing and help unify my vision for TheEGG. In the process, I will comment on the relevance of philosophy to science, and the theorist’s integration of scientific domains with mathematics and the philosophy of science. The post will be a bit more personal and ramble more than usual, and I am sorry for that. I need this moment to recall how to ride the blogging horse.
Read more of this post

Useful delusions, interface theory of perception, and religion

As you can guess from the name, evolutionary game theory (EGT) traces its roots to economics and evolutionary biology. Both of the progenitor fields assume it is impossible, or unreasonably difficult, to observe the internal representations, beliefs, and preferences of the agents they model, and thus adopt a largely behaviorist view. My colleagues and I, however, are interested in looking at learning from the cognitive science tradition. In particular, we are interested in the interaction of evolution and learning. This interaction is not in and of itself innovative; it has been a concern for biologists since Baldwin (1896, 1902), and Smead & Zollman (2009; Smead 2012) even brought the interaction into an EGT framework and showed that rational learning is not necessarily a ‘fixed-point of Darwinian evolution’. But all the previous work that I’ve encountered at this interface has made a simple implicit assumption, and I wanted to question it.

It is relatively clear that evolution acts objectively and without regard for individual agents’ subjective experience, except in so far as that experience determines behavior. On the other hand, learning, from the cognitive science perspective at least, acts on the subjective experience of the agent. There is an inherent tension here between the objective and subjective perspectives that becomes most obvious in the social learning setting, but is still present for individual learners. Most previous work has sidestepped this issue by either not delving into the internal mechanism of how agents decide to act — something that is incompatible with the cognitive science perspective — or assuming that subjective representations are true to objective reality — something for which we have no a priori justification.

A couple of years ago, I decided to look at this question directly by developing the objective-subjective rationality model. Marcel and I fleshed out the model by adding a mechanism for simple Bayesian learning; this came with the extra perk of allowing us to adopt Masel’s (2007) approach of looking at quasi-magical thinking as an inferential bias. To round out the team with some cognitive science expertise, we asked Tom to join. A few days ago, at an unhurried pace and after over 15 relevant blog posts, we released our first paper on the topic (Kaznatcheev, Montrey & Shultz, 2014) along with its MATLAB code.
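
For a concrete sense of what simple Bayesian learning can look like in this setting, here is a minimal Beta-Bernoulli sketch in Python. It is an illustration under my own simplifying assumptions, not the model from the paper or its released MATLAB code: the agent maintains a Beta posterior over how likely a random partner is to cooperate and updates it after each interaction.

```python
# A hedged sketch of a simple Bayesian learner of the general kind discussed
# above (a Beta-Bernoulli conjugate update); the agents in the actual paper
# differ in their details. The agent tracks a belief about the probability
# that a random partner cooperates and updates it after every interaction.

class BayesianLearner:
    def __init__(self, alpha=1.0, beta=1.0):
        # Beta(alpha, beta) prior over the partner cooperation probability;
        # alpha = beta = 1 is the uniform (maximally uncertain) prior.
        self.alpha, self.beta = alpha, beta

    def believed_cooperation(self):
        """Posterior mean probability that the next partner cooperates."""
        return self.alpha / (self.alpha + self.beta)

    def observe(self, partner_cooperated):
        """Conjugate update: Beta prior + Bernoulli observation -> Beta."""
        if partner_cooperated:
            self.alpha += 1
        else:
            self.beta += 1

agent = BayesianLearner()
for coop in [True, True, False, True]:
    agent.observe(coop)
print(f"P(partner cooperates) ~ {agent.believed_cooperation():.2f}")  # 0.67
```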
Read more of this post
