Critical thinking and philosophy

Regular readers of TheEGG might have noticed that I have a pretty positive disposition toward philosophy. I wouldn’t call myself a philosopher — at least not a professional one, since I don’t think I get paid to sit around loving wisdom — but I definitely greatly enjoy philosophy and think it is a worthwhile pursuit. As a mathematician or theoretical computer scientist, I am also a fan of rational argument. You could even say I am a proponent of critical thinking. At the very least, this blog offers a lot of — sometimes too much — criticism; although that isn’t what is really meant by ‘critical thinking’, I’ll get back to that can of worms later. As such, you might expect that I would be supportive of Olly’s (of Philosophy Tube) recent episode on ‘Why We Need Philosophy’. I’ll let you watch:

I am in complete support of defending philosophy, but I am less keen on limiting or boxing philosophy into a simple category. I think the biggest issue with Olly’s defense is that he equates philosophy with critical thinking. I don’t think this identity is justified; it is false in both directions. There is philosophy that doesn’t fall under critical thinking, and there is critical thinking that is not philosophy. As such, I wanted to unpack some of these concepts with a series of annotated links.

Read more of this post

Stem cells, branching processes and stochasticity in cancer

When you were born, you probably had 270 bones in your body. Unless you’ve experienced some very drastic traumas, and assuming that you are fully grown, you probably have 206 bones now. Much like the number and types of internal organs, we can call this question of science solved. Unfortunately, it isn’t always helpful to think of you as made of bones and other organs. For medical purposes, it is often better to think of you as made of cells. It becomes natural to ask how many cells you are made of, and then maybe classify them into cell types. Of course, you wouldn’t expect this number to be as static as the number of bones or organs, since individual cells constantly die and are replaced, but you’d expect the approximate number to be relatively constant. This number is surprisingly difficult to measure, and our best current estimate is around 3.72 \times 10^{13} (Bianconi et al., 2013).

Both 206 and 3.72 \times 10^{13} are just numbers, but to a modeler they suggest a very important distinction over which tools we should use. Suppose that my bones and cells randomly popped in and out of existence with about equal probability (thus keeping the average number constant). In that case I wouldn’t expect to see exactly 206 bones, or exactly 37,200,000,000,000 cells; a quick back-of-the-envelope calculation suggests somewhere between 191 and 220 bones, and between 37,199,994,000,000 and 37,200,006,000,000 cells. Unsurprisingly, the fluctuation in the number of bones is only around 29 bones, while the number of cells fluctuates by around 12 million. In relative terms, however, that is a 14% spread for the bones and only a 0.00003% spread in the cell count. This means that in terms of dynamic models, I would be perfectly happy to model the cell population by its average, since the stochastic fluctuations are irrelevant, but — for the bones — a 14% fluctuation is noticeable, so I would need to worry about the individual bones (and we do; we even give them names!) instead of approximating them by an average. The small-number regime is a case where results can depend heavily on whether one picks a discrete or continuous model.
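For the curious, the back-of-the-envelope calculation fits in a few lines of Python. This is a sketch assuming Poisson-like noise, where the typical fluctuation around a mean count of N is \sqrt{N}:

```python
from math import sqrt

# Typical deviation around a mean count n, assuming Poisson-like noise:
for name, n in [("bones", 206), ("cells", 3.72e13)]:
    sigma = sqrt(n)
    print(f"{name}: {n:.4g} +/- {sigma:.3g} "
          f"(relative spread ~ {2 * sigma / n:.1e})")
```

This prints a spread of roughly 14 around 206 bones (about 14% relative) but only about 6 million around 3.72 \times 10^{13} cells (about 3 \times 10^{-7} relative, or 0.00003%).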

In ecology, evolution, and cancer, we are often dealing with huge populations closer to the number of cells than the number of bones. In this case, it is common practice to keep track of the averages and not worry too much about the stochastic fluctuations. A standard example of this is replicator dynamics — a deterministic differential equation governing the dynamics of average population sizes. However, this is not always a reasonable assumption. Some special cell types, like stem cells, are often found in very low quantities in any given tissue but are of central importance to cancer progression. When we are modeling such low quantities — just like in the cartoon example of disappearing bones — it becomes necessary to explicitly track the stochastic effects, although we don’t have to name each stem cell. In these cases we switch to modeling techniques like branching processes; a minimal sketch follows below. I want to use this post to highlight the many great examples of branching-process-based models that we saw at the MBI Workshop on the Ecology and Evolution of Cancer.
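To give a flavour of the technique before diving into the talks, here is a minimal Galton–Watson branching process in Python. This is a toy sketch of my own, not one of the workshop models; the probabilities p_divide and p_die are made-up values chosen so that each cell leaves one offspring on average:

```python
import random

def branching_step(n, p_divide=0.3, p_die=0.3):
    """One generation: each of n cells independently dies (0 offspring),
    divides (2 offspring), or persists (1 offspring)."""
    offspring = 0
    for _ in range(n):
        r = random.random()
        if r < p_die:
            continue          # death: contributes 0 cells
        elif r < p_die + p_divide:
            offspring += 2    # division: contributes 2 cells
        else:
            offspring += 1    # persistence: contributes 1 cell
    return offspring

# Five runs, each starting from a handful of "stem cells":
for trial in range(5):
    n = 5
    for generation in range(30):
        n = branching_step(n)
        if n == 0:
            break
    print(f"trial {trial}: {n} cells after {generation + 1} generations")
```

Since the expected number of offspring per cell is exactly one, the average population stays flat, yet individual runs regularly go extinct or grow large. That is precisely the small-number behaviour that a deterministic equation for the average, like replicator dynamics, cannot capture.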
Read more of this post

Personification and pseudoscience

If you study the philosophy of science — and sometimes even if you just study science — then at some point you might get the urge to figure out what you mean when you say ‘science’. Can you distinguish the scientific from the non-scientific or the pseudoscientific? If you can, then how? Does science have a defining method? If it does, then does following the steps of that method guarantee science, or are some cases just rhetorical performances? If you cannot distinguish science and pseudoscience, then why do some fields seem clearly scientific and others clearly non-scientific? If you believe that these questions have simple answers then I would wager that you have not thought carefully enough about them.

Karl Popper did think very carefully about these questions, and in the process introduced the problem of demarcation:

The problem of finding a criterion which would enable us to distinguish between the empirical sciences on the one hand, and mathematics and logic as well as ‘metaphysical’ systems on the other

Popper believed that his falsification criterion solved (or was an important step toward solving) this problem. Unfortunately, due to Popper’s discussion of Freud and Marx as examples of the non-scientific, many now misread the demarcation problem as a quest to separate epistemologically justifiable science from epistemologically unjustifiable pseudoscience, with a moral judgement of Good associated with the former and Bad with the latter. Toward this goal, I don’t think falsifiability makes much headway. In this (mis)reading, falsifiability excludes too many reasonable perspectives, like mathematics or even non-mathematical beliefs like Gandy’s variant of the Church-Turing thesis, while including much of in-principle-testable pseudoscience. Hence — on this version of the demarcation problem — I would side with Feyerabend and argue that a clear separation between science and pseudoscience is impossible.

However, this does not mean that I don’t find certain traditions of thought to be pseudoscientific. In fact, I think there is a lot to be learned from thinking about features of pseudoscience. A particular question that struck me as interesting was: What makes people easily subscribe to pseudoscientific theories? Why are some kinds of pseudoscience so much easier or more tempting to believe than science? I think that answering these questions can teach us something not only about culture and the human mind, but also about how to do good science. Here, I will repost (with some expansions) my answer to this question.
Read more of this post

Ecology of cancer: mimicry, eco-engineers, morphostats, and nutrition

One of my favorite parts of mathematical modeling is the opportunities it provides to carefully explore metaphors and analogies between disciplines. The connection most carefully explored at the MBI Workshop on the Ecology and Evolution of Cancer was, as you can guess from the name, between ecology and oncology. Looking at cancer from the perspective of evolutionary ecology can offer us several insights that the standard hallmarks of cancer approach (Hanahan & Weinberg, 2000) hides. I will start with some definitions for this view, discuss ecological concepts like mimicry and ecological engineers in the context of cancer, unify these concepts through the idea of morphostatic maintenance of tissue microarchitecture, and finish with the practical importance of diet to cancer.
Read more of this post

Models and metaphors we live by

George Lakoff and Mark Johnson’s Metaphors we live by is a classic that has had a huge influence on parts of linguistics and cognitive science, and some influence — although less so, in my opinion — on philosophy. It is structured around the thought that “[m]etaphor is one of our most important tools for trying to comprehend partially what cannot be comprehended totally”.

The authors spend the first part of the book giving a very convincing argument that “even our deepest and most abiding concepts — time, events, causation, morality, and mind itself — are understood and reasoned about via multiple metaphors.” These conceptual metaphors structure our reality, and are fundamentally grounded in our sensory-motor experience. For them, metaphors are not just aspects of speech but windows into our mind and conceptual system:

Our ordinary conceptual system, in terms of which we both think and act, is fundamentally metaphorical in nature. … Our concepts structure what we perceive, how we get around the world, and how we relate to others. Our conceptual system thus plays a central role in defining our everyday realities. … Since communication is based on the same conceptual system that we use in thinking and acting, language is an important source of evidence for what that system is like.

I found the book incredibly insightful, and in large agreement with many of my recent thoughts on the philosophies of mind and science. After taking a few flights to finish the book, I wanted to take a moment to provide a mini-review. The hope is to convince you to make the time to read this short volume.
Read more of this post

Limits of prediction: stochasticity, chaos, and computation

Some of my favorite conversations are about prediction and its limits. For some, this is purely a practical topic, but for me it is a deeply philosophical discussion. Understanding the limits of prediction can inform the philosophies of science and mind, and even questions of free will. As such, I wanted to share with you a World Science Festival video that THEREALDLB recently posted on /r/math. It is a selected five-minute clip called “What Can’t We Predict With Math?” from a longer one-and-a-half-hour discussion called “Your Life By The Numbers: ‘Go Figure'” between Steven Strogatz, Seth Lloyd, Andrew Lo, and James Fowler. My post can be read without watching the panel discussion or even the clip, but watching the clip does make my writing slightly less incoherent.

I want to give you a summary of the clip that focuses on some specific points, bring in some of the discussion from elsewhere in the panel, and add some of my own commentary. My intention is to be relevant to metamodeling and the philosophy of science, but I will touch on the philosophy of mind and free will in the last two paragraphs. This is not meant as a comprehensive overview of the limits of prediction, but just some points to get you as excited as I am about this conversation.
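To make the chaos part of the conversation concrete, consider the logistic map x_{n+1} = r x_n (1 - x_n). Here is a toy sketch in Python (my own example, not one from the panel) of how a fully deterministic rule can still defeat prediction:

```python
def logistic_orbit(x0, r=4.0, steps=50):
    """Iterate the logistic map x -> r*x*(1 - x) starting from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Two initial conditions that agree to six decimal places:
a = logistic_orbit(0.300000)
b = logistic_orbit(0.300001)

for n in (0, 10, 20, 30):
    print(f"step {n:2d}: |a - b| = {abs(a[n] - b[n]):.2e}")
```

At r = 4 the initial 10^{-6} discrepancy roughly doubles every step and reaches order one by about step twenty, so predicting further ahead demands exponentially more precise knowledge of the starting point.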

Read more of this post

Should we be astonished by the Principle of “Least” Action?

As one goes through more advanced expositions of quantum physics, the concept of action is gradually given more importance, with it being considered a fundamental piece in some introductions to Quantum Field Theory (Zee, 2003) through the use of the path integral approach. The basic idea behind using the action is to assign a number to each possible state of a system. The function that does so is called the Lagrangian, and it encodes the physics of the system (i.e. how different parts of the system affect each other). Then, to a trajectory of the system we associate the integral of this number over all the states in the trajectory. This contrasts with the classical Newtonian approach, where we study a system by specifying all the possible ways in which parts of the system exert forces on each other (i.e. affect each other’s acceleration). Using the action usually results in nicer mathematics, while I’d argue that the Newtonian approach requires less training to feel intuitive.
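To make this concrete, the usual textbook statement runs as follows (the standard definitions from classical mechanics, nothing specific to Zee’s treatment):

```latex
\begin{align*}
  S[q] &= \int_{t_1}^{t_2} L(q, \dot{q}, t)\, dt
    && \text{the action of a trajectory } q(t) \\
  \delta S = 0 \;&\Rightarrow\;
    \frac{d}{dt}\frac{\partial L}{\partial \dot{q}}
      - \frac{\partial L}{\partial q} = 0
    && \text{the Euler--Lagrange equation} \\
  L &= \tfrac{1}{2} m \dot{q}^2 - V(q)
    \;\Rightarrow\; m \ddot{q} = -V'(q)
    && \text{recovering Newton's second law}
\end{align*}
```

Note that \delta S = 0 only demands a stationary point of the action, not a true minimum, which is one reason for the scare quotes around ‘least’ in the title.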

In many of the expositions of the use of action in physics (see e.g. this one), I perceive an attempt at transmitting wonder about the world being such that it minimizes a function along its trajectory. This has indeed been the case historically, with Maupertuis said to have considered action minimization (and the corresponding unification of minimization principles between optics and mechanics) to be the most definite proof available to him of the existence of God. However, in the spirit of this stack exchange question, I never really understood why such wonder should be felt, even setting aside the fact that it assumes that our equations “are” the world, a perspective that Artem has criticized at length before.
Read more of this post

Philosophy of Science and an analytic index for Feyerabend

Throughout my formal education, the history of science has been presented as a series of anecdotes and asides. The philosophy of science, encountered even less, was passed down not as a rich debate and on-going inquiry but as a set of rules best followed. To paraphrase Gregory Radick, this presentation is mere propaganda; it is akin to learning the history of a nation from its travel brochures. Thankfully, my schooling did not completely derail my learning, and I’ve had an opportunity to make up for some of the lost time since.

One of the philosophers of science that I’ve enjoyed reading the most has been Paul Feyerabend. His provocative writing in Against Method and his advocacy of what others have called epistemological anarchism — the rejection of any fixed rules of scientific methodology — have been influential on my conception of the role of theorists. Although I’ve been meaning to write down my thoughts on Feyerabend for a while now, I doubt that I will bring myself to do it anytime soon. In the meantime, dear reader, I will leave you with an analytic index consisting of links to the thoughts of others (interspersed with my typical self-links) that discuss Feyerabend, Galileo (his preferred historic case study), and consistency in science.
Read more of this post

Experimental and comparative oncology: zebrafish, dogs, elephants

One of the exciting things about mathematical oncology is that thinking about cancer often forces me to leave my comfortable armchair and look at some actual data. No matter how much I advocate for the merits of heuristic modeling, when it comes to cancer, data-agnostic models take a back seat to data-rich modeling. This close relationship between theory and experiment is of great importance to the health of a discipline, and the MBI Workshop on the Ecology and Evolution of Cancer highlights the health of mathematical oncology: mathematicians are sitting side-by-side with clinicians, biologists with computer scientists, and physicists next to ecologists. This means that the most novel talks for me have been the ones highlighting the great variety of experiments being done and how they inform theory. In this post I want to highlight some of these talks, with a particular emphasis on using the study of cancer in non-humans to inform human medicine.
Read more of this post

Colon cancer, mathematical time travel, and questioning the sequential mutation model

On Saturday, I arrived in Columbus, Ohio for the MBI Workshop on the Ecology and Evolution of Cancer. Today, our second day started. The meeting is an exciting combination of biology-minded mathematicians and computer scientists, and math-friendly biologists and clinicians. As is typical of workshops, the speakers of the first day had an agenda of setting the scope. In this case, the common theme was to question and refine the established model, as embodied by the hallmarks of cancer outlined by Hanahan & Weinberg (2000). For an accessible overview of these hallmarks, I recommend Buddhini Samarasinghe’s series of posts. I won’t provide a full overview of the standard model, but only focus on the aspects at issue for the workshop participants. In the case of the first two speakers, the standard picture in question was the sequential mutation model. In the textbook model of cancer, a tumour acquires the hallmark mutations one at a time, with each subsequent mutation sweeping to fixation; a toy simulation of this picture is sketched below. Trevor Graham and Darryl Shibata presented their work on colon cancer, emphasizing tumour heterogeneity and suggesting that we might have to rewrite the sequential mutation page of our Cancer 101 textbooks to better discuss the punctuated model.
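As a cartoon of the sequential picture (a sketch of my own with made-up parameters N, mu, and s, not the speakers’ analysis), here is a Wright–Fisher-style simulation in Python: each cell carries some number of hallmark mutations, fitness grows with that number, and when new mutations are rare (N \mu well below one) they tend to fix one at a time:

```python
import random

def wright_fisher(N=1000, mu=1e-4, s=0.1, generations=3000, seed=1):
    """Fixed-size population; each cell carries k hallmark mutations
    and reproduces with probability proportional to (1 + s)^k."""
    random.seed(seed)
    pop = [0] * N  # mutation count of each cell
    for g in range(generations):
        weights = [(1 + s) ** k for k in pop]
        pop = random.choices(pop, weights=weights, k=N)   # selection + drift
        pop = [k + (random.random() < mu) for k in pop]   # rare new mutations
        if g % 500 == 0:
            print(f"gen {g:4d}: mutation counts present = {sorted(set(pop))}")
    return pop

wright_fisher()
```

Raising mu so that several mutants arise each generation lets distinct clones coexist and compete instead of sweeping cleanly, which is one route to the kind of heterogeneity that Graham and Shibata emphasized.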
Read more of this post
