Colour, psychophysics, and the scientific vs. manifest image of reality

Recently on TheEGG, I’ve been writing a lot about the differences between effective (or phenomenological) and reductive theories. Usually, I’ve confined this writing to evolutionary biology; especially the tension between effective and reductive theories in the biology of microscopic systems. For why this matters to evolutionary game theory, see Kaznatcheev (2017, 2018).

But I don’t think that microscopic systems are the funnest place to see this interplay. The funnest place to see this is in psychology.

In the context of psychology, you can add an extra philosophical twist. Instead of differentiating between reductive and effective theories, a more drastic distinction can be drawn: between the scientific and the manifest image of reality.

In this post, I want to briefly talk about how our modern theories of colour vision developed. This is a nice example of a good effective theory arriving before any reductive basis. And with that background in mind, I want to ask: are colours real? Maybe this will let me connect to some of my old work on interface theories of perception (see Kaznatcheev, Montrey, and Shultz, 2014).



Local maxima and the fallacy of jumping to fixed-points

An economist and a computer scientist are walking through the University of Chicago campus discussing the efficient markets hypothesis. The computer scientist spots something on the pavement and exclaims: “look at that $20 on the ground — seems we’ll be getting a free lunch today!”

The economist turns to her without looking down and replies: “Don’t be silly, that’s impossible. If there were a $20 bill there, then it would have been picked up already.”

This is the fallacy of jumping to fixed-points.

In this post I want to discuss both the importance and power of local maxima, and the dangers of simply assuming that our system is at a local maximum.

So before we dismiss the economist’s remark with laughter, let’s look at a more convincing discussion of local maxima that falls prey to the same fallacy. I’ll pick on one of my favourite YouTubers, THUNK:

In his video, THUNK discusses a wide range of local maxima and contrasts them with the intended global maximum (or more desired local maxima). He first considers a Roomba vacuum cleaner that is trying to maximize the area that it cleans but gets stuck in the local maximum of his chair’s legs. And then he goes on to discuss similar cases in physics, chemistry, evolution, psychology, and culture.

It is a wonderful set of examples and a nice illustration of the power of fixed-points.

But given that I write so much about algorithmic biology, let’s focus on his discussion of evolution. THUNK describes evolution as follows:

Evolution is a sort of hill-climbing algorithm. One that has identified local maxima of survival and replication.

This is a common characterization of evolution. And it seems much less silly than the economist passing up $20. But it is still an example of the fallacy of jumping to fixed-points.

My goal in this post is to convince you that THUNK’s description of evolution and the economist’s dismissal of the $20 bill rest on the same kind of argument. Sometimes this is a very useful argument, but without further elaboration it is just a starting point that can slide into a fallacy.
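To make the fixed-point picture concrete, here is a toy hill climber on a made-up one-dimensional fitness landscape (the landscape values and starting points are my own illustration, not THUNK’s). Greedy steps always terminate at some peak, but which peak you reach depends entirely on where you start:

```python
# A 1D fitness landscape: index = state, value = fitness.
# It has a small local peak at index 2 and the global peak at index 8.
landscape = [0, 1, 3, 2, 1, 0, 2, 5, 9, 4]

def hill_climb(start, landscape):
    """Greedy adaptive walk: step to a fitter neighbour until none exists."""
    x = start
    while True:
        neighbours = [n for n in (x - 1, x + 1) if 0 <= n < len(landscape)]
        best = max(neighbours, key=lambda n: landscape[n])
        if landscape[best] <= landscape[x]:
            return x  # a fixed point of the dynamics: a local maximum
        x = best

hill_climb(1, landscape)  # -> 2, stuck at the local peak (fitness 3)
hill_climb(6, landscape)  # -> 8, the global peak (fitness 9)
```

The walk from state 1 “identifies a local maximum” in exactly THUNK’s sense, yet concluding that the system therefore sits at the best available peak is the same move as the economist’s: assuming the fixed point has already been reached, and that it is the right one.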


Quick introduction: the algorithmic lens

Computers are a ubiquitous tool in modern research. We use them for everything from running simulation experiments and controlling physical experiments to analyzing and visualizing data. For almost any field ‘X’ there is probably a subfield of ‘computational X’ that uses and refines these computational tools to further research in X. This is very important work and I think it should be an integral part of all modern research.

But this is not the algorithmic lens.

In this post, I will try to give a very brief description (or maybe just a set of pointers) of the algorithmic lens, and of what we should imagine when we see an ‘algorithmic X’ subfield of some field X.


Danger of motivatiogenesis in interdisciplinary work

Randall Munroe has a nice old xkcd on citogenesis: the way factoids get created from bad checking of sources. You can see the comic at right. But let me summarize the process without direct reference to Wikipedia:

1. Somebody makes up a factoid and writes it somewhere without citation.
2. Another person then uses the factoid in passing in a more authoritative work, maybe citing the source from step 1, maybe not.
3. Further work inherits the citation from step 2 without verifying its source, further enhancing the legitimacy of the factoid.
4. The cycle repeats.

Soon, everybody knows this factoid and yet there is no ground truth to back it up. I’m sure we can all think of some popular examples. Social media certainly seems to make this sort of loop easier.

We see this occasionally in science, too. Back in 2012, Daniel Lemire provided a nice example of this in algorithms research. But science factoids usually do get debunked eventually, with new experiments or proofs. Mostly because it can be professionally rewarding to show that a commonly assumed factoid is actually false.

But there is a similar effect in science that seems to me even more common, and much harder to correct: motivatiogenesis.

Motivatiogenesis can be especially easy to fall into with interdisciplinary work. Especially if we don’t challenge ourselves to produce work that is an advance in both (and not just one) of the fields we’re bridging.


From perpetual motion machines to the Entscheidungsproblem

There seems to be a tendency to use the newest technology of the day as a metaphor for making sense of our hardest scientific questions. These metaphors are often vague and imprecise. They tend to oversimplify the scientific question and also misrepresent the technology. This isn’t useful.

But the pull of this metaphor also tends to transform the technical disciplines that analyze our newest tech into fundamental disciplines that analyze our universe. This was the case for many aspects of physics, and I think it is currently happening with aspects of theoretical computer science. This is very useful.

So, let’s go back in time to the birth of modern machines. To the water wheel and the steam engine.

I will briefly sketch how the science of steam engines developed and how it dealt with perpetual motion machines. From here, we can jump to the analytic engine and the modern computer. I’ll suggest that the development of computer science has followed a similar path — with the Entscheidungsproblem and its variants serving as our perpetual motion machine.

The science of steam engines successfully universalized itself into thermodynamics and statistical mechanics. These are seen as universal disciplines that are used to inform our understanding across the sciences. Similarly, I think that we need to universalize theoretical computer science and make its techniques more common throughout the sciences.


Cataloging a year of social blogging

With almost all of January behind us, I want to share the final summary of 2018. The first summary was on cancer and fitness landscapes; the second was on metamodeling. This third summary continues the philosophical trend of the second, but focuses on analyzing the roles of science, philosophy, and related concepts in society.

There were only 10 posts on the societal aspects of science and philosophy in 2018, with one of them not on this blog. But I think it is the most important topic to examine. And I wish that I had more patience and expertise to do these examinations.


Cataloging a year of metamodeling blogging

Last Saturday, with just minutes to spare in the first calendar week of 2019, I shared a linkdex of the ten (primarily) non-philosophical posts of 2018. It was focused on mathematical oncology and fitness landscapes. Now, as the second week runs into its final hours, it is time to turn to the more philosophical content.

Here are 18 posts from 2018 on metamodeling.

With a nice number like 18, I feel obliged to divide them into three categories of six articles each: (1) abstraction and reductive vs. effective theories; (2) metamodeling and philosophy of mathematical biology; and (3) the historical context for metamodeling.

You might expect the third category to be an afterthought. But it actually includes some of the most-read posts of 2018. So do skim the whole list, dear reader.

Next week, I’ll discuss my remaining ten posts of 2018. The posts focused on the interface of science and society.

Reductionism: to computer science from philosophy

A biologist and a mathematician walk together into their joint office to find the rubbish bin on top of the desk and on fire. The biologist rushes out, grabs a fire extinguisher, puts out the blaze, returns the bin to the floor and they both start their workday.

The next day, the same pair return to their office to find the rubbish bin in its correct place on the floor but again on fire. This time the mathematician springs to action. She takes the burning bin, puts it on the table, and starts her workday.

The biologist is confused.

Mathematician: “don’t worry, I’ve reduced the problem to a previously solved case.”

What’s the moral of the story? Clearly, it’s that reductionism is “[o]ne of the most used and abused terms in the philosophical lexicon.” At least it is abused enough for this sentiment to make the opening line of Ruse’s (2005) entry in the Oxford Companion to Philosophy.

All of this was not apparent to me.

I underestimated the extent of disagreement about the meaning of reductionism among people who are saying serious things. A disagreement that goes deeper than the opening joke or the distinction between ontological, epistemological, methodological, and theoretical reductionism. Given how much I’ve written about the relationship between reductive and effective theories, it seems important for me to sort out how people read ‘reductive’.

Let me paint the difference that I want to discuss in the broadest stroke with reference to the mind-body problem. Both of the examples I use are purely illustrative and I do not aim to endorse either. There is one sense in which reductionism uses reduce in the same way as ‘reduce, reuse, and recycle’: i.e. reduce = use less, eliminate. It is in this way that behaviourism is a reductive account of the mind, since it (aspires to) eliminate the need to refer to hidden mental, rather than just behavioural, states. There is a second sense in which reductionism uses reducere, or literally from Latin: to bring back. It is in this way that the mind can be reduced to the brain; i.e. discussions of the mind can be brought back to discussions of the brain, and the mind can be taken as fully dependent on the brain. I’ll expand more on this sense throughout the post.

In practice, the two senses above are often conflated and intertwined. For example, instead of saying that the mind is fully dependent on the brain, people will often say that the mind is nothing but the brain, or nothing over and above the brain. When doing this, they’re doing at least two different things. First, they’re claiming to have eliminated something. And second, they’re conflating reduce and reducere. This observation of conflation is similar to my claim that Galileo conflated idealization and abstraction in his book-keeping analogy.

And just like with my distinction between idealization and abstraction, to avoid confusion, the two senses of reductionism should be kept conceptually separate. As before, I’ll make this clear by looking at how theoretical computer science handles reductions. A study in algorithmic philosophy!

In my typical arrogance, I will rename the reduce-concept as eliminativism. And based on its agreement with theoretical computer science, I will keep the reducere-concept as reductionism.
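To preview how theoretical computer science uses the reducere sense, here is the textbook many-one reduction from CLIQUE to INDEPENDENT-SET. The brute-force solver and the example graph are my own illustrative sketch, not anything specific from the post; the point is only that one problem is brought back to another, with nothing eliminated:

```python
from itertools import combinations

def has_independent_set(vertices, edges, k):
    """Brute-force oracle: is there a set of k mutually non-adjacent vertices?"""
    return any(
        all(frozenset(pair) not in edges for pair in combinations(subset, 2))
        for subset in combinations(vertices, k)
    )

def has_clique(vertices, edges, k):
    """Solve CLIQUE by reduction: a k-clique in G is exactly a
    k-independent-set in the complement of G."""
    complement = {frozenset(p) for p in combinations(vertices, 2)} - set(edges)
    return has_independent_set(vertices, complement, k)

# A triangle on {1, 2, 3} plus an isolated vertex 4:
V = [1, 2, 3, 4]
E = {frozenset(p) for p in [(1, 2), (2, 3), (1, 3)]}
has_clique(V, E, 3)  # True: {1, 2, 3} is a 3-clique
has_clique(V, E, 4)  # False
```

Here CLIQUE is fully dependent on INDEPENDENT-SET: any solver for the latter solves the former. Nothing about cliques was eliminated or explained away; the question was brought back to a previously solved case, just like the mathematician with the burning bin.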

Blogging, open science and the public intellectual

For the last half-year I’ve been keeping TheEGG to a strict weekly schedule. I’ve been making sure that at least one post comes out during every calendar week. At times this has been taxing. And of course this causes both reflection on why I blog and an urge to dip into old unfinished posts. This week I deliver both. Below is a linkdex of 7 posts from 2016 and earlier (with a few recent comments added here and there) commenting on how scientists and public intellectuals (whatever that phrase might mean) should approach blogging.

If you, dear reader, are a fellow science blogger then you might have seen these articles before. But I hope you might find it useful to revisit and reflect on some of them. I certainly found it insightful. And if you have any important updates to add to these links then these updates are certainly encouraged.


Models as maps and maps as interfaces

One of my favorite conceptual metaphors from David Basanta is Mathematical Models as Maps. From this perspective, we as scientists are exploring an unknown realm of our particular domain of study. And we want to share with others what we’ve learned, maybe so that they can follow us. So we build a model — we draw a map. At first, we might not know how to identify prominent landmarks, or orient ourselves in our fields. The initial maps are vague sketches that are not useful to anybody but ourselves. Eventually, though, we identify landmarks — key experiments and procedures — and create more useful maps that others can start to use. We publish good, re-usable models.

In this post, I want to discuss the Models as Maps metaphor. In particular, I want to trace how it can take us from a naive realist view, to a critical realist view, to an interface theory view of models.
