From perpetual motion machines to the Entscheidungsproblem

There seems to be a tendency to use the newest technology of the day as a metaphor for making sense of our hardest scientific questions. These metaphors are often vague and imprecise. They tend to oversimplify the scientific question and also misrepresent the technology. This isn’t useful.

But the pull of this metaphor also tends to transform the technical disciplines that analyze our newest tech into fundamental disciplines that analyze our universe. This was the case for many aspects of physics, and I think it is currently happening with aspects of theoretical computer science. This is very useful.

So, let’s go back in time to the birth of modern machines. To the water wheel and the steam engine.

I will briefly sketch how the science of steam engines developed and how it dealt with perpetual motion machines. From here, we can jump to the analytic engine and the modern computer. I’ll suggest that the development of computer science has followed a similar path — with the Entscheidungsproblem and its variants serving as our perpetual motion machine.

The science of steam engines successfully universalized itself into thermodynamics and statistical mechanics. These are seen as universal disciplines that are used to inform our understanding across the sciences. Similarly, I think that we need to universalize theoretical computer science and make its techniques more common throughout the sciences.


Overcoming folk-physics: the case of projectile motion for Aristotle, John Philoponus, Ibn-Sina & Galileo

A few years ago, I wrote about the importance of pairing tools and problems in science. Not selecting the best tool for the job, but adjusting both your problem and your method to form the best pair. There, I made the distinction between endogenous and exogenous questions. A question is endogenous to a field if it is motivated by the existing tools developed for the field or slight extensions of them. A question is exogenous if it is motivated by frameworks or concerns external to the field. Usually, such an external motivating framework is accepted uncritically, with the most common culprits being the unarticulated ‘intuitive’ and ‘natural’ folk theories forced on us by our everyday experiences.

Sometimes a great amount of scientific or technological progress can be had from overcoming our reliance on a folk-theory. A classic example of this would be the development of inertia and momentum in physics. In this post, I want to sketch a genealogy of this transition to make the notion of endogenous vs exogenous questions a bit more precise.

How was the folk-physics of projectile motion abandoned?

In the process, I’ll get to touch briefly on two more recent threads on TheEGG: The elimination of the ontological division between artificial and natural motion (that was essential groundwork for Darwin’s later elimination of the division between artificial and natural processes) and the extraction and formalization of the tacit knowledge underlying a craft.

Should we be astonished by the Principle of “Least” Action?

As one goes through more advanced expositions of quantum physics, the concept of action is gradually given more importance, to the point of being treated as a fundamental piece in some introductions to Quantum Field Theory (Zee, 2003) through the use of the path integral approach. The basic idea behind using the action is to assign a number to each possible state of a system. The function that does so is named the Lagrangian function, and it encodes the physics of the system (i.e. how different parts of the system affect each other). Then, to a trajectory of a system we associate the integral of this number over all the states in the trajectory. This contrasts with the classical Newtonian approach, where we study a system by specifying all the possible ways in which parts of the system exert forces on each other (i.e. affect each other’s acceleration). Using the action usually results in nicer mathematics, while I’d argue that the Newtonian approach requires less training to feel intuitive.
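To make this concrete, here is the standard textbook statement (my addition, not tied to any particular exposition above): the action $S$ assigns to each trajectory $q(t)$ the time integral of the Lagrangian, and requiring $S$ to be stationary under small perturbations of the trajectory yields the Euler-Lagrange equations of motion.

```latex
S[q] = \int_{t_1}^{t_2} L(q, \dot{q}, t)\,dt,
\qquad
\delta S = 0
\;\Longrightarrow\;
\frac{d}{dt}\frac{\partial L}{\partial \dot{q}}
- \frac{\partial L}{\partial q} = 0.
```

For a single particle with $L = \tfrac{1}{2}m\dot{q}^2 - V(q)$, the Euler-Lagrange equation reduces to $m\ddot{q} = -V'(q)$, which is just Newton’s second law; this is the sense in which the two approaches encode the same physics.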

In many of the expositions of the use of action in physics (see e.g. this one), I perceive an attempt at transmitting wonder at the world being such that it minimizes a function along its trajectory. This has indeed been the case historically, with Maupertuis said to have considered action minimization (and the corresponding unification of minimization principles between optics and mechanics) as the most definite proof available to him of the existence of God. However, in the spirit of this Stack Exchange question, I never really understood why such wonder should be felt, even setting aside the fact that it assumes that our equations “are” the world, a perspective that Artem has criticized at length before.

Evolution explains the fundamental constants of physics

While speaking at TEDxMcGill 2009, Jan Florjanczyk — friend, quantum information researcher, and former schoolmate of mine — provided one of the clearest characterizations of theoretical physics that I’ve had the pleasure of hearing:

Theoretical physics is about tweaking the knobs and dials and assumptions of the laws that govern the universe and then interpolating those laws back to examine how they affect our daily lives, or how they affect the universe that we observe, or even if they are consistent with each other.

I believe that this definition extends beyond physics to all theorists. We are passionate about playing with the stories that define the unobservable characters of our theoretical narratives and watching how our mental creations get along with each other and affect our observable world. With such a general definition of a theorist, it is not surprising that we often see such thinkers cross over disciplinary lines. The most willing to wander outside their field are theoretical physicists; sometimes they have been extremely influential interdisciplinarians and at other times they have suffered from bad cases of interdisciplinitis.

On the other hand, physicists like to say physics is to math as sex is to masturbation.

The physicists’ excursions have been so frequent that it almost seems like a hierarchy of ideas has developed — with physics and mathematics “on top”. Since I tend to think of myself as a mathematician (or theoretical computer scientist, but nobody puts us in comics), this view often tempts me, but deep down I realize that the flow of ideas is always bi-directional and no serious field can be dominant over another. To help slow my descent into elitism, it is always important to have this realization reinforced. Thus, I was extremely excited when Jeremy Fox of Dynamic Ecology drew my attention to a recent paper by theoretical zoologist Andy Gardner (in collaboration with physicist J.P. Conlon) on how to use the Price equation of natural selection to model the evolution and adaptation of the entire universe.

Since you will need to know a little bit about the physics of black holes to proceed, I recommend watching Jan’s aforementioned talk. Pay special attention to the three types of black holes he defines, especially the Hubble sphere.

As you probably noticed, our universe isn’t boiling: the knobs and dials of the 30 or so parameters of the Standard Model of particle physics are exquisitely well-tuned. These values seem arbitrary, and even small modifications would produce a universe incapable of producing or sustaining the complexity we observe around us. Physicists’ default explanation of this serendipity is the weak anthropic principle: the only way we would be around to observe the universe and ask “why are the parameters so well tuned?” is if that universe were tuned to allow life. However, this argument is fundamentally unsettling: it lacks any mechanism.

Smolin (1992) addressed this discomfort by suggesting that the fundamental constants of nature were fine-tuned by a process of cosmological natural selection. The idea extends our view of the possible to a multiverse (not to be confused with Deutsch’s idea) that is inhabited by individual universes that differ in their fundamental constants and give birth to offspring universes via the formation of black holes. Universes that are better tuned to produce black holes sire more offspring (i.e. have a higher fitness) and thus become more common in the multiverse.
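To see the selection logic at work, here is a minimal toy simulation (my own sketch, assuming a made-up, single-parameter fitness function; it is not the model from Smolin or from Gardner & Conlon):

```python
import random

def black_hole_count(theta):
    # Hypothetical fitness function (an assumption for illustration only):
    # universes whose constant theta is near 1 produce the most black holes.
    return max(0, round(10 * (1 - (theta - 1) ** 2)))

def next_generation(universes, sigma=0.05):
    # Each black hole sires one offspring universe; the offspring's constant
    # is the parent's plus Gaussian noise, i.e. high but imperfect heritability.
    offspring = [theta + random.gauss(0, sigma)
                 for theta in universes
                 for _ in range(black_hole_count(theta))]
    # Cull back to a fixed population size to keep the toy model cheap.
    return random.sample(offspring, min(len(offspring), len(universes)))

population = [random.uniform(0, 2) for _ in range(200)]
for generation in range(20):
    population = next_generation(population)
    mean = sum(population) / len(population)
    print(f"generation {generation:2d}: mean constant = {mean:.3f}")
```

Run it and the mean constant drifts toward the fitness peak: selection alone makes the population of universes look increasingly ‘fine-tuned’ for black-hole production, with no designer anywhere in the loop.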

Although Smolin (2004) worked to formalize this evolutionary process, he could not achieve the ecological validity of Gardner & Conlon (2013). Since I suspect the authors’ paper is a bit tongue-in-cheek, I won’t go into the details of their mathematical model and instead provide some broad strokes. They consider deterministically developing universes (with a stochastic treatment in the appendix), and a 1-to-1 mapping between the black holes in one generation of universes and the universes of the next generation. Since — as Jan stressed — we can never go inside black holes to measure their parameters, the authors allow for any degree of heritability between parent and offspring universes. At the same time, they consider an optimal control problem whose objective is to maximize the number of black holes. They then compare the Price dynamics of their evolutionary model to the optimal solution of the control problem and show a close correspondence. This correspondence implies that successive generations of universes will seem increasingly designed for the purpose of forming black holes (without the need for a designer, of course).
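For readers who haven’t met it, the Price equation at the heart of their analysis decomposes the change in the average value of any trait $z$ across one generation (this is the generic textbook form, not the paper’s exact notation):

```latex
\Delta \bar{z}
= \underbrace{\frac{\operatorname{Cov}(w_i, z_i)}{\bar{w}}}_{\text{selection}}
+ \underbrace{\frac{\operatorname{E}(w_i\,\Delta z_i)}{\bar{w}}}_{\text{transmission}}
```

where $w_i$ is the fitness of individual $i$ and $\bar{w}$ is the mean fitness. In the cosmological setting, $i$ indexes universes, $z_i$ is a fundamental constant of universe $i$, and $w_i$ is the number of black holes it forms; the imperfect heritability discussed above lives in the $\Delta z_i$ transmission term.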

You might object: “I’m not a black hole, why is this relevant?” Well, it turns out that universes that are designed for producing black holes are also ones that are capable of sustaining the complexity needed for intelligent observers to emerge (Smolin, 2004). So, although you are not a black hole, the reason you can get excited about studying them is because you are an accidental side-effect of their evolution.

References

Gardner, A., & Conlon, J. (2013). Cosmological natural selection and the purpose of the universe. Complexity. DOI: 10.1002/cplx.21446

Smolin, L. (1992). Did the universe evolve? Classical and Quantum Gravity, 9(1), 173.

Smolin, L. (2004). Cosmological natural selection as the explanation for the complexity of the universe. Physica A: Statistical Mechanics and its Applications, 340(4), 705-713.

Tegmark, M., Aguirre, A., Rees, M. J., & Wilczek, F. (2006). Dimensionless constants, cosmology, and other dark matters. Physical Review D, 73(2), 023505.