From perpetual motion machines to the Entscheidungsproblem

There seems to be a tendency to use the newest technology of the day as a metaphor for making sense of our hardest scientific questions. These metaphors are often vague and imprecise. They tend to oversimplify the scientific question and misrepresent the technology. This isn’t useful.

But the pull of this metaphor also tends to transform the technical disciplines that analyze our newest tech into fundamental disciplines that analyze our universe. This was the case for many aspects of physics, and I think it is currently happening with aspects of theoretical computer science. This is very useful.

So, let’s go back in time to the birth of modern machines. To the water wheel and the steam engine.

I will briefly sketch how the science of steam engines developed and how it dealt with perpetual motion machines. From here, we can jump to the analytic engine and the modern computer. I’ll suggest that the development of computer science has followed a similar path — with the Entscheidungsproblem and its variants serving as our perpetual motion machine.

The science of steam engines successfully universalized itself into thermodynamics and statistical mechanics. These are now seen as universal disciplines that inform our understanding across the sciences. Similarly, I think that we need to universalize theoretical computer science and make its techniques more common throughout the sciences.

Machines and the Conservation of Energy

As machines started to do more work for us, and as they became increasingly efficient, it became natural to ask: will these machines ever stop? Can we make machines that do more work than we put in? Can we make perpetual motion machines?

You can see people already trying to do this with water power as early as the start of the 1600s. Consider, for example, Robert Fludd’s 1618 sketch of a water-screw perpetual motion machine. He imagined the top tank draining to turn a water wheel. The water wheel then cranked a shaft which both turned a millstone to do useful work and powered an Archimedes’ screw that pumped water from the lower tank back up to the upper one. Certainly sounds like it could work. We just need to get the gears running smoothly enough, right?
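With the hindsight of energy conservation, a back-of-the-envelope balance shows why not (modern notation, nothing Fludd's contemporaries could have written down). Water of mass m that falls through a height h can give the wheel at most mgh of energy, while the screw needs at least mgh to lift that same water back up. Whatever is left over for the millstone satisfies:

    \[ W_{\text{mill}} \;=\; W_{\text{wheel}} - W_{\text{pump}} - W_{\text{friction}} \;\le\; mgh - mgh - W_{\text{friction}} \;<\; 0. \]

No amount of smooth gearing changes the sign: there is nothing left over to turn the millstone.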

We might find this laughable now, but that was the mindset for a lot of serious thinkers at the time.

As steam engines were developed and proliferated through the late 1700s, the excitement for perpetual motion machines only heightened. By 1775, England had the Watt steam engine powering pumps and machines. With so much mysterious power coming from coal, and newer and newer machines requiring less and less coal to do more and more work, surely it would be possible to push past the point of 100% efficiency into free energy.

It was easy to speculate about this at the time, since the steam engine itself was poorly understood. It lacked a solid scientific grounding.

Of course, scientists were also very interested in these engines. And they developed the groundwork for making sense of steam and other engines alongside the excitement for perpetual motion. But a modern science of steam engines was not really formed until around 1824, when Sadi Carnot published Reflections on the Motive Power of Fire. This was the birth of a modern technical discipline: thermodynamics.

This didn’t stop inventors from working on perpetual motion machines, but more sober-minded scientists and engineers started to suspect that it might not be possible to build such machines. By 1856, Rudolf Clausius had formulated empirical principles which have since become the first two laws of thermodynamics. And from these empirical principles, one could finally argue that a perpetual motion machine that could power an external system was impossible.

But it wasn’t clear how these empirical principles arose — maybe a new finding or a new type of engine could overturn them? Maybe we just needed to be more creative with the kinds of machines we considered. Just how widely could these empirical principles apply? Could they be explained or derived from simpler ideas? From the 1870s until the publication of his 1896 Lectures on Gas Theory, Ludwig Boltzmann developed statistical mechanics to explain these empirical principles. He grounded these laws in the statistics of the Newtonian laws that were by this time seen as foundational.

Finally, in 1918, Emmy Noether published her groundbreaking Invariante Variationsprobleme, where she shared her theorem that every differentiable symmetry of the action of a physical system has a corresponding conservation law. Now we knew that the conservation of energy was not some odd empirical hypothesis open for challenge. Rather, it was a consequence of the form of our physical laws: conservation of energy follows from the invariance of those laws under time translations.
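In modern Lagrangian notation, the energy case reads as follows (a schematic sketch of one instance of her theorem, not its full generality): if the Lagrangian L(q, \dot{q}) has no explicit dependence on time, then the energy function is conserved along every solution of the equations of motion:

    \[ \frac{\partial L}{\partial t} = 0 \quad \Longrightarrow \quad \frac{d}{dt}\left( \sum_i \dot{q}_i \frac{\partial L}{\partial \dot{q}_i} - L \right) = 0. \]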

Putting all these ingredients together, we could be certain that perpetual motion machines were epistemically impossible. Their existence — in any form — is incompatible with our laws of physics.

But notice how these laws broadened. We started from reasoning about particular machines and particular experiments. We started from a science of steam engines. And we ended at fundamental reality.

Now we use thermodynamics and statistical mechanics in all kinds of domains that have nothing to do with steam engines. See, for example, how Jeremy England uses statistical mechanics to explain the origin of life. Or less successful cases, like causal entropic forces as an explanation for the intelligence niche occupied by humans.

A narrowly defined technical discipline has grown to be about the whole universe — and we now respect it as a useful tool and sanity check in all our other scientific disciplines.

Algorithms and the Complexity of Computation

A similar story has developed in computer science. Except instead of steam engines, we have algorithms.

By the late 1800s, formal methods in mathematics were improving quickly. Just like improvements to steam engines emboldened the mechanics and inventors, these formal improvements emboldened mathematicians and logicians. After all, they were finding procedures for computing the solutions to more and more difficult mathematical problems.

By 1928, Hilbert and Ackermann in Grundzüge der Theoretischen Logik asked the Entscheidungsproblem: “What is the procedure that determines for each logical expression for which domains it is valid or satisfiable?”

This was the computer science equivalent of asking “What is the design for the perpetual motion machine?”

Thankfully for computer science, it took less time to find their version of Emmy Noether — Turing and Church. Probably because mathematicians were — unlike physicists — already looking for formal rather than empirical answers. By 1936, these mathematicians had shown the impossibility of Hilbert’s dream: there existed no algorithm that could solve the Entscheidungsproblem.

There were concrete problems — most notably the Halting problem — that no algorithm could solve. At least not in the general case. This was the computer science version of the conservation of energy: a barrier that prevented the wonders we desired and assumed were possible. Just like Noether, the computer scientists showed that this limit on computation was a consequence of our logical laws. An algorithm for solving the Halting problem — just like the perpetual motion machine — was epistemically impossible.
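To make the impossibility concrete, here is a minimal sketch of the standard diagonalization argument in Python. The names halts and diagonal are hypothetical, chosen just for this illustration; the whole point is that no correct halts can be written:

    def halts(program, argument):
        """Hypothetical oracle: returns True iff program(argument) halts.
        The argument below shows that no such total, correct function exists."""
        raise NotImplementedError("provably impossible in the general case")

    def diagonal(program):
        """Does the opposite of whatever `halts` predicts about
        running `program` on its own source."""
        if halts(program, program):
            while True:   # predicted to halt, so loop forever
                pass
        else:
            return        # predicted to loop, so halt immediately

    # The contradiction: consider diagonal(diagonal).
    # If halts(diagonal, diagonal) returns True, then diagonal(diagonal) loops forever.
    # If it returns False, then diagonal(diagonal) halts.
    # Either way `halts` answers wrongly, so no such `halts` can exist.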

Since then, computer science has expanded our understanding of the limits of computation. And we now have a richer web of belief about which problems are tractable — i.e. have algorithms that run in polynomial time — and which are intractable. Unfortunately, this web is still centered on a number of conjectures — most famously P versus NP — that are strongly believed but not formally resolved.
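As a toy illustration of that divide (my own sketch, with made-up function names), compare a linear-time check against a brute-force search for subset-sum, a problem that is NP-complete and so believed intractable:

    from itertools import combinations

    def is_sorted(xs):
        """Tractable: a single linear scan, O(n) time."""
        return all(a <= b for a, b in zip(xs, xs[1:]))

    def subset_sum(xs, target):
        """Brute force over all 2^n subsets: exponential time.
        Subset-sum is NP-complete, so no polynomial-time algorithm
        is known (and none exists unless P = NP)."""
        for r in range(len(xs) + 1):
            for subset in combinations(xs, r):
                if sum(subset) == target:
                    return subset
        return None

    print(is_sorted([1, 2, 3]))           # True
    print(subset_sum([3, 7, 12, 5], 15))  # (3, 12)

Adding one element to the list doubles the worst-case work for subset_sum, but adds only one comparison for is_sorted.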

Just as thermodynamics and statistical mechanics broke free of steam engines, computer science has rebelled against a view of itself as a specialized technical discipline dealing just with algorithm engines. As with thermodynamics’ use of statistical mechanics to ground itself in Newtonian mechanics, the easiest way to universalize computer science was to ground itself in physicalism. This was achieved with Gandy’s variant of the Church-Turing thesis. Its intuitive statement is that any function computable by a physical machine is computable by a Turing machine. A more operationalized statement might be that the statistics of measurement for any repeatable physical process can be approximated arbitrarily well by a Turing machine.

Of course, this is not the only way to universalize theoretical computer science. Personally, I prefer Post’s variant of the CT-thesis: Turing machines and other equivalent forms of computation capture what is thinkable by us, and express the restrictions of our finite understanding. In other words, theoretical computer science is the ultimate tool for analyzing our theories, models, and hypotheses.

So, as theoretical computer science has universalized itself, it has sought — just like thermodynamics and statistical mechanics before it — uses for its mathematical tools in the domains of other disciplines. It infiltrated physics by building the subfield of quantum information processing. It infiltrated economics with algorithmic game theory. And I am doing my best to help theoretical computer science find its way into biology and to develop an algorithmic biology.

Now we can go from the conservation of energy in mechanics ruling out perpetual motion machines, to the constraint of computation in evolution enabling perpetual maladaptive disequilibrium.

If we recognize theoretical computer science as foundational, then we open a whole new toolbox for understanding the universe. This is a good resource for the other sciences and also a great motivation for theoretical computer science.



2 Responses to From perpetual motion machines to the Entscheidungsproblem

  1. Alistair Bain says:

    There was an article in Scientific American magazine many years ago about perpetual motion.
    The author concluded that perpetual motion was impossible because you have to add energy to a system in order to delete the contents of a memory cell to keep the process moving.
    I can't remember which edition the article was in.

  2. Pingback: Quick introduction: the algorithmic lens | Theory, Evolution, and Games Group
