Falsifiability and Gandy’s variant of the Church-Turing thesis

In 1936, two years after Karl Popper published the first German version of The Logic of Scientific Discovery and introduced falsifiability, Alonzo Church, Alan Turing, and Emil Post each published independent papers on the Entscheidungsproblem, introducing the lambda calculus, Turing machines, and Post-Turing machines as mathematical models of computation. The years after saw many more models, all of which were shown to be equivalent to each other in what they could compute. This was summarized in the Church-Turing thesis: anything that is computable is computable by a Turing machine. It is an almost universally accepted, but also incredibly vague, statement. Of course, such an important thesis has developed many variants, and exploring and contrasting their formulations can be a very insightful way to understand different philosophies.

I believe that the original and most foundational version of the thesis is what I have called Kleene’s purely mathematical formulation. Delving into that variant allowed us to explore the philosophy of mathematics; Platonism; and the purpose, power, and limitations of proof. However, given the popularity of physicalism and the authority of science, I doubt that Kleene’s is the most popular variant. Instead, when people think of the Church-Turing thesis, they often think of what is computable in the world around them. I like to associate this variant with Turing’s long-time friend and student, Robin Gandy. I want to explore Gandy’s physical variant of the Church-Turing thesis to better understand the philosophy of science, theory-based conceptions, and the limits of falsifiability. In particular, I want to address what seems to me like the common misconception that the Church-Turing thesis is falsifiable.

I promised that the variants are more precise than the general statement, so let’s look at a paraphrase of Gandy’s (1980) specification: any function computable by a physical machine is computable by a Turing machine. Unfortunately, this statement contains often-misunderstood words like ‘machine’, still doubles up on ‘computable’, and does not handle randomized algorithms gracefully. I prefer to take a completely operationalist stance to sidestep these issues: the statistics of measurement for any repeatable physical process can be approximated arbitrarily well by a Turing machine. This allows us to treat the Church-Turing thesis as a general principle that we can use with any theory that has a concept of repeatable experiments and measurement. I’ll touch more on this further down.
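
To make the operationalist phrasing concrete, here is a minimal Python sketch (the function name is my own invention, and I am reading ‘statistics of measurement’ as simply the bias of one binary outcome): any target bias, even an irrational one, can be matched to within 2^{-n} by a finite procedure built from n fair coin flips.

```python
import math
import random

def bernoulli_dyadic(k, n):
    """Sample one bit with probability k / 2**n, using n fair coin flips.

    Any target bias p is matched to within 2**-n by choosing
    k = round(p * 2**n), so the output statistics of this finite,
    discrete procedure approximate Bernoulli(p) arbitrarily well.
    """
    draw = random.getrandbits(n)  # uniform integer in [0, 2**n)
    return 1 if draw < k else 0

# A stand-in 'physical' measurement with an irrational bias.
p = 1 / math.pi
n = 20  # bits of precision
k = round(p * 2 ** n)
samples = [bernoulli_dyadic(k, n) for _ in range(100_000)]
print(abs(sum(samples) / len(samples) - p))  # small: 2**-n bias plus sampling noise
```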

At this point you might interject that this version of the Church-Turing thesis is obviously true and point me to quantum computing as justification. You would be right on the latter point (see, for instance, Deutsch, 1985), but very wrong on the former. If you take quantum mechanics (of finite-dimensional systems, for simplicity) as it is currently understood, then it says that any physical process is some unitary transformation on some initial state vector. As far as I understand, given a finite-dimensional system, quantum mechanics does not tell us which of the uncountably many possible unitaries on this system are physically realizable; so, to be generous, let’s permit them all. At first glance, it might seem like a simple counting argument could now disprove the Church-Turing thesis: there are only countably many Turing machines but uncountably many unitaries, so we cannot possibly realize them all exactly. This is where “approximate arbitrarily well” comes into the picture. If we take some universal gate set for quantum computing (say the Hadamard, \cos^{-1}(3/4)-phase shift, and controlled-NOT gates), then the group they generate is dense in the unitaries (much as the rationals, which are countable, are dense in the reals, which are uncountable), and thus any unitary can be approximated arbitrarily well by some finite sequence of these three gates.
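
The density claim can also be poked at numerically. Below is a rough brute-force sketch of my own (not Gandy’s or Deutsch’s construction) that uses a different but also universal single-qubit pair, the Hadamard and the \pi/4-phase (T) gates, to approximate an arbitrary rotation; the best achievable distance never increases as the allowed sequence length grows, which is “approximate arbitrarily well” in action.

```python
import itertools
import numpy as np

# A universal pair of single-qubit gates: Hadamard and the pi/4-phase 'T' gate.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])

def distance(U, V):
    """Frobenius distance between 2x2 unitaries, minimized over the
    unobservable global phase: sqrt(4 - 2|tr(U^dag V)|)."""
    return np.sqrt(max(0.0, 4.0 - 2.0 * abs(np.trace(U.conj().T @ V))))

def best_approximation(target, max_len):
    """Brute-force the closest product of H's and T's up to a length budget."""
    best, best_word = np.inf, None
    for length in range(1, max_len + 1):
        for word in itertools.product('HT', repeat=length):
            U = np.eye(2, dtype=complex)
            for g in word:
                U = (H if g == 'H' else T) @ U
            d = distance(U, target)
            if d < best:
                best, best_word = d, ''.join(word)
    return best, best_word

# An arbitrary target rotation (generically not a finite product of H's and T's).
theta = 1.0
target = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
for budget in (4, 8, 12):
    d, word = best_approximation(target, budget)
    print(budget, round(d, 4), word)  # best distance never increases with budget
```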

The above argument is simple, but should not be simply skimmed. The “approximate arbitrarily well” condition is very powerful, and I think it should be required for any reasonable discussion of the Church-Turing thesis when statistics or real numbers crop up. In fact, the mystery in ‘disproofs’ of the Church-Turing thesis via hypercomputation or super-recursive algorithms (Burgin, 2006), like artificial neural networks with arbitrary real numbers as their weights (Siegelmann, 1995; 1999), vanishes when the level of precision is explicitly taken into account. If you want to work with real numbers and computation, then it is important to take a careful view of them (see also Blum, 2004; Braverman & Cook, 2006).
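
As a toy illustration of the precision point, here is a sketch (my own construction, not Siegelmann’s actual model) of a single saturated-linear neuron: truncating its ‘arbitrary real’ weights to n bits of precision perturbs the output by at most roughly 2^{-n} times the number of inputs, so an observer with any fixed finite measurement precision cannot distinguish the analog network from a computable, finite-precision one.

```python
import numpy as np

def neuron(w, x):
    """A single saturated-linear neuron, a simplified stand-in for the
    analog units discussed in the hypercomputation literature."""
    return float(np.clip(np.dot(w, x), 0.0, 1.0))

def truncate(w, bits):
    """Keep only the first `bits` binary digits of each weight."""
    scale = 2.0 ** bits
    return np.floor(np.asarray(w) * scale) / scale

rng = np.random.default_rng(0)
w = rng.random(8) / 8  # 'arbitrary real' weights, scaled to avoid saturation
x = rng.random(8)      # inputs in [0, 1]
for bits in (4, 8, 16, 32):
    err = abs(neuron(w, x) - neuron(truncate(w, bits), x))
    print(bits, err)  # error shrinks roughly like 2**-bits times the input count
```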

Unfortunately, none of this establishes the truth of the Church-Turing thesis; it shows only that quantum mechanics is consistent with the thesis. The first problem is one that is at the heart of physicalism more generally: what is physical? What are physical properties or processes? A typical approach is to say that the physical is what physics studies (and, circularly, that what physics studies is the physical); this is the theory-based conception of the physical.

Carl Hempel pointed out an inherent dilemma in the theory-based conception. If you define what is physical with reference to quantum mechanics (or quantum field theory, the standard model, etc.), then, if history is taken seriously, you’re surely wrong. Physics is great, but do you really think it is complete in its current form? For concreteness and some elaboration, consider the problem of underdetermination. Alternatively, if you define the physical with respect to some future ideal physics, then an argument directly from quantum mechanics is irrelevant, because quantum mechanics is not that ideal theory.

You can try to finesse Hempel’s dilemma a bit by looking to structural realism and suggesting that at least the “main ideas” of quantum mechanics would be in any ideal physics, and that those main ideas might be enough of a premise to prove the Church-Turing thesis. I think finding the minimal part of physics needed to support the Church-Turing thesis is a great research question (and Gandy (1980) worked toward answering it), but assuming that this minimal part will remain unchanged as physics develops is too much for me. There have been simply too many dramatic paradigm shifts and drastic changes to the underlying ontology of physics for me to elevate any set of assumptions to the level of unquestionable axioms. I don’t feel comfortable assuming that I have a final understanding of any part of the physical world.

The standard way to avoid a lot of this sophistry is to concentrate on working ontologies and falsifiability. Quantum mechanics is falsifiable, but we want to look at the Church-Turing thesis on its own. The fact that it can be derived from a falsifiable theory does not tell us much, because falsifiability is not closed under the standard rules of logic; for example, it is not closed under negation, implication, or introduction of existential quantifiers, to take just a few operations we are used to. To see the failure under negation: “all ravens are black” is falsified by a single white raven, but its negation, “some raven is not black”, cannot be refuted by any finite set of observations. If we want to check if the Church-Turing thesis is falsifiable, we have to do it directly.

Let us try to build a falsifier. It would have to be, in Gandy’s terminology, a physical machine that computes a function that is not Turing-computable or, in my terminology, a repeatable physical process whose measurement statistics cannot be approximated arbitrarily well by a Turing machine. If we look inside the machine, or use our current physical laws to describe the physical process, then we have not escaped the objections raised by Hempel’s dilemma or by the fact that falsification is not closed under logic; we have only examined the thesis together with that theory. To examine the thesis on its own, we would have to use only the outputs of the machine or the statistics of the measurements. However, we can only run the machine or conduct the experiment a finite number of times, and so we will only ever have a finite number of input-output pairs. Unfortunately, every finite language is Turing-decidable (in fact, regular), and so we have not constructed the falsifier we wanted.
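
To make the failure of the would-be falsifier concrete: whatever exotic device we run, its finite record of input-output pairs is reproduced exactly by a trivially computable lookup table, as in the following sketch (the record shown is hypothetical).

```python
def table_machine(observations):
    """Build a trivially computable machine that reproduces a finite
    experimental record exactly: a lookup table. Whatever function the
    physical device 'really' computes, this table is indistinguishable
    from it on the data we actually have."""
    table = dict(observations)
    return lambda x: table.get(x)  # undefined off the observed inputs

# A hypothetical record of finitely many runs of a would-be hypercomputer.
record = {0: 1, 1: 1, 2: 0, 3: 1}
machine = table_machine(record)
assert all(machine(x) == y for x, y in record.items())
```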

Does that mean there is something wrong with the thesis? This would only be a reasonable conclusion if we somehow thought that falsifiability was a normative definition of right-ness. As I’ve pointed out before, this is a misunderstanding of Popper because it makes the power-move of equating scientific-according-to-some-rigid-definition with right-ness. But one could argue that the CT-thesis on its own is not a scientific statement because it is not falsifiable; I think that this would be consistent with Popper. However, I prefer to accept the common sense notion that the CT-thesis is a pretty good scientific statement, and instead argue that Popper’s demarcation of science was incomplete.

Alternatively, one could patch Popper and argue for a hierarchy of falsifiability. In homage to computer science, I would like to define this recursively. Let 1-falsifiable mean falsifiable in the proper Popperian sense, and let (n+1)-falsifiable mean that the statement has a falsifier that is itself n-falsifiable. Under this terminology, the CT-thesis would be 2-falsifiable, because the falsifier would be a guess at the infinite function that the machine computes. Once we have a guess, that guess is 1-falsifiable, since we would need just one input-output pair to invalidate it. I don’t think this is an unreasonable approach, because what constitutes an acceptable measurement is theory-laden (for Popper, at least), and it seems reasonable to define a guess at the function computed as some sort of higher-level measurement. This is why I did not start my inductive index at 0: I reserve 0-falsifiable for whatever statements our theory prescribes as measurement statements. I have no idea what Popper or other serious philosophers of science would think of this definition, but if you do then make yourself heard in the comments.
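
For definiteness, the recursion can be written out as follows (the notation F(S, n) for ‘S is n-falsifiable’ is mine, not Popper’s):

```latex
% n-falsifiability, defined recursively; F(S, n) reads 'S is n-falsifiable'.
\begin{align*}
F(S, 0)   \;&\iff\; S \text{ is a measurement statement prescribed by the theory,}\\
F(S, n+1) \;&\iff\; \exists S' \,\big( S' \text{ is a potential falsifier of } S \wedge F(S', n) \big).
\end{align*}
```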

Finally, although Gandy’s variant only places an upper bound on the power of physical processes, I think it is also worthwhile to follow Michael Nielsen and demand a lower bound. If we are working within the context of some physical theory, then we can ask if it is possible to create a process within that theory that can simulate (according to the theory) any other process in that theory. In other words, is the theory (taken as a model of computation) Turing-complete? Common sense suggests that this should be true, since no law of physics exists to stop a hypothetical Turing machine from running forever (with, say, an auxiliary machine gathering energy from the universe to build it more tape). Nielsen elaborates on and attributes this stance to Deutsch (1985), but I think there are hints of it in Gandy (1980). It is a great criterion for building reasonable physical theories.

Of course, as with any philosophical stance, there are objections. A popular one is to point out that the universe is not infinite. However, we don’t need an infinite universe, just an arbitrarily large one. We could push the objection further by saying that the universe is not arbitrarily large but has a specific upper bound on its number of states; a lot of people do this by saying that “there are only 10^{80} atoms in the universe”. Unfortunately, I have never seen this upper bound formulated without an unreasonably large amount of theoretical baggage, nor have I seen it play a vital role in a theory of physics. If a specific upper bound on the number of states plays no useful role in the theory, then why include it instead of opting for ontological minimalism?

Alas, a full discussion of theory-building would push me too much into the structure of theories versus the structure of the noumenon they describe. As you can tell from my choice of words, this would require dragging in Kant, so I will save it for next time when I introduce Post’s variant of the CT-thesis.

References

Blum, L. (2004). Computing over the reals: Where Turing meets Newton. Notices of the AMS, 51(9): 1024-1034.

Braverman, M., & Cook, S. (2006). Computing over the reals: Foundations for scientific computing. Notices of the AMS, 53(3): 318-329.

Burgin, M. (2006). Super-recursive algorithms. Springer.

Deutsch, D. (1985). Quantum theory, the Church-Turing principle and the universal quantum computer. Proceedings of the Royal Society of London A: Mathematical and Physical Sciences, 400(1818): 97-117.

Gandy, R. (1980). Church’s thesis and principles for mechanisms. Studies in Logic and the Foundations of Mathematics, 101: 123-148. DOI: 10.1016/S0049-237X(08)71257-6

Siegelmann, H.T. (1995). Computation beyond the Turing limit. Science, 268(5210): 545-548.

Siegelmann, H.T. (1999). Neural networks and analog computation: Beyond the Turing limit. Springer.

About Artem Kaznatcheev
From the Department of Computer Science at Oxford University and Department of Translational Hematology & Oncology Research at Cleveland Clinic, I marvel at the world through algorithmic lenses. My mind is drawn to evolutionary dynamics, theoretical computer science, mathematical oncology, computational learning theory, and philosophy of science. Previously I was at the Department of Integrated Mathematical Oncology at Moffitt Cancer Center, and the School of Computer Science and Department of Psychology at McGill University. In a past life, I worried about quantum queries at the Institute for Quantum Computing and Department of Combinatorics & Optimization at University of Waterloo and as a visitor to the Centre for Quantum Technologies at National University of Singapore. Meander with me on Google+ and Twitter.

Responses to Falsifiability and Gandy’s variant of the Church-Turing thesis

  1. “At this point you might interject that this version of the Church-Turing thesis is obviously true and point me to quantum computing as justification.”

    Not sure I follow this. Did you mean “false”, as in I might suppose quantum computers are more powerful than Turing machines? I get your arguments for why quantum computers may not actually have a wider range of computations.

    • I meant true. I meant it in the sense that people might say “quantum mechanics describes reality, and the most powerful model associated with it is quantum computing, which in terms of computability is equivalent to the Turing machine”. However, I realize now that I caused confusion by meditating on the continuous-vs-countable distinction (and pushing in the other direction) in the middle of my discussion.

      What I should have done was have a first paragraph where I respond to a person who thinks quantum mechanics (or classical mechanics, or neural nets) disproves the CT-thesis (because of the continuous-vs-countable issue), and then a second paragraph that addresses the person who thinks that quantum mechanics establishes the CT-thesis as true, transitioning that into Hempel’s dilemma. That would have been better than sticking the disproof-person in the middle of my discussion with the proof-person.

      I don’t think that I want to introduce a major change into a published blog post (I usually don’t go beyond some last-minute finishing touches and typo fixes), so I will keep it as is for now. If it continues to generate problems, I might write a separate post about quantum computing and the CT-thesis where I can describe the previous points, touch on and refute some of Penrose’s speculation about quantum gravity going beyond the CT-thesis, and introduce the efficient (or strong) CT-thesis and the contrast between BPP and BQP.

  2. cokolwiek says:

    “However, we can only run the machine or conduct the experiment a finite number of times, and so we will only have a finite number of input-output pairs.”

    But you don’t have to run the machine an infinite number of times; you can just prove that it computes the function?

    • That is exactly why I included a discussion of Hempel’s dilemma.

      To prove — in the mathematical sense of the word — that your physical process computes a function, you have to have an axiomatic specification of your physical theory (and a demonstration that the relevant part of the theory is implementable in nature). That specification, however, is not nature. And you have no real way to show that it is. When you show that this specification can ‘compute’ something uncomputable, what you have done is shown that there are consequences of your theory that you can’t compute (or comprehend, if you take Post’s interpretation of the CT-thesis). To many, that would read as an error with the theory, and you would have no finite experimental test available to you to prefer your theory with uncomputable consequences over a computable theory.
