Alan Turing, On computable numbers

I have been reading Alan Turing’s paper, On computable numbers, with an application to the Entscheidungsproblem, an amazing classic, written by Turing while he was a student in Cambridge. This is the paper in which Turing introduces and defines his Turing machine concept, deriving it from a philosophical analysis of what it is that a human computer is doing when carrying out a computational task.

The paper is an incredible achievement. He accomplishes so much: he defines and explains the machines; he proves that there is a universal Turing machine; he shows that there can be no computable procedure for determining the validities of a sufficiently powerful formal proof system; he shows that the halting problem is not computably decidable; he argues that his machine concept captures our intuitive notion of computability; and he develops the theory of computable real numbers.

What I was extremely surprised to find, however, and what I want to tell you about today, is that despite the title of the article, Turing adopts an incorrect approach to the theory of computable numbers. His central definition is what is now usually regarded as a mistaken way to proceed with this concept.

Let me explain. Turing defines that a computable real number is one whose decimal (or binary) expansion can be enumerated by a finite procedure, by what we now call a Turing machine. You can see this in the very first sentence of his paper, and he elaborates on and confirms this definition in detail later on in the paper.

He subsequently develops the theory of computable functions of computable real numbers, where one considers computable functions defined on these computable numbers. The computable functions are defined not on the reals themselves, however, but on the programs that enumerate the digits of those reals. Thus, for the role they play in Turing’s theory, a computable real number is not actually regarded as a real number as such, but as a program for enumerating the digits of a real number. In other words, to have a computable real number in Turing’s theory is to have a program for enumerating the digits of a real number. And it is this aspect of Turing’s conception of computable real numbers where his approach becomes problematic.
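
To make this concrete, here is a minimal sketch, in Python (obviously anachronistic, and with names of my own choosing), of what it is to have a computable real in Turing’s sense: a program that enumerates the decimal digits, one after another, forever.

```python
from itertools import islice
from math import isqrt

def digits_of_one_third():
    # A Turing-style presentation of 1/3: a program enumerating its decimal digits.
    while True:
        yield 3

def digits_of_sqrt2():
    # Decimal digits of sqrt(2) after the decimal point, by exact integer
    # arithmetic: the k-th digit is floor(10**k * sqrt(2)) mod 10.
    k = 1
    while True:
        yield isqrt(2 * 10**(2 * k)) % 10
        k += 1

print(list(islice(digits_of_sqrt2(), 8)))   # [4, 1, 4, 2, 1, 3, 5, 6]
```

On this account, to have the computable real $\sqrt{2}$ is to have the second program above, and operations on computable reals are to be operations on such programs.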

One specific problem with Turing’s approach is that on this account, it turns out that the operations of addition and multiplication for computable real numbers are not computable operations. Of course this is not what we want.

The basic mathematical fact in play is that the digits of the sum $a+b$ of two real numbers are not a continuous function of the digits of $a$ and $b$ separately; in some cases, one cannot determine with certainty even the initial digits of $a+b$ knowing only finitely many digits of $a$ and $b$, however many one might desire.

To see this, consider the following sum $a+b$:
$$\begin{align*}
&0.343434343434\cdots \\
+\quad &0.656565656565\cdots \\[-7pt]
&\hskip-.5cm\rule{2in}{.4pt}\\
&0.999999999999\cdots
\end{align*}$$
If you add up the numbers digit-wise, you get $9$ in every place. That much is fine, and of course we should accept either $0.9999\cdots$ or $1.0000\cdots$ as correct answers for $a+b$ in this instance, since those are both legitimate decimal representations of the number $1$.

The problem, I claim, is that we cannot assign the digits of $a+b$ in a way that will depend only on finitely many digits each of $a$ and $b$. The basic problem is that if we inspect only finitely many digits of $a$ and $b$, then we cannot be sure whether that pattern will continue, whether there will eventually be a carry or not, and depending on how the digits proceed, the initial digits of $a+b$ can be affected.

In detail, suppose that we have committed to the idea that the initial digits of $a+b$ are $0.999$, on the basis of sufficiently many digits of $a$ and $b$. Let $a'$ and $b'$ be numbers that agree with $a$ and $b$ on those finite parts, but afterwards have all $7$s. In this case, the sum $a'+b'$ will involve a carry, which will turn all the nines up to that point to $0$, with a leading $1$, making $a'+b'$ strictly greater than $1$, with decimal representation $1.000\cdots00005555\cdots$. Thus, the initial-digits answer $0.999$ would be wrong for $a'+b'$, even though $a'$ and $b'$ agreed with $a$ and $b$ on the sufficiently many digits supposedly justifying the $0.999$ answer. On the other hand, if we had committed ourselves to $1.000$ for $a+b$, on the basis of finite parts of $a$ and $b$ separately, then let $a''$ and $b''$ have all $2$s beyond those finite parts, in which case $a''+b''$ is definitely less than $1$, making $1.000$ wrong.
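
Here is a small numerical illustration of the point; the helper `extend` and the choice of six known digits are hypothetical, purely for illustration.

```python
from fractions import Fraction

def extend(prefix, tail_digit, length=40):
    # The rational whose decimal expansion begins with `prefix` and then
    # repeats `tail_digit` out to `length` decimal places.
    s = prefix + str(tail_digit) * (length - len(prefix))
    return Fraction(int(s), 10**length)

known_a, known_b = "343434", "656565"    # the finitely many digits we have seen

# Continuing both with 7s, a carry propagates and the sum exceeds 1,
# so a commitment to the initial digits 0.999 would have been wrong.
print(extend(known_a, 7) + extend(known_b, 7) > 1)   # True

# Continuing both with 2s, no carry ever arrives and the sum stays below 1,
# so a commitment to 1.000 would have been wrong instead.
print(extend(known_a, 2) + extend(known_b, 2) < 1)   # True
```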

Therefore, there is no algorithm to compute the digits of $a+b$ continuously from the digits of $a$ and $b$ separately. It follows that there can be no computable algorithm for computing the digits of $a+b$, given the programs that compute $a$ and $b$ separately, which is how Turing defines computable functions on the computable reals. (This consequence is a subtly different and stronger claim, but one can prove it using the Kleene recursion theorem. Namely, let $a=0.343434\cdots$ and consider the program to enumerate a number $b$, which begins with $0.656565$ and keeps repeating $65$ until it sees that the addition program has given the initial digits for $a+b$, at which moment our program for $b$ switches either to all $7$s or to all $2$s so as to refute that answer. The Kleene recursion theorem is used in order to know that there is indeed such a self-referential program enumerating $b$.)

One can make similar examples showing that multiplication and many other very simple functions are not computable, if one insists that a computable number is an algorithm enumerating the digits of the number.

So what is the right definition of computable number? Turing was right that in working with computable real numbers, we want to be working with the programs that compute them, rather than with the reals themselves. What is needed is a better way of saying that a given program computes a given real.

The right definition, widely used today, is that we want an algorithm not to compute exactly the digits of the number, but rather to compute approximations to the number, as close as desired, with a known degree of accuracy. One can define a computable real number as a computable sequence of rational numbers such that the $n^{th}$ term is within $1/2^n$ of the target number. This is equivalent to being able to compute rational intervals around the target real of size less than any specified accuracy, and there are many other equivalent ways to do it. With this concept of computable real number, the operations of addition, multiplication, and so on, all the familiar operations on the real numbers, will be computable.
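
Here is a minimal sketch of this representation in Python, using exact rational arithmetic, together with the reason addition becomes computable on it; the function names are my own and are not any standard API.

```python
from fractions import Fraction
from math import isqrt

# A computable real is presented by a function approx(n) returning a
# rational within 1/2**n of the target number.

def sqrt2(n):
    # floor(2**n * sqrt(2)) / 2**n lies within 1/2**n of sqrt(2).
    return Fraction(isqrt(2 * 4**n), 2**n)

def one_third(n):
    return Fraction(1, 3)            # exact, hence within any tolerance

def add(a, b):
    # Addition is computable on this representation: for accuracy 1/2**n
    # in the sum, request accuracy 1/2**(n+1) from each summand.
    return lambda n: a(n + 1) + b(n + 1)

total = add(sqrt2, one_third)
print(float(total(50)))              # approximately sqrt(2) + 1/3 = 1.7475468...
```

The point is that `add` never commits to any digits at all: to achieve accuracy $1/2^n$ in the sum it simply requests accuracy $1/2^{n+1}$ from each summand, and the carry problem never arises.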

But let me clear up a confusing point. Although I have claimed that Turing’s original definition of computable real number is incorrect, and I have explained how we usually define this concept today, the mathematical fact is that a real number $x$ has a computable presentation in Turing’s sense (we can compute the digits of $x$) if and only if it has a computable presentation in the contemporary sense (we can compute rational approximations to any specified accuracy). Thus, in terms of which real numbers we are talking about, the two approaches are extensionally the same.

Let me quickly prove this. If a real number $x$ is computable in Turing’s sense, so that we can compute the digits of $x$, then we can obviously compute rational approximations to any desired accuracy, simply by taking sufficiently many digits. And conversely, if a real number $x$ is computable in the contemporary sense, so that we can compute rational approximations to any specified accuracy, then either it is itself a rational number, in which case we can certainly compute the digits of $x$, or else it is irrational, in which case for any specified digit place, we can wait until we have a rational approximation forcing the number to one side or the other of the relevant digit boundary, and thereby come to know this digit. (Note: there are issues of intuitionistic logic occurring here, precisely because we cannot tell from the approximation algorithm itself which case we are in.) Note also that this argument works in any desired base.
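
The irrational case of this argument is an unbounded search: keep requesting better approximations until the enclosing rational interval no longer straddles a digit boundary. Here is a sketch, assuming a positive irrational target (the search need not terminate otherwise, which is exactly the intuitionistic issue just mentioned); the code and names are mine, for illustration only.

```python
from fractions import Fraction
from math import floor

def digits_from_approximations(approx, how_many):
    # approx(n) returns a rational within 1/2**n of a positive irrational x.
    out = []
    for k in range(1, how_many + 1):
        n = k
        while True:
            q, err = approx(n), Fraction(1, 2**n)
            lo = floor((q - err) * 10**k)
            hi = floor((q + err) * 10**k)
            if lo == hi:             # the interval lies within one digit cell,
                out.append(lo % 10)  # so the k-th decimal digit is determined
                break
            n += 1                   # otherwise request a better approximation
    return out
```

Applied to the `sqrt2` approximation function from the earlier sketch, for instance, this recovers the digits $4, 1, 4, 2, 1, \ldots$ of $\sqrt{2}$.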

So there is something of a philosophical problem here. The issue isn’t that Turing has misidentified particular reals as being computable or non-computable or has somehow got the computable reals wrong extensionally as a subset of the real numbers, since every particular real number has Turing’s kind of representation if and only if it has the approximation kind of representation. Rather, the problem is that because we want to deal with computable real numbers by working with the programs that represent them, Turing’s approach means that we cannot regard addition as a computable function on the computable reals. There is no computable procedure that when given two programs for enumerating the digits of real numbers $a$ and $b$ returns a program for enumerating the digits of the sum $a+b$. But if you use the contemporary rational-approximation representation of computable real number, then you can computably produce a program for the sum, given programs for the input reals. This is the sense in which Turing’s approach is wrong.

8 thoughts on “Alan Turing, On computable numbers”

  1. Why can Turing not just use a radix with $n+1$ characters to represent any number in base $n$? Then represent a rational number having an infinitely repeating tail by inserting the character $n+1$ in the place where the number begins to repeat? Then even infinitely repeating numbers terminate.

    • Rational numbers are no problem for Turing or anyone. The interesting case of computable numbers is with irrational numbers, even transcendental numbers. Trying to adopt a variant representation for rational numbers will not avoid the main difficulty my post is about, since you cannot tell from finitely much information of $a$ and $b$ separately (when they are transcendental, for example), whether $a+b$ is rational or not.

  2. It is not clear to me that Turing’s definition/explication of computable real number can be said to be wrong, since it is extensionally correct. It seems to be rather his definition of “computable function from the reals to the reals” that is overly restrictive and therefore inadequate to explain the intuitive concept. On his definition, if I understand it well, a function is computable if there is a computable function that maps presentations of the arguments in Turing’s sense (program enumerating the digits of the arguments) to a presentation of the value in Turing’s sense (a program enumerating the digits of the value). That’s a small subclass of (so, not coextensional to) the computable functions according to the modern definition. So I do not see a philosophical problem after all.

  3. I don’t agree fully with this comment, and I still think it is correct to say that Turing has the wrong definition.

    The reason is that we do not ultimately think of a computable real number as a particular kind of real number, and despite his initial definition, neither does Turing. When it comes to defining operations on the computable real numbers, which is what Turing is aiming at, he explicitly abandons the idea that these are defined on the real numbers themselves or some of them. Rather, operations on the computable real numbers are functions defined on the programs that generate such reals. So a computable real number, for Turing as well as for the contemporary theory, is a program for generating a real number.

    And if one adopts Turing’s view that the programs we should be looking at are the programs that generate the successive digits of the real number, then this is precisely where he goes wrong, since in this case, the ordinary operations of addition and multiplication will not be computable, for the reasons I explained in the post.

    Turing seems in his article to assume that they would be computable, and much more, since he is using the tangent function and infinite sums and so on.

    The philosophical question here as I see it is: what does it mean to have a computable real number? Does it mean to have a particular real number? No, it means to have an algorithm that is capable of describing a real number.

  4. Thank you for posting this very interesting piece; it is a fascinating topic and your explanation of it was (as usual) very clear and accessible. I think it is a philosophically interesting fact that two definitions can be extensionally equivalent but not equally ‘correct’ or ‘good’ mathematically speaking. There might be many different kinds of reason why one of a pair of equivalent definitions is preferable to the other. One might be more explanatory, or better support generalisations, or provide a better foundation for a certain kind of theory and so on; it would be interesting to consider further examples and try to classify them.

    As a footnote, I think it’s worth mentioning that Turing himself came to realise that the binary digit representation of real numbers he used in ‘On Computable Numbers’ was problematic. He discusses this in ‘On Computable Numbers, with an Application to the Entscheidungsproblem. A Correction’ (Proc. London Math. Soc. 43 (1937), 544-6; also reprinted in Jack Copeland (ed) ‘The essential Turing’ pp. 94-96).

    Turing’s discussion of the problem (and his solution to it) in this paper is very brief and somewhat cryptic. There is an excellent, detailed discussion of it in Guido Gherardi, ‘Alan Turing and the Foundations of Computable Analysis’, Bulletin of Symbolic Logic 17(3), 2011, 394-430.

    In the ‘correction’ article, Turing does not discuss the problem you described, that the basic operations on real numbers cannot be represented as computable functions when the binary digit representation is used. Instead he discusses the following problem: although we might have a program $P$ that computes a sequence of rational approximations to a real number $\alpha$ (which we can prove must be computable), there is no program which takes as input programs like $P$ and produces as output a program that enumerates the binary digits of $\alpha$. There must *be* some program that enumerates the digits of $\alpha$, but there’s no general way of constructing it from a program that computes the sequence of rational approximations. (So the situation seems at least analogous to the problem that there is no program to compute the digits of $a+b$ from programs that enumerate the digits of $a$ and of $b$.)

    Turing writes “This disagreeable situation can be avoided by modifying the manner in which computable numbers are associated with computable sequences, the totality of computable numbers being left unaltered.” In other words, he offers an alternative definition of ‘computable number’ which avoids this problem but which is equivalent to the original definition in the sense that a real number is computable in the original sense if and only if it is computable in this new sense – just as described in your piece. The alternative definition he suggests is not, I believe, the same as any currently standard definition, but is equally workable. He considers sequences of 0s and 1s of the form:

    $i$ ($0$ or $1$), followed by $1$ repeated $n$ times, followed by a $0$, followed by a sequence $c_1 c_2 c_3 c_4 \ldots$

    and associates with each such sequence the real number

    $$(2i - 1)n + \sum_{r=1}^{\infty}(2c_r - 1)(2/3)^r$$

    So for example the sequence $10\ 1$ corresponds to $2/3$, $10\ 0$ corresponds to $-2/3$, $1110\ 01$ corresponds to $2 - 2/3 + 4/9 = 16/9$, and $10\ 11111\cdots$ corresponds to $2/3 + 4/9 + 8/27 + \cdots = 2$, and so on (see the short sketch at the end of this comment for these values computed directly from the formula).

    A Turing machine that prints a sequence of 1s and 0s of this form is then said to compute the corresponding real number; a real number is computable if there is a machine that computes it. Turing cites Brouwer as the source for “this use of overlapping intervals for the definition of real numbers”. (See Gherardi section 3.3 for discussion of this).

    Turing states (but does not prove) that this new definition solves the problem he discussed. Gherardi proves (section 3.4) that Turing’s representation is *computably equivalent* to one of the current standard representations (the Cauchy representation) and so it also solves the problem described in your piece – the standard operations of addition, multiplication and so on are now computable functions.
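
    For concreteness, the value that the quoted formula assigns to a finite $0$/$1$ prefix can be computed as in the following sketch (the function name `value_of_prefix` is mine, purely for illustration):

    ```python
    from fractions import Fraction

    def value_of_prefix(seq):
        # seq is a list of 0s and 1s of the form: i, then n ones, then a 0,
        # then c_1 ... c_k; return (2i-1)n + sum_r (2c_r - 1)(2/3)^r.
        i, rest = seq[0], seq[1:]
        n = rest.index(0)                     # number of 1s before the terminating 0
        cs = rest[n + 1:]
        total = (2 * i - 1) * n
        for r, c in enumerate(cs, start=1):
            total += (2 * c - 1) * Fraction(2, 3) ** r
        return total

    print(value_of_prefix([1, 0, 1]))            # 2/3
    print(value_of_prefix([1, 0, 0]))            # -2/3
    print(value_of_prefix([1, 1, 1, 0, 0, 1]))   # 16/9, i.e. 2 - 2/3 + 4/9
    ```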

  5. I think that it is the idea that there are definitions of a term by a different coextensional one that are not *correct* that I was at pains to deny. There are of course interesting questions about how a certain notion and theory are best formalised/analysed when we are not sure at the outset about what sort of entities we want to talk about (reals or programs?). What I got wrong, as Prof. Hamkins pointed out, is that I assumed that the definition had to single out a class of real numbers. But then, on Turing’s initial definition and on the modern one, the computable numbers are two different, in fact disjoint, classes of programs. They are not coextensional at all. Therefore, again, this does not provide an example of two definitions that are extensionally equivalent but of which one is incorrect.

    • Yes, I think we basically agree. I suppose one can say that I have over-stated the case to a certain extent, since it isn’t actually completely clear from Turing’s article whether he thought of it just like this. After all, at least on the surface, he does define a computable real number as a kind of real number. But then when one gets into how one might use these, the real number gets replaced by the program.

      So in my treatment of this issue in the book I am currently writing on the philosophy of mathematics, I take a somewhat softer tone, while making essentially the same points.

      I find these issues quite subtle and interesting, for both mathematical and philosophical reasons.
