So in my treatment of this issue in the book I am currently writing on the philosophy of mathematics, I take a somewhat softer tone, while making essentially the same points.

I find these issues quite subtle and interesting, for both mathematical and philosophical reasons.

As a footnote, I think it’s worth mentioning that Turing himself came to realise that the binary digit representation of real numbers he used in ‘On Computable Numbers’ was problematic. He discusses this in ‘On Computable Numbers, with an Application to the Entscheidungsproblem. A Correction’ (Proc. London Math. Soc. 43 (1937), 544–546; also reprinted in Jack Copeland (ed.), The Essential Turing, pp. 94–96).

Turing’s discussion of the problem (and his solution to it) in this paper is very brief and somewhat cryptic. There is an excellent, detailed discussion of it in Guido Gherardi, ‘Alan Turing and the Foundations of Computable Analysis’, Bulletin of Symbolic Logic 17(3), 2011, 394–430.

In the ‘correction’ article, Turing does not discuss the problem you described, that the basic operations on real numbers cannot be represented as computable functions when the binary digit representation is used. Instead he discusses the following problem: although we might have a program P that computes a sequence of rational approximations to a real number (which we can prove must be computable), there is no program which takes as input programs like P and produces as output a program that enumerates the binary digits of the number approximated. There must *be* some program that enumerates those digits, but there is no general way of constructing it from a program that computes the sequence of rational approximations. (So the situation seems at least analogous to the problem that there is no program to compute the digits of a + b from programs that enumerate the digits of a and of b.)
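To see concretely why no finite amount of digit information suffices, here is a small sketch in Python with exact rational arithmetic (the particular prefixes are chosen only for illustration). The fractional expansions 0.0111…1 and 0.1000…0 can always be continued so that a + b lands strictly below 1, or continued differently so that it lands strictly above 1 — so no finite prefix of the digits of a and b determines even the digit of a + b before the binary point:

```python
from fractions import Fraction

def value(frac_digits: str) -> Fraction:
    """Exact value of a finite fractional binary expansion 0.d1 d2 d3 ..."""
    return sum(int(d) * Fraction(1, 2**j)
               for j, d in enumerate(frac_digits, start=1))

for k in range(1, 31):
    a_prefix = "0" + "1" * k      # a = 0.0111...1 then unknown digits
    b_prefix = "1" + "0" * k      # b = 0.1000...0 then unknown digits
    # Continuation with 0s forever: the sum stays strictly below 1,
    # so the digit before the point would have to be 0.
    low = value(a_prefix) + value(b_prefix)
    # Continuation of both with "11": the sum exceeds 1,
    # so the digit before the point would have to be 1.
    high = value(a_prefix + "11") + value(b_prefix + "11")
    assert low < 1 < high
```

Since both continuations are consistent with everything read so far, a machine that has seen only finitely many digits of a and b can never safely commit to the first digit of a + b.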

Turing writes: “This disagreeable situation can be avoided by modifying the manner in which computable numbers are associated with computable sequences, the totality of computable numbers being left unaltered.” In other words, he offers an alternative definition of ‘computable number’ which avoids this problem but which is equivalent to the original definition in the sense that a real number is computable in the original sense if and only if it is computable in this new sense – just as described in your piece. The alternative definition he suggests is not, I believe, the same as any currently standard definition, but it is equally workable. He considers sequences of 0s and 1s of the form:

a digit i (either 0 or 1), followed by 1 repeated n times, followed by a 0, followed by an infinite sequence of digits c₁c₂c₃…

and associates with each such sequence the real number

(2i − 1)n + (2c₁ − 1)(2/3) + (2c₂ − 1)(2/3)² + (2c₃ − 1)(2/3)³ + …

So, for example, the sequence 10 1 corresponds to 2/3, 10 0 corresponds to −2/3, 1110 01 corresponds to 2 − 2/3 + 4/9, 10 11111… corresponds to 2/3 + 4/9 + 8/27 + … = 2, and so on.

A Turing machine that prints a sequence of 1s and 0s of this form is then said to compute the corresponding real number; a real number is computable if there is a machine that computes it. Turing cites Brouwer as the source for “this use of overlapping intervals for the definition of real numbers”. (See Gherardi section 3.3 for discussion of this).
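As a small sketch of the association just described, here is a Python decoder for finite prefixes of such sequences, computing (2i − 1)n + Σᵣ (2cᵣ − 1)(2/3)^r with exact rational arithmetic. (A machine would of course print an infinite tail; a finite prefix yields only the corresponding partial sum.)

```python
from fractions import Fraction

def decode(prefix: str) -> Fraction:
    """Partial value of a finite prefix of one of Turing's sequences.

    prefix = digit i, then n ones, then a terminating 0, then tail c1 c2 ...
    value  = (2i - 1)*n + sum over r of (2*c_r - 1) * (2/3)**r
    """
    i = int(prefix[0])
    rest = prefix[1:]
    n = 0
    while rest[n] == "1":          # count the 1s before the terminating 0
        n += 1
    tail = rest[n + 1:]            # the digits c_1 c_2 ... after the 0
    value = Fraction(2 * i - 1) * n
    for r, c in enumerate(tail, start=1):
        value += (2 * int(c) - 1) * Fraction(2, 3) ** r
    return value
```

For instance, decode("101") gives 2/3 and decode("100") gives −2/3, matching the examples above; longer all-1s tails after "10" give partial sums approaching 2.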

Turing states (but does not prove) that this new definition solves the problem he discussed. Gherardi proves (section 3.4) that Turing’s representation is *computably equivalent* to one of the current standard representations (the Cauchy representation) and so it also solves the problem described in your piece – the standard operations of addition, multiplication and so on are now computable functions.
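To illustrate why the Cauchy representation behaves well, here is a minimal Python sketch, assuming the usual convention that a ‘name’ of a real x is a function taking n to a rational within 2^−n of x. Addition then becomes computable simply by asking each summand for one extra bit of precision. (The function names `add` and `third` are mine, for illustration.)

```python
from fractions import Fraction

def add(a, b):
    """Addition on Cauchy names: to get the sum to within 2**-n,
    query each summand at precision 2**-(n+1)."""
    return lambda n: a(n + 1) + b(n + 1)

def third(n):
    """A Cauchy name for 1/3: the dyadic rational (2**n // 3) / 2**n,
    which is within 2**-n of 1/3."""
    return Fraction(2**n // 3, 2**n)
```

Then add(third, third) is a name of 2/3: its n-th approximation is within 2^−n of 2/3, even though no approximation is ever exact. Nothing analogous works on the digit representation, for the reasons above.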

The reason is that we do not ultimately think of a computable real number as a particular kind of real number, and despite his initial definition, neither does Turing. When it comes to defining operations on the computable real numbers, which is what Turing is aiming at, he explicitly abandons the idea that these operations are defined on the real numbers themselves (or on some subset of them). Rather, operations on the computable real numbers are functions defined on the programs that generate such reals. So a computable real number, for Turing as well as for the contemporary theory, is a program for generating a real number.

And if one adopts Turing’s view that the programs we should be looking at are the programs that generate the successive digits of the real number, then this is precisely where he goes wrong, since in this case, the ordinary operations of addition and multiplication will not be computable, for the reasons I explained in the post.

Turing seems in his article to assume that they would be computable, and much more besides, since he freely uses the tangent function, infinite sums, and so on.

The philosophical question here as I see it is: what does it mean to have a computable real number? Does it mean to have a particular real number? No, it means to have an algorithm that is capable of describing a real number.
