
Penrose Turing Argument

Robin

In Shadows of the Mind, Roger Penrose attempts to formalise Goedel's intuition about human knowledge and formal systems referred to in my previous post. Here is the argument:
Roger Penrose, Shadows of the Mind (1994), pp. 74-75:
A is just any sound set of computational rules for ascertaining that some computations Cq(n) do not ever halt. Being dependent upon the two numbers q and n, the computation that A performs can be written A(q; n), and we have:

(H) If A(q; n) stops, then Cq(n) does not stop.

Now let us consider the particular statements (H) for which q is put equal to n … we now have:

(I) If A(n; n) stops, then Cn(n) does not stop.

We now notice that A(n; n) depends upon just one number n, not two, so it must be one of the computations C0; C1; C2; C3; …, since this was supposed to be a listing of all the computations that can be performed on a single natural number n. Let us suppose that this is in fact Ck, so we have:

(J) A(n; n) = Ck(n).

Now examine the particular value n = k. We have, from (J):

(K) A(k; k) = Ck(k),

and, from (I), with n = k:

(L) If A(k; k) stops, then Ck(k) does not stop.

Substituting (K) in (L), we find:

(M) If Ck(k) stops, then Ck(k) does not stop.

From this, we must deduce that the computation Ck(k) does not stop. (For if it did, then it does not, according to (M)!) But A(k; k) cannot stop either, since by (K) it is the same as Ck(k). Thus, our procedure A is incapable of ascertaining that this particular computation Ck(k) does not stop, even though it does not. Moreover, if we know that A is sound, then we know that Ck(k) does not stop. Thus, we know something that A is unable to ascertain. It follows that A cannot encapsulate our understanding.
Now far be it from me to disagree with a genius, but I have problems with this.

(1) Notice how "some computations" in the original paragraph becomes "all the computations that can be performed on a single natural number n". When did some become all? Are we to assume that the human mathematician is capable of determining whether any computation on natural numbers will stop?


(2) The statement (H) is problematic. If A(q; n) stops only when Cq(n) does not stop, then A is not a sound set of computational rules for ascertaining that some computations Cq(n) do not ever halt. Penrose defers this objection by saying the A in (H) is not A itself but some computation derived from A.

(3) If you have f(n) and n is an index to some array then can f really be considered a computation on a single natural number? If not then this puts the validity of (J) in doubt.

(4) The conclusion that the computation Ck(k) does not stop because if it does then it doesn't is a bit suspect. What does it mean for a computation to not stop when it does stop? In fact, if you reword (H) as:
(H) If A(q; n) does not stop, then Cq(n) does stop.
then using the same logic you come to the opposite conclusion. Surely the only reasonable conclusion is that the human mathematician is as stumped as A.
 
Robin, could you translate the algebraic terms?

The "A(q; n)" and "Ck(n)" and such read like a foreign language to those of us without formal philosophical education, and my interest is piqued, so I'd like to know what those terms mean :)
 
I don't see how he gets from (I) to (J) either. The rest seems to follow if that part is correct, but I can't seem to get that part to follow from what was given. Maybe I'm missing something.
 
This is called the halting problem.
Yahweh:
Cq(n) is operation q being run on natural number n.
A(q,n) is program A being run for Cq(n).
Program A determines whether Cq(n) stops or not.

espritch:
From (I) to (J), what he is doing is this:
C1, C2... Cx is the exhaustive list of all operations that can be run on a single natural number. In step (I) we have A(n; n), which is an operation being run on a single natural number, n. So A(n; n) must be in the list C1, C2... Cx. He names it Ck.

Robin:
(1) Notice how "some computations" in the original paragraph becomes "all the computations that can be performed on a single natural number n". When did some become all? Are we to assume that the human mathematician is capable of determining whether any computation on natural numbers will stop?
I think it must be stated somewhere in steps (A) to (G) that C1, C2... Cx are all the possible computations that can be run on a natural number.

(2) The statement (H) is problematic. If A(q; n) stops only when Cq(n) does not stop, then A is not a sound set of computational rules for ascertaining that some computations Cq(n) do not ever halt. Penrose defers this objection by saying the A in (H) is not A itself but some computation derived from A.
A in step (H) stops only when it finds that Cq(n) will not stop. If it doesn't find that, it keeps running. That's how the program is designed, in order to create this particular problem :)
(3) If you have f(n) and n is an index to some array then can f really be considered a computation on a single natural number? If not then this puts the validity of (J) in doubt.
Yes, it can be. It is being given a single natural number as its only input, hence it is an operation on a single natural number. Or something like that :)
(4) The conclusion that the computation Ck(k) does not stop because if it does it doesn't is a bit suspect. What does it mean for a computation to not stop when it does stop?
Ck(k) is being fed itself. So, if the program stops, that means that the program fed (itself) does not stop.
In fact if you reword (H) as:
(H) If A(q; n) does not stop, then Cq(n) does stop.
then using the same logic you come to the opposite conclusion.
Actually, you come to the same conclusion: that when the program is fed itself, it can't decide whether it finishes or not.
Surely the only reasonable conclusion is that the human mathematician is as stumped as A.
Yes, that is a reasonable conclusion.
 
Yahweh said:
Robin, could you translate the algebraic terms?

The "A(q; n)" and "Ck(n)" and such read like a foreign language to those of us without formal philosophical education, and my interest is piqued, so I'd like to know what those terms mean :)
In general, the terms with brackets next to them refer to algorithms or computations, and the terms inside the brackets refer to inputs to those computations. The terms with subscripts just refer to a list of such computations (I lost the subscripts when I copied it in, so Ck is supposed to be C with a subscript k).

A(q,n) is a computation that takes two numbers q and n as inputs.

Ck(n) refers to a list of computations that take n as input (i.e. C1(n), C2(n), C3(n), ...). Examples of the type of computation that Penrose has in mind are sums of infinite series, which will sometimes have a finite sum and sometimes just keep going: the first kind will halt, the second will not.

The purpose of A is to determine whether one of the algorithms in the list C will halt or continue running indefinitely.

For example A(4,5) is a program to determine whether the fourth algorithm in the list will halt if given 5 as an input.

So A(3,5) will determine if C3(5) will halt. Penrose's trick is to make A one of the computations in the list C, so that A will try to determine if A will halt. But since A halts only when the algorithm it is examining does not halt, we are left with a paradox: A halts when A does not halt. Penrose's argument is that the human mathematician can examine this and see straight away that A will never halt when pointed at itself, something A itself cannot ascertain, and so human reason cannot be expressed as a computation or algorithm. Or something like that.

From my point of view I don't doubt the philosophical conclusion may well be true, but I don't think the reasoning is sound.
 
Originally posted by Donks
This is called the halting problem.
But it differs from the halting problem in that A is only supposed to be a candidate for the mathematical methods known to human mathematicians (in fact, I find I have left that part out - oops). In the original halting problem, the halting algorithm is supposed to determine whether any given algorithm halts.

That is why I ask: can a human mathematician guarantee to be able to determine whether any possible computation on a natural number will halt or not? I say no, so C cannot be a list of all computations on natural numbers. It must be "some", as specified at the beginning of the proof.
 
Robin said:
But it differs from the halting problem in that A is only supposed to be a candidate for the mathematical methods known to human mathematicians (in fact, I find I have left that part out - oops). In the original halting problem, the halting algorithm is supposed to determine whether any given algorithm halts.

That is why I ask: can a human mathematician guarantee to be able to determine whether any possible computation on a natural number will halt or not? I say no, so C cannot be a list of all computations on natural numbers. It must be "some", as specified at the beginning of the proof.
Hmmm, I don't think a human can determine if any computation will halt. If Penrose is defining C to be only those computations for which a human mathematician can decide, then it would probably be a subset. And then, he could not guarantee that A(n; n) will be in it, so step (J) would no longer follow from step (I). It only follows if C is the exhaustive list of operations, as he states: "since this was supposed to be a listing of all the computations that can be performed on a single natural number n." How exactly is Penrose defining C?
 
Donks said:
Hmmm, I don't think a human can determine if any computation will halt. If Penrose is defining C to be only those computations for which a human mathematician can decide, then it would probably be a subset. And then, he could not guarantee that A(n; n) will be in it, so step (J) would no longer follow from step (I). It only follows if C is the exhaustive list of operations, as he states: "since this was supposed to be a listing of all the computations that can be performed on a single natural number n." How exactly is Penrose defining C?
The idea is: if R is the set of procedures by which mathematicians classify algorithms as halters or non-halters, then R is uncomputable. So A is any set of computational rules designed to determine whether algorithms halt, and A can never represent R. He only defines C as in the original post - which is verbatim from his book. But if C is all the computations on single natural numbers that can be classified by human mathematicians, then it cannot be all computations on single natural numbers.
 
Robin said:
The idea is: if R is the set of procedures by which mathematicians classify algorithms as halters or non-halters, then R is uncomputable. So A is any set of computational rules designed to determine whether algorithms halt, and A can never represent R. He only defines C as in the original post - which is verbatim from his book. But if C is all the computations on single natural numbers that can be classified by human mathematicians, then it cannot be all computations on single natural numbers.
My problem is that in the original post C is not defined till the explanation after step (I). Before that, only A is defined in the opening paragraph.
 
I suppose that the notations in the quote of the OP are previously defined in the text. And I strongly suppose that there is no error in the text. I could try to explain the reasoning more, but I'd rather jump to the formalism of Turing machines since I'm more comfortable with them.

I don't give the formal definitions here (for a quite well-understandable account see, for example, Sipser's Introduction to the Theory of Computation at the library of the nearest university), but basically a Turing machine is the simplest form of a computer: it has a set of states, an input tape, and a transition function that defines how the machine handles its input.

Formally, a Turing machine recognizes some language. (In this context, "a language" is simply some set of strings.) This means that when a TM gets a string "x" as its input, it runs and answers "yes" if x belongs to the language and "no" if it doesn't. However, there is also a third alternative: a TM may enter an infinite loop and fail to give any answer at all.

Now, the conceptually most important thing is that it is possible to represent a Turing machine as a bit string. Basically, you take a TM and then encode it as a single binary number. The details of this are not important; numerous different ways have been published.

The reason why this encoding is important is that you can then give that string as an input to some TM; in effect, you can construct Turing machines that manipulate other Turing machines or that answer questions about them.

In this case we are interested in the problem "HALT", the question whether a given TM halts with some input string x.

Suppose that we could solve that problem with a Turing machine. Let's call that hypothetical machine H. The machine H takes two inputs, a TM M and an input string x, and it returns (denoted H(M,x)) "yes" if the machine M halts with input x and "no" if it doesn't.

So far everything seems good. However, we can also use H as a tool for checking the complement of the halting problem: we can make a machine D(M, x) that answers "yes" when M(x) doesn't halt and "no" when M(x) halts, by first running H(M, x) and then turning its answer around.

We can also alter D a bit and create a machine D' that otherwise behaves just like D but, instead of answering "no", purposefully enters an infinite loop. So D'(M, x) answers "yes" if M(x) doesn't halt, and doesn't answer anything otherwise.

We need one final tool: a machine D'' that takes one machine M as its input and then runs D' with M duplicated: D''(M) = D'(M, M).

Now, what happens when we give D'' itself as the input? What is the result of D''(D'')?

[Edited to fix the result; too-quick fingers:]

By the above definition, D''(D'') = D'(D'',D''). So, the answer is "yes" if D'(D'',D'') answers "yes". But by definition of D', D'(D'',D'') halts if and only if D''(D'') does not halt. We have here a contradiction: the answer is "yes" if and only if there is no answer. Something went wrong.

When we backtrack our definitions we see that the only assumption that we made was that there exists the TM H that checks whether a Turing machine stops.

This assumption was false. Existence of H leads to a contradiction, so we know that it is impossible to construct a Turing machine that would check whether a given TM halts.

The practical importance of this result comes from the fact that Turing machines can be proven to be exactly as powerful as almost all computer programming languages (and as powerful as other "maximal" methods of computation). So, because there is a problem that cannot be solved by a Turing machine, we know that the problem cannot be solved using any other realistic computation system. [Or at least, during the last 60 years no one has managed to find a physically realizable computation system that is more powerful than Turing machines and it is not likely that anybody ever will].

The argument in the OP is essentially the same but it works in terms of computable (recursive) functions.
 
Robin said:
He only defines C as in the original post - which is verbatim from his book. But if C is all the computations on single natural numbers that can be classified by human mathematicians then it cannot be all computations on single natural numbers.

There is actually a very simple argument for why there are more possible functions than realisable computations:

A computation is a sequence of operations. No matter what exact syntax is used, there are a countably infinite number of such sequences. (Because with any given finite non-empty alphabet A, the set A<sup>*</sup> is countably infinite and syntactically valid computations are a subset of this set).

On the other hand, if we have a set of cardinality n, then there are at least 2<sup>n</sup> different functions from it to itself (already the functions into a fixed two-element subset number 2<sup>n</sup>). As the set of natural numbers N is countably infinite, the number of functions is at least 2<sup>|N|</sup>, which is uncountable.
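In symbols, the same counting argument reads (a sketch, using {0, 1}-valued functions for simplicity):

```latex
\underbrace{|A^{*}|}_{\text{possible programs}}
  \;=\; \Bigl|\,\bigcup_{k \ge 0} A^{k}\Bigr| \;=\; \aleph_{0},
\qquad\text{but}\qquad
\bigl|\{\, f : \mathbb{N} \to \{0,1\} \,\}\bigr| \;=\; 2^{\aleph_{0}} \;>\; \aleph_{0},
```

so some functions on the natural numbers are not computed by any program at all.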
 
LW said:
And I strongly suppose that there is no error in the text.

I read the text twice more and I think I now understand what Penrose's point is.

He starts by having an algorithm that decides, for some subset of all computations, whether they halt or not. The halting problem tells us that it is impossible to make an algorithm that always gives the correct result. But there's nothing to prevent us from defining an algorithm that gives the correct result for some simple cases. For example, the following C program will always halt on all conforming implementations:
Code:
int main(void)
{
  return 0;
}
and the following will never halt:
Code:
int main(void)
{
  while (1) { }
  return 0;
}

Then, he goes on doing the standard argument of the impossibility of a total halting detector. But here is the catch: he doesn't assume that A(q, n) always gives an answer. So, he can escape the contradiction by noting that C<sub>k</sub>(k) is not among the computations for which A(q, n) gives the correct answer. However, he assumes that A is sound; that is, it doesn't give out incorrect answers. So, the only possible behavior for A(k, k) is that it never terminates. From this he deduces that C<sub>k</sub> doesn't terminate with the input k.

[Edited to add:]
What this argument shows is that for any particular sound halting checker A there is a specific program C<sub>k</sub> and a specific input k such that C<sub>k</sub>(k) never halts but A can't detect it.

However, it is entirely possible that there could be another halting checker, say B, that would detect that C<sub>k</sub>(k) doesn't halt. Of course, then there would be another non-halting computation C<sub>k'</sub>(k') that B couldn't detect.
 
If I understand this right, it reminds me of a fun little experiment you can do with an automatic night light, a dark room, and a mirror.

You place the mirror in front of the nightlight so that its own light is reflected into its light sensor, then turn out the room lights. The nightlight flickers erratically, trying to decide whether it should be on or off.


:D


(Edit for spelling)
 
LW said:
So, the only possible behavior for A(k, k) is that it never terminates.

Assuming, of course, that any computation can give only YES/NO answers. If we allow "I don't know" as a result, then the above doesn't hold anymore.
 
Originally posted by Robin
In Shadows of the Mind, Roger Penrose attempts to formalise Goedel's intuition about human knowledge and formal systems referred to in my previous post. Here is the argument:
We now notice that A(n; n) depends upon just one number n, not two, so it must be one of the computations C0; C1; C2; C3; …, since this was supposed to be a listing of all the computations that can be performed on a single natural number n. Let us suppose that this is in fact Ck,
What's k? I mean, exactly. What number is it? 5? 10? 123? Showing that some such k exists is easy, but actually finding it is hard.

Penrose is trying to show that a human mathematician can do something that the program A cannot do. A's job is to decide, given any specific program, whether that program will halt. So, to be fair, we need to give the mathematician a specific program too. It's not enough for the mathematician to say, "I can prove that some unspecified program C<sub>k</sub> exists which doesn't halt on the equally unspecified input k." If we actually give him that particular program without telling him what it is---just as we would give it to A without telling A what it is---he needs to be able to recognize it as the C<sub>k</sub> that is equivalent to A(n; n) and therefore as the program to which his proof applies. Penrose hasn't demonstrated that he can. Which is not surprising, because determining whether two programs are equivalent is undecidable in general. (I mean "undecidable" in the technical sense of "undecidable by computer." Penrose doesn't agree that this implies it's also undecidable by people. But he hasn't shown that it is decidable by people, either.)

[edited to add: Oops. As good as this sounds, it's bogus. Equivalence of programs is undecidable in general, but the mathematician doesn't have to do it in general; he just has to do it for the one program A(n; n) and for one program C<sub>k</sub> equivalent to it. And that's not so hard.

I think the thing to focus on instead is Penrose's assumption that the mathematician knows A is correct. Ok. So no mathematician is equivalent to a program that he knows is correct. Big deal. Presumably, if a mathematician is equivalent to some program, it will be a huge, complicated mess of a program, which he will probably not be able even to begin to make heads or tails of, let alone prove its correctness to his own satisfaction.]
Now far be it from me to disagree with a genius, but I have problems with this.
That's ok. You can disagree with him. He's wrong. :D
(1) Notice how "some computations" in the original paragraph becomes "all the computations that can be performed on a single natural number n". When did some become all? Are we to assume that the human mathematician is capable of determining whether any computation on natural numbers will stop?
The list of C<sub>i</sub>'s contains all computations. A is only required to give the right answer for some of them. (But it must not ever give the wrong answer; it's allowed to refrain from giving any answer.)

We don't need to assume that the human mathematician can always give the right answer; if we can show even one case where he can give the right answer though the program A cannot, we've shown that he is not equivalent to A.
(2) The statement (H) is problematic. If A(q; n) stops only when Cq(n) does not stop, then A is not a sound set of computational rules for ascertaining that some computations Cq(n) do not ever halt. Penrose defers this objection by saying the A in (H) is not A itself but some computation derived from A.
I don't see the problem. Here's how A works. It tries to decide whether C stops. If it can definitively determine that C doesn't ever stop, then, to indicate this determination, A itself stops. If it determines that C does stop, or if it can't decide what C does, then A does not stop.
(3) If you have f(n) and n is an index to some array then can f really be considered a computation on a single natural number?
Not sure what you mean here. What array is n an index into?

Anyway, sure. Why not? It's a function, and it takes one argument. What it does internally with that argument is its own business. If it wants to call a second function that takes two arguments, and pass in two copies of its single argument, why can't it?
(4) The conclusion that the computation Ck(k) does not stop because if it does it doesn't is a bit suspect. What does it mean for a computation to not stop when it does stop?
It means nothing. That's the point. If, by supposing C<sub>k</sub>(k) stops, we are led to nonsense, we can safely conclude that it doesn't stop after all.
In fact if you reword (H) as:
(H) If A(q; n) does not stop, then Cq(n) does stop.
then using the same logic you come to the opposite conclusion.
That's not just a rewording; it changes the meaning.
 
Donks said:
My problem is that in the original post C is not defined till the explanation after step (I). Before that, only A is defined in the opening paragraph.
But that is the same case as in Penrose's book - the first time C is mentioned is in the first paragraph I quoted, where it is referred to as "some computations". Earlier in the book he has general discussions of which computations might halt and which might not, but he does not define 'C' until the first paragraph I quoted.
 
Robin said:
But that is the same case as in Penrose's book - the first time C is mentioned is in the first paragraph I quoted and referred to as "some computations". Earlier in the book he has general discussions of what computations might halt and which might not, but he does not define 'C' until the first paragraph I quoted.
I think you need to turn back one page; according to this site, the argument begins on page 72, and C is defined there. :)

In any case, I don't see how he determines that mathematicians can decide whether any calculation terminates.
 
LW said:
I suppose that the notations in the quote of the OP are previously defined in the text.
The text includes a whole chapter on Turing machines, but in terms of the argument presented, "algorithm" or "computation" are the terms Penrose uses and are adequate. So no specific knowledge of TMs is required to understand the argument. You only need to know that an algorithm may halt, or it may continue endlessly.

Your description refers to the classic halting problem rather than Penrose's version, for example:
Suppose that we could solve that problem with a Turing machine. Let's call that hypothetical machine H. The machine H takes two inputs, a TM M and an input string x, and it returns (denoted H(M,x)) "yes" if the machine M halts with input x and "no" if it doesn't.
Here is where your description differs from Penrose's. His A does not take the program under scrutiny as a string, but as a natural number q that indexes a supposed array of computations. But here is the rub. Penrose assumes that an algorithm that takes a natural number as its input and then uses that number as an index to some table of strings is identical in every respect to an algorithm that takes a natural number and then performs some computation on that number itself. But my point is that the first argument in Penrose's A(q,n) is in effect a string and not a natural number.

If you take Penrose's A(q,n) and define it as you have A(M,n) where "M" is a string then the argument cannot progress from (I) to (J). That is just one of my objections.
 
Originally posted by Robin
Here is where your description differs from Penrose's. His A does not take the program under scrutiny as a string, but as a natural number q that indexes a supposed array of computations. But here is the rub. Penrose assumes that an algorithm that takes a natural number as its input and then uses that number as an index to some table of strings is identical in every respect to an algorithm that takes a natural number and then performs some computation on that number itself.
The list of C's is supposed to contain every program that takes a single number as input. A program that uses its single input as an index into an array is still a program that takes a single number as input. So it's somewhere on the list.

Anyway, it doesn't really use its input as an index into an array. The list of C's is infinitely long, so no program can hold all of them in an array. Rather, the program determines what C<sub>k</sub> is by computing it from the input k.

For example, a program that wants to return n<sup>2</sup> can't use its input n as an index into an array of squares, because there are infinitely many squares. But it can follow a definite algorithm to compute n<sup>2</sup> from n.
But my point is that the first argument in Penrose's A(q,n) is in effect a string and not a natural number. If you take Penrose's A(q,n) and define it as you have A(M,n) where "M" is a string then the argument cannot progress from (I) to (J). That is just one of my objections.
I do not understand the objection. It's easy to convert a string to a number and vice versa.
 
