Penrose Turing Argument

Originally posted by Donks
according to this site
Thanks for the link.

Penrose concludes there:
Human mathematicians are not using a knowably sound algorithm in order to ascertain mathematical truth.
I basically agree with that, although I think it's stated a bit too generally. I'd say, "No human mathematician who never makes mistakes uses, to ascertain mathematical truth, an algorithm that he personally would recognize as sound were it presented to him in detail."

Not "knowably sound, by anyone". Just "known sound, by him".

People differ in their mathematical abilities, so if they're using algorithms they're not all using the same one.
 
69dodge said:
Rather, the program determines what C<sub>k</sub> is by computing it from the input k.

One way to do that would be to use some Gödel numbering scheme. The basic intuition of Gödel numbers is that you encode strings as huge integers. For example, we could interpret an ASCII-coded string as an integer by concatenating the ASCII values of the individual characters into a single base-128 number.

For example, the text:

"HELLO"

coded as ASCII consists of the numbers: 72, 69, 76, 76, 79. When you put these together you get the (base-10) number:

72 * 128<sup>0</sup>+ 69 * 128<sup>1</sup> + 76 * 128<sup>2</sup> + 76 * 128<sup>3</sup> + 79 * 128<sup>4</sup> = 21367038664.

So, whenever a program gets the input "21367038664", it represents the string "HELLO".

So, the program A(q, n) can start its operation by decoding the "q" back to the program source code that corresponds to the number, and then examining how that program works. Now the only problem left is what to do with the numbers q that don't decode into a syntactically valid program. Well, a simple trick is to just define the problem away by saying that all those numbers are aliases for the "zero" program that returns 0 for all inputs.

This is just one way of approaching the issue; there are other, essentially equivalent approaches.
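A minimal Python sketch of the base-128 scheme described above (the function names are mine, not anything standard):

```python
def encode(s):
    # Pack an ASCII string into one natural number, base 128,
    # first character in the least-significant position.
    return sum(ord(c) * 128 ** i for i, c in enumerate(s))

def decode(n):
    # Peel off base-128 digits to recover the original string.
    chars = []
    while n > 0:
        n, r = divmod(n, 128)
        chars.append(chr(r))
    return "".join(chars)

print(encode("HELLO"))       # 21367038664
print(decode(21367038664))   # HELLO
```

Numbers that fail to decode to a valid program would, per the trick above, just be treated as the zero program.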
 
69dodge said:
The list of C's is supposed to contain every program that takes a single number as input. A program that uses its single input as an index into an array is still a program that takes a single number as input. So it's somewhere on the list.
Well I don't buy it
Anyway, it doesn't really use its input as an index into an array. The list of C's is infinitely long, so no program can hold all of them in an array. Rather, the program determines what C<sub>k</sub> is by computing it from the input k.
All the possible different computations C can in fact be listed, say as
C1, C2, C3, C4, C5, C6, … ,
and we can refer to Cq as the qth computation
Clearly "the qth computation" is an index into a list. What is the 14th computation that can be performed on a natural number? If you can answer that then I will remove my objection.
For example, a program that wants to return n<sup>2</sup> can't use its input n as an index into an array of squares, because there are infinitely many squares. But it can follow a definite algorithm to compute n<sup>2</sup> from n.
But there is no definite algorithm that will convert a number into a computation. For example, what computation does '392858392839' stand for? It could stand for the 392858392839th entry on some arbitrary list of computations, or it could be an ASCII or EBCDIC encoding of the calculation, or some arbitrary symbol-to-number encoding scheme. The algorithm itself could be expressed as a TM, or in Lisp or Mathematica notation, among others. n<sup>2</sup> can only mean one thing, so there is no comparison.
I do not understand the objection. It's easy to convert a string to a number and vice versa.
Firstly, as shown, Penrose clearly does not intend this; he intends the qth computation in some arbitrarily ordered list. Secondly there are, as I have said above, infinitely many ways to convert a string into a number. When we talk of a computation on a natural number we are referring to a well defined entity - the set of natural numbers.

It is possible to define a computable set C of computations that can be encoded as natural numbers. But in that case it is clear that A(q,n) refers to a calculation where q is a member of C and n is a member of N. It is invalid then to arbitrarily cast q to an n and say that it is exactly the same as any computation on a natural number. I don't understand why you could not understand the objection.

Still, that is by the way; my main objection is that Penrose has not, as he claims, identified something that we know that A is unable to ascertain. We are unable to ascertain it either - Penrose's conclusion that it does not stop, because if it did stop it wouldn't stop, is wrong. The conclusion is the same as in the original Halting Problem: A cannot determine whether a program halts in every single case. But neither can the human mathematician.

And A was never a serious candidate for the encapsulation of the methods used by mathematicians, unless you can identify a mathematician who, when he finds a computation halts, nevertheless goes on examining the problem infinitely. A was designed for one purpose - to set up the diagonalisation argument.
 
Chris O. said:
If I understand this right, it reminds me of a fun little experiment you can do with an automatic night light, a dark room, and a mirror.

You place the mirror in front of the nightlight so that its own light is reflected into its light sensor, then turn out the room lights. The nightlight flickers erratically, trying to decide whether it should be on or off.


:D


(Edit for spelling)
The first electric buzzer (IIRC). A closed solenoid opens the circuit, which opens the solenoid, which closes the circuit, which closes the solenoid, which...
 
69dodge said:
I do not understand the objection. It's easy to convert a string to a number and vice versa.
Another thing about this is that it is easy to convert any value to a natural number and vice versa - any integer, real number, imaginary number, etc. OK then, so what is the difference between the set of all possible computations that can be performed on natural numbers and the set of all computations full stop?
 
Robin said:
Another thing about this is that it is easy to convert any value to a natural number and vice versa - any integer, real number, imaginary number, etc. OK then, so what is the difference between the set of all possible computations that can be performed on natural numbers and the set of all computations full stop?
I don't see any.
ETA: I also don't see any difference between computations performed on n natural numbers and computations performed on a single natural number.
 
Robin said:
Another thing about this is that it is easy to convert any value to a natural number and vice-versa - any integer, real number, imaginary number etc..

Not any. In particular, no irrational real number or imaginary number can be converted into an integer.

You can convert members of any countable set into natural numbers. The sets of natural numbers, integers, and rational numbers are all countable. But the sets of real and imaginary numbers are uncountable, so they can't be converted into integers.

OK then so what is the difference between the set of all possible computations that can be performed on natural numbers and the set of all computations full stop?

The differences come if you allow the computation to work with irrational numbers. Though I have never come upon any work on that subject (apart from treating real numbers as intervals bounded by rational numbers from both directions).

Ouch. Painful memories come back. I have encountered work where computational models are defined for irrational numbers. I don't want to see those two papers ever again.
 
Donks said:
ETA: I also don't see any difference between computations performed on n natural numbers and computations performed on a single natural number.

Supposing that your n is fixed and finite, there is no essential difference.
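For instance, the standard Cantor pairing function packs any two natural numbers into one, reversibly; iterating it collapses any fixed finite tuple to a single number. A quick Python sketch (the float square root in `unpair` is fine for modest values; very large ones would need an integer square root):

```python
def pair(x, y):
    # Cantor pairing function: a bijection from pairs of naturals to naturals.
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z):
    # Invert the pairing by recovering the diagonal index w first.
    w = int(((8 * z + 1) ** 0.5 - 1) // 2)
    t = w * (w + 1) // 2
    y = z - t
    return w - y, y

print(pair(7, 11))    # 182
print(unpair(182))    # (7, 11)
```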
 
LW said:
Not any. Especially no irrational real number or imaginary number can be converted into an integer.
I should have said that any computable value can be converted into a natural number.

But sure you can convert an imaginary number to a natural number - for example, '4+3i' can be represented as an ASCII string, which can be viewed as a stream of bits, which can also be viewed as a natural number. Irrational numbers? How about 'sqrt(2)' represented as ASCII then converted to a natural number. Or any other encoding scheme - as long as your algorithm can decode the scheme then you have represented irrationals and imaginary numbers as natural numbers.

How about a natural number that represents an algorithm to generate the digits of pi?

If you say the above is not really representing these as natural numbers how is it different to representing an algorithm as a natural number?

So my point still stands - if you can regard a computation on the natural number representation of an algorithm as a computation on a natural number then C represents the list of all possible computations, not just those on natural numbers.
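To make the pi example concrete: Gibbons' published "unbounded spigot" algorithm is a short, finite program that yields the decimal digits of pi one at a time forever, and its source text could itself be packed into a single natural number. A Python rendering:

```python
def pi_digits():
    # Gibbons' unbounded spigot: integer arithmetic only, runs forever,
    # yielding one decimal digit of pi per iteration.
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

g = pi_digits()
print([next(g) for _ in range(6)])   # [3, 1, 4, 1, 5, 9]
```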
 
Originally posted by LW
The differences come if you allow the computation work with irrational numbers. Though, I have never came upon any work on that subject (apart from treating real numbers as intervals that are bounded by rational numbers from both direction).

Ouch. Painful memories come back. I have encountered work where computational models are defined for irrational numbers. I don't want to see those two papers ever again.
Mathematica appears to be able to do computations on irrational numbers. It does so without resorting to rational approximations and presents the answer just as a human mathematician would.

Mathematica's representation of irrationals can be easily converted to natural numbers, so there you have a working example of a computation on a natural number being equivalent to a computation on an irrational.

So C represents the set of all possible computations.
 
Robin said:
But sure you can convert an imaginary number to a natural number - for exampe '4+3i' can be represented as an ascii string which can be viewed as a stream of bits which can also be viewed as a natural number. Irrational numbers? How about 'sqrt(2)' represented as ascii then converted to a natural number. Or any other encoding scheme - as long as your algorithm can decode the scheme then you have represented irrationals and imaginary numbers as natural numbers.

Only a very tiny subset of irrational numbers can be represented that way. The proof is simple: there are countably many such representations (since for every finite alphabet the set of finite strings that can be formed from its symbols is countable) but there are uncountably many irrational numbers (as proven by Cantor's diagonalization) => it is not possible that all of them have a unique representation.

How about a natural number that represents an algorithm to generate the digits of pi?

Since there are infinitely many digits in pi, it is impossible to have a terminating computation that would compute all of them. You can get arbitrarily good approximations of pi, but you can't use its exact value in computations. (Barring, of course, the case that you issue a specific symbol to denote it and use that symbol in all manipulations; but as I said, you can do this only for a vanishingly tiny fraction of irrationals.)

If you say the above is not really representing these as natural numbers how is it different to representing an algorithm as a natural number?

Because the decimal expansion of an irrational number is infinite and nonperiodic (if it wasn't, it would be rational). An algorithm has to be finite. That is the difference.
 
Robin said:
Mathematica appears to be able to do computations on irrational numbers. It does it without resorting to rational approximations and presents the answer just as a human mathematician would.

It handles symbolically only a vanishingly tiny fraction of irrational numbers. Just like a human mathematician can handle only a vanishingly tiny fraction of irrational numbers.

Mathematica doesn't even handle all natural numbers, only a finite subset of them. (If you doubt this, try to compute the value of A(6), where A(x) is defined to be Ackermann(x, x).)
 
You don't have to calculate all the digits of pi to use it in computations.

2pi


There, that's 2 x pi, with infinite accuracy. Computational models do not require all digits to be calculated out in order to claim their power.
 
Beerina said:
There, that's 2 x pi, with infinite accuracy.

From my previous post:

Only a very tiny subset of irrational numbers can be represented that way.
 
Originally posted by Robin
Clearly "the qth computation" is an index into a list. What is the 14th computation that can be performed on a natural number? If you can answer that then I will remove my objection.
Actually, I see now that Penrose himself already addressed the issue (p. 74):
All the possible different computations C can in fact be listed, say as<blockquote>C<sub>1</sub>, C<sub>2</sub>, C<sub>3</sub>, C<sub>4</sub>, C<sub>5</sub>, C<sub>6</sub>, ... ,</blockquote>and we can refer to C<sub>q</sub> as the qth computation. [ ... ] We can take this ordering as being given, say, as some kind of numerical ordering of computer programs. (To be explicit, we could, if desired, take this ordering as being provided by the Turing‑machine numbering given in ENM, so that then the computation C<sub>q</sub>(n) is the action of the qth Turing machine T<sub>q</sub> acting on n.) One technical thing that is important here is that this listing is computable, i.e. there is a single computation C that gives us C<sub>q</sub> when it is presented with q, or, more precisely, [ ... ]
(I guess "ENM" refers to his book "The Emperor's New Mind".)
Originally posted by Robin
But there is no definite algorithm that will convert a number into a computation. For example what computation does '392858392839' stand for? It could stand for the 392858392839th entry on some arbitrary list of computations or it could be an ascii or ebcdic encoding of the calculation or some arbitrary symbol to number encoding scheme. The algorithm itself could be a TM, or Lisp or Mathematica notation among others. n<sup>2</sup> can only mean one thing so there is no comparison.
There are many different algorithms that will convert a number into a computation. But that's ok. Just pick one, and stick with it throughout the argument.

There are many different ways to represent integers, too: signed magnitude vs. ones' complement vs. two's complement, big endian vs. little endian, etc.
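The same point in miniature, using Python's standard struct module: one fixed bit pattern denotes different integers depending on the convention you read it under.

```python
import struct

raw = bytes([0xFF, 0x00, 0x00, 0x01])   # one fixed four-byte pattern

# The identical bits denote different integers under different conventions:
big_unsigned = struct.unpack(">I", raw)[0]      # big-endian unsigned
little_unsigned = struct.unpack("<I", raw)[0]   # little-endian unsigned
big_signed = struct.unpack(">i", raw)[0]        # big-endian two's complement

print(big_unsigned)     # 4278190081
print(little_unsigned)  # 16777471
print(big_signed)       # -16777215
```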
It is possible to define a computable set C of computations that can be encoded as a natural number. But in that case it is clear that A(q,n) refers to a calculation where q is a member of C and n is a member of N. It is invalid then to say arbitrarily cast q to n and say that it is exactly the same as any computation on a natural number.
A computer doesn't know what interpretation the programmer intended for its input. It just computes. If the electricity flows around in exactly the same way during two computations, of which one supposedly involves a program as input and the other supposedly involves a number as input, it seems reasonable to say that actually the two computations are identical.
Still, that is by the way; my main objection is that Penrose has not, as he claims, identified something that we know that A is unable to ascertain. We are unable to ascertain it either - Penrose's conclusion that it does not stop, because if it did stop it wouldn't stop, is wrong.
What do you think is wrong with the argument?

There are only two possibilities, of course: either it stops or it doesn't stop. If it stops in this case, it has thereby given the wrong answer. So, if it never gives the wrong answer, it can't stop here. And if we somehow know that it never gives the wrong answer---which Penrose assumes, for the moment, that we do know---then we know it can't stop here.

Seems airtight to me.
And A was never a serious candidate for the encapsulation of the methods used by mathematicians unless you can identify a mathematician who, when he finds a computation halts, nevertheless goes on examining the problem infinitely. A was designed for one purpose - to set up the diagonalisation argument.
If a serious candidate existed, say B, which always stopped after a while whether its answer was "halts", "doesn't halt", or "not sure", then we could construct an A that uses it and that stops iff B's answer is "doesn't halt". And then we run into the same problem: we know that A doesn't stop (because if it did stop, its answer, and B's, would be wrong), but B doesn't know that A doesn't stop.
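That construction can be sketched in Python, with `b` as a purely hypothetical stand-in for B (no sound, total such procedure actually exists):

```python
# `b` is a hypothetical stand-in for the candidate procedure B: a total
# function that always answers "halts", "doesn't halt", or "not sure" about
# computation C_q(n). No sound, total such procedure exists for all inputs.
def make_A(b):
    def A(q, n):
        if b(q, n) == "doesn't halt":
            return "stopped"   # A stops exactly when B says C_q(n) doesn't stop
        while True:            # on any other verdict, A runs forever
            pass
    return A

# Exercising the construction with a trivially unsound stub verdict:
A = make_A(lambda q, n: "doesn't halt")
print(A(0, 0))   # stopped
```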
 
69dodge said:
Seems airtight to me.

If a serious candidate existed, say B, which always stopped after a while whether its answer was "halts", "doesn't halt", or "not sure", then we could construct an A that uses it and that stops iff B's answer is "doesn't halt". And then we run into the same problem: we know that A doesn't stop (because if it did stop, its answer, and B's, would be wrong), but B doesn't know that A doesn't stop.
But the argument only shows that A doesn't know; it says nothing at all about what B knows. After all, if Penrose's reasoning is valid then it could also be used by B; B would simply be prevented from reporting this answer by the artifice of A.

In other words A cannot encapsulate our understanding, but B might. Penrose has simply nobbled the opposition.

But do we even know that A does not stop? Penrose's reasoning is that A does not stop because if it did stop it would not. That seems rather weak to me; the more correct conclusion would simply be that there is some algorithm whose halting B cannot determine - but we knew that already. We can't tell either, so the argument tells us nothing about whether humans are using a sound algorithm.
 
A group of the greatest living mathematicians have access to all the methods by which mathematicians can tell whether an algorithm halts. They agree for this knowledge to be tested.

Due to contractual obligations to their various publishers they cannot speak directly to anybody but must communicate through a spokesman called Dunbar.

Dunbar, a methodical and eccentric man, is charged with not divulging too much and devises the following procedure for the tests:

A conference is called in which one algorithm can be presented and the question asked "Does this algorithm halt?" It will be forwarded to the mathematicians who will call Dunbar with their answer.

Dunbar will not divulge this answer, but if the mathematicians say "No, it will not halt" he immediately calls a close to the conference and goes home. If he receives any other answer, or no answer, he will keep the conference running in perpetuity, thus gaining steady employment for himself and his progeny.

Dunbar and the mathematicians give a solemn undertaking that a wrong answer will never be given - if the mathematicians don't know they will tell Dunbar that they don't know.

The first conference is called and is asked to consider if the conference will ever halt if it considers the problem that has just been posed in this sentence.

So if the mathematicians say "No, it won't halt" Dunbar will immediately halt the conference and prove the mathematicians wrong. If they say "It will halt" then Dunbar will never halt the conference and the mathematicians will still be wrong.

But they know this and conclude that they are unable to determine whether the conference will halt. But they also know that this answer is still wrong, since if they give it they do know that the conference will not halt. But if they say so, it will!

So I, as an outsider can see that the conference will never halt if the mathematicians give a correct answer to this problem. But the mathematicians themselves, speaking through the Dunbar conference, are unable to determine this.

Therefore I can solve a maths problem that the greatest living mathematicians cannot.

Now that seems to sum up the logic of Penrose's argument.
 
Well, I read the posts pertaining to the main argument a couple of times, and one thing I don't understand is: Even if it's impossible for one machine to determine if it will ever halt (Because of the paradox...it halts if it never halts), what's to stop you from simply using another TM to halt when it determines the other will never halt?

Or do the calculations needed to determine that put you back in the same situation?
 
LW said:
From my previous post:

Only a very tiny subset of irrational numbers can be represented that way.
Well, if there is an irrational number that can't be represented as a natural number, I guarantee you can't give me an example.

Since everything posted on this forum can, by definition, be represented as a natural number, by the very fact of posting an example you have represented it as a natural number.

In fact the phrase "An irrational number that can't be represented by a natural number" is a paradox since the phrase itself can be represented as a natural number.

Since mathematics is expressed symbolically everything a mathematician deals with formally can be represented as a natural number.

So for the purposes of this argument everything a mathematician can do might be representable by a natural number. So C is the set of all possible computations.
 
sorgoth said:
Well, I read the posts pertaining to the main argument a couple of times, and one thing I don't understand is: Even if it's impossible for one machine to determine if it will ever halt (Because of the paradox...it halts if it never halts), what's to stop you from simply using another TM to halt when it determines the other will never halt?

Or do the calculations needed to determine that put you back in the same situation?
Well for that matter the original A might determine that the algorithm never halts (if Penrose's conclusion is sound then by definition it can) but has no way of signalling this answer since the answer will automatically be negated as soon as it is returned.

This is what I am trying to show with my 'Dunbar Conference' example - that a human mathematician working to the same constraints will also not be able to signal the correct answer.
 