Originally posted by Robin
Clearly "the qth computation" is an index into a list. What is the 14th computation that can be performed on a natural number? If you can answer that, then I remove my objection.
Actually, I see now that Penrose himself already
addressed the issue (p. 74):
All the possible different computations C can in fact be listed, say as<blockquote>C<sub>1</sub>, C<sub>2</sub>, C<sub>3</sub>, C<sub>4</sub>, C<sub>5</sub>, C<sub>6</sub>, ... ,</blockquote>and we can refer to C<sub>q</sub> as the qth computation. [ ... ] We can take this ordering as being given, say, as some kind of numerical ordering of computer programs. (To be explicit, we could, if desired, take this ordering as being provided by the Turing‑machine numbering given in ENM, so that then the computation C<sub>q</sub>(n) is the action of the qth Turing machine T<sub>q</sub> acting on n.) One technical thing that is important here is that this listing is computable, i.e. there is a single computation C that gives us C<sub>q</sub> when it is presented with q, or, more precisely, [ ... ]
(I guess "ENM" refers to his book "The Emperor's New Mind".)
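To see that such a computable listing is easy to set up, here is a toy sketch in Python. This is not Penrose's actual numbering (he uses the Turing-machine tables from ENM); the two-symbol alphabet and the length-then-lexicographic order are arbitrary choices of mine, but any fixed computable listing serves the argument equally well, and it gives Robin's "14th computation" a definite answer:

```python
from itertools import count, product

# Toy choice (not Penrose's actual numbering): programs are strings
# over a two-symbol alphabet, listed shortest-first, then
# lexicographically. Any fixed computable listing would do.
ALPHABET = "01"

def programs():
    """Yield every string over ALPHABET, shortest first."""
    for n in count(1):
        for chars in product(ALPHABET, repeat=n):
            yield "".join(chars)

def C(q):
    """The qth computation: simply the qth entry of the fixed listing."""
    gen = programs()
    for _ in range(q - 1):
        next(gen)
    return next(gen)

print(C(14))  # -> '111': once a listing is fixed, "the 14th" is definite
```

Once such a listing is fixed, C<sub>q</sub>(n) simply means "run the qth program in the list on input n", and the listing itself is computable, exactly as Penrose requires.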
Originally posted by Robin
But there is no definite algorithm that will convert a number into a computation. For example, what computation does '392858392839' stand for? It could stand for the 392858392839th entry on some arbitrary list of computations, or it could be an ASCII or EBCDIC encoding of the calculation, or some arbitrary symbol-to-number encoding scheme. The algorithm itself could be written as a TM, or in Lisp or Mathematica notation, among others. n<sup>2</sup> can only mean one thing, so there is no comparison.
There are many different algorithms that will convert a number into a computation. But that's ok. Just pick one, and stick with it throughout the argument.
There are many different ways to represent integers, too: signed magnitude vs. ones' complement vs. two's complement, big endian vs. little endian, etc.
It is possible to define a computable set C of computations that can be encoded as natural numbers. But in that case it is clear that A(q,n) refers to a calculation where q is a member of C and n is a member of N. It is invalid then to arbitrarily cast q to n and say that the result is exactly the same as any computation on a natural number.
A computer doesn't know what interpretation the programmer intended for its input. It just computes. If the electricity flows around in exactly the same way during two computations, of which one supposedly involves a program as input and the other supposedly involves a number as input, it seems reasonable to say that actually the two computations are identical.
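The endianness point above makes this concrete: the same bits mean different things only to different readers, never to the machine. A minimal sketch (the particular four bytes are a hypothetical example of mine):

```python
import struct

# The same four bytes of "electricity", read under three conventions
# that the hardware itself knows nothing about.
raw = b"\x40\x49\x0f\xdb"

as_uint_be = struct.unpack(">I", raw)[0]  # big-endian unsigned int
as_uint_le = struct.unpack("<I", raw)[0]  # little-endian unsigned int
as_float   = struct.unpack(">f", raw)[0]  # IEEE-754 single: ~3.1415927

print(as_uint_be, as_uint_le, as_float)
```

The interpretation lives in the unpacking convention, not in the bytes; by the same token, whether an input "is" a program index q or a number n is a matter of how we choose to read it.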
Still, that is by the way; my main objection is that Penrose has not, as he claims, identified something that we know that A is unable to ascertain. We are unable to ascertain it either - Penrose's conclusion that it does not stop (because if it did stop, it wouldn't stop) is wrong.
What do you think is wrong with the argument?
There are only two possibilities, of course: either it stops or it doesn't stop. If it stops in this case, it has thereby given the wrong answer. So, if it never gives the wrong answer, it can't stop here. And if we somehow know that it never gives the wrong answer---which Penrose assumes, for the moment, that we do know---then we know it can't stop here.
Seems airtight to me.
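The two possibilities can even be checked mechanically. A sketch of the case analysis (function and variable names are mine; k is the index with A(k,k) = C<sub>k</sub>(k), as in Penrose's setup):

```python
# A stopping on input (k, k) is A's way of asserting that C_k(k) does
# not stop -- yet A(k, k) *is* the computation C_k(k). Check which of
# the two possibilities is compatible with A never answering wrongly.
def sound(a_stops_on_k_k):
    asserts_ck_doesnt_stop = a_stops_on_k_k  # stopping = making the claim
    ck_actually_stops = a_stops_on_k_k       # because A(k,k) IS C_k(k)
    # Soundness: never assert "doesn't stop" of a stopping computation.
    return not (asserts_ck_doesnt_stop and ck_actually_stops)

print(sound(True), sound(False))  # -> False True
```

So if A is sound, the first case is ruled out: A(k,k) runs forever, and that is precisely the fact A itself can never ascertain by stopping.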
And A was never a serious candidate for the encapsulation of the methods used by mathematicians, unless you can identify a mathematician who, when he finds a computation halts, nevertheless goes on examining the problem infinitely. A was designed for one purpose - to set up the diagonalisation argument.
If a serious candidate existed, say B, which always stopped after a while whether its answer was "halts", "doesn't halt", or "not sure", then we could construct an A that uses it and that stops iff B's answer is "doesn't halt". And then we run into the same problem: we know that A doesn't stop (because if it did stop, its answer, and B's, would be wrong), but B doesn't know that A doesn't stop.
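The B-to-A construction can be sketched in a few lines. This is a minimal illustration (the function names are mine): enumerate B's three possible verdicts about A and check which of them B could give without being wrong.

```python
# A is built to stop exactly when B predicts that A won't stop.
def a_halts(b_verdict):
    return b_verdict == "doesn't halt"

def b_not_wrong(b_verdict):
    """Is B's verdict about A at least not wrong?"""
    halts = a_halts(b_verdict)
    if b_verdict == "halts":
        return halts
    if b_verdict == "doesn't halt":
        return not halts
    return True  # "not sure" is never wrong, merely uninformative

for v in ("halts", "doesn't halt", "not sure"):
    print(v, "->", b_not_wrong(v))
```

Only "not sure" survives: B can stay correct about A only by pleading ignorance, which is exactly the sense in which B doesn't know that A doesn't stop.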